HOW CAN THE STATE SUPPORT THE INNOVATIONS TO BUILD SUSTAINABLE COMPETITIVE ADVANTAGE OF THE COUNTRY
As the crisis grows longer and deeper and growth disparities between European regions increase, there is an even stronger need to accelerate and deepen innovation support in the areas crucial to innovation, such as higher education, innovation-based entrepreneurship and demand-side measures. Europe needs fresh dynamism in its economy. Existing industries, and countries too, need to develop new applications and new business models in order to grow and maintain their competitive advantage. This calls for innovation-driven structural change, attracting top talent and rewarding innovative entrepreneurs by offering them much better opportunities to start and grow new businesses. Several studies have explored innovations and their importance for companies seeking a sustainable competitive advantage. The article describes basic approaches and a model of the relationship between key factors and their influence on the success of the company. The model makes clear that the innovation orientation of the management and the ability to launch innovations onto the market are central aspects of success. The article deals with the current status of innovations in Slovakia, identifying the preconditions for the future development of an environment that supports innovations and how they are fulfilled in Slovakia.
INTRODUCTION
After decades of various transformations, companies have become smaller, simpler and much faster in reacting to market demands than ever before. They have become more competitive. The same applies to nations: in the fierce competition for foreign direct investment, innovative scientists and so on, nations have become more competitive. There is no single recipe for national competitiveness. One needs to take into account the specific environment, value system and cultural heritage of a country in order to define what is acceptable or not, and to draw the consequences and policy implications advisable for a particular national economy. In our approach we evaluate innovations and the effectiveness of the innovation process, which is becoming the key factor (and key challenge) for many political and research initiatives focused on national competitiveness growth. Other expected results of innovations and an effective innovation process are sustainable development and a steady increase in the quality of life of society. In our analysis of national competitiveness we use a certain simplification, comparing nations with firms in many places in this article, insofar as 1) a competitive firm is a key factor of a competitive nation; and 2) the competitiveness of firms is well studied and described.
The principal role of corporate management is the effective management of technology operations and processes, which must be adjustable to provide a flexible response to customers' needs within the limits of production lines. "But it is not enough for a company to streamline and downsize; a company must be capable of innovating - of fundamentally reconceiving itself, of regenerating its core strategies, and of reinventing its industry" (Tushman et al., 1997). From the nation's point of view, emphasis is laid on possible modification of socio-economic processes and social models intended to prepare sufficiently skilled manpower to implement the innovation trends sought. The importance of and need for innovations aimed at the further development of the European Union can be derived from the currently most significant political document formulating the objectives and tasks of the European Community for the future: the Europe 2020 Strategy for Growth and Employment.
THE ROLE OF INNOVATIONS IN THE PROCESS OF COMPETITIVENESS GROWTH OF THE FIRMS AND NATIONS
The last fifty to sixty years of human development have been marked by several revolutionary changes: many technologies, mainly communication and information systems, have brought disruptive energy and led to an overall improvement of human lives, making them easier but at the same time more interconnected, and removing previously known barriers - borders and political systems. But the pace of new innovations seems to be slowing down. While markets are constantly overflowing with new products, really NEW products (disruptive innovations) are rare. Companies everywhere are engaged in a product innovation war whose ultimate goal is differentiation. The weapons of this war are the thousands of new products invading chosen marketplaces in order to secure a sustainable competitive advantage. "Unlike a military war, the product innovation war is a beneficial one - no deaths, no violence, and no burned buildings. The victors gain riches and fame; the losers are vanquished, merged, or disappear; and society and humankind benefit from the new products and services that previous generations did not have" (Cooper, 2005). But many companies find themselves in unusually precarious positions. The market requires continuous quality improvements, yet companies are not in a position to successfully raise their prices. In many cases, companies are forced to drop their prices heavily in order to compete. Looking more closely at the situation, we see that many of these companies have not systematically adjusted their revenue models to reflect the changes of recent years. This situation is described in Figure 1.
Innovations can be considered an inseparable part of a country's economic growth under global economic market conditions. Their primary essence consists of new information resulting from the application of theoretical knowledge from scientific, research and development activity to entrepreneurial activity. They can be defined as a "renovation and extension of the product and service range, as well as the related markets, formation of new methods, technologies and production ways, delivery and distribution, implementation of changes in labor management and organization, improvement of work conditions and growth of workers' qualification." The purpose of implementing innovation activities is a constant increase in business entity performance, by which the entity strives to commercialize the original invention. On the other hand, business entity performance often results in a wrong (slow, late, or inadequate) reaction to market feedback. We have to bear in mind that business entity innovation activities that are isolated in their goals and financing could weaken the final ability of companies to develop and launch new products.
The open innovation model that Chesbrough describes (Figure 2) shows the necessity of letting ideas both flow out of the corporation in order to find better sites for their monetization, and flow into the corporation as new offerings and new business models (Chesbrough et al., 2006).
Achieving the competitive advantage of the country is possible only if all firms learn, utilize, and develop external knowledge and abandon the paradigm that they must be present at the initial moment of each new idea or concept. It is sufficient that a firm focuses on profound development of the information available (Sloane, 2011). A precondition for the success of the open innovation approach is the willingness to provide public access (D'Aveni, 2007) to one's own ideas and innovations that are not fully utilized by the firm, under the condition that the rules of utilization are clearly defined and agreed. This is to avoid commercial losses and to gain a reasonable return on one's own R&D activities.
COMPETITIVENESS
Over the years, a debate has been ongoing about the meaning of this word, and most citizens, lacking important notions of global trade, have stuck with the meaning that was most accessible and comprehensible to them, the same meaning President Clinton gave to it during his time in office: "nations are like corporations competing in the global marketplace". This definition implies many things, such as the existence of a bottom line for countries and the impossibility of there being two winners in the equation.
Paul Krugman (1994) started the debate by presenting his disapproval of this commonly accepted vision. For him, countries, unlike corporations, do not have a bottom line in the sense that they do not have to maximize their citizens' wealth in order to keep existing, because there is nothing even remotely resembling bankruptcy as an option for countries. He also denies that trade is a zero-sum game. All countries have the possibility of being winners in the world marketplace through the dynamics of comparative advantage. In Krugman's view, nations are not in economic competition with each other, and their problems cannot be attributed to a lack of success in competing on the global platform. Indeed, since exports are only 10% of GNP, countries are not really dependent on their neighbors for success. Success, in the sense of sustainability and high standards of living, is entirely dependent on a country's domestic productivity growth. One key point Krugman wants to get across is that because the trade balance is so innocuous, there is no need to build domestic policies around it. Doing so would only result in a misallocation of resources, a lack of funding for the service sector, protectionism and bad public policies.
Regardless of the fact that countries do not have a so-called bottom line in the classical view, we think that there are close relationships and similarities between successful (competitive-looking) companies and countries, and therefore both managers and politicians should learn from Hamel and Prahalad's breakthrough thoughts expressed in their article: "Given that change is inevitable, the real issue for managers is whether that change will happen belatedly, in a crisis atmosphere, or with foresight, in a calm and considered manner; whether the transformation agenda will be set by a company's more prescient competitors or by its own point of view; whether transformation will be spasmodic and brutal or continuous and peaceful. Palace coups make great press copy, but the real objective is a transformation that is revolutionary in result, and evolutionary in execution" (Hamel & Prahalad, 1994).
The open innovation platform could nowadays become a unique tool not only for reducing the uncertainty that is permanently present in one's own R&D, but simultaneously for increasing the competitiveness of national economies.
THE NATURE OF COMPETITIVE ADVANTAGE
IMP carried out an analysis of more than 700 companies from 10 nations, which makes it clear that only very few organizations manage to achieve sustainable success (Bailom et al., 2007). Despite their differences, these companies all have one thing in common: they are able to continually create unique benefits in their markets.
This unique quality is, in turn, the result of these companies' specific ability to reinvent themselves from the core outwards and to anticipate future trends in their markets.
Peter Brabeck-Letmathe, Chairman of Nestlé, speaking for all of the top performers who were analyzed in this project, sums up this central principle of sustainable success as follows: "It is not a matter of thinking about what made us successful up to now, but more importantly what we can do to be successful in the future" (Bailom et al., 2007).
IMP has managed to clearly show the depth and interconnection of the central principles and elements of entrepreneurial success. As one can see, this set of factors explains 52% of corporate success. Is that sufficient or not? The remaining 48%, which is not described, comprises factors like chance, intuition and, last but not least, luck. So far, it is the most precise description with mathematical and statistical justification.
The innovation orientation of the management is a key factor, closely followed by the intensity and type of the corporate culture, core competence management and the ability to bring innovations to the market. The IMP study is a clear confirmation of how important innovations are for firms; the same conclusion could similarly be conceived for regions and nations.
Only continuous quality improvements can ensure competitiveness.As a consequence, only countries that are really successful in supporting and implementing innovations are reaching the top positions in the world rank lists of various competitive indexes.
INNOVATION ACTIVITIES IN THE EU AND IN SLOVAKIA - LEGAL FRAMEWORK
Irrespective of other circumstances influencing the innovation process, its correct legal anchoring at the national and European level can be marked as a significant determinant of the successful implementation of specific measures in practice. A suitable legal environment contributes not only to an easier transfer of innovative products from the research phase to retail sale, but also to ensuring sufficient funding, adjusting public procurement for the purposes of research and development and, last but not least, providing legal protection to innovative products in the area of intellectual property. In the absence of an adequate legal framework, the request for mutual cooperation of companies during innovation activities would remain just a formal theoretical starting point, making real implementation impossible. Elaboration of national legal regulations focusing on science and research support is a relatively complex process, demanding coordination, professional knowledge and proper consideration of a wide range of mutual ties. The establishment of the European Research Area (ERA) results from the political and legislative obligations stated in the Amsterdam Treaty; at the same time, however, it should facilitate the increase of investments in European research. The concept of the European Research Area is a combination of the following elements: a European "single market" for research, where free movement of research workers, technology and knowledge takes place; efficient coordination of intra-state and regional research activities, programs and policies at the European level; and initiatives implemented and funded at the European level.
The Innovation Union represents one of the seven main programs within the Europe 2020 Strategy. The aim of the Innovation Union is to harmonize the rules for the provision of tax reliefs and to improve the conditions of access to financial incentives supporting the development of science and research in the EU member states. In the context of the measures proposed within the Innovation Union, the attempt to create a unified European patent mechanism can be deemed significant. Inappropriate and often divergent regulations or procedures of states in the management of intellectual property rights usually build a severe obstacle to enforcing research activity results abroad.
The European legislative starting points, as well as the enduring, unflattering standing of the Slovak Republic in evaluations of the innovation performance and innovation potential of the member states of the European Union, open for the national legislator a number of challenges and requests for reform of the existing science and technology system, with the prerequisite of greater openness and adaptability to new trends in the field of the innovation process. The legal framework of science, technological development and innovations in the Slovak Republic can at the same time be marked as at least insufficient and, moreover, inappropriately complicated. Despite multiple repeated attempts and amendments of the legal framework, which should have implemented conceptual management of scientific-research activities, support of research remains split among several central state administration bodies and the allowance organizations funded by them. At present there is no efficiently applied, unified, long-term innovation strategy with clearly defined parameters for its practical implementation in Slovakia. Likewise, no unified functional innovation system is in place that would consist of scientific institutions, policies, programs and instruments creating favorable conditions for the support of innovations and increasing the competitiveness of the country's economy.
The fundamental basis of the weak development of science and research, in particular of the innovation process, originates mainly from the different model of economic management before 1989, which was focused predominantly on central planning and showed a low rate of adaptability to changed market conditions. A positive aspect of socialist science and research management was, however, the obligation of business entities to invest considerable funds in the support of research activities. On the other hand, science and research concentrated predominantly on the heavy engineering industry, which was also driven by the number of armament factories in the territory of former Czechoslovakia (Zajac, 2002). Later, it was mainly the transformation from a centrally planned to a market economy that significantly weakened the implementation of research activity in small and medium enterprises. Consequently, in 2005 the government of the SR adopted the Competitiveness Strategy of Slovakia until 2010. In connection with the strategy, the Action Plan for Science, Research and Innovations was passed. One of the crucial tasks of the action program was the preparation of a program to popularize science in society, intended to increase public awareness of researchers' work, clarify the meaning and importance of the development of new products, and encourage participation in many interesting projects (Švec, 2012). Based on the action plan described, the so-called Central Information Portal for Science, Research and Innovations was established. The portal represents a national information system for science and innovations that monitors data on the development and solution of research-development projects funded from public resources, as well as on the possibilities of mobility of research workers within Slovakia and the whole of Europe.
In connection with the funding of research projects, it is also necessary to point out the existence of the Innovation Fund, established by the Ministry of Economy in accordance with the provisions of Act No. 147/1997 Coll. on Non-investment Funds. Financial support from the fund's reserves is provided by means of a recoverable financial contribution for organisations intending to execute research activities that are highly likely to achieve the required market result. The substance of loans from the Innovation Fund lies predominantly in awarding highly favourable conditions compared to commercial banks (Štofkova, 2012). As the problem of innovations obviously surpasses the borders of one department and the situation described was, from the long-term point of view, unsustainable for the Slovak Republic, the Slovak government established, based on a Resolution of 28 September 2011, the Government Council of the SR for Innovations, with the primary aim of strengthening the coordination of implemented innovation measures in the upcoming period. However, at present we are not able to state with certainty that the new body will be able to fulfill the objectives set in support of the national innovation process, mainly given the existing experience with previous similar institutions. Thus, the expected changes may repeatedly become only formal concepts without adequate application results.
CONCLUSION
It is getting harder and harder to differentiate oneself from the competitors, their products and services, resulting in fierce competition in terms of product quality and price.In the long run, survival in this competitive environment, and defense of one's position, depends crucially on continuous innovation and improved offers or apparent price advantage.
The results of several analyses confirm that many companies in Central Europe (though the statement is not limited to this region) are caught in a downward spiral, a situation in which they are subject to enormous pressure to constantly improve the quality of their products and services while there is little room to increase prices.
The European Union and its member states do not have a large choice in ensuring the future sustainability of their own economic area other than to adapt to current global innovation trends. At the same time, the ability to innovate quickly and cheaply becomes inevitable in the interest of preserving market share in a strongly competitive market. One possibility for meeting these expectations is the effort of states to support mutual cooperation of business entities using the respective legal regulations. "Government's proper role is as a catalyst and challenger. It is to encourage, or even push, companies to raise their aspirations and move to higher levels of competitive performance, even though this process may be unpleasant and difficult. Government plays a role that is inherently partial, and that succeeds only when working in tandem with favorable underlying conditions in the diamond. Government policies that succeed are those that create an environment in which companies can gain competitive advantage rather than those that involve government directly in the process" (Porter, 1991).
Though the Slovak concepts of science and research development at first sight faithfully copy the requirements set in various European legislative acts, their fulfillment, with regard to the specific conditions of the SR, will represent a demanding task in the future. We assume that the basic insufficiency will still result from the absence of a comprehensive approach to the formation and support of research activities. The establishment of various coordination authorities cannot remove the continuing division of these activities between different central state administration bodies. Although such authorities can contribute to better information for stakeholders, they cannot have a direct impact on the decision-making process, as, e.g., ministries will continue to act as independent administration bodies in the process of allocating funds to concrete projects. Analogously, a problem may arise from the merely general character of all the documents described, whose formulations individual state officials can interpret differently. Thus, the solution could be the establishment of a certain "super-ministry" or another central state administration body, which would cover the entire science and research or innovation area (with regard to the current legal state, the issue of innovations could be shifted to the Ministry of Education, Science, Research and Sport of the Slovak Republic) and at the same time would dispose of the competencies regarding observance of the national innovation strategy. Developing shared expectations among all stakeholders to promote innovation takes time and requires absolute consistency by all those responsible at all levels, as well as constant repetition of the messages.
Announcement: This contribution was elaborated within the research project VEGA 1/0900/12 titled "Increase of innovation efficiency and innovative capability of business entities using the system of open innovations with support of integrated marketing communication".
Figure 1. Phenomena of hypercompetition. The battlefield is moving towards higher quality and lower price. (D'Aveni, 2007)
Figure 2. Open innovations (according to Chesbrough, 2013)

Figure 3. IMP model of the nature of competitive advantage (Bailom et al., 2007)

A solution to the existing problems is offered by the Slovak Research and Development Agency (APVV), founded on the basis of Act No. 172/2005 Coll. on Organization of State Support for Research and Development and on the amendment to Act No. 575/2001 Coll. on Organization of Activities of the Government and Central State Administration as amended, as well as by the Scientific Grant Agency (VEGA) of the Ministry of Education, Science, Research and Sport of the Slovak Republic and of the Slovak Academy of Sciences. It is their task to facilitate research project implementation using funds allocated from the state budget of the Slovak Republic. The situation in national support of science and research also came noticeably to the fore in 2007, when the National Strategic Reference Framework for Planned Economic and Social Development of the Slovak Republic in 2007-2013 was adopted. The National Strategic Reference Framework was prepared under the supervision of the European Commission and was re-worked several times in line with its conditions. At present it represents the basic strategic document for the use of funds from the Structural Funds of the European Union and the Cohesion Fund. In the same period, the government of the Slovak Republic also adopted the proposal of the Innovation Strategy of the Slovak Republic for 2007-2013. The innovation policy of the SR for 2008-2010 and later 2011-2013, in the jurisdiction of the Ministry of Economy of the Slovak Republic, focused on closer elaboration of the specific measures presented in the innovation strategy. The task of the innovation policy of the SR was not only the continual assurance of the volume of funds spent on science and research, but also support of society's development towards innovation and creativity. The selected measures focus on the removal of obstacles to cross-border cooperation and the mobility of research workers, and the establishment of partnerships between business entities and universities; they concentrate on increasing the quality of master's education and on the promotion of scientific knowledge through open access to publications and data from publicly funded research (Ministry of Economy, 2010). Similarly, the Innovation Strategy of the Slovak Republic for 2007-2013 was supplemented by the so-called Long-term Intent of the State Scientific and Technological Policy till 2015, adopted by Resolution of the SR No. 766/2007, which has recently been amended by the so-called Fenix Strategy (2011). Fenix again brought several system changes in science and research funding in Slovakia, and at the same time unambiguously identified the tasks of the state machinery in this area. Besides the documents mentioned, the Minerva 2.0 project should also help improve the competitiveness of the Slovak Republic and support the development of a knowledge-based economy. The action plans of Minerva 2.0 bring, above all, principal reforms of the methods of professional education, the formation of an information-based society and development.
Figure 4. Public and private investments into innovations in Slovakia 2000-2011 (Statistical Office of the Slovak Republic, 2013)
"Economics",
"Business",
"Political Science"
] |
The First X-Ray Polarization Observation of the Black Hole X-Ray Binary 4U 1630–47 in the Steep Power-law State
The Imaging X-ray Polarimetry Explorer (IXPE) observed the black hole X-ray binary 4U 1630–47 in the steep power-law (or very high) state. The observations reveal a linear polarization degree of the 2–8 keV X-rays of 6.8% ± 0.2% at a position angle of 21.3° ± 0.9° east of north (all errors at 1σ confidence level). Whereas the polarization degree increases with energy, the polarization angle stays constant within the accuracy of our measurements. We compare the polarization of the source in the steep power-law state with the previous IXPE measurement of the source in the high soft state. We find that, even though the source flux and spectral shape are significantly different between the high soft state and the steep power-law state, their polarization signatures are similar. Assuming that the polarization of both the thermal and power-law emission components are constant over time, we estimate the power-law component polarization to be 6.8%–7.0% and note that the polarization angle of the thermal and power-law components must be approximately aligned. We discuss the implications for the origin of the power-law component and the properties of the emitting plasma.
INTRODUCTION
Black hole X-ray binaries (BHXRBs) harbor a stellar-mass black hole in close orbit with a companion star. The matter accreting onto the central black hole forms an accretion disk which is heated by internal friction to the point of emitting radiation that typically peaks in the X-ray band. BHXRB sources are found in different spectral states. The two main states, the high soft and low hard states (HSS and LHS, respectively), exhibit a spectrum that can be roughly described as a combination of a soft thermal component and a harder electron-scattering component with reflection by a cold medium. In the HSS, the X-rays are dominated by the thermal accretion disk emission followed by a non-thermal tail extending beyond 500 keV. This state is often fitted with a multi-temperature blackbody model and a power law ∝ E^−Γ with a photon index of Γ ∼ 2−2.2 (Zdziarski & Gierliński 2004). In the LHS, the X-ray emission is dominated instead by photons that Compton scatter in a hot coronal plasma, though a low-temperature disk component can still be detected (McClintock & Remillard 2006). In this state, BHXRB spectra consist of a cutoff power-law component with a typical photon index of 1.5 ≤ Γ ≤ 2.0 and an exponential cutoff at high (∼100 keV) energies, as well as emission from the corona reflected off the disk (George & Fabian 1991; Done et al. 2007). BHXRBs can also be found in the steep power-law (SPL) or very high state. The SPL state is characterized by competing thermal and power-law components, where the power-law component has a photon index of Γ > 2.4 (steeper than the high-energy tail of the HSS and the Γ ∼ 1.7 detected in the LHS) (Remillard & McClintock 2006).
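For orientation, the sketch below evaluates schematic photon spectra of the kind described above. The functional forms, normalizations and parameter values are illustrative only; they are not the xspec models (kerrbb, nthcomp, etc.) fitted later in this paper.

```python
import numpy as np

def cutoff_powerlaw(E, norm, Gamma, E_cut):
    """Photon spectrum dN/dE = norm * E**(-Gamma) * exp(-E/E_cut)."""
    return norm * E**(-Gamma) * np.exp(-E / E_cut)

def blackbody_shape(E, norm, kT):
    """Schematic blackbody photon spectrum, dN/dE ~ E^2 / (exp(E/kT) - 1)."""
    return norm * E**2 / np.expm1(E / kT)

E = np.logspace(np.log10(2.0), np.log10(50.0), 200)       # energy grid, keV

lhs = cutoff_powerlaw(E, 1.0, Gamma=1.7, E_cut=100.0)      # LHS: hard cutoff power law
hss = blackbody_shape(E, 1.0, kT=1.0) + 0.03 * E**(-2.1)   # HSS: thermal + weak steep tail
spl = blackbody_shape(E, 1.0, kT=1.5) + 0.3 * E**(-2.6)    # SPL: thermal + strong steep tail
```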
The Imaging X-ray Polarimetry Explorer (IXPE, Weisskopf et al. 2022) is a space-based observatory launched on 2021 December 9. IXPE has measured the linear polarization of the 2-8 keV X-rays from several BHXRBs, giving new insights into the configuration and properties of their emitting plasmas. The IXPE observations of the BHXRB Cyg X-1 in the LHS revealed a 4% polarization aligned with the black hole radio jet, supporting the hypothesis that the jet might be launched from the black hole's inner X-ray emitting region (Krawczynski et al. 2022). These results also revealed that the hot coronal plasma is extended parallel to the accretion disk plane and is seen at a higher inclination than the binary. IXPE observed a high polarization degree of ∼20%, perpendicular to the radio ejections, from the black hole candidate Cyg X-3, suggesting that the primary source is inherently highly luminous but obscured so that only the reflected emission can be observed (Veledina et al. 2023). The IXPE observations of the low-inclination high-mass BHXRB LMC X-1 in the HSS gave only an upper limit on the total polarization degree of <2.2% (Podgorny et al. 2023) for a combination of two main spectral components: dominant thermal emission with a modest contribution of Comptonization.
Observations of the transient low-mass X-ray binary (LMXRB) 4U 1630-47 with the Uhuru satellite were first reported in Forman et al. (1976) and Jones et al. (1976), describing four outbursts occurring every ∼600 days. The X-ray spectral and timing properties of the LMXRB during an outburst in 1984 suggest the compact object of 4U 1630-47 is a black hole candidate (Parmar et al. 1986), albeit with unusual outburst behavior (Chatterjee et al. 2022) indicative of a more complex system. The source spectrum tends to show strong, blueshifted absorption lines corresponding to Fe XXV and Fe XXVI transitions during the soft accretion states (Pahari et al. 2018; Gatuzz et al. 2019). Previous measurements of the 4U 1630-47 dust-scattering halo were used to estimate a distance range of 4.7-11.5 kpc (Kalemci et al. 2018). From the detection of short-duration dips in its X-ray light curve during outburst, a relatively high inclination of 60°-75° has been inferred (Kuulkers et al. 1998). Various reflection spectral modeling efforts have consistently measured a high spin: a = 0.985 (+0.005/−0.014) (King et al. 2014), a = 0.92 ± 0.04 (Pahari et al. 2018), and a ≳ 0.9 (Connors et al. 2021).
IXPE previously observed 4U 1630-47 in the HSS, where the detected emission was primarily from the thermal accretion disk (Ratheesh et al. 2023, henceforth Paper I). That observation revealed that the polarization degree increased with energy from ∼6% at 2 keV to ∼10% at 8 keV. The high polarization degree and its energy dependence cannot be explained in terms of a standard geometrically thin accretion disk with a highly or fully ionized accretion disk atmosphere (Chandrasekhar 1960; Sobolev 1949, 1963). While a standard thin disk viewed at inclinations ≳85° would produce a sufficiently high energy-integrated polarization degree, relativistic effects would lead to a decrease of the polarization degree with energy, contrary to the observed increase. Such a high inclination would also lead to eclipsing of the source, which has not been detected. In Paper I we argue that a geometrically thin disk with a partially ionized, outflowing emitting plasma can explain the observations. The absorption in the emitting plasma means that escaping emission is likelier to have scattered only once, and it ends up being highly polarized parallel to the disk surface (Loskutov & Sobolev 1979, 1981; Taverna et al. 2021). A vertically outflowing emitting plasma leads to increased emission angles in the local disk frame due to relativistic aberration, resulting in a higher polarization degree (e.g. Beloborodov 1998; Poutanen et al. 2023). Including absorption effects and the relativistic motion in the models achieves proper fits of the data for a thin accretion disk of a slowly spinning (a ≤ 0.5) black hole seen at inclination i ≈ 75° when the emitting plasma has an optical thickness of τ ∼ 7 and moves with a vertical velocity v ∼ 0.5c.
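The aberration argument can be made concrete with the standard special-relativistic angle transformation, cos θ' = (cos θ − β)/(1 − β cos θ). The following is a sketch of that single formula only, not the full radiative-transfer modeling of Paper I:

```python
import numpy as np

def comoving_mu(mu_lab, beta):
    """Relativistic aberration of a photon direction: cosine of the zenith
    angle in the frame of plasma outflowing vertically with speed beta = v/c,
    given the cosine mu_lab of the zenith angle in the disk (lab) frame."""
    return (mu_lab - beta) / (1.0 - beta * mu_lab)

mu_obs = np.cos(np.radians(75.0))       # line of sight at i ~ 75 deg
for beta in (0.0, 0.3, 0.5):
    theta_loc = np.degrees(np.arccos(comoving_mu(mu_obs, beta)))
    print(f"beta = {beta:.1f}: local emission angle = {theta_loc:5.1f} deg")
# Prints 75.0, 92.6 and 106.1 deg: escaping photons are emitted at
# increasingly grazing (even backward-directed) angles in the outflow frame,
# where scattering atmospheres produce higher polarization degrees.
```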
In this letter, we report on the first measurement of the polarization properties of a BHXRB in the SPL state. The letter is organized as follows. We describe the IXPE, NICER, and NuSTAR observational results of 4U 1630-47 in Section 2 and present a comparison of the polarization of the source in the HSS and the SPL states. In Section 3, we examine our results in the context of previous IXPE X-ray polarization measurements of BHXRBs and discuss scenarios that could explain the observed polarization signature.
DATA SETS, ANALYSIS METHODS, AND RESULTS
IXPE performed a target of opportunity (ToO) observation of 4U 1630-47 between 2023 March 10 and 14 for ∼150 ks after daily monitoring of the source by the Gas Slit Camera (GSC) on the Monitor of All-sky X-ray Image (MAXI) (Matsuoka et al. 2009) reported a significant increase in flux, as shown in Figure 1a. The MAXI flux was about 0.62 ph s⁻¹ cm⁻² during the gray highlighted region of the figure, which coincides with the Paper I observation, hereby referred to as the HSS data. The blue and green highlighted regions have a higher flux of approximately 2.24 ph s⁻¹ cm⁻² and 2.77 ph s⁻¹ cm⁻², respectively, signaling a change in the emission state of the source. During these later time intervals, the 4-20 keV flux shown in purple in Figure 1a increases more drastically than the 2-4 keV flux. A comparison of the NICER and NuSTAR spectra in Figure 2a for the HSS observation (black) and Periods 1 and 2 (blue and green) reveals that the source transitioned from the HSS to the SPL state. In Paper I, the power-law component of the spectra contributed ∼3% of the energy flux in the IXPE energy band. In contrast, our spectral fitting (see Appendix B) reveals that in Period 1 of the SPL state the power-law emission contributed ∼17-46% of the 2-8 keV emission, while in Period 2 this contribution increased to ∼40-92%. The soft HSS spectra are almost completely thermal, in the form of a multi-temperature black body, while the SPL spectra show an additional steep power-law component. From Figure 2a, we can see the SPL state shows an increase in 2-50 keV flux and a change in the spectral shape at energies above 5 keV. Only the HSS spectra exhibit prominent blueshifted Fe XXV and Fe XXVI lines, as previously seen in past outbursts and explained in terms of over-ionization of the wind (Díaz Trigo et al. 2014) or of an intrinsic change of the physical properties of the wind itself in the SPL state (Hori et al. 2014). Figure 2b shows a hardness-intensity diagram (HID) of 4U 1630-47 NICER data including the HSS (black) and SPL (blue and green) observations contemporaneous with the IXPE measurements, and archival data. Period 2 exhibits the highest rate, corresponding to the largest relative contribution of the power-law flux. The intensity in the 1-12 keV band increases with hardness during the transition from the HSS to the SPL state, saturating at ∼1496 counts s⁻¹. Most astrophysical black hole candidates move through a hardness-intensity diagram counter-clockwise during outbursts (see Figure 7 of Fender et al. 2004 and Figure 1 of Homan & Belloni 2005). However, Figure 2b shows 4U 1630-47 evolving in a clockwise direction near the apex of the HID, consistent with previous Suzaku observations of the source in the SPL state (Hori et al. 2014). We note that the variable motion of the source along the HID (see Figure 11 of Tomsick et al. 2005) makes it unclear if the source transitions from the HSS to the LHS through a high-intensity SPL regime or if we caught the source in an unusual pattern of motion. Furthermore, Figure 2b shows no evident bright hard state, consistent with the results of Capitanio et al. (2015), which could indicate a deviation from the standard HID Q-track shape proposed in Fender et al. (2004). Alternatively, Tomsick et al. (2014) suggest that a low large-scale magnetic field in the disk could delay the transition to the LHS.
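A hardness-intensity diagram of the kind shown in Figure 2b can be built from binned light curves as sketched below. The band edges and the hardness-ratio convention here are our assumptions for illustration; the exact definitions used for Figure 2b are not restated in this excerpt.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical NICER-like rates in 8 s bins (counts/s); a real analysis would
# extract these from calibrated event files.
soft = rng.normal(900.0, 30.0, size=1000)   # e.g. 1-4 keV band (assumed)
hard = rng.normal(450.0, 25.0, size=1000)   # e.g. 4-12 keV band (assumed)

intensity = soft + hard                     # total 1-12 keV rate per bin
hardness = hard / soft                      # one common hardness convention

hid_points = np.column_stack([hardness, intensity])  # one (x, y) per 8 s bin
```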
During the entire SPL state observation, IXPE measured an energy-averaged 2-8 keV linear polarization degree (PD) of 6.8 ± 0.2% at a polarization angle (PA) of 21.3° ± 0.9° (east of north) with a statistical confidence of over 30σ. The SPL state observation has a 1.5% smaller PD than the 8.32 ± 0.17% HSS PD reported in Paper I, at a PA 3.5° higher with respect to the previously observed 17.8° ± 0.6°. Figure 3 shows the time-averaged polarization signature during both states in 5 logarithmic energy bands. The PA is constant within 3σ during the HSS and SPL observations. A summary of the measured PD and PA in the different spectral states is given in Table 1. These values have been computed using the PCUBE algorithm of the ixpeobssim analysis software (Baldini et al. 2022). Figure 4 shows linear and constant fits of PD and PA, respectively, obtained using xspec (Arnaud 1996). The HSS and SPL state observations have a similar linear dependence of the PD on the photon energy E, with a linear model PD = p0 + α(E/1 keV). For the HSS, the reported values are p0 = 3.47 ± 0.54% and α = 1.12 ± 0.13%, with a null hypothesis probability of 3.55 × 10⁻¹⁶ for a constant function. For the SPL state Period 1 observation, these parameters change to p0 = 2.7 ± 1.3% and α = 1.08 ± 0.32%, with a null hypothesis probability of 1.42 × 10⁻² for a constant function. For the SPL state Period 2 observation, these parameters are p0 = 2.44 ± 0.70% and α = 0.88 ± 0.16%, with a null hypothesis probability of 4.56 × 10⁻⁷ for a constant function. Both the HSS and the SPL Period 1 and Period 2 observations show a relatively energy-independent PA in the IXPE band, with fitted PA values of 18.0° ± 0.5°, 21.4° ± 1.8° and 21.5° ± 0.9°, and null hypothesis probabilities of 0.607, 0.854 and 0.877, respectively. To study the polarization properties of the power-law component, we performed a polarimetric fit of the data starting from the spectral analysis described in Appendix B. We included the IXPE Q and U spectra in the spectral fit and convolved the thermal and power-law spectral components with two pollin models. This allowed us to attribute polarization to each component separately, assuming that the PD depends linearly on the photon energy E: PD = p0 + α(E/1 keV). In Paper I, we found that the only spectral component contributing significantly to the HSS emission is the thermal one. We assumed that the polarization of this thermal component remains constant between the HSS and SPL states, requiring that p0,Thermal = 3.47% and αThermal = 1.12%, as per the HSS fit shown in Figure 4a. Due to the relatively constant PA during the HSS, SPL Period 1, and SPL Period 2 observations (Figure 4b), we further assumed that the thermal and non-thermal components have equal PA and allowed it to vary between SPL periods. Additionally, the PA appears to be energy-independent, so our fits take the PA to be constant with energy: PA = ψ. As shown in Table 1, the estimates of the power-law component flux contribution depend on the model parameters used and will therefore also affect the estimate of the polarization properties of the power-law component. Figure 5 summarizes the results of our linear fits for the non-thermal component PD resulting from Fits 1 and 2, as well as the assumed thermal component PD for comparison. For Fit 1, we assumed a multi-color blackbody as the Comptonized component input radiation (Figure 6a). For the PD of the power-law component, we found αFit1 = 1.05 ± 0.45% and we set an upper limit on p0,Fit1 of 2.7%. The p0,Fit1 upper limit tells us that the Comptonization component could be unpolarized at 0 keV, but this is just an extrapolation; the power-law PD in the 2-8 keV energy range (Figure 5) shows that the component is polarized. The computed PAs for Period 1 and Period 2 are ψFit1-P1 = 21.0° ± 3.4° and ψFit1-P2 = 21.7° ± 2.2°.

Figure 3. Measured PD and PA of 4U 1630-47 in 5 logarithmic energy bins: 2.0-2.6, 2.6-3.5, 3.5-4.6, 4.6-6.1, and 6.1-8.0 keV. The black line and transparent contours show the polarization in the HSS reported in Paper I. The red solid line and solid contours show the polarization in the SPL state (this paper). The shaded and unshaded ellipses show their 68.3% and 99.7% confidence regions, respectively. Errors on PD and PA computed by ixpeobssim are derived from the Q and U gaussian errors according to the formalism developed by Kislat et al. (2015).
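The conversion from Stokes parameters to PD and PA with Gaussian error propagation can be sketched as follows. This follows the standard formalism (cf. Kislat et al. 2015) but is not the actual ixpeobssim implementation, and the example Stokes values are illustrative numbers chosen to reproduce the quoted 6.8% and 21.3°:

```python
import numpy as np

def pol_from_stokes(I, Q, U, sig_Q, sig_U):
    """PD, PA (deg east of north) and 1-sigma errors from Stokes fluxes,
    via first-order Gaussian error propagation."""
    q, u = Q / I, U / I
    pd = np.hypot(q, u)                       # PD = sqrt(q^2 + u^2)
    pa = 0.5 * np.degrees(np.arctan2(u, q))   # PA = 0.5 * atan2(u, q)
    sig_q, sig_u = sig_Q / I, sig_U / I
    sig_pd = np.sqrt((q * sig_q)**2 + (u * sig_u)**2) / pd
    sig_pa = np.degrees(0.5 * np.sqrt((q * sig_u)**2 + (u * sig_q)**2) / pd**2)
    return pd, sig_pd, pa, sig_pa

# Illustrative normalized Stokes values (q, u) = (0.050, 0.046):
pd, spd, pa, spa = pol_from_stokes(1.0, 0.050, 0.046, 0.002, 0.002)
print(f"PD = {100*pd:.1f}% +/- {100*spd:.1f}%, PA = {pa:.1f} +/- {spa:.1f} deg")
# -> PD = 6.8% +/- 0.2%, PA = 21.3 +/- 0.8 deg
```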
For Fit 2 (Figure 6b), we assume a simple blackbody as the seed for the power-law radiation. In this case, the thermal emission is the main source of flux in the 2-8 keV energy range for both Periods 1 and 2. The PD of the power-law component can be fitted with αFit2 = 0.96 ± 0.26%, and we were only able to set an upper limit on p0,Fit2 of 1.3%. The corresponding PAs for Period 1 and Period 2 are ψFit2-P1 = 21.0° ± 3.5° and ψFit2-P2 = 21.7° ± 2.1°. We also calculated the 2-8 keV average PD of the power-law component from the IXPE I, Q, and U fluxes. For Fit 1, we get 7.0 ± 3.2% and 6.8 ± 2.6% in Periods 1 and 2, respectively. For Fit 2, we get 6.8 ± 3.9% and 7.0 ± 2.2% in Periods 1 and 2, respectively.
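A band-averaged PD of a single component follows from summing its Stokes fluxes over energy bins, since Stokes parameters add linearly; averaging per-bin PDs instead would bias the result high. A minimal sketch, using a hypothetical power-law flux shape and a linear PD(E) model with constant PA (our assumptions, loosely echoing Fit 1):

```python
import numpy as np

E = np.linspace(2.0, 8.0, 25)                # bin centers, keV
flux = E**(-2.7)                             # hypothetical power-law photon flux
pd_E = 0.027 + 0.0105 * E                    # linear PD(E) model (assumed)
psi = np.radians(21.0)                       # constant PA (assumed aligned)

Q = np.sum(flux * pd_E * np.cos(2 * psi))    # Stokes fluxes add linearly
U = np.sum(flux * pd_E * np.sin(2 * psi))
I = np.sum(flux)

print(f"2-8 keV band-averaged PD = {100 * np.hypot(Q, U) / I:.1f}%")
# Prints ~6% for this toy spectrum; it illustrates the method, not the
# paper's fitted value, which also folds in the instrument response.
```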
DISCUSSION
IXPE observed 4U 1630-47 in the HSS (Paper I) and in the SPL state (this paper). We find that the HSS and SPL exhibit surprisingly similar polarization properties despite their very different energy spectra. Although the PD of the HSS (increasing from 6% to 10% between 2 and 8 keV) exceeded that of the SPL observations (increasing from 5% to 8% between 2 and 8 keV), and Figure 4a shows that the PD of Period 2 decreases with respect to Period 1, we note that the PD varied as much during the HSS observations (Fig. M3 of Paper I) as it did between the HSS and the SPL observations. The change in polarization direction of ∼3.5° is not statistically significant (3σ). While the HSS spectrum was dominated by the thermal component, our spectral analysis shows that the Comptonization component increased by a large factor between the HSS, SPL Period 1 and Period 2, although its exact flux contribution is model parameter-dependent. Since the polarization angle stays almost the same with vastly different flux contributions of the power-law component, this component has to be polarized in a similar direction as the thermal component. Our polarimetric analysis reveals that the power-law component has an energy-integrated PD of 6.8-7.0% in both cases analyzed, i.e. using either a multicolor disk blackbody or a single-temperature blackbody as seed photons for Comptonization. Since the two cases suggest substantially different contributions of this component to the total flux, we consider this estimate to be quite independent of the model assumptions. Note that the dominating thermal component in the HSS had a PD of 8.3%; thus the Comptonized component is slightly less polarized than the thermal one, by approximately 1.3-1.5%.
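The argument that the two components must be polarized in similar directions can be illustrated by adding components in Stokes space. The numbers below are illustrative, chosen to resemble the measured values; with aligned PAs, changing the power-law flux fraction from a few percent to about half barely moves the total PD and PA, as observed:

```python
import numpy as np

def total_pd_pa(fracs, pds, pas_deg):
    """Total PD/PA of a sum of components with flux fractions `fracs`,
    component PDs `pds` and PAs `pas_deg`; normalized Stokes q and u add
    linearly, weighted by flux fraction."""
    pas = np.radians(pas_deg)
    q = np.sum(fracs * pds * np.cos(2 * pas))
    u = np.sum(fracs * pds * np.sin(2 * pas))
    return np.hypot(q, u), 0.5 * np.degrees(np.arctan2(u, q))

# Thermal PD 8.3%, power-law PD 6.9%, PAs aligned at 21 deg (illustrative).
for f_pl in (0.03, 0.5):       # HSS-like vs SPL-like power-law flux fraction
    fr = np.array([1 - f_pl, f_pl])
    pd, pa = total_pd_pa(fr, np.array([0.083, 0.069]), np.array([21.0, 21.0]))
    print(f"f_pl = {f_pl:.2f}: PD = {100*pd:.1f}%, PA = {pa:.1f} deg")
# -> f_pl = 0.03: PD = 8.3%, PA = 21.0 deg
# -> f_pl = 0.50: PD = 7.6%, PA = 21.0 deg
```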
This congruence of the polarization degrees and directions is puzzling if the emission comes from spatially distinct regions and is produced by different physical emission mechanisms. Direct thermal emission from the disk tends to be polarized parallel to the accretion disk, except close to the innermost stable circular orbit (ISCO), where strong gravitational effects rotate the PA by about 10° (Connors & Stark 1977; Loktev et al. 2022). Gravitationally lensed photons that scatter off the disk (known as returning radiation) are polarized perpendicular to the direct thermal radiation (Schnittman & Krolik 2009). Comptonization, commonly invoked to explain the power-law component, gives rise to a polarization perpendicular to the spatial extent of the Comptonizing plasma (Poutanen & Svensson 1996; Schnittman & Krolik 2010; Krawczynski & Beheshtipour 2022). The apparent alignment of the polarization directions of the thermal and power-law emission could imply that the Comptonizing plasma of the SPL state is extended perpendicular to the accretion disk, contrary to what we inferred for the hard state of Cyg X-1 (Krawczynski et al. 2022). However, it is worth noting that for a slab corona geometry, the polarization is parallel to the disk at photon energies where the first Compton scattering dominates the flux (Poutanen et al. 2023). Since the temperature of the disk is high (kT_bb ≈ 1.5 keV), the first scattering could dominate in the IXPE energy range such that the PA of the disk and the up-scattered component are aligned.
Based on the IXPE results, we posit that the HSS and SPL states could exhibit similar disk geometries and involve similar emission processes. In the scenario discussed in Paper I, an outflowing, partially-ionized accretion disk atmosphere produces the observed high PD as a result of Thomson scattering. The electrons in the outflow attain the Compton temperature (a few keV) if efficient heating and acceleration mechanisms, such as shocks, magnetic reconnection, and turbulence, do not operate. An instantaneous increase of electron heating and acceleration may lead to a change of the scattering mechanism, from Thomson to inverse Compton, producing the observed power-law component. During the transitions between the soft and hard states, the observed spectra are known to be well fitted with Comptonization from low-temperature thermal or hybrid (thermal and non-thermal) electrons (Gierliński et al. 1999; Zdziarski et al. 2001; Życki et al. 2001), with a typical temperature of the Maxwellian part of ∼10 keV. An increased electron temperature, in general, causes a reduction of the PD (e.g., Fig. 2 of Poutanen 1994); however, for these low electron temperatures the effect is rather small and the polarization signatures remain similar to (albeit not exactly the same as in) the Thomson-scattering case. The observed variations of the PD during the HSS and SPL states could result from changes in the scattered fraction and/or the outflow velocities.
As mentioned in Paper I and in West & Krawczynski (2023), non-vanishing accretion disk geometrical thicknesses may play a role in explaining the high polarization fractions of the source. Spectral fitting indicates that the disk temperature kT_bb increased between the HSS (in Paper I) and the SPL state. This increase in temperature is expected if a thicker accretion disk is present in the SPL state (Tomsick et al. 2005). As higher energy photons originate closer to the black hole and are more likely to scatter, this scenario naturally explains the PD increasing with photon energy. In contrast, reflection off distant features (e.g. off a wind) would give rise to a rather energy-independent PD. We also note that the neutral hydrogen column density is much smaller in the SPL than in the HSS state. The similar polarization properties of the emission from both states confirm our conclusion from Paper I that scattering off the wind is most likely not the dominant mechanism explaining the high polarization of the X-ray emission.
On the other hand, we note that spectral timing studies of black hole LMXRBs suggest that their coronae contract in the hard state and then expand during the hard-to-soft state transition (Wang et al. 2022). Soft reverberation lag modeling employing a lamppost corona estimates that the corona height increases by an order of magnitude during the state transition (Wang et al. 2021). If this increase in height were to be accompanied by a decrease in width, we could expect a change in the shape of the corona from laterally extended in the LHS to vertically extended (and hence giving rise to large reverberation lags) in the intermediate states. Our polarization results could then be explained by a cone or lamppost-shaped corona in the SPL state. Future polarization measurements of the source, particularly in the LHS, could help constrain the evolution of the corona geometry as well as the polarization of the power-law component.
In other alternative scenarios, the power-law component could originate as synchrotron emission from a jet perpendicular to the accretion disk threaded by a magnetic field aligned with the jet; or from synchrotron emission from non-thermal electrons accelerated in the plunging region, gyrating in a magnetic field perpendicular to the accretion disk (Hankla et al. 2022). This model would require just the right amount of magnetic field non-uniformity to explain the rather low PD of the power-law emission for synchrotron emission. Yan & Wang (2011) propose that the SPL state originates from synchrotron radiation of magnetized compact spots near the ISCO, down-scattered by thermal electrons in the corona. Here too, some fine-tuning is required so that the combined thermal and power-law emission ends up having polarization signatures similar to those of the thermal emission alone.

The cloudy (Ferland et al. 2017) absorption table was used in Paper I to model the absorption lines detected in the observation of the source in the HSS, likely produced by a highly-ionized outflowing plasma (i.e. with ionization parameter ξ ≈ 10⁵ and hydrogen column density N_H ≈ 10²⁴ cm⁻²). If we use the cloudy component and assume the same ionization parameter as in the HSS observation, it is possible to obtain an upper limit of N_H ≤ 10²² cm⁻² on the wind column density along the line of sight. However, if the ionization parameter is allowed to vary freely, it is usually fitted to unrealistically high values. Additionally, the SPL state observation shows no prominent absorption lines, so this component was no longer used in the fitting procedure. We used the nthcomp component assuming either disk blackbody or blackbody seed radiation. For Fit 1, we assumed multicolor disk blackbody seed radiation (inp_type parameter = 1) and fixed its temperature to the values obtained from initial modeling using diskbb (kT_bb = 1.46 (+0.02/−0.01) keV and 1.54 (+0.01/−0.02) keV in Periods 1 and 2, respectively). For Fit 2, we used a single blackbody as the input radiation (inp_type parameter = 0) and instead left the temperature free to vary in the fitting procedure. The nthcomp input radiation modified the flux contributions, as presented in Table 1, and consequently the polarization properties of the power-law component. This is due to the different low-energy contributions of nthcomp when using a multicolor blackbody in place of a single blackbody, which influences the kerrbb accretion rate in the fitting procedure and consequently the contribution of the thermal radiation to the total flux. Figure 6 shows the unfolded spectra and data residuals for both fits. The Period 2 kerrbb contribution to the total flux in Fit 2 is significantly larger than in Fit 1, as denoted by the dashed green lines.
Additionally, following Paper I, an empirical absorption edge model was used at 2.42 and 9.51 keV to account for reported instrumental features in the NICER and NuSTAR spectra, respectively (Wang et al. 2021; Podgorny et al. 2023). The cross-calibration model MBPO employed in Krawczynski et al. (2022) was used to account for cross-calibration uncertainties between NICER and NuSTAR, allowing the spectral slope and normalization to vary. For the NuSTAR focal plane module A (FPMA) we fixed the normalization to 1 for all fitting groups, corresponding to the recommended value in Madsen et al. (2022), and kept the slope fixed to zero. For the fit presented in Table 2 we obtained normalization values of 1.035 ± 0.002 and 0.994 ± 0.001, and slope values of 0.0664 ± 0.0033 and 0.0095 ± 0.0025, for the NICER and NuSTAR FPMB observations, respectively. The best-fit parameters of this analysis are shown in Table 2, with χ²/dof = 2502.68/2399 when using disk blackbody input radiation for the nthcomp component, and χ²/dof = 2470.75/2399 assuming a blackbody input for the power-law component. It is worth noting that in our simplified approach the data are consistently above the model in the high-energy tail of the spectra (45-70 keV) with both models, further motivating the need for a more detailed analysis of the spectral properties of this source.
Figure 1. X-ray light curves of 4U 1630-47. a) MAXI light curve between MJD 59800 (2022 August 9) and MJD 60025 (2023 March 22). The fluxes in the 2-20, 2-4 and 4-20 keV energy bands are reported in black, orange, and purple, respectively. The gray-shaded region corresponds to the observation reported in Paper I when the source was in the HSS, while the regions shaded in blue (Period 1) and green (Period 2) correspond to the observation reported in this paper when the source was in the SPL state. b) From top to bottom: IXPE, NICER, and NuSTAR light curves from 2023 March 10 to March 14. Observations of Periods 1 and 2 are shown by the blue and green data points, respectively, with a sudden flux increase at around MJD 60014.57 indicated by the vertical dashed line.
Figure 2. a) NICER (2-10 keV) and NuSTAR (3-50 keV) spectra of the HSS (black) from Paper I and from the current SPL Period 1 (blue) and Period 2 (green) observations. The spectra were unfolded using a unit constant model for both instruments. b) Hardness-intensity diagram from NICER data of the HSS (black) and SPL state Period 1 (blue) and Period 2 (green), in 8 s intervals. Data from all previous NICER observations of 4U 1630-47 are shown in gray. Rates have been normalized as if all 52 of NICER's FPMs were pointing at the source.
Figure 4. a) PD and b) PA as a function of energy in the IXPE 2-8 keV energy range. Comparison of the 4U 1630-47 polarization properties in the HSS (black), reported in Paper I, and in the SPL Period 1 (blue) and Period 2 (green) discussed in this paper. Linear fits for the PD and constant fits for the PA are also shown as dotted lines (see the text for the fit details).
Figure 5. Best linear fits with respect to energy of the thermal component (black), the power-law component for Fit 1 (red), and the power-law component for Fit 2 (yellow). The shaded regions show the 1σ confidence intervals.
Figure 6. Fits of 4U 1630-47 NICER and NuSTAR X-ray spectra for Period 1 (blue) and Period 2 (green): a) Disk blackbody assumed as seed radiation for the power-law component (Fit 1). b) Single-temperature blackbody assumed as seed radiation for the power-law component (Fit 2). The top panels show the spectra unfolded around the best-fitting model in FE representation, with the total model (solid) and the kerrbb (dashed) and nthcomp (dotted) contributions for each data set; the bottom panels show the data-model residuals in σ.
Table 1. Polarization properties in different spectral states of 4U 1630-47. The estimated fractions of the thermal and power-law flux contributing to the 2-8 keV energy band are also given. Columns: spectral state, polarization degree, polarization angle, thermal contribution, power-law contribution. Note: flux contributions are parameter-dependent; see Appendix B for more details on the model used. Contributions are calculated using either disk blackbody seed radiation (Fit 1) or blackbody seed radiation (Fit 2) for the power-law component of the spectra in the SPL Period 1 and 2 cases.
"Physics"
] |
Vertex Ramsey properties of randomly perturbed graphs
Given graphs $F,H$ and $G$, we say that $G$ is $(F,H)_v$-Ramsey if every red/blue vertex colouring of $G$ contains a red copy of $F$ or a blue copy of $H$. Results of {\L}uczak, Ruci\'nski and Voigt and, subsequently, Kreuter determine the threshold for the property that the random graph $G(n,p)$ is $(F,H)_v$-Ramsey. In this paper we consider the sister problem in the setting of randomly perturbed graphs. In particular, we determine how many random edges one needs to add to a dense graph to ensure that with high probability the resulting graph is $(F,H)_v$-Ramsey for all pairs $(F,H)$ that involve at least one clique.
Introduction
For $r \in \mathbb{N}$, a sequence of (not necessarily distinct) graphs $H_1, \dots, H_r$, and a graph $G$, we say that $G$ is $(H_1, \dots, H_r)_v$-Ramsey if for every $r$-colouring of the vertices of $G$, there is some $i \in [r]$ for which $G$ contains a copy of $H_i$ whose vertices are all coloured in the $i$th colour. Similarly, we say $G$ is $(H_1, \dots, H_r)$-Ramsey if for every $r$-colouring of the edges of $G$, there is some $i \in [r]$ for which $G$ contains a copy of $H_i$ whose edges are all coloured in the $i$th colour. In the case when $r = 2$ we take the convention that the colours used are red and blue. If $H_1 = \cdots = H_r = H$ we write e.g. $(H_1, \dots, H_r)_v$-Ramsey as $(H, r)_v$-Ramsey.
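Since this definition is purely combinatorial, it can be checked directly on very small instances. The following Python sketch (using networkx; all function names are ours, not the paper's) tests whether a graph $G$ is $(F,H)_v$-Ramsey by enumerating all red/blue vertex colourings, and illustrates the pigeonhole bound discussed next.

```python
from itertools import product
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def contains_copy(G, H):
    """True if G contains a (not necessarily induced) copy of H."""
    return GraphMatcher(G, H).subgraph_is_monomorphic()

def is_vertex_ramsey(G, F, H):
    """Brute-force check that every red/blue vertex colouring of G yields a
    red copy of F or a blue copy of H (feasible only for tiny G)."""
    nodes = list(G.nodes)
    for colouring in product((0, 1), repeat=len(nodes)):
        red  = [v for v, c in zip(nodes, colouring) if c == 0]
        blue = [v for v, c in zip(nodes, colouring) if c == 1]
        if not (contains_copy(G.subgraph(red), F)
                or contains_copy(G.subgraph(blue), H)):
            return False
    return True

# Pigeonhole bound: K_n is (K_2, K_3)_v-Ramsey iff n >= v(K_2) + v(K_3) - 1 = 4.
K = nx.complete_graph
print(is_vertex_ramsey(K(4), K(2), K(3)))   # True
print(is_vertex_ramsey(K(3), K(2), K(3)))   # False
```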
The classical question in Ramsey theory is to establish the smallest $n \in \mathbb{N}$ such that the complete graph $K_n$ on $n$ vertices is $(H_1, \dots, H_r)$-Ramsey. In general, by Ramsey's theorem such an $n$ is known to exist, though relatively few such Ramsey numbers are known precisely. In contrast to this, the analogous question in the setting of vertex colourings is completely trivial. Indeed, the pigeonhole principle implies $K_n$ is $(H_1, \dots, H_r)_v$-Ramsey if $n = v(H_1) + \cdots + v(H_r) - r + 1$ but not $(H_1, \dots, H_r)_v$-Ramsey if $n$ is any smaller. In the perturbed setting, it is known how many random edges one must add to a dense graph of given density to w.h.p. ensure the resulting graph is $(K_r, K_s)$-Ramsey for all values of $(r, s)$ (except for the case when $r = 4$ and $s \ge 5$). See [7] for other results on this topic.
In this paper, we focus on vertex Ramsey properties of randomly perturbed graphs; in particular we resolve the $(H, K_r)_v$-Ramsey problem for $r \ge 2$ and arbitrary $H$. To state our results we first introduce the following notation.

Definition 1.5. Fix some $d \in [0,1]$. Then for a pair of graphs $(F, H)$, we say that $p = p(n)$ is a perturbed vertex Ramsey threshold function for the pair $(F, H)$ at density $d$ if:
(i) For any $q(n) = \omega(p(n))$ and any sequence $(G_n)_{n \in \mathbb{N}}$ of graphs of density¹ at least $d$ with $v_{G_n} = n$ for each $n \in \mathbb{N}$, with high probability $G_n \cup G(n, q)$ is $(F, H)_v$-Ramsey.
(ii) There exists a sequence of $n$-vertex graphs $(G_n)_{n \in \mathbb{N}}$ of density at least $d$, such that if $q(n) = o(p(n))$, then with high probability $G_n \cup G(n, q)$ is not $(F, H)_v$-Ramsey.
We denote by $p(n; F, H, d)$ the² perturbed vertex Ramsey threshold function for $(F, H)$ at density $d$. If there exist $C, c > 0$ such that $q(n) \ge C p(n)$ suffices for (i) and $q(n) \le c p(n)$ suffices for (ii), we say that the threshold function is sharp. If it is the case that every sufficiently large graph of density at least $d$ is $(F, H)_v$-Ramsey then we define $p(n; F, H, d) := 0$.
Note we can analogously define the perturbed vertex Ramsey threshold function for the $r$-coloured case; that is, given graphs $H_1, \dots, H_r$ we define the threshold $p(n; H_1, \dots, H_r, d)$ in the natural way. If $H_1 = \cdots = H_r = H$ we write $p(n; H_1, \dots, H_r, d)$ as $p(n; H, r, d)$.

Example 1.6. Casting Theorem 1.4 into this notation, we have that $p(n; F, H, 0) = n^{-1/m_K(F,H)}$ for a pair of graphs $F, H$ with $m_1(F) \le m_1(H)$ (when $E(F)$ is nonempty and $H$ is not a matching), and this threshold is sharp.
Before we state our main result it is instructive to consider the following result of Krivelevich, Sudakov and Tetali [19], which determines how many random edges need to be added to a dense graph to force the appearance of $H$ as a subgraph. In our vertex-Ramsey framework, this corresponds to making the graph $(K_1, H)_v$-Ramsey. The corresponding threshold probability requires the following definition.

Definition 1.7. For a graph $H$, the appearance threshold for $H$ in the random graph $G(n,p)$ is determined by the parameter
$$m(H) := \max\{ e_J/v_J : J \subseteq H, \, v_J \ge 1 \}.$$
Now, given any $k \in \mathbb{N}$, let
$$m(H; k) := \min \max_{i \in [k]} m(H_i),$$
where the minimum is over all partitions of $H$ into $k$ induced subgraphs.³

Theorem 1.8 ([19]). Let $k \ge 2$ and $H$ be a graph. Given any $d$ with $1 - 1/(k-1) < d < 1 - 1/k$, we have $p(n; K_1, H, d) = n^{-1/m(H;k)}$.

Suppose that $G$ is a graph of density more than $1 - 1/(k-1)$ and we wish to find a copy of $H$ in $G \cup G(n,p)$. Informally, we partition $H$ into $k$ parts $H_1, \dots, H_k$ that are as sparse as possible, with the idea being to use the (few) edges of $G(n,p)$ to build the parts $H_i$, and then find the edges between parts in the dense graph, thereby completing a copy of $H$. Note that $m(H; k) = 0$ if and only if $\chi(H) \le k$, in which case we can partition $H$ into $k$ independent sets. Then we do not require any random edges; the dense graph itself will already contain $H$.

¹ Here we refer to the standard density of $G$. That is, $d(G) = e_G/\binom{v_G}{2}$.
² As is the case in random graph theory, the threshold function is not uniquely determined but rather determined up to constants.
³ By a partition of $H$ into $k$ induced subgraphs, we mean there are $k$ (possibly empty) graphs $H_1, \dots, H_k$ such that each $H_i$ is an induced subgraph of $H$; the $H_i$ are all pairwise vertex-disjoint; and $V(H) = V(H_1) \cup \cdots \cup V(H_k)$.
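The parameters of Definition 1.7 are easy to evaluate mechanically on tiny graphs. The sketch below is a brute-force illustration (exponential in $v_H$, so for illustration only), assuming the min–max form of $m(H;k)$ stated above, with empty or edgeless parts contributing $0$.

```python
from itertools import combinations, product
import networkx as nx

def max_density(H):
    """m(H) = max over nonempty subgraphs J of e(J)/v(J); the appearance
    threshold of H in G(n,p) is n^(-1/m(H))."""
    best = 0.0
    for k in range(1, H.number_of_nodes() + 1):
        for S in combinations(H.nodes, k):
            best = max(best, H.subgraph(S).number_of_edges() / k)
    return best

def m_partition(H, k):
    """m(H;k) = min over partitions of V(H) into k (possibly empty) parts of
    max_i m(H[U_i])."""
    nodes = list(H.nodes)
    best = float("inf")
    for assignment in product(range(k), repeat=len(nodes)):
        parts = [[v for v, a in zip(nodes, assignment) if a == i] for i in range(k)]
        best = min(best, max((max_density(H.subgraph(P)) if P else 0.0) for P in parts))
    return best

K4 = nx.complete_graph(4)
print(max_density(K4))      # 1.5 (= 6 edges / 4 vertices)
print(m_partition(K4, 2))   # 0.5: split 2+2, each part a single edge
print(m_partition(K4, 4))   # 0.0: chi(K_4) = 4 <= 4, no random edges needed
```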
Our main result (Theorem 1.11 below) essentially resolves the $(H_1, H_2)_v$-Ramsey problem for randomly perturbed graphs for all pairs $(H_1, H_2)$ involving at least one clique. To state this result we define some notation capturing the probabilistic vertex Ramsey thresholds for all pairs of graphs.

Definition 1.9. Given graphs $F$ and $H$, we write
$$\beta(F, H) := \begin{cases} m(H) & \text{if } E(F) = \emptyset, \\ 1 & \text{if } F \text{ and } H \text{ are both matchings}, \\ m_K(F, H) & \text{otherwise}. \end{cases}$$
That is, $\beta(F, H)$ is defined so that $n^{-1/\beta(F,H)}$ is the threshold for the property that $G(n,p)$ is $(F, H)_v$-Ramsey. We can now define our perturbed vertex Ramsey threshold, which is an extension of Definition 1.7.

Definition 1.10. Given $r \in \mathbb{N}$, $k \ge 2$ and a graph $H$, define
$$m^*(K_r, H; k) := \max_{(r_1, \dots, r_k)} \; \min_{H = H_1 \cup \cdots \cup H_k} \; \max_i \, \beta(K_{r_i + 1}, H_i).$$
Here the first maximum is taken over all tuples $(r_1, \dots, r_k)$ of non-negative integers that sum to at most $r - 1$; the minimum is over all partitions of $H$ into $k$ induced subgraphs; the final maximum is over all $i$ such that $H_i$ contains at least one vertex.
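Definition 1.10 can likewise be evaluated mechanically once $\beta$ is available. The sketch below treats $\beta$ as a caller-supplied black box (a hypothetical callable), since $\beta$ depends in turn on the Kreuter parameter $m_K(F, H)$, which is not spelled out in this section.

```python
from itertools import product

def m_star(r, k, H, beta):
    """Brute-force max-min-max from Definition 1.10.

    H is a networkx graph; beta(s, Hi) must return beta(K_s, H_i) and is
    supplied by the caller (we treat it as a black box here)."""
    nodes = list(H.nodes)
    best = 0.0
    # First maximum: tuples (r_1,...,r_k) of non-negative integers, sum <= r-1.
    for rs in product(range(r), repeat=k):
        if sum(rs) > r - 1:
            continue
        inner = float("inf")
        # Minimum: all partitions of V(H) into k induced subgraphs.
        for assignment in product(range(k), repeat=len(nodes)):
            parts = [[v for v, a in zip(nodes, assignment) if a == i]
                     for i in range(k)]
            # Final maximum: over parts containing at least one vertex.
            value = max(beta(rs[i] + 1, H.subgraph(parts[i]))
                        for i in range(k) if parts[i])
            inner = min(inner, value)
        best = max(best, inner)
    return best
```

Plugging in any concrete implementation of $\beta$ from Definition 1.9 then yields the threshold exponent of Theorem 1.11 for small instances.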
Theorem 1.11. Let $r \in \mathbb{N}$, $k \ge 2$ and $H$ be a graph. Given any $d > 0$ so that $1 - 1/(k-1) < d < 1 - 1/k$, we have $p(n; K_r, H, d) = n^{-1/m^*(K_r, H; k)}$.
Note that Theorem 1.11 is general in the sense that it covers the full range of densities $d \in (0,1)$ (not just small values of $d$). Further, Theorem 1.8 is precisely the $r = 1$ case of Theorem 1.11.

Although Theorem 1.11 does not always guarantee sharp thresholds, by analysing its proof one can see that, in all cases except when $s \in \{t/2, t\}$, the threshold in Corollary 1.12 is determined by Theorem 1.4, which does provide a sharp result. Hence, we obtain the moreover part of the corollary. On the other hand, when $s \in \{t/2, t\}$, the threshold probability in the corollary comes from the appearance of a subgraph, which does not have a sharp threshold.

Recall that in the random graph setting one needs $\Theta(n^{2 - 1/m_K(K_s, K_t)})$ random edges to ensure $G(n,p)$ is $(K_s, K_t)_v$-Ramsey. Corollary 1.12 demonstrates that one needs far fewer random edges to make any dense $n$-vertex graph $(K_s, K_t)_v$-Ramsey. However, the precise number of random edges depends (in a rather subtle way) on arithmetic properties of the pair $(s, t)$.
1.3. Some intuition for vertex Ramsey problems in randomly perturbed graphs. In this section our aim is to convince the reader that the vertex Ramsey problem for randomly perturbed graphs is in general more subtle than its counterpart in the random graph setting.
In Theorem 1.2 the threshold is universal in the following sense: the threshold for $G(n,p)$ being $(H, r)_v$-Ramsey is the point above which every linear-sized subset of $G(n,p)$ w.h.p. contains a copy of $H$. It is easy to see that this property guarantees a graph is $(H, r)_v$-Ramsey (as one of the colour classes in any vertex $r$-colouring will have linear size). Thus, crucially, the 'reason' for the location of the threshold is the same for every graph $H$ (that is not a matching). Moreover, this reason is independent of the number of colours used.

Similarly, the threshold in Theorem 1.4 is universal. Indeed, given any sequence of graphs $H_1, \dots, H_r$ as in the theorem, the intuition behind the threshold for the property of $G(n,p)$ being $(H_1, \dots, H_r)_v$-Ramsey is the same: the threshold is the point at which the expected number of vertex-disjoint copies of $H_r$ is roughly the same order of magnitude as the maximal order of an $H_{r-1}$-free subgraph of $G(n,p)$. (See the discussion in [17].) Again this threshold does not depend on the number $r$ of colours.

On the other hand, the threshold for the perturbed vertex Ramsey problem can depend on the number of colours. Indeed, we saw in Corollary 1.12 that for every $t \ge 3$ and $d \in (0, 1/2)$ the number of random edges required to ensure an $n$-vertex graph of density $d$ is w.h.p. $(K_t, 2)_v$-Ramsey is significantly smaller than $p(n; K_t, 2, 0)$; that is, significantly smaller than the number of edges needed to ensure $G(n,p)$ is w.h.p. $(K_t, 2)_v$-Ramsey. On the other hand, given any $r \ge 4$, we actually have that $p(n; K_t, r, d) = p(n; K_t, r, 0)$ for all $d \in (0, 1/2]$. In fact, this phenomenon is part of a more general observation.

Observation 1.13. Let $r \ge 4$ and $H$ be a graph that is not a matching. Let $d \in (0, 1/2]$. Then $p(n; H, r, d) = n^{-1/m_1(H)} = p(n; H, r, 0)$.
Indeed, if $p \ge C n^{-1/m_1(H)}$ for some constant $C$, Theorem 1.2 shows that $G(n,p)$ itself will be $(H, r)_v$-Ramsey w.h.p., and hence this is an upper bound on the perturbed vertex Ramsey threshold.

For the lower bound, take $G$ to be a complete balanced bipartite $n$-vertex graph with vertex classes $A$ and $B$. By Theorem 1.2, if $p \le c n^{-1/m_1(H)}$ for some constant $c$, then with high probability both $G(n,p)[A]$ and $G(n,p)[B]$ are not $(H, 2)_v$-Ramsey. This therefore implies there exists a $4$-colouring of the vertices of $G \cup G(n,p)$ without a monochromatic copy of $H$.
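The composed 4-colouring in this lower bound can be demonstrated directly on a small perturbed graph. In this sketch (our own construction, brute-force and for tiny graphs only) each side receives a private pair of colours, so every colour class lies inside $A$ or inside $B$; for sparse random edges a valid 2-colouring of each side exists with overwhelming probability, and a production version would handle the rare `None` case.

```python
import random
from itertools import combinations, product
import networkx as nx
from networkx.algorithms.isomorphism import GraphMatcher

def mono_free_2colouring(G, H, offset=0):
    """First 2-colouring of V(G) (colours offset, offset+1) in which no colour
    class contains a copy of H, or None if G is (H,2)_v-Ramsey."""
    nodes = list(G.nodes)
    for col in product((0, 1), repeat=len(nodes)):
        classes = [[v for v, c in zip(nodes, col) if c == b] for b in (0, 1)]
        if not any(GraphMatcher(G.subgraph(cls), H).subgraph_is_monomorphic()
                   for cls in classes):
            return {v: c + offset for v, c in zip(nodes, col)}
    return None

random.seed(0)
A, B = list(range(5)), list(range(5, 10))
G = nx.complete_bipartite_graph(5, 5)      # the dense bipartite graph G_n
for side in (A, B):                        # sparse random edges inside A and B
    for u, v in combinations(side, 2):
        if random.random() < 0.2:
            G.add_edge(u, v)

H = nx.complete_graph(3)
colouring = {**mono_free_2colouring(G.subgraph(A), H, offset=0),
             **mono_free_2colouring(G.subgraph(B), H, offset=2)}
# Every colour class lies entirely inside A or inside B, where copies of H
# were excluded, so this 4-colouring has no monochromatic K_3.
print(colouring)
```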
One can view the threshold in Theorem 1.11 as universal, in the sense that it is governed by a single parameter, $m^*(K_r, H; k)$ (for all graphs $H$ and $r \ge 3$). However, the intuition behind where the parameter $m^*(K_r, H; k)$ comes from is more involved than the corresponding intuition discussed above for Theorems 1.2 and 1.4; we discuss this more when proving Theorem 1.11. Further, when proving Theorem 1.11 we were unable to establish how one can extend the statement to all pairs of graphs $(H_1, H_2)$ (i.e. not just those where at least one of the $H_i$ is a clique). In general we expect the location of the vertex Ramsey threshold to depend heavily on subtle properties of the graphs under consideration and in this way a universal reason, if it even exists, will be challenging to come by.
1.4. Notation. Throughout the paper we omit floors and ceilings whenever this does not affect the argument. Further, we use standard graph theory and asymptotic notation. In particular, for a graph $G$, $v(G)$ denotes the number of vertices of $G$ and $e(G)$ the number of edges of $G$; note that we often condense this notation to $v_G$ and $e_G$ respectively. We say a graph is nonempty if it has a nonempty vertex set, and unless otherwise specified, we shall take the vertex set to be $[v_G] := \{1, 2, \dots, v_G\}$. Given a hypergraph $\mathcal H$ and a set $I \subseteq V(\mathcal H)$, $\deg_{\mathcal H}(I)$ denotes the number of edges of $\mathcal H$ that contain the set $I$.

Given a set $X$ and $r \in \mathbb{N}$ we write $\binom{X}{r}$ for the set of all subsets of $X$ of size $r$. Similarly, if $V$ is a set of vertices and $F$ is a graph, we denote by $\binom{V}{F}$ the set of all possible copies of $F$ supported on vertices in $V$. Here we consider these copies of $F$ to be distinct if they have distinct sets of edges, so that $\big|\binom{V}{F}\big| = \binom{|V|}{v_F} \cdot v_F!/\mathrm{aut}(F)$, where $\mathrm{aut}(F)$ is the number of automorphisms of $F$.
1.5. Organisation of the paper. In Section 2 we prove Theorem 1.11, handling the 0-statement in Section 2.1 and the 1-statement in Section 2.2. In the latter section, we shall require a 'robust' version of the 1-statement of Theorem 1.4 (Theorem 2.8), which we prove in Section 3. Finally, in Section 4 we give some concluding remarks.
Proof of the perturbed threshold
In this section we prove Theorem 1.11. We present the arguments for the 0- and 1-statements in separate subsections below.

2.1. The 0-statement. Here we will show the existence of a graph $G$ of density at least $d$ such that, when $p = o\big(n^{-1/m^*(K_r, H; k)}\big)$, with high probability the vertices of $G \cup G(n,p)$ can be two-coloured without a red copy of $K_r$ or a blue copy of $H$.
2.1.1. Kreuter's Theorem for families. In showing the existence of a good colouring, we will use the 0-statement of Theorem 1.4, which shows that $G(n,p)$ can be vertex-coloured while avoiding monochromatic subgraphs. However, in our application, we will have to avoid several subgraphs in the same colour class, and therefore need the following generalisation of Theorem 1.4 to families of graphs.

Proposition 2.1. Let $\mathcal F$ and $\mathcal H$ be finite families of nonempty graphs, and let $m := \min_{F \in \mathcal F, H \in \mathcal H} \beta(F, H)$. If $p = o\big(n^{-1/m}\big)$, then with high probability there is a red/blue-colouring of the vertices of $G(n,p)$ without a red copy of any graph $F \in \mathcal F$ and without a blue copy of any $H \in \mathcal H$.

We remark that Proposition 2.1 is tight, since if $p = \omega\big(n^{-1/m}\big)$, then with high probability $G(n,p)$ is $(F, H)_v$-Ramsey by Theorem 1.4, where $(F, H)$ is the minimising pair in the definition of $m$.
The proof of Proposition 2.1 is nearly identical to the proof of the 0-statement of Theorem 1.4, that is, the case when $\mathcal F$ and $\mathcal H$ each contain a single graph. We therefore simply sketch the key idea here and refer the reader to [17] for the details.
First we handle the degenerate cases. Suppose one of the families, say $\mathcal F$, contains a graph $F$ with no edges. The parameter $\beta(F, H)$ is then the appearance threshold for the graph $H$ in $G(n,p)$, and so if $p = o\big(n^{-1/m}\big)$, we have that with high probability $G(n,p)$ has no copy of any graph from $\mathcal H$, and so we can colour all its vertices blue. The other degenerate case is when all graphs in $\mathcal F$ and $\mathcal H$ have edges, but both families contain matchings, say $F$ and $H$. In this case, $\beta(F, H) = 1$. If $p = o\big(n^{-1}\big)$, then with high probability $G(n,p)$ is bipartite. We can thus two-colour its vertices such that each colour class is an independent set, and thus has no copy of any graph from $\mathcal F$ or $\mathcal H$.
We may therefore assume that every graph in $\mathcal F$ has edges, and that $\mathcal H$ does not contain a matching, bringing us to the setting of Theorem 1.4. The proof now follows a similar scheme to other 0-statement proofs in random Ramsey settings, e.g. [15]. One begins by supposing for a contradiction that one cannot red/blue-colour the graph $G(n,p)$ avoiding red copies of graphs in $\mathcal F$ and blue copies of graphs in $\mathcal H$. Using this fact one can define some set $\mathcal G$ of graphs obtained by 'gluing together' copies of graphs in $\mathcal F$ and in $\mathcal H$ in certain ways, and show that $G(n,p)$ must contain a graph in $\mathcal G$. In [17], $\mathcal G$ is defined by way of an algorithm that finds a copy of some $G \in \mathcal G$ in $G(n,p)$. In order to do this they pass to a 'critical' subgraph $G'$ of $G(n,p)$ which is minimal (in terms of copies of graphs in $\mathcal F$ and $\mathcal H$) with respect to the property of not being able to 2-colour $G'$ avoiding red copies of graphs in $\mathcal F$ and blue copies of graphs in $\mathcal H$. In $G'$, one can see that for every copy $T$ of a graph in $\mathcal F$ or $\mathcal H$ and every vertex $v$ of $T$, there is a copy of a graph in the other family which intersects $T$ exactly at the vertex $v$ [17, Claim 1]. The algorithm [17, Procedure Hypertree], which builds a subgraph $J$ of $G'$, is then defined by repeatedly adding copies of some $F \in \mathcal F$ or $H \in \mathcal H$, so that each copy intersects the previous copy in exactly one vertex. The proof then works by analysing this procedure and the graphs in $\mathcal G$ that can be found using it. In particular, one keeps track of a function $f(i)$ which controls the exponent of the expected number of copies of $J_i$, where $J_i$ is the graph found in $G'$ after $i$ steps of the algorithm. The procedure stops if $f(i)$ gets too small or if the procedure continues for roughly $\log n$ steps. This leads to a contradiction, as the graphs in $\mathcal G$ which are the possible outcomes of this procedure are all dense graphs and are either large or satisfy a very strong density condition [17, Claim 5], and hence are very unlikely to occur in $G(n,p)$ at this density. One can also bound the size of $\mathcal G$ [17, Claim 6], so that a union bound guarantees that with high probability no graph in $\mathcal G$ is found in $G(n,p)$. In the calculations involved in the analysis [17, Claims 2, 3 and 4] of the effect on $f$ of each step of the algorithm, there are some minor changes in our setting, as we have to consider the possibility of any member of our family being added by the algorithm to update the $J_i$. However, it can be seen that adding a 'denser' graph than the graph in the minimising pair for the definition of $m$ will only help the situation, in that the function $f$ can only decrease further, meaning that the resulting $J_i$ is at most as likely to appear in $G(n,p)$ as the $J_i$ obtained by adding the minimising graph as in the calculations in [17].
2.1.2. Proof of the 0-statement. We may assume that $m^*(K_r, H; k) > 0$, as otherwise there is no 0-statement to prove. We take the dense graph $G_n$ to be the balanced complete $k$-partite graph on $n$ vertices, which has density at least $1 - 1/k$, and let $V_1, V_2, \dots, V_k$ be the $k$ vertex classes.

Let $(r_1, \dots, r_k)$ be the maximising vector in Definition 1.10. For each $i \in [k]$, we define a family of nonempty subgraphs of $H$ by
$$\mathcal H_i := \{ H' \subseteq H : H' \text{ nonempty}, \; \beta(K_{r_i + 1}, H') \ge m^*(K_r, H; k) \}.$$

Claim 2.2. $H \in \mathcal H_i$ for every $i \in [k]$.

Proof. Suppose $H \notin \mathcal H_j$ for some $j \in [k]$, and consider the partition $H = H_1 \cup \cdots \cup H_k$ with $H_j = H$ and $H_i$ empty for all $i \ne j$. For this partition the final maximum in Definition 1.10 equals $\beta(K_{r_j + 1}, H) < m^*(K_r, H; k)$, which contradicts the definition of $m^*(K_r, H; k)$.
In particular, each of the families $\mathcal H_i$ is nonempty. We can now describe, for each $i \in [k]$, our colouring of the vertices in $V_i$. Let $\mathcal F = \{K_{r_i + 1}\}$ and $\mathcal H = \mathcal H_i$. By definition of $\mathcal H_i$, we have $\min_{F \in \mathcal F, H' \in \mathcal H_i} \beta(F, H') \ge m^*(K_r, H; k)$. Since $p = o\big(n^{-1/m^*(K_r, H; k)}\big)$, it follows from Proposition 2.1 that with high probability we can colour $V_i$ such that $G(n,p)[V_i]$ has neither a red $K_{r_i + 1}$ nor a blue graph from $\mathcal H_i$. The following claim shows this gives a valid colouring of $G_n \cup G(n,p)$, completing the proof of the 0-statement.

Claim 2.3. With this colouring, $G_n \cup G(n,p)$ has neither a red $K_r$ nor a blue $H$.
Proof. By construction, the largest red clique in $V_i$ is of order at most $r_i$. The largest red clique in $V(G_n \cup G(n,p)) = \bigcup_i V_i$ therefore has at most $\sum_i r_i \le r - 1$ vertices, and hence the colouring is red-$K_r$-free.

Suppose there was a blue copy of $H$, and let $H = H_1 \cup \dots \cup H_k$ be the partition of $H$ induced by the parts $V_i$. By definition of $m^*(K_r, H; k)$, there is some part $i \in [k]$ with $\beta(K_{r_i + 1}, H_i) \ge m^*(K_r, H; k)$ (and $H_i \ne \emptyset$). It then follows that $H_i \in \mathcal H_i$, but our colouring of $G(n,p)[V_i]$ avoids blue copies of any graph in $\mathcal H_i$, contradicting the existence of this blue copy of $H$.
2.2. The 1-statement. To prove the 1-statement of Theorem 1.11, we need to show that whenever $p = \omega\big(n^{-1/m^*(K_r, H; k)}\big)$, $G_n \cup G(n,p)$ will with high probability be $(K_r, H)_v$-Ramsey. When $G_n$ is the complete $k$-partite graph, as it was in the proof of the 0-statement, this amounts to finding the sparse parts of the graphs in $G(n,p)[V_i]$, which can then be joined together since we have a complete $k$-partite graph.

However, in our more general setting, $G_n$ is an arbitrary graph of density $d > 1 - 1/(k-1)$. By employing Szemerédi's Regularity Lemma [25], we shall find some structure in $G_n$ that mimics the behaviour of a complete $k$-partite graph. These structural results, together with probabilistic tools concerning the random graph $G(n,p)$, are collected in the following subsections, before being used in the proof of Theorem 1.11 in Section 2.2.3.
2.2.1. Structure in dense graphs. Our application of the Regularity Lemma follows the standard lines. We present here the necessary definitions and properties of regular pairs, referring the reader to the survey of Komlós and Simonovits [16] for further details.
Definition 2.4. Given $\varepsilon > 0$, a graph $G$ and two disjoint vertex sets $A, B \subseteq V(G)$, the pair $(A, B)$ is $\varepsilon$-regular if for every $X \subseteq A$ and $Y \subseteq B$ with $|X| > \varepsilon|A|$ and $|Y| > \varepsilon|B|$, we have $|d(X, Y) - d(A, B)| < \varepsilon$, where $d(S, T) := e(S, T)/(|S||T|)$ for any vertex sets $S$ and $T$.
In essence, the edges between a regular pair 'look random', in the sense that they are very well distributed. The next lemma showcases some beneficial properties of these regular pairs: small sets of vertices typically have many common neighbours, and subsets of regular pairs inherit a large degree of regularity. We omit the proofs of these facts, which can be found in [16].
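Definition 2.4 can be verified naively by exhausting over subsets, which is exponential but instructive for very small pairs. The sketch below (our own helper names) does exactly that.

```python
from itertools import combinations
import networkx as nx

def density(G, S, T):
    """d(S,T) = e(S,T) / (|S||T|) for vertex sets S and T."""
    return sum(1 for s in S for t in T if G.has_edge(s, t)) / (len(S) * len(T))

def is_eps_regular(G, A, B, eps):
    """Exhaustive check of Definition 2.4 (exponential; illustration only)."""
    d_AB = density(G, A, B)
    for a in range(int(eps * len(A)) + 1, len(A) + 1):
        for b in range(int(eps * len(B)) + 1, len(B) + 1):
            for X in combinations(A, a):
                for Y in combinations(B, b):
                    if abs(density(G, X, Y) - d_AB) >= eps:
                        return False
    return True

G = nx.complete_bipartite_graph(4, 4)
A, B = list(range(4)), list(range(4, 8))
print(is_eps_regular(G, A, B, 0.25))   # True: every sub-pair has density 1
```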
Lemma 2.5. Let $(A, B)$ be an $\varepsilon$-regular pair in a graph $G$ with $d(A, B) = d$.

Szemerédi's Regularity Lemma then famously asserts that the vertices of any sufficiently large graph can be partitioned into a large but bounded number of parts, such that almost all pairs of parts are $\varepsilon$-regular. We shall not require the full strength of the Regularity Lemma, but only the following corollary (Proposition 2.6), which follows in combination with Turán's Theorem [26].
2.2.2. Probabilistic tools. While Proposition 2.6 gives us the desired structure in the dense graph, we also require a couple of results about the random graph. The first of these counts the number of copies of a fixed graph $H$ in $G(n,p)$. Following [10], we define the following parameter for $H$ and $p = p(n)$:
$$\Phi_{H,p} := \min\{ n^{v_J} p^{e_J} : J \subseteq H, \; e_J \ge 1 \}.$$
The lemma below shows that we are very unlikely to have significantly fewer copies of $H$ than expected.
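The parameter $\Phi_{H,p}$ is simple to compute by enumerating subgraphs. The sketch below does so by brute force; at the appearance threshold $p = n^{-1/m(H)}$ the densest subgraph is the minimiser and $\Phi_{H,p} = \Theta(1)$, while above the threshold $\Phi_{H,p}$ grows polynomially.

```python
from itertools import combinations
import networkx as nx

def phi(H, n, p):
    """Phi_{H,p} = min over subgraphs J of H with e(J) >= 1 of n^{v_J} p^{e_J},
    the exponent scale in Janson-type lower-tail bounds."""
    best = float("inf")
    for k in range(1, H.number_of_nodes() + 1):
        for S in combinations(H.nodes, k):
            e = H.subgraph(S).number_of_edges()
            if e >= 1:
                best = min(best, n ** k * p ** e)
    return best

n = 10 ** 4
print(phi(nx.complete_graph(3), n, 1 / n))    # ~1.0  (at threshold, J = K_3)
print(phi(nx.complete_graph(3), n, 10 / n))   # ~1000.0
```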
Lemma 2.7 (Janson's inequality). Let $H$ be a nonempty graph, $p = p(n)$ and $\mathcal H \subseteq \binom{[n]}{H}$ be some family of $\Omega(n^{v_H})$ potential copies of $H$ on $[n]$. Letting $X$ be the random variable that counts the number of copies of $H$ in $\mathcal H$ which appear in $G(n,p)$, we have that
$$\Pr[X \le \mathbb{E}[X]/2] \le \exp(-\Omega(\Phi_{H,p})).$$

The proof of this lemma follows almost immediately from the main result of [11] (see also [10, Theorem 2.14]). Indeed, for each potential copy $S \in \mathcal H$ of $H$, let $X_S$ be the indicator random variable for the event that $S$ appears in $G(n,p)$. Then $X = \sum_{S \in \mathcal H} X_S$ and [10, Theorem 2.14] implies the claimed bound.

Second, we shall make use of the vertex Ramsey properties of $G(n,p)$. We will need the following 'robust' version of the 1-statement of Theorem 1.4, which guarantees that we can find monochromatic copies of the desired graph that are suitably well-located. Furthermore, it shows that these Ramsey properties hold with sufficiently high probability to be applied several times.

Theorem 2.8. Let $F$ and $H$ be graphs with $0 < m_1(F) \le m_1(H)$. Then there exist $\delta_0, c > 0$ such that for all $0 < \delta < \delta_0$, $t = t(n) \le \exp(n^c)$ and $\eta_0 > 0$, there exists a $C > 0$ such that the following holds. Suppose that for each $i \in [t]$ we are given a set $U_i \subseteq [n]$ with $|U_i| \ge \eta_0 n$, and families $\mathcal F_i \subseteq \binom{U_i}{F}$ and $\mathcal H_i \subseteq \binom{U_i}{H}$ with $\big|\binom{U_i}{F} \setminus \mathcal F_i\big| \le \delta|U_i|^{v_F}$ and $\big|\binom{U_i}{H} \setminus \mathcal H_i\big| \le \delta|U_i|^{v_H}$. Then, if $p \ge C n^{-1/m_K(F,H)}$, the following holds with high probability in $G(n,p)$. For any two-colouring of $[n]$ and every $i \in [t]$, there is either a red copy $S \in \mathcal F_i$ of $F$ or a blue copy $T \in \mathcal H_i$ of $H$.
Notice that, unlike in Theorem 1.4, Theorem 2.8 allows for both F and H to be matchings. The proof is similar to that of the 1-statement of Theorem 1.4, but this strengthened version requires a few additional ideas and some careful analysis of the failure probabilities at each step. We defer these details until Section 3.2, and instead complete the proof of Theorem 1.11 next.
2.2.3. Proof of the 1-statement. We first sketch the ideas behind the proof. By Proposition 2.6, we can find a $k$-tuple of vertex sets $V_1, V_2, \dots, V_k$ such that each pair is $\varepsilon$-regular and reasonably dense. We then hope to use the Ramsey properties of $G(n,p)[V_i]$ to find suitable monochromatic subgraphs that can be pieced together to form a red $K_r$ or a blue $H$.

However, a naïve application of Theorem 1.4 will not work. Indeed, by definition of $m^*(K_r, H; k)$, there is some vector $(r_1, \dots, r_k)$ with $\sum_i r_i \le r - 1$ and some partition $H = H_1 \cup \cdots \cup H_k$ with $\beta(K_{r_i + 1}, H_i) \le m^*(K_r, H; k)$ for every nonempty $H_i$. We can therefore expect that, for each $i$, we either find a red $K_{r_i + 1}$ or a blue $H_i$ in any vertex colouring of $G(n,p)[V_i]$.
If these monochromatic subgraphs were all of the same colour, then we could hope to combine them to form a red clique (which would in fact be of size $r + k - 1$, significantly larger than required) or a blue copy of $H$. However, we could well find red cliques in some parts and blue subgraphs in others, which would leave us unable to complete either of the desired graphs.

Instead, we must use the full power of Definition 1.10, which provides a suitable partition $H = H_1 \cup \dots \cup H_k$ not just for some vector $(r_1, \dots, r_k)$, but rather for all vectors $(r_1, \dots, r_k)$ satisfying $\sum_i r_i \le r - 1$. We shall therefore proceed in stages, incrementally either increasing the size of a red clique or finding the next piece needed for a blue copy of $H$. We let $r_i$ denote the size of the largest red clique we have found in $V_i$ thus far, starting with $\mathbf{r} = \mathbf{0}$.

Given the current vector $\mathbf{r}$, we let $H = H_1 \cup \dots \cup H_k$ be the corresponding minimising partition of $H$. We then go through the parts in turn, applying the $(K_{r_i + 1}, H_i)_v$-Ramsey property of $G(n,p)[V_i]$ to find a blue $H_i$ or a red $K_{r_i + 1}$. In the former case, we proceed to the next part. If we make it through each of the $k$ parts, we will have found all the parts $H_i$ needed to build a blue copy of $H$.

Otherwise, in the latter case, we have increased the size of our red clique. We then update the vector $\mathbf{r}$ and the corresponding partition of $H$, return to the first part $V_1$, and resume the process. Since this increases the size of our red clique, we will have built a red $K_r$ if this latter case occurs $r$ times.

There are still technicalities that need to be dealt with; for instance, to ensure we can combine the monochromatic structures we find, we will need to restrict ourselves to the common neighbourhoods of the parts we have already found. This further requires us to only consider subgraphs with 'many' common neighbours in all other parts, which is why we need the more robust 1-statement of Theorem 2.8. In the remainder of this section, we describe this algorithm in detail and show that it successfully returns one of the desired monochromatic subgraphs.
Given $\alpha > 0$ and $p = \omega(n^{-1/m^*(K_r, H; k)})$, our goal is to show that for any $n$-vertex graph $G_n$ of density $d \ge 1 - 1/(k-1) + 2\alpha$, the graph $G_n \cup G(n,p)$ is with high probability $(K_r, H)_v$-Ramsey. Applying Proposition 2.6 to $G_n$ with some suitably small regularity parameter $\varepsilon$,⁴ gives $k$ pairwise-disjoint vertex sets $V_1, V_2, \dots, V_k$, such that each pair $(V_i, V_j)$ is $\varepsilon$-regular of density at least $\alpha$.

At several stages in the algorithm, we will, for some $i$, find a (constant sized) subgraph $\Gamma \subseteq G(n,p)[V_i]$, and will then want to shrink all the other parts $V_j$ to the common neighbours in $G_n$ of the vertices of $\Gamma$. We shall therefore call $\Gamma$ popular if its vertices have at least $(\alpha/2)^{v_\Gamma}|V_j|$ common $G_n$-neighbours in each $V_j$, $j \ne i$. Lemma 2.5 ensures that most potential copies of $\Gamma$ will be popular, and that when we shrink the sets $V_j$ to their large common neighbourhoods, the pairs will remain $\varepsilon'$-regular of density $\alpha'$, for some larger $\varepsilon'$ and some slightly smaller $\alpha'$. By choosing the initial value of $\varepsilon$ small enough, we can ensure that all subsequent values of $\varepsilon'$ remain small, while the densities $\alpha'$ are always at least $\alpha/2$.

We first find copies of the subgraphs of $K_r$ and $H$ that are likely to appear in $G(n,p)$. Let $t := \max\{s \in [r] : m(K_s) \le m^*(K_r, H; k)\}$ and let $\mathcal G := \{H[U] : U \subseteq V(H), \; m(H[U]) \le m^*(K_r, H; k)\}$. We then define the graph $\Gamma$ to be the disjoint union of $v_H$ copies of $K_t$ together with one copy of each graph in $\mathcal G$. Then, for each $i \in [k]$ in turn, we find a popular copy $\Gamma_i$ of $\Gamma$ in $G(n,p)[V_i]$, and shrink all other parts $V_j$ to the common neighbours of $V(\Gamma_i)$ in $V_j$. If $\Gamma$ is an independent set, clearly one can find these popular copies. Otherwise, noting that $m(\Gamma) \le m^*(K_r, H; k)$, Lemma 2.7 ensures that we find these popular copies with high probability. Note that, at the end of this process, for all $i$ the graph $\Gamma_i$ remains in the set $V_i$.

⁴ For our purposes, it suffices to take $\varepsilon$ sufficiently small in terms of $k$, $r$, $v_H$ and $\delta'$, where $\delta'$ is the minimum value of $\delta_0$ from Theorem 2.8 when the graph $F$ is a clique on at most $r$ vertices and the graph $H$ in the theorem is a subgraph of our given graph $H$.
We can now start the procedure sketched earlier. We shall denote by $R_i$ the largest red clique found in $G(n,p)[V_i]$ thus far, initially setting $R_i = \emptyset$ for all $i \in [k]$. The vector $\mathbf{r}$ will be defined by $r_i := v_{R_i}$, so we begin with $\mathbf{r} = \mathbf{0}$.

The outer loop of the algorithm runs as long as $\sum_i r_i \le r - 1$, which means we have not yet found a red $K_r$. In this case, we take the minimising partition $H = H_1 \cup \dots \cup H_k$ for the vector $\mathbf{r}$ in Definition 1.10, and try to find a blue copy of $H$ according to this partition.

The inner loop of the algorithm runs over $i \in [k]$. If $H_i = \emptyset$, then there is nothing to find in $G(n,p)[V_i]$, and so we proceed to the next part. Otherwise, we shall show that $G(n,p)[V_i]$ is robustly $(K_{r_i + 1}, H_i)_v$-Ramsey; that is, we will find a popular blue $H_i$ or a popular red $K_{r_i + 1}$. If we have a blue $H_i$, we let $B_i$ be this copy of $H_i$, shrink all other parts $V_j$ to the common neighbours of $V(B_i)$, and then proceed to the next part.

On the other hand, if we find a red $K_{r_i + 1}$, then we have increased the size of our red clique. We then set $R_i$ to be this larger clique and shrink all other parts $V_j$ to the common neighbours of $V(R_i)$. We update the vector $\mathbf{r}$, replacing $r_i$ with $r_i + 1$, and then break the inner loop and proceed to the next iteration of the outer loop (trying to find the new optimal partition of $H$, starting in $V_1$).

Since we shrink to common neighbourhoods at each step, we ensure that the pieces we find in $G(n,p)[V_i]$ can be combined to form the graphs we need in $G_n \cup G(n,p)$. In particular, if the inner loop were to run through all $k$ steps, then $\bigcup_i B_i$ would give a blue copy of $H$. On the other hand, each iteration of the outer loop increases the size of our red clique, and after $r$ iterations $\bigcup_i R_i$ would give a red $K_r$. Thus, after finitely many steps, the algorithm returns either a blue $H$ or a red $K_r$, showing that $G_n \cup G(n,p)$ is indeed $(K_r, H)_v$-Ramsey.
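The control flow just described can be condensed into the following Python skeleton. The two helpers are placeholders for the machinery developed above (the minimising partition of Definition 1.10 and the robust Ramsey search of Theorem 2.8, including the shrinking to common neighbourhoods), so this illustrates only the loop structure, not the probabilistic content.

```python
def perturbed_ramsey_search(k, r, minimising_partition, find_mono):
    """Skeleton of the two-loop argument: grow a red clique or assemble a blue H.

    minimising_partition(rs) returns the partition H_1,...,H_k realising the
    inner minimum in Definition 1.10 for the clique-size vector rs (empty
    parts as falsy values); find_mono(i, s, Hi) searches G(n,p)[V_i] for a
    popular red K_s or a popular blue H_i and returns ('red', clique) or
    ('blue', copy), shrinking the other parts as a side effect.
    """
    rs = [0] * k                         # r_i = size of red clique found in V_i
    while sum(rs) <= r - 1:              # outer loop: no red K_r yet
        Hs = minimising_partition(rs)
        blues = []
        for i in range(k):               # inner loop over the parts
            if not Hs[i]:
                continue                 # H_i empty: nothing to embed in V_i
            colour, piece = find_mono(i, rs[i] + 1, Hs[i])
            if colour == 'red':
                rs[i] += 1               # bigger red clique; restart inner loop
                break
            blues.append(piece)
        else:
            return ('blue H', blues)     # all pieces found: blue copy of H
    return ('red K_r', rs)
```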
To complete the proof, we need to show that $G(n,p)[V_i]$ will always be robustly $(K_{r_i + 1}, H_i)_v$-Ramsey, which we will mostly achieve through use of Theorem 2.8. However, this only applies when $r_i, e(H_i) \ge 1$. For the degenerate cases, we will need to make use of the graphs $\Gamma_i$ we found at the beginning. Suppose first that $r_i = 0$. By definition, we have $m^*(K_r, H; k) \ge \beta(K_{r_i + 1}, H_i) = \beta(K_1, H_i) = m(H_i)$, and so $H_i$ appears in $\Gamma_i$. Then either this copy of $H_i$ is completely blue, or we find a red $K_1$, and so $\Gamma_i \subseteq G(n,p)[V_i]$ is indeed $(K_1, H_i)_v$-Ramsey. The other case, when $e(H_i) = 0$, follows similarly. This time we have $m^*(K_r, H; k) \ge \beta(K_{r_i + 1}, H_i) = m(K_{r_i + 1})$, and so $\Gamma_i$ contains $v_H$ copies of $K_{r_i + 1}$. Either one of them is completely red, in which case we are done, or we have $v_H$ blue vertices, which in particular gives a blue copy of $H_i$. This leaves us with the case when both $K_{r_i + 1}$ and $H_i$ have edges. Again, by definition, we have $m^*(K_r, H; k) \ge \beta(K_{r_i + 1}, H_i)$. Thus, since $p = \omega(n^{-1/m^*(K_r, H; k)})$, Theorem 1.4 shows we should expect $G(n,p)[V_i]$ to be $(K_{r_i + 1}, H_i)_v$-Ramsey. However, we need this to be true for all sets $V_i$ that could arise, and we also need to find popular monochromatic copies of $K_{r_i + 1}$ or $H_i$, and thus we apply Theorem 2.8 instead.
Note that the sets $V_i$ that arise are the common neighbourhoods of a bounded number of vertices, and hence there are only polynomially many possibilities. Moreover, as these are always neighbourhoods of popular subgraphs, there is some constant $\eta_0 > 0$ such that $|V_i| \ge \eta_0 n$.

For each such set $V_i$, we define a triple $(U_j, \mathcal F_j, \mathcal H_j) \in \mathcal U$, where we take $U_j := V_i$, we let $\mathcal F_j$ be all possible popular copies of $K_{r_i + 1}$ in $V_i$, and let $\mathcal H_j$ be all possible popular copies of $H_i$ in $V_i$. Lemma 2.5 (and our choice of small $\varepsilon$) ensures that $\big|\binom{U_j}{K_{r_i+1}} \setminus \mathcal F_j\big| \le \delta|U_j|^{r_i + 1}$ and $\big|\binom{U_j}{H_i} \setminus \mathcal H_j\big| \le \delta|U_j|^{v_{H_i}}$. We therefore satisfy all the requirements of Theorem 2.8, and can conclude that with high probability, whenever we require $G(n,p)[V_i]$ to be $(K_{r_i + 1}, H_i)_v$-Ramsey, it will be. As there are only finitely many pairs $(K_{r_i + 1}, H_i)$ to consider, it follows that the algorithm succeeds with high probability overall, completing the proof.
Robust Ramsey properties of random graphs
The aim of this section is to give a proof of Theorem 2.8. Although our proof here is similar to that of Kreuter [17], we choose to give the details as the argument is somewhat delicate and our proof departs from the original in some key steps. In particular, instead of using Turán's theorem to estimate the maximal size of a set of vertex-disjoint copies of a given graph (as done by Kreuter [17]), we use a probabilistic approach (as in [1, Lemma 7.3.1]) which allows us to analyse the relevant subgraph counts at every step of the proof and guarantee that we find monochromatic copies of the graphs on the desired vertex sets. We first give some probabilistic tools and intermediate lemmas before embarking on the proof.
Lemma 3.1. Suppose $\{A_i : i \in I\}$ is a finite set of events in some probability space and for each $i \in I$, let $X_i$ be the indicator random variable for the event $A_i$. Write $i \sim j$ if the events $A_i$ and $A_j$ are not independent. Further, let $X := \sum_{i \in I} X_i$ be the sum of the indicator random variables and define $\Delta := \sum_{i \sim j} \mathbb{E}[X_i X_j]$, where the sum is over all ordered pairs $(i, j)$ (including diagonal terms). Then for all $t > 0$, $\Pr[|X - \mathbb{E}[X]| \ge t] \le \Delta/t^2$.

3.1.2. Janson's inequality for a refined random graph. Given all the copies of a fixed $H$ in $G(n,p)$, it will be useful for us to look at a random subset of the copies where each copy is selected with probability $q$ independently of all other choices. Formally, let $\rho(H; q) : \binom{[n]}{H} \to \{0, 1\}$ be a function that randomly assigns 1 with probability $q$ and 0 with probability $1 - q$ to each copy of $H$ in $K_n$, independently of the other choices. Then let $G(n,p)^H_q$ be the random graph obtained by revealing $G(n,p)$ and $\rho(H; q)$ and taking all the copies of $H$ on $[n]$ such that all the edges of the copy appear in $G(n,p)$ and the copy was assigned a 1 by $\rho(H; q)$.

Lemma 3.2. Let $H$ be a graph with at least one edge, $p = p(n)$, $q = q(n)$, and $\mathcal H \subseteq \binom{[n]}{H}$ be some family of $\Omega(n^{v_H})$ potential copies of $H$ on $[n]$. Letting $X_q$ be the random variable that counts the number of copies of $H$ in $\mathcal H$ which appear in $G(n,p)^H_q$, we have that
$$\Pr[X_q \le \mathbb{E}[X_q]/2] \le \exp(-\Omega(\Phi_{H,p})) + \exp(-\Omega(q n^{v_H} p^{e_H})).$$
The lemma follows from an application of Lemma 2.7, which gives concentration for the number of copies $S \in \mathcal H$ of $H$ that appear in $G(n,p)$. Each copy is then kept with probability $q$, independently of the others, and so Chernoff's inequality (see e.g. [10, Theorem 2.1]) gives concentration for the number of these copies that appear in $G(n,p)^H_q$.

3.1.3. An exponential upper tail bound. Janson's inequality (Lemma 2.7) allows us to conclude that the probability that the number of embeddings of a graph $H$ in $G(n,p)$ is significantly smaller than its expectation is exponentially small. On the other hand, one can use Lemma 3.1 to give a bound on the probability that the number of copies of $H$ is much higher than expected. However, the concentration given by Lemma 3.1 is not enough for our purposes. We therefore need the following bound for the upper tail of the distribution of subgraph counts in random graphs. This is a simplification of the main result in [12] and the proof is almost identical; the only departing point from the exposition in [12] is the restriction of the count to the family $\mathcal H$.

Lemma 3.3. Let $H$ be a nonempty graph and $\epsilon > 0$. Then there exists some $c = c(H, \epsilon) > 0$ such that the following holds. Let $\mathcal H \subseteq \binom{[n]}{H}$ be some family of $\epsilon n^{v_H}$ potential copies of $H$. Letting $X$ be the random variable that counts the number of copies $S \in \mathcal H$ which appear in $G(n,p)$ on $[n]$, we have that
$$\Pr[X \ge 2\mathbb{E}[X]] \le \exp\big(-c\,\Phi_{H,p}^{1/e_H}\big).$$
3.1.4. Kim-Vu polynomial concentration. The last tool we need is the result of Kim and Vu [14] (see also [1, Section 7.8]). We state here a simplified version which is catered to our purposes.

Lemma 3.4. Given $k \in \mathbb{N}$, let $c := 8^{-1}(4k!)^{-1/(2k)}$, and let $\mathcal H = (V, E)$ be a $k$-uniform hypergraph with $|V| = N$, $|E| = M$. Now consider the set $V'$ obtained by keeping each vertex of $V$ with some probability $q = q(N) \in [0, 1]$, independently of the other vertices. We are interested in the random variable $Y := e_{\mathcal H[V']}$ and we fix $\mu := \mathbb{E}[Y] = M q^k$. Then, provided that $\deg_{\mathcal H}(I)\, q^{k - |I|} \le \mu$ for every $I \subseteq V$ with $1 \le |I| \le k$, we have
$$\Pr[Y \ge 4\mu] \le \exp\big(-c\,\mu^{1/(2k)}\big).$$
3.2. Proof of Theorem 2.8. Towards proving Theorem 2.8, we first prove some lemmas. For a fixed nonempty graph $H$, we define $\mathcal H(H)$ to be the set of graphs that can be obtained by taking the union of two distinct copies of $H$ which intersect in at least one vertex. Recall also the definition of $G(n,p)^H_q$ from Section 3.1.2.

Lemma 3.5. Let $H$ be a graph with at least one edge, and let $\Phi' := \min\{n, \Phi_{H,p}\}$. Then there exists $C = C(H) > 0$ such that the following holds for all $q = q(n,p)$ such that $q \le \Phi'/(n^{v_H} p^{e_H})$. Let $X_F$ be the random variable that counts the number of copies of a graph $F$ in $G(n,p)^H_q$. Then, with probability at least $1 - C\Phi'/\mathbb{E}[X_H]^2$, every $\tilde H \in \mathcal H(H)$ satisfies $X_{\tilde H} \le C\,\mathbb{E}[X_H]^2/\Phi'$.

Proof. This is a simple application of Chebyshev's inequality, Lemma 3.1. The proof of [10, Theorem 3.29] contains a similar calculation.
Let us fix some $\tilde H = H_1 \cup H_2 \in \mathcal H(H)$ and show that $X_{\tilde H} \le C'\,\mathbb{E}[X_H]^2/\Phi'$ with probability at least $1 - \bar C \Phi'/\mathbb{E}[X_H]^2$ for some $C', \bar C > 0$. The conclusion will then follow by a union bound, as there are finitely many possible $\tilde H \in \mathcal H(H)$. First, let us upper bound the expectation of $X_{\tilde H}$ as follows. Defining $J = H_1 \cap H_2$ as the intersection of the two copies of $H$ that comprise $\tilde H$, we have that
$$\mathbb{E}[X_{\tilde H}] \le q^2 n^{2v_H - v_J} p^{2e_H - e_J} \le \frac{C'\,\mathbb{E}[X_H]^2}{2\Phi'}$$
for some appropriately defined $C' > 0$, using that $v_J \ge 1$ and $n^{v_J} p^{e_J} \ge \Phi_{H,p}$ if $e_J \ne 0$. We now turn to concentration and look to apply Lemma 3.1. In order to do this, we need an upper bound estimate on $\Delta$, which counts the expected number of non-independent pairs of copies of $\tilde H$ in $G(n,p)^H_q$; that is, it counts the number of pairs of copies of $\tilde H$ which overlap in at least one edge. So let us fix some graph $H^* = H_1 \cup H_2 \cup H_1' \cup H_2'$ obtained from two copies of $\tilde H$ which intersect in at least an edge. There are finitely many such $H^*$ and our upper bound on $\Delta$ will come from summing over all such possible intersecting pairs of copies of $\tilde H$. Let $\tilde x$ be 2 if $H^* = H_1 \cup H_2 = H_1' \cup H_2'$ is a single copy of $\tilde H$, 1 if $H_i = H_j'$ in $H^*$ for some $i, j \in \{1, 2\}$, and 0 otherwise. In other words, $\tilde x$ indicates the number of 'repeated' copies of $H$ in $H^*$. Swapping the indices 1 and 2 if necessary, let $J_1 = H_1 \cap H_2$, $J_2 = H_1' \cap (H_1 \cup H_2)$ and $J_3 = H_2' \cap (H_1 \cup H_2 \cup H_1')$, such that each $J_i$ contains at least one vertex. This is possible due to the fact that $H_1$ and $H_2$ intersect in at least a vertex in $\tilde H$ and the two copies of $\tilde H$ intersect in at least an edge. Then we have that the expected number of pairs of copies of $\tilde H$ intersecting in a copy of $H^*$ is at most $C''\,\mathbb{E}[X_H]^2/\Phi'$ for an appropriately defined $C'' > 0$, using that at least $\tilde x$ of $J_2$ and $J_3$ are copies of $H$, and using that $\mathbb{E}[X_H] \le q n^{v_H} p^{e_H} \le \Phi'$ in the final step (recall that by hypothesis $q \le \Phi'/(n^{v_H} p^{e_H})$). Thus, summing over all possible $H^*$, we get that $\Delta \le \tilde C\,\mathbb{E}[X_H]^2/\Phi'$ for some $\tilde C > 0$ and, by Lemma 3.1,
$$\Pr\Big[X_{\tilde H} \ge \frac{C'\,\mathbb{E}[X_H]^2}{\Phi'}\Big] \le \frac{\Delta}{\big(C'\,\mathbb{E}[X_H]^2/(2\Phi')\big)^2} \le \frac{\bar C\,\Phi'}{\mathbb{E}[X_H]^2}$$
for $\bar C = 4\tilde C/C'^2$. Thus, summing over all $\tilde H \in \mathcal H(H)$ and taking a union bound on the failure probabilities, we can choose $C > 0$ appropriately so that the statement of the lemma is satisfied.
Let $\bigsqcup_k H$ denote the graph obtained by taking $k$ vertex-disjoint copies of $H$. We say that a set $K \in \binom{[n]}{k}$ is a transversal of a copy $S$ of $\bigsqcup_k H$ if $K$ contains one vertex from each of the copies of $H$ that comprise $S$. Further, given $\mathcal K \subseteq \binom{[n]}{k}$, we say a copy of $\bigsqcup_k H$ on $[n]$ is $\mathcal K$-spanning if it contains a set from $\mathcal K$ as a transversal.
Lemma 3.6. Let $H$ be a graph with at least one edge, $k \in \mathbb{N}$ and $\delta > 0$. Set $\delta' := \delta (k v_H)^k$. Then there exist $c > 0$ and $n_0 \in \mathbb{N}$ such that if $p = p(n) \ge n^{-1/m(H)}$ and $q = q(n,p)$ satisfy $q n^{v_H} p^{e_H} > (\log n)^{3k}$, then the following holds for all $n \ge n_0$. Suppose that $\mathcal K \subseteq \binom{[n]}{k}$ is such that $|\mathcal K| \le \delta n^k$. Letting $Y$ be the random variable that counts the number of $\mathcal K$-spanning copies of $\bigsqcup_k H$ in $G(n,p)^H_q$, we have that
$$\Pr\big[Y \ge 4\delta'(q n^{v_H} p^{e_H})^k\big] \le \exp\big(-c\,(q n^{v_H} p^{e_H})^{1/(2k e_H)}\big).$$

Proof. It suffices to prove the lemma in the case when $|\mathcal K| = \delta n^k$. For this, we split the analysis of $G(n,p)^H_q$ into looking at the random edges given by $G(n,p)$ and the random function $\rho(H; q) : \binom{[n]}{H} \to \{0, 1\}$ separately. Firstly consider the $\mathcal K$-spanning copies of $\bigsqcup_k H$ in the complete graph $K_n$. There are at most $\delta' n^{k v_H}$ such copies and each appears with probability $p^{k e_H}$. Thus, in expectation, the number of $\mathcal K$-spanning copies of $\bigsqcup_k H$ in $G(n,p)$ is at most $\delta'(n^{v_H} p^{e_H})^k$. Moreover, Lemma 3.3 tells us that the count of such copies in $G(n,p)$ is at most twice this with probability at least $1 - \exp\big(-c_{3.3}\,\Phi^{1/(k e_H)}\big)$, where $c_{3.3} = c(H, \delta)$ is given by Lemma 3.3 and $\Phi = \Phi_{\sqcup_k H,\,p}$. Now note that
$$\Phi_{\sqcup_k H,\,p} = \min_{J \subseteq \sqcup_k H,\, e_J \ge 1} \prod_{i=1}^k n^{v_{J_i}} p^{e_{J_i}} \ge \min_J n^{v_{J_j}} p^{e_{J_j}} \ge \Phi_{H,p},$$
where we split subgraphs $J \subseteq \bigsqcup_k H$ according to their subgraphs $J_i$ in the $i$th copy of $H$ in $\bigsqcup_k H$, and in the second step we single out a $j = j(J)$ such that $J_j \subseteq J$ has a nonempty edge set (using that $n^{v_{J_i}} p^{e_{J_i}} \ge 1$ for each $i$, as $p \ge n^{-1/m(H)}$).

Applying Lemma 3.3 also to the counts of $\bigsqcup_{k'} H$ for smaller values of $k'$, we can conclude that there exists a $c' > 0$ so that, with probability at least $1 - \exp\big(-c'\,\Phi_{H,p}^{1/(k e_H)}\big)$, there are at most $2\delta'(n^{v_H} p^{e_H})^k$ $\mathcal K$-spanning copies of $\bigsqcup_k H$ in $G(n,p)$ and there are at most $2(n^{v_H} p^{e_H})^{k'}$ copies of $\bigsqcup_{k'} H$ in $G(n,p)$ for all $1 \le k' \le k - 1$. On the other hand, there are at least $\delta n^{k v_H}/(2 k v_H)!$ $\mathcal K$-spanning copies of $\bigsqcup_k H$ in $K_n$. So by Lemma 2.7 there is a $c'' > 0$ such that with probability at least $1 - \exp(-c''\,\Phi_{H,p})$, there are at least $2\delta_2(n^{v_H} p^{e_H})^k$ $\mathcal K$-spanning copies of $\bigsqcup_k H$ in $G(n,p)$, where $\delta_2 := \delta/\big(4(2 k v_H)!\big)$. Now we condition on all these events occurring in $G(n,p)$ and turn to analyse the effect of $\rho(H; q)$. We know that each $\mathcal K$-spanning copy in $G(n,p)$ appears in $G(n,p)^H_q$ with probability $q^k$, and we will obtain concentration via a simple application of Lemma 3.4. Indeed, consider the auxiliary $k$-uniform hypergraph $\mathcal H$ whose vertex set is given by the copies of $H$ in $G(n,p)$ and whose edge set is given by the $k$-sets of copies of $H$ which comprise a $\mathcal K$-spanning copy of $\bigsqcup_k H$ in $G(n,p)$. From above we have that $\mathcal H$ has at most $2 n^{v_H} p^{e_H}$ vertices (the copies of $H$ in $G(n,p)$) and between $2\delta_2(n^{v_H} p^{e_H})^k$ and $2\delta'(n^{v_H} p^{e_H})^k$ edges. We also know, from the concentration of the number of copies of $\bigsqcup_{k'} H$ in $G(n,p)$, that for any set $I$ of $i$ copies of $H$ with $1 \le i \le k$, the number of edges of $\mathcal H$ containing $I$ is at most $2(n^{v_H} p^{e_H})^{k - i}$. Thus, Lemma 3.4 tells us that, conditioning on the outcome of $G(n,p)$ as above, with probability at least $1 - \exp\big(-c_{3.4}\,\mu^{1/(2k)}\big)$ (where $\mu$ denotes the conditional expectation of the count and $c_{3.4}$ is the constant given by Lemma 3.4), the number of $\mathcal K$-spanning copies of $\bigsqcup_k H$ in $G(n,p)^H_q$ is at most $4\delta'(n^{v_H} p^{e_H} q)^k$. The conclusion then follows from a simple calculation on the error probability that either the counts in $G(n,p)$ are not as desired or the count in $G(n,p)^H_q$ is too high, given that we get the desired counts in $G(n,p)$.
We now turn to proving Theorem 2.8.
Proof of Theorem 2.8. It suffices to prove the theorem in the case when $p = C n^{-1/m_K(F,H)}$ for some sufficiently large $C > 0$. We begin with a calculation. Let
$$\ell := \min\Big\{1,\ \min_{J \subseteq H,\, e_J \ge 1}\big(v_J - e_J/m_K(F,H)\big)\Big\}, \qquad (3.1)$$
so that $\Phi_{H,p} \ge n^{\ell}$; the upper bound $\ell \le 1$ holds by definition. Now we turn to the proof of the theorem. We first fix constants. We fix $\delta_0 > 0$ sufficiently small (so that, in particular, the application of Lemma 3.6 below with $\delta < \delta_0$ yields at most $\tilde n_i^{v_F}/(8 v_F!)$ dangerous sets), and $c > 0$ such that $c < \frac{c'}{4 v_F e_F e_H}$, where $c' > 0$ is the constant arising in the failure probabilities below. Further, for each $0 \le i \le t$ we fix $\eta_i := |U_i|/n$, so that $\eta_i \ge \eta_0$ for all $i$. Further, fix
$$\gamma_i := \frac{(1/(v_H!) - \delta)^2(\eta_0 \eta_i)^{v_H}}{16\,C_{3.5}}$$
for all $0 \le i \le t$, where $C_{3.5} = C_{3.5}(H)$ is the constant obtained from Lemma 3.5. By considering a large enough constant $C$, we may expose $G(n,p)$ in two rounds so that $G(n,p) = G_1(n, p_1) \cup G_2(n, p_2)$, with $p_1, p_2 \ge C_1 n^{-1/m_K(F,H)}$ for $C_1$ such that $C_1 > 2\log v_H/\gamma_0^{v_F}$.

Let us briefly sketch the proof, which splits into proving two main claims. The first claim states that with high probability in $G_1$, for each $i \in [t]$, there is a (large) subfamily $\mathcal D_i \subseteq \mathcal H_i$ of pairwise vertex-disjoint copies of $H$, all of whose edges appear in $G_1$. We define
$$\mathcal W_i := \{W \subseteq U_i : |W \cap T| = 1 \text{ for all } T \in \mathcal D_i\} \qquad (3.5)$$
to be the sets which can be obtained by choosing one vertex from each copy of $H$ in $\mathcal D_i$. The second claim is that with high probability in $G_2$, for each $i$ and each set $W \in \mathcal W_i$, there is a copy of $F$ which lies in $\binom{W}{F} \cap \mathcal F_i$ whose edges appear in $G_2$. The proof then follows easily from these two claims. Indeed, consider a red/blue colouring of $G(n,p) = G_1 \cup G_2$ and some $i \in [t]$. If there is no blue copy of $H$ in $\mathcal H_i$, then in particular every copy of $H$ in $\mathcal D_i$ must contain a red vertex. By choosing one red vertex in each copy $T$ of $H$ in $\mathcal D_i$, we get a set $W \in \mathcal W_i$ which is entirely red. The second claim then tells us that this set hosts a copy of $F$ which lies in $\mathcal F_i$, and so we are done.

It remains to prove the two claims above. In order to prove the first claim, it will be useful to consider the refined random graph $G_1(n, p_1)^H_q$ introduced in Section 3.1.2. As $G_1(n, p_1)^H_q$ is a subgraph of $G_1(n, p_1)$, it will suffice to find our family $\mathcal D_i$ of copies of $H$ in $G_1(n, p_1)^H_q$. So we fix $q := c_0\, n^{\ell}/(n^{v_H} p_1^{e_H})$ for a suitable constant $c_0 = c_0(\gamma_0, C_{3.5}) > 0$. Now we apply Lemma 3.5, observing that $\Phi' \ge n^{\ell}$ due to our calculation at the beginning of this proof ($\Phi' = \Phi_{H,p} \ge n^{\ell}$ if $\ell < 1$ and $\Phi' = n^{\ell} = n$ if $\ell = 1$). As the expected number of copies of $H$ in $G_1(n, p_1)^H_q$ is $\Omega(q n^{v_H} p^{e_H}) = \Omega(n^{\ell})$, we have that with high probability (with probability at least $1 - O(n^{-\ell})$), in $G_1(n, p_1)^H_q$ there are at most $C_{3.5}\, q^2 n^{2 v_H} p^{2 e_H}/n^{\ell} = \gamma_0 n^{\ell}$ overlapping copies of $H$. For a given $i \in [t]$, we can conclude from Lemma 3.2 that with high probability there are at least $(1/(v_H!) - \delta)\, q |U_i|^{v_H} p^{e_H}/2 \ge 2\gamma_i n^{\ell}$ copies of $H$ in $\mathcal H_i$ which lie in $G_1(n, p_1)^H_q$. As this holds with probability at least $1 - \exp(-n^{c'})$, we have that this holds for all $i \in [t]$ with high probability. Thus we obtain a family $\mathcal D_i \subseteq \mathcal H_i$ of vertex-disjoint copies of $H$ which appear in $G_1$ by taking the copies in $\mathcal H_i$ that appear in $G_1(n, p_1)^H_q$ and deleting one copy from any pair of overlapping copies. Our calculations above guarantee that with high probability, for every $i \in [t]$, $\mathcal D_i$ has size at least $\tilde n_i := \gamma_i n^{\ell}$, and we restrict each family to one of size exactly $\tilde n_i$.
We now turn to the second exposure, namely $G_2 = G_2(n, p_2)$, and look to prove that for every $i \in [t]$ and every set $W \in \mathcal W_i$ there is a copy of $F$ in $\binom{W}{F} \cap \mathcal F_i$ which appears in $G_2$, where $\mathcal W_i$ is as defined in (3.5). Fixing an $i \in [t]$ and a $W \in \mathcal W_i$, we consider $G_2$ restricted to $W$. We look to apply Lemma 2.7 and so need a lower bound on the parameter $\tilde\Phi_i := \Phi_{F, p_2}$, which is calculated with respect to the vertex set $W$. As at the beginning of the proof, we set $J \subseteq H$ to be the minimising subgraph in the definition of $\ell$ (3.1) and use that, for all $I \subseteq F$ with $e_I > 0$, we have
$$\tilde n_i^{v_I} p_2^{e_I} \ge (\gamma_0 n^{\ell})^{v_I} p_2^{e_I} \ge \gamma_0^{v_F} C_1 n^{\ell},$$
by the definition of $m_K(F,H)$. We conclude that $\tilde\Phi_i \ge \tilde\Phi_0 \ge \gamma_0^{v_F} C_1 n^{\ell}$. As $t \le \exp(n^c)$ and, for each $i$, $|\mathcal W_i| = (v_H)^{\tilde n_i} \le \exp(n^{\ell}\log v_H)$, we can take a union bound and conclude from Lemma 2.7 that, for all choices of $i \in [t]$ and $W \in \mathcal W_i$, there are at least $\tilde n_i^{v_F} p_2^{e_F}/2$ copies of $F$ on $W$ in $G_2$ with high probability. Note here that we used that $C_1 > 2\log v_H/\gamma_0^{v_F}$. It remains to prove that, for each $i \in [t]$ and $W$, one of these copies of $F$ belongs to $\mathcal F_i$.
To this end we define $\mathcal B_i := \binom{U_i}{F} \setminus \mathcal F_i$ to be the copies of $F$ which do not lie in our desired collection. We will upper bound the number of copies $S$ of $F$ in $\mathcal B_i$ which appear in $G_2$, such that each vertex of $S$ lies in a different copy of $H$ in $\mathcal D_i$. In order to do this, we return to analyse our construction of $\mathcal D_i$ and in particular our use of $G(n,p)^H_q$. Let $\mathcal K_i$ be the collection of $v_F$-sets in $U_i$ which host a copy of $F$ in $\mathcal B_i$, and note that $|\mathcal K_i| \le \delta|U_i|^{v_F}$. We say a set $K \in \mathcal K_i$ is dangerous if each vertex of $K$ is contained in a distinct copy of $H$ in $\mathcal D_i$. In order to be dangerous, a set $K$ has to lie in a transversal of a copy of $\bigsqcup_{v_F} H$ in $G(n,p)^H_q$ restricted to $U_i$ (see the paragraph before Lemma 3.6 for the relevant definitions). Therefore, in order to upper bound the number of dangerous sets, it suffices to upper bound the number of $\mathcal K_i$-spanning copies of $\bigsqcup_{v_F} H$ in $G(n,p)^H_q[U_i]$. It follows then from Lemma 3.6 that for all $i \in [t]$, there are at most $\tilde n_i^{v_F}/(8 v_F!)$ dangerous sets with high probability, using that $\delta < \delta_0$. As for a fixed $i \in [t]$ this holds with probability $1 - \exp\big(-\Omega\big(n^{\ell/(2 v_F e_H)}\big)\big)$ and $t \le \exp(n^c)$, we can conclude that there are at most $\tilde n_i^{v_F}/(8 v_F!)$ dangerous sets for each $i \in [t]$ with high probability.
Finally, we calculate how many copies of $F$ in $G_2$ are hosted on dangerous sets. For a fixed $i$, we consider $G_2$ restricted to the vertex set $D_i := \bigcup_{T \in \mathcal D_i} V(T)$. We have that $|D_i| = v_H \tilde n_i$ and from the previous paragraph we may assume that there are at most $\tilde n_i^{v_F}/8$ potential copies of $F$ on dangerous sets in $D_i$. Each of these appears with probability $p^{e_F}$, and by Lemma 3.3 we have that with probability at least $1 - \exp\big(-\Omega\big(n^{\ell/e_F}\big)\big)$, there are at most $\tilde n_i^{v_F} p^{e_F}/4$ copies of $F$ in $G_2$ which are hosted on dangerous sets. The failure probability here follows from a calculation of the appropriate $\Phi_{F, p_2}$ similar to the calculation of $\tilde\Phi$ above. Thus we can take a union bound to conclude that for all $i \in [t]$, there are at most $\tilde n_i^{v_F} p^{e_F}/4$ copies of $F \in \mathcal B_i$ which lie in $D_i$, whose edges appear in $G_2$ and whose vertices are contained in distinct copies of $H$ in $\mathcal D_i$. Thus with high probability, for all $i \in [t]$ and for all $W \in \mathcal W_i$, there is a copy $T \in \binom{W}{F} \cap \mathcal F_i$ of $F$ whose edges appear in $G_2$, as required.
Concluding remarks
In this paper, we have determined, at essentially every density $d$, the perturbed vertex Ramsey threshold $p(n; K_r, H, d)$ for cliques versus arbitrary graphs. One could investigate how these thresholds change with the introduction of additional colours, but the most pressing problem that remains open is to extend our results to all pairs of graphs $(F, H)$, with the symmetric case $F = H$ of particular interest. Our methods do provide lower and upper bounds on the threshold in the general case, which we discuss below.
We start with the 1-statement, where we wish to know what $p$ ensures $G_n \cup G(n,p)$ is $(F, H)_v$-Ramsey when $G_n$ is a graph of density more than $1 - 1/(k-1)$. Recall that in our algorithmic proof of the 1-statement in Theorem 1.11, we worked in an $\varepsilon$-regular $k$-tuple in $G_n$, using the vertex Ramsey properties of the random graph in each part to iteratively grow a red clique or try to build a copy of $H$.
In the general setting, when we seek a red copy of $F$ instead, we can adopt the same approach. The main difference is that there are many ways we could try to build $F$ over the $k$ parts. To keep track of these, we define a partial partition of $F$ to be a partition of the vertices $V(F) = U_1 \cup \dots \cup U_k \cup W$, where $W \ne \emptyset$. This represents the stage in the algorithm where we have found red subgraphs $F[U_i]$ in the parts $V_i$, and $W$ represents the vertices of $F$ that are still missing. Thus, when we try to extend this red subgraph, we will require $G(n,p)[V_i]$ to be $(F[U_i \cup \{u_i\}], H_i)_v$-Ramsey for some optimal choice of $u_i \in W$ and partition $H = H_1 \cup \dots \cup H_k$. In this way, we either get one vertex closer to having a red copy of $F$, or we find one of the parts we need for a blue copy of $H$. Note that when $F = K_r$, all that matters is the size $|U_i|$ and not the set $U_i$ itself, since each induced subgraph of $K_r$ is itself a clique. Hence we recover the bound of Theorem 1.11. Unfortunately, the resulting upper bound (4.1) need not be tight. For instance, when $F$ and $H$ are complete bipartite graphs, it is not hard to see that $m^*(F, H; 4) = 0$. However, by considering sets $U_i$ that, for each $i$, span both colour classes of $F$, we can ensure that each subgraph $F[U_i]$ has edges, which results in the right-hand side of (4.1) being positive.
One issue with (4.1) is that it considers all partial partitions of $F$, but we need only maximise over those that could feasibly arise in the algorithm. While it may not be easy to describe these partitions explicitly, we can construct the family of feasible partial partitions recursively.
To do so formally, we define the extension function $f$ which, given the sets $U_1, \dots, U_k$ of a partial partition of $F$, returns the vertices $(u_1, \dots, u_k) \in W^k$ that are used to extend the red subgraph. For each such extension function, we can build the family $\mathcal F(f)$ of feasible partitions in the following way. We start with $(\emptyset, \dots, \emptyset) \in \mathcal F(f)$. Then, for each $i \in [k]$, we add $(U_1, \dots, U_{i-1}, U_i \cup \{f(\mathbf U)_i\}, U_{i+1}, \dots, U_k)$ to $\mathcal F(f)$, provided this is still a partial (and not complete) partition of $F$. Note that this represents the larger red subgraph we would obtain if, in $G(n,p)[V_i]$, we were to find a monochromatic red subgraph when applying the $(F[U_i \cup \{f(\mathbf U)_i\}], H_i)_v$-Ramsey property.
We then need only maximise over the feasible partitions $\mathcal F(f)$, and can choose the extension function $f$ that gives the lowest possible threshold. That is, we have the upper bound (4.2). Note again that in the case $F = K_r$, the choice of extension function $f$ is irrelevant, since all that matters are the sizes $|U_i|$. This is strictly better than (4.1), as one can find an extension function $f$ that shows $m^*(F, H; 4) = 0$ whenever $F$ and $H$ are bipartite. Unfortunately, even (4.2) need not be tight, as we should also have $m^*(F, H; 3) = 0$ for such $F$ and $H$, but the right-hand side is positive when we only have three parts. It would therefore be very interesting to find a sharper bound for the 1-statement. A useful step in that direction could be to characterise which extension functions $f$ are optimal for a given graph $F$.
In the other direction, we can provide lower bounds on $m^*(F, H; k)$ by generalising the colouring we gave in proving the 0-statement of Theorem 1.11. We shall once again take $G_n$ to be a complete $k$-partite graph, and will describe how one can colour the vertices of the random graphs $G(n,p)[V_i]$ to avoid both a red $F$ and a blue $H$ in $G_n \cup G(n,p)$.
To this end, we call a $k$-tuple $(\mathcal F_1, \dots, \mathcal F_k)$ of families of nonempty graphs a $k$-cover of $F$ if, for any $k$-partition $V(F) = U_1 \cup \dots \cup U_k$ of the vertices of $F$, there is some $i \in [k]$ such that $F[U_i] \in \mathcal F_i$. That is, a $k$-cover is a collection of induced subgraphs that are bound to appear in any $k$-partition of $F$.
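Whether a given tuple of families is a $k$-cover can again be checked by brute force on small graphs; the lower bound below rests on exactly this combinatorial property. A sketch (exponential in $v_F$, illustration only):

```python
from itertools import product
import networkx as nx

def is_k_cover(F, families):
    """Check that (F_1,...,F_k) is a k-cover of F: every k-partition of V(F)
    has a part U_i with F[U_i] isomorphic to a member of F_i."""
    nodes = list(F.nodes)
    k = len(families)
    for assignment in product(range(k), repeat=len(nodes)):
        covered = False
        for i in range(k):
            Ui = [v for v, a in zip(nodes, assignment) if a == i]
            if Ui and any(nx.is_isomorphic(F.subgraph(Ui), M)
                          for M in families[i]):
                covered = True
                break
        if not covered:
            return False
    return True

K = nx.complete_graph
# For F = K_4 and (r_1, r_2) = (1, 2): F_1 = {K_2, K_3, K_4}, F_2 = {K_3, K_4}.
# In any 2-partition, either |U_1| >= 2 (a clique in F_1) or |U_2| >= 3
# (a clique in F_2), so this is a 2-cover.
print(is_k_cover(K(4), [[K(2), K(3), K(4)], [K(3), K(4)]]))   # True
```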
Given this definition, we have the following lower bound:
$$m^*(F, H; k) \ge \max_{(\mathcal F_1, \dots, \mathcal F_k)} \; \min_{H = H_1 \cup \cdots \cup H_k} \; \max_{i : H_i \ne \emptyset} \; \min_{F' \in \mathcal F_i} \beta(F', H_i), \qquad (4.3)$$
where the first maximum is over all $k$-covers of $F$. When $F = K_r$, this recovers the bound from Theorem 1.11, since we have $k$-covers of the form $\mathcal F_i = \{K_{r_i + 1}, \dots, K_r\}$, where $\sum_i r_i = r - 1$. To describe the colouring in the general case, fix a maximising $k$-cover $(\mathcal F_1, \dots, \mathcal F_k)$, let $\beta^*$ be the right-hand side of (4.3), and let $p = o(n^{-1/\beta^*})$. For each $i \in [k]$, we define $\mathcal H_i = \{H' \subseteq H : \forall F' \in \mathcal F_i, \; \beta(F', H') \ge \beta^*\}$. As before, one can argue that $H \in \mathcal H_i$, and so these families are all nonempty. Applying Proposition 2.1, we can colour the vertices of $G(n,p)[V_i]$ so as to avoid any red graph from $\mathcal F_i$ and any blue graph from $\mathcal H_i$.
It is now tautological that this colouring of $G_n \cup G(n,p)$ has neither a red $F$ nor a blue $H$. Suppose for contradiction there is a red copy of $F$, partitioned as $F = F_1 \cup \dots \cup F_k$. Since $(\mathcal F_1, \dots, \mathcal F_k)$ is a $k$-cover of $F$, there is some $i$ with $F_i \in \mathcal F_i$, but then there is no red $F_i$ in $G(n,p)[V_i]$. On the other hand, if there is a blue $H$, partitioned as $H = H_1 \cup \dots \cup H_k$, then we must have some $i \in [k]$ such that $H_i \ne \emptyset$ and $\beta(F', H_i) \ge \beta^*$ for all $F' \in \mathcal F_i$. But then $H_i \in \mathcal H_i$, and so there is no blue $H_i$ in $G(n,p)[V_i]$ either.
The challenge arises from the fact that when $F$ is not a clique, there could be many ways to partition it into $k$ induced subgraphs, and so there will be a wide variety of complicated $k$-covers.
"Mathematics"
] |
Non-proximity resonant tunneling in multi-core photonic band gap fibers: An efficient mechanism for engineering highly-selective ultra-narrow band pass splitters
The objective of the present investigation is to demonstrate the possibility of designing compact ultra-narrow band-pass filters based on the phenomenon of non-proximity resonant tunneling in multi-core photonic band gap fibers (PBGFs). The proposed PBGF consists of three identical air-cores separated by two defected air-holes which act as highly-selective resonators. With a fine adjustment of the design parameters associated with the resonant air-holes, phase matching at two distinct wavelengths can be achieved, thus enabling very narrow-band resonant directional coupling between the input and the two output cores. The validity of the proposed design is ensured with an accurate PBGF analysis based on finite element modal and beam propagation algorithms. Typical characteristics of the proposed device for a single polarization are: a reasonably short coupling length of 2.7 mm, a dual bandpass transmission response at wavelengths of 1.339 and 1.357 μm with corresponding full width at half maximum bandwidths of 1.2 nm and 1.1 nm, respectively, and a relatively high transmission of 95% at the exact resonance wavelengths. The proposed ultra-narrow band-pass filter can be employed in various applications such as all-fiber bandpass/bandstop filtering and resonant sensors.

©2006 Optical Society of America
OCIS codes: (060.2430) Fibers, single mode; (999.9999) Photonic crystal fiber

References and links
1. P. St. J. Russell, "Photonic crystal fibers," Science 299, 358-362 (2003).
2. S. Kawanishi, T. Yamamoto, H. Kubota, M. Tanaka, and S. Yamaguchi, "Dispersion controlled and polarization maintaining photonic crystal fibers for high performance network systems," IEICE Trans. Electron. E87-C, 336-342 (2004).
3. B. J. Mangan, J. C. Knight, T. A. Birks, P. St. J. Russell, and A. H. Greenaway, "Experimental study of dual-core photonic crystal fibre," Electron. Lett. 36, 1358-1359 (2000).
4. W. N. MacPherson, J. D. C. Jones, B. J. Mangan, J. C. Knight, and P. St. J. Russell, "Two-core photonic crystal fiber for Doppler difference velocimetry," Opt. Commun. 233, 375-380 (2003).
5. K. Kitayama and Y. Ishida, "Wavelength-selective coupling of two-core optical fiber: application and design," J. Opt. Soc. Am. A 2, 90-94 (1985).
6. R. Zengerle and O. G. Leminger, "Narrow-band wavelength-selective directional couplers made of dissimilar single-mode fibers," J. Lightwave Technol. LT-5, 1196-1198 (1987).
7. E. Eisenmann and E. Weidel, "Single-mode fused biconical couplers for wavelength division multiplexing with channel spacing between 100-300 nm," J. Lightwave Technol. LT-6, 113-119 (1988).
8. K. Thyagarajan, S. D. Seshadri, and A. K. Ghatak, "Waveguide polarizer based on resonant tunneling," J. Lightwave Technol. 9, 315-317 (1991).
9. K. Saitoh, N. Florous, M. Koshiba, and M. Skorobogatiy, "Design of narrow band-pass filters based on the resonant-tunneling phenomenon in multi-core photonic crystal fibers," Opt. Express 13, 10327-10335 (2005). http://www.opticsinfobase.org/abstract.cfm?URI=oe-13-25-10327
10. M. Skorobogatiy, K. Saitoh, and M. Koshiba, "Transverse light guides in microstructured optical fibers," Opt. Lett. 31, 314-316 (2006).
11. K. Saitoh and M. Koshiba, "Full-vectorial imaginary-distance beam propagation method based on a finite element scheme: application to photonic crystal fibers," IEEE J. Quantum Electron. 38, 927-933 (2002).
12. K. Saitoh and M. Koshiba, "Full-vectorial finite element beam propagation method with perfectly matched layers for anisotropic optical waveguides," J. Lightwave Technol. 19, 405-413 (2001).
13. K. Saitoh and M. Koshiba, "Leakage loss and group velocity dispersion in air-core photonic bandgap fibers," Opt. Express 11, 3100-3109 (2003). http://www.opticsinfobase.org/abstract.cfm?URI=oe-11-23-3100
14. N. Florous, K. Saitoh, and M. Koshiba, "A novel approach for designing photonic crystal fiber splitters with polarization-independent propagation characteristics," Opt. Express 13, 7365-7373 (2005). http://www.opticsinfobase.org/abstract.cfm?URI=oe-13-19-7365
15. S. K. Varshney, N. Florous, K. Saitoh, and M. Koshiba, "The impact of elliptical deformations for optimizing the performance of dual-core fluorine-doped photonic crystal fiber couplers," Opt. Express 14, 1982-1995 (2006). http://www.opticsinfobase.org/abstract.cfm?URI=oe-13-19-7365
16. T. Tjugiarto, G. D. Peng, and P. L. Chu, "Bandpass filtering effect in tapered asymmetrical twin-core optical fibers," Electron. Lett. 29, 1077-1078 (1993).
17. B. Wu and P. L. Chu, "Narrow-bandpass filter with gain by use of twin-core rare-earth-doped fiber," Opt. Lett. 18, 1913-1915 (1993).
18. B. Ortega and L. Dong, "Accurate tuning of mismatched twin-core fiber filters," Opt. Lett. 23, 1277-1279 (1998).
Introduction
In the last decade, photonic crystal fibers (PCFs) [1], also known as microstructured optical fibers (MOFs) or holey fibers, have attracted considerable attention because they provide unprecedented degrees of freedom in engineering their modal characteristics. Although PCFs are usually formed by a central defect region surrounded by multiple air-holes arranged in a regular triangular lattice, recent advances in PCF manufacturing technology, such as the multiple-capillary drawing method [2], can readily realize multi-core PCFs [3], [4].
An important element in all-optical fiber communication systems and all-optical fiber measurements is a wavelength-selective fiber device such as a fiber filter. A number of fabrication techniques have been used so far for realizing such devices [5]-[7]. The operating principle of conventional fiber filters typically involves transferring energy over a coupling length between two distinct fiber cores coupled by proximity interaction. In this case, however, the modes in closely separated individual cores are phase-matched at all wavelengths, making it difficult to engineer bandpass filtering characteristics. It was recently shown that efficient band-pass filters can be realized based on the resonant tunneling phenomenon [8] in multi-core PCFs [9].
One of the major trends in the development of all-fiber devices is the increasing number of functionalities in a single fiber. The ultimate goal is to fabricate in a single draw a complete all-fiber component provisioned at the preform level. Some of the advantages of all-fiber devices are simplified packaging, the absence of sub-component splicing losses, and environmental stability due to the absence of free-space optics. While the benefits of integrated all-fiber devices are significant enough to encourage the development of increasingly complex components, the major roadblock to their realization is the unavoidable complexity of the required transverse refractive-index profile. These challenges can be met to a certain degree by using a novel class of microstructured optical fiber coupler recently introduced in Ref. [10], which operates by resonant rather than proximity coupling, with energy transfer realized via transverse lightguides integrated into the fiber's cross-section. Such a design allows unlimited spatial separation between interacting fibers, which in turn eliminates inter-core crosstalk via proximity coupling. Controllable energy transfer between fiber cores is then achieved via highly directional transmission through the transverse lightguides. The main advantage of this coupling mechanism is its inherent scalability: additional fiber cores can be integrated into the existing fiber cross-section simply by placing them far enough from the existing circuitry to avoid proximity crosstalk, and then making the necessary inter-core connections with transverse light "wires", in direct analogy to on-chip electronics integration.
Based on the above-mentioned benefits of the resonant coupling mechanism, in the present paper we describe a novel design approach for realizing resonant bandpass filters in multicore photonic band gap fibers (MC-PBGFs) based on the highly-selective resonant tunneling mechanism. The MC-PBGF consists of three identical air-cores separated by two defected air-holes (resonators). By adjusting the sizes of the resonant air-holes, phase matching at two distinct wavelengths can be achieved between the input and output cores, enabling highly-selective narrowband resonant directional coupling. Although other mature technologies, such as a fiber Bragg grating (FBG) with a circulator, have been successfully used to realize narrow-band filters, one of the appealing properties of multicore PBGF technology is its temperature and strain insensitivity in comparison to FBGs. Through an efficient modal [11] and beam propagation analysis [12] based on the finite element method (FEM), we theoretically investigate the possibility of synthesizing efficient ultra-narrow-band splitters suitable for filtering applications.
The paper is organized as follows: in Section 2 we introduce the device concept and give exact design guidelines for achieving the narrow bandpass filtering characteristics. In Section 3 we validate the design's performance with numerical simulations based on FEM algorithms. In Section 4 we briefly address the possibility of realizing a polarization-independent splitter operating at a single wavelength, based on the prescribed MC-PBGF technology. Conclusions follow in Section 5, together with some suggestions for future investigations.
Schematic representation and design guidelines for engineering MC-PBGF splitters
Fig. 1. Topology of a three-core PBG fiber splitter utilizing a non-proximity resonant tunneling coupling mechanism. The air-holes in the cladding are arranged in a triangular configuration with pitch constant Λ and air-hole diameters d. As the input core we consider the middle core-A, while B and C are the output cores. Two dissimilar transverse resonators with diameters d1 (green) and d2 (red) are introduced by reducing (high index defects) the diameters of the air-holes in the middle of the lines joining the cores. By a judicious choice of the design parameters this multicore PBGF can act as an ultra-narrow dual bandpass filter at a very short coupling length.

The cross-section of the structure under investigation is shown in Fig. 1. The hollow cores are formed in a silica-based MOF with a cladding refractive index n = 1.45 by removing two rows of tubes and smoothing the resulting core edges. The pitch constant is chosen to be Λ = 2 μm, the normalized air-hole diameter in the cladding is d/Λ = 0.9, and there are six hole layers in the cladding in total. The fundamental band gap, in which the core-guided modes are found, extends over 1.29 μm < λ < 1.40 μm [13]. A splitter operating at two distinct wavelengths can be formed by placing three hollow cores N = 5 periods apart from each other (along the x axis), as shown in Fig. 1. Two dissimilar transverse resonators with normalized diameters d1/Λ and d2/Λ are then introduced by reducing (high index defects) the diameters of the air-holes in the middle of the lines joining the cores. Using an accurate modal analysis performed with an FEM solver [11], in Fig. 2(a) we evaluate the effective indexes of the x-polarized (horizontally polarized) fundamental mode (blue solid curve) as well as the x-polarized excited resonant modes (red dashed curves), as functions of the operating wavelength and for several incremental values of the resonator's normalized diameter dr/Λ, ranging from 0.6 to 0.8. Note that the effective refractive index of the fundamental mode was computed assuming the core to be isolated, while the resonant excited modes were calculated assuming the resonators to be isolated from the cores. This approximation was confirmed to give accurate results when compared with the performance of the coupled system (cores plus resonators). We can clearly see that the effective index of the fundamental mode is crossed at certain wavelengths by the excited resonant modes. The physical interpretation of these crossings is that the excited modes at wavelengths λ1, λ2, λ3, … , λn, corresponding to different normalized resonator diameters dr/Λ, can be effectively
transferred via resonant coupling through the dissimilar resonators, from the central input core-A into the output cores B and C. This simply means that for a given normalized resonator diameter dr/Λ there exists a wavelength at which resonant tunneling can be achieved between the input (core-A) and output (cores B or C) through the resonator. To identify the evolution of the resonance wavelengths as the resonator's diameter changes, in Fig. 2(b) we plot the resonance wavelength as a function of the resonator's normalized diameter dr/Λ. From the results in Fig. 2(b) we observe that as the resonator's diameter decreases, the resonance wavelength increases. Because the resonance wavelength must lie within the PBG region (indicated by the grey boundaries), the possible values of the resonator's diameter for the x-polarized state are in the range 0.62 < dr/Λ < 0.8. The same calculations are repeated in Figs. 3(a) and 3(b) for the y-polarization (vertically polarized) state. By comparing the results in Figs. 2(b) and 3(b) we can see a great difference between the two polarizations in the behavior of the resonance-wavelength evolution curves. From Fig. 3(b) it is evident that the range of allowable normalized resonator diameters for the y-polarized mode that yields resonance wavelengths within the PBG of the structure is significantly larger: 0 < dr/Λ < 0.86. In addition, a very interesting phenomenon occurs: the resonance wavelength is insensitive to the resonator diameter over the range 0.1 < dr/Λ < 0.4, where it remains almost constant at λres = 1.372 μm. The physical explanation for this drastic difference between the two polarization states lies in the field symmetry of each polarization: while the y-polarization has an even-type profile, the x-polarization differs significantly because its profile is anti-symmetric (odd), as will be demonstrated qualitatively later on. In conclusion, in this section we have described the basic principle of operation of this type of multi-core PBGF splitter, identified the impact of each polarization state on the resonance wavelength of the coupled system consisting of the input core and the resonator, and determined the range of resonator diameters that yields a resonance wavelength within the PBG of the structure.
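Since the resonance wavelength is simply the point where two dispersion curves cross, locating it numerically reduces to a root-finding problem. The following Python sketch illustrates this step under stated assumptions: the two effective-index curves are illustrative placeholders (in practice they would be tabulated output of an FEM mode solver such as that of Ref. [11]), and the crossing is found by cubic interpolation and Brent's method.

```python
# Sketch: locating the phase-matching (resonance) wavelength where the
# effective index of the isolated core mode crosses that of an isolated
# resonator mode. The index curves below are illustrative placeholders.
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import brentq

# Hypothetical dispersion data sampled inside the band gap (1.29-1.40 um)
wavelengths = np.linspace(1.29, 1.40, 23)            # um
n_eff_core = 0.99 - 0.30 * (wavelengths - 1.29)      # core fundamental mode
n_eff_res  = 0.97 - 0.10 * (wavelengths - 1.29)      # resonator excited mode

core = interp1d(wavelengths, n_eff_core, kind="cubic")
res  = interp1d(wavelengths, n_eff_res,  kind="cubic")

# Root of the index difference = curve crossing = resonance wavelength
lam_res = brentq(lambda lam: float(core(lam) - res(lam)), 1.29, 1.40)
print(f"resonance wavelength ~ {lam_res:.4f} um")
```

Sweeping dr/Λ and repeating this search for each resonator diameter would trace out evolution curves of the kind shown in Fig. 2(b).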
Numerical results and device performance
After explaining the basic operational principle of the device under consideration, we proceed by investigating its spectral as well as propagation characteristics. The operation of this bandpass filter can alternatively be understood in terms of the supermodes of the "three-core" directional splitter (that is, the system composed of two air-cores and one resonator). If the individual cores of the splitter are single-moded, the coupler structure supports three supermodes: two symmetric and one anti-symmetric for the y-polarization, and two anti-symmetric and one symmetric for the x-polarization, with corresponding fields φ1, φ2, φ3 shown qualitatively in Fig. 4. Let neff,1, neff,2, and neff,3 represent the effective refractive indices of the supermodes corresponding to the fields φ1, φ2, φ3, respectively, for each of the polarization states. Assuming that initially all the energy is in the input core-A, this corresponds to the excitation of a supermode combination of the following type (up to normalization):

ψ(x, y, 0) = φ1(x, y) + φ2(x, y) + φ3(x, y).

After propagation over a distance z, this excitation pattern evolves into

ψ(x, y, z) = φ1 exp(−jβ1z) + φ2 exp(−jβ2z) + φ3 exp(−jβ3z),

where βi = 2πneff,i/λ0 (i = 1, 2, 3) and λ0 is the operating wavelength. If we design the PBGF so that the effective refractive indexes of its supermodes are equally spaced,

neff,1 − neff,2 = neff,2 − neff,3,

complete power transfer from core A to core B or C can be achieved at the exact resonance wavelength λ0 by choosing the mode propagation length z = Lc, with

Lc = λ0 / (neff,1 − neff,3).

In Fig. 6 we plot the spectral characteristics of this novel type of PBGF splitter with a total fiber length of 2.7 mm, for the y-polarized state, using an accurate analysis based on the BPM algorithm [12]. From these results we observe a dual band-pass transmission response centered at the prescribed wavelengths λ1,y = 1.339 μm and λ2,y = 1.357 μm. The high selectivity of the filter's response obtained in this case indicates the potential of the non-proximity resonator states to synthesize highly selective resonant coupling characteristics. The full width at half maximum (FWHM) bandwidths of this filter are 1.2 nm and 1.1 nm for the y-polarized state at wavelengths λ1,y = 1.339 μm and λ2,y = 1.357 μm, respectively, while for the x-polarized state the FWHM bandwidths were found to be slightly smaller. In both cases a transmission of about 95% at the resonance wavelengths λ1 and λ2 could be achieved. The difference in the FWHM bandwidths between the two polarization states is associated with the larger coupling length of the x-polarization compared to the y-polarization (see Fig. 5), which in general results in weaker coupling between the input core-A and the output cores B or C for the x-polarization, and thus a narrower FWHM.
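To make the supermode picture concrete, the short Python sketch below propagates the three-supermode superposition analytically and recovers complete power transfer at z = Lc. The effective-index values are illustrative placeholders chosen to be equally spaced and to give Lc ≈ 2.7 mm, matching the coupling length quoted above; the modal-overlap amplitudes (1/4, 1/2, 1/4) are the textbook decomposition for a symmetric three-waveguide coupler, not values taken from this paper.

```python
# Sketch: power evolution from supermode beating in a three-waveguide coupler.
# Supermode indexes are hypothetical, equally spaced (n1 - n2 = n2 - n3), and
# chosen so that Lc = lam0 / (n1 - n3) comes out near the quoted 2.7 mm.
import numpy as np

lam0 = 1.339e-6                               # operating wavelength [m]
n1, n2, n3 = 0.99025, 0.99000, 0.98975        # supermode effective indexes
beta = 2 * np.pi * np.array([n1, n2, n3]) / lam0

Lc = lam0 / (n1 - n3)                         # coupling length from the text
z = np.linspace(0.0, Lc, 501)

# Input core: in-phase sum of the supermodes; output core: phi2 sign flipped.
a_in = (np.exp(-1j * beta[0] * z) + 2 * np.exp(-1j * beta[1] * z)
        + np.exp(-1j * beta[2] * z)) / 4
a_out = (np.exp(-1j * beta[0] * z) - 2 * np.exp(-1j * beta[1] * z)
         + np.exp(-1j * beta[2] * z)) / 4

print(f"Lc ~ {Lc * 1e3:.2f} mm")              # ~2.68 mm
print(f"P_in(Lc)  ~ {abs(a_in[-1])**2:.3f}")  # ~0: the input core empties
print(f"P_out(Lc) ~ {abs(a_out[-1])**2:.3f}") # ~1: power reaches the output
```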
To visualize the power splitting mechanism in our proposed PBGF splitter, in Fig. 7 we plot the coupling characteristics of the field distribution at different propagation distances, obtained with the BPM [12]. Specifically, Fig. 7 shows snapshots of the y-polarized electric field distribution (Ey) at λ1,y = 1.339 μm and λ2,y = 1.357 μm, calculated at distances z = 0, 1.0, 1.5, and 2.0 mm, and finally at the coupling length z = Lc = 2.7 mm. We can clearly observe that at the coupling length Lc = 2.7 mm the two wavelengths are split into the output cores B and C, with a power decrement of about 5% from the targeted 100% that is associated with the slight difference between the partial coupling lengths at the two operating wavelengths.
Realization of polarization-insensitive PBGF splitters operating at a single wavelength
In the previous sections we devoted our efforts to designing PBGF splitters operating at two distinct wavelengths and for two different polarization states. One of the main conclusions was that the choice of polarization strongly affects the device propagation characteristics. Recently there has been much effort toward realizing polarization-independent splitters based on PCF technology [14], [15]. In this section we show that, with an appropriate selection of the design parameters, the prescribed multicore PBGF topology can achieve polarization-independent propagation characteristics at a single operating wavelength. We combine the results of Figs. 2(b) and 3(b) to generate Fig. 8, for the structure shown in Fig. 9, which shows the evolution of the resonance-wavelength curves as a function of the resonator's normalized diameter dr/Λ for the x-polarization (blue curve) and the y-polarization (red curve). The two curves cross at a unique point, dr/Λ = 0.75, with a resonance wavelength of λres = 1.325 μm. This unique choice of resonator diameter leads to polarization-independent propagation characteristics for the structure of Fig. 9 at the prescribed single resonance wavelength. Thus, by choosing dr/Λ = 0.75, the resulting structure operates as an effectively 100% coupler from core-A into core-B, with polarization-independent propagation at a single wavelength. The corresponding coupling length was calculated by modal analysis to be Lc = 22.3 mm, short enough for most practical applications. To verify the exact coupling length, in Fig. 10 we simulate the normalized power propagation along the PBGF splitter using an accurate BPM algorithm [12]. Specifically, Fig. 10(a) shows the normalized power in the input core-A at the resonance wavelength λres = 1.325 μm, for the x-polarization (blue curve) and the y-polarization (red curve), as a function of the propagation distance in mm; the same simulation is shown in Fig. 10(b) for the output core-B. From these results we see that at the coupling length Lc = 22.3 mm the power is transferred from the input core-A to the output core-B with a transmission of more than 90%, independent of the polarization state of the input signal. The main conclusion of this section is that the proposed MC-PBGF technology indeed has the potential to realize polarization-independent devices through a judicious choice of the design parameters.
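The polarization-independent operating point is again a curve-crossing problem: it is the resonator diameter at which the x- and y-polarization resonance-wavelength curves of Figs. 2(b) and 3(b) intersect. The sketch below illustrates the search; the two curves are modeled as hypothetical monotone trends standing in for the actual FEM data, with placeholder slopes that happen to put the crossing near the values quoted in the text.

```python
# Sketch: finding the resonator diameter at which the x- and y-polarization
# resonance-wavelength curves intersect (the polarization-independent point).
# The curve data are illustrative placeholders for Figs. 2(b)/3(b).
import numpy as np
from scipy.interpolate import interp1d
from scipy.optimize import brentq

d_over_L = np.linspace(0.62, 0.80, 10)
# Hypothetical monotone trends: resonance wavelength grows as d_r shrinks
lam_x = 1.400 - 0.50 * (d_over_L - 0.62)   # x-polarization curve [um]
lam_y = 1.372 - 0.30 * (d_over_L - 0.62)   # y-polarization curve [um]

fx = interp1d(d_over_L, lam_x, kind="cubic")
fy = interp1d(d_over_L, lam_y, kind="cubic")

# Root of the wavelength difference = polarization-independent design point
d_star = brentq(lambda d: float(fx(d) - fy(d)), 0.62, 0.80)
print(f"d_r/Lambda ~ {d_star:.3f}, lambda_res ~ {float(fx(d_star)):.4f} um")
```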
Conclusions
To summarize our work, we have proposed and numerically investigated the propagation properties of a novel bandpass filter based on the resonant tunneling phenomenon in a three-core PBGF. To the best of our knowledge, the design strategy of realizing multi-core couplers based on the resonant tunneling effect in PBGFs is reported in the international literature for the first time. Results of a full vectorial finite element modal analysis, confirmed by BPM simulations, have been presented for a variety of quantities related to the fiber's propagation characteristics. The high suppression of the side-lobes in comparison to previously reported filters based on conventional fiber technology [16]-[18], as well as the ultra-narrowband response and the reasonably short coupling length, are the main advantages of the proposed PBGF architecture. Additionally, we have shown that the MC-PBGF platform can even achieve polarization-independent propagation characteristics at a single operating wavelength. Fabrication of the proposed fiber may be somewhat demanding at the present stage of technology, but we strongly believe that with the more advanced fabrication technologies expected in the near future, the proposed MC-PBGF splitter will be a feasible, if challenging, task for experimentalists. Our three-core PBGF coupler can be employed in multifunctional all-fiber bandpass/bandstop filtering applications. A generalized wavelength splitter based on the resonant tunneling effect in multi-core PBGFs, with multiple integrated resonators in its profile, is proposed for future analysis in Fig. 11. The operating principle is exactly the same as that of the splitter in Fig. 1. By an appropriate choice of the design parameters, the generalized wavelength splitter in Fig. 11 can perform a four-wavelength splitting operation within a reasonably short coupling length, appropriate for the realization of multi-operational all-fiber devices. We believe that the inclusion of multiple integrated components in a single fiber for multifunctional purposes is a challenging problem and currently an active research topic in our group.

Fig. 11. Topology of the proposed five-core PBGF splitter utilizing a non-proximity resonant tunneling coupling mechanism. As the input core we consider the central core-A, while B, C, D and E are the output cores. Four dissimilar transverse resonators with diameters d1 (yellow), d2 (green), d3 (teal), and d4 (red) are introduced by reducing (high index defects) the diameters of the air-holes along the lines joining the cores. By a judicious choice of the design parameters this multicore PBGF can perform an ultra-narrow bandpass filtering operation at four distinct wavelengths with a reasonably short coupling length.
"Physics"
] |
The relationship between renewable energy consumption, international tourism, trade openness, innovation and carbon dioxide emissions: international evidence
ABSTRACT The main concerns for nations worldwide in accomplishing long-term development goals are avoiding the adverse effects of climate change, lowering emissions, and enhancing environmental sustainability. This study examines the effects of 53 nations' use of renewable energy, trade openness, international tourism, and innovation on carbon emissions from 1990 to 2019, using FMOLS, DOLS, and GMM-System estimators. The results demonstrate that renewable energy, trade openness, and innovation reduce emissions, whereas international travel increases environmental degradation. The findings have major policy implications for adopting trade openness policies to enhance environmental quality. The finding that innovation enhances environmental quality in both country groups may be notable for the authorities. The results suggest that renewable energy and technological innovation are essential for sustainable development.
Introduction
The consumption of energy and economic expansion are closely related. Energy use causes a rise in greenhouse gas (GHG) emissions. According to United Nations climate change studies, fossil energy is the main contributor to global climate change, responsible for approximately 90% of all CO2 emissions and more than 75% of all greenhouse gas emissions. As a result, relying on energy use to fuel economic expansion could harm the environment. Recently, growing worries about the close link between climate change and economic growth have been sparked by the acceleration of GHG emissions in developing nations. Hope (2006) argued that although climate change might initially have some positive influences for many developed nations, it will be harmful to the environment in the long run. According to Grossman and Krueger (1995) and Hitz and Smith (2004), economic growth can bring an initial period of degradation; still, the adoption of better technologies may later enhance environmental quality. Many studies have examined the progress and success of government programs related to investment in and development of innovative technology to mitigate the effects of climate change. Some studies, such as Raihan and Voumik (2022) for India and Ibrahim and Mohammed (2022) for the Gulf countries, suggest that technological innovation reduces carbon emissions: increasing the number of patents may enhance environmental quality. However, some researchers, such as Demircan Çakar et al. (2021), claim that innovation increases carbon emissions, while certain research finds that innovation has no effect on CO2 emissions (Álvarez-Herránz et al. 2017; Amri, Bélaïd, and Roubaud 2018; Khattak et al. 2020). Furthermore, innovation is one of the major factors promoting the economic growth of many countries (Khan et al. 2022), and according to Anakpo and Oyenubi (2022), innovation positively impacts economic growth. However, economic development is often accompanied by environmental degradation, because some policies that promote economic growth lead to increasing CO2 emissions (Raihan and Tuspekova 2022a; Raihan and Tuspekova 2022b). This is harmful to the quality of the environment and stimulates climate change. This study focuses on 53 countries worldwide and examines whether innovation helps lessen negative environmental impacts.
In addition, many studies have been conducted on carbon dioxide emissions and their different determinants, such as financial development, renewable energy consumption, urban population growth, trade openness, foreign direct investment, international tourism, economic growth, and innovation, for many different regions and countries: Ișik et al. (2020) for the G7 countries, Raihan and Voumik (2022) for India, Raihan and Tuspekova (2022c) for Russia, and Zhang et al. (2022) for China. Furthermore, the connection between renewable energy consumption, trade openness, innovation, and carbon dioxide emissions is one of the key concerns for researchers and policymakers in combating climate change. According to Li and Haneklaus (2022b) and Usman et al. (2022), trade openness has a positive influence on carbon emissions. Trade openness is considered a major factor in promoting economic growth; however, activities related to exports and imports also cause environmental degradation in some countries. Rising trade openness, foreign direct investment, and urban population growth reduce environmental quality in some countries because such countries encourage economic growth through poor policies. Moreover, many studies have argued that tourism is a vital factor in enhancing economic growth (Isik, Dogru, and Turk 2018; Dogru, Isik, and Sirakaya-Turk 2019; Işık et al. 2021; Işık et al. 2022), but tourism-related activities exacerbate climate change (Ișik et al. 2020). In addition, according to Batool et al. (2022), financial development and economic growth positively impact energy consumption in South Asian countries; that is, economic growth leads to an increase in energy consumption. However, energy consumption from fossil fuels can cause environmental degradation through increased carbon emissions and negatively influence the quality of the environment in some countries (Teng et al. 2021). According to Cao et al. (2022), Teng et al. (2021), and Khan et al. (2021), renewable energy consumption decreases carbon dioxide emissions and helps improve the quality of the environment: replacing fossil-fuel energy consumption with renewable energy helps reduce carbon emissions (Khan, Khan, and Binh 2020; Khan et al. 2020).
Our study contributes to the existing literature in several ways. First, previous studies on factors affecting CO2 emissions focused only on areas such as the OECD countries, BRICS, Mediterranean countries, G-20 countries, South Asian countries, ASEAN countries, Sub-Saharan African (SSA) countries, Gulf countries, China and India, and China and Indonesia. This study provides the first international evidence. Second, few previous studies cover the full set of factors affecting CO2 emissions for countries worldwide. For example, Raihan and Voumik (2022) focused only on financial development, renewable energy use, technological innovation, economic growth, and urbanisation in relation to carbon dioxide (CO2) emissions in India; Adebayo et al. (2022) focused on renewable energy consumption, trade openness, and carbon emissions in Sweden; Azam, Rehman, and Ibrahim (2022) concentrated only on the relationship between industrialisation, urbanisation, trade openness, and carbon emissions for OPEC economies; Chhabra, Giri, and Kumar (2022) focused only on the nexus between technological innovations, trade openness, and CO2 emissions for middle-income countries; Wang, Zhang, and Li (2023) concentrated on trade openness, human capital, renewable energy, natural resource rent, and CO2 emissions in 208 countries; while Weili, Bibi, and Khan (2022) investigated the effect of carbon dioxide, energy consumption, and economic growth on innovations. Therefore, this research comprises an all-inclusive analysis considering a range of factors that contribute to CO2 emissions, including renewable energy consumption, trade openness, urban population growth, international tourism, foreign direct investment, economic growth, and innovation. Finally, we examine the difference in the factors affecting CO2 emissions between two groups of developed and developing countries. Furthermore, the research results can help managers gain a better overview of the factors affecting CO2 emissions in individual countries, so that economic development policies can be suggested that ensure environmental sustainability.
This study examines the effects of 53 countries' use of renewable energy, trade openness, international tourism, and innovation on carbon dioxide emissions. Using FMOLS, the DOLS model, and the GMM-System estimator, panel data from 1990 to 2019 are employed. The results demonstrate that the consumption of renewable energy, trade openness, and innovation negatively influence carbon emissions, whereas international travel and economic expansion have a positive impact.
The remainder of this research is structured as follows: the empirical literature is reviewed in Section 2, Section 3 presents the study models and methods, the results and discussion of the findings are presented in Section 4, and Section 5 concludes the study.
Literature review
Much research has been conducted on carbon dioxide emissions and their different determinants, such as renewable energy consumption, urban population growth, trade openness, foreign direct investment, international tourism, economic growth, and innovation. Many studies show that energy consumption helps raise economic growth but also affects environmental quality, and that using renewable energy is one of the beneficial solutions for environmental quality. Likewise, many studies indicate that trade openness, foreign direct investment, international tourism, and innovation reduce environmental degradation, while some argue that such factors increase carbon emissions. We conduct a step-by-step literature review of the relationships between the factors in this study as follows.

Renewable energy consumption (REC), foreign direct investment (FDI), economic growth, and carbon dioxide emissions

Economic growth is the top goal of all countries, but its impact on environmental quality is also a matter of concern for policymakers (Banerjee 2022). Improvement in economic growth (GDP) leads to an increase in CO2 emissions (Ali, Abdul-Rahim, and Ribadu 2017; Mensah et al. 2018; Xu et al. 2016; Ali, Law, and Zannah 2016; Raihan and Tuspekova 2022a, 2022b). However, Khan et al. (2021) illustrate that economic growth negatively affects CO2 emissions. According to Batool et al. (2022), financial development and economic growth have a positive impact on energy consumption in South Asian countries; this means economic growth influences environmental quality through the increase in energy consumption in such countries. Besides, Khan et al. (2022) find that carbon neutrality has a positive relationship with green economic growth (GEG). Furthermore, many studies have analysed the effect of renewable energy on carbon dioxide emissions. For instance, Liu, Zhang, and Bae (2017) found a negative impact of renewable energy and agriculture on CO2 emissions, while non-renewable energy has a positive effect. Moreover, Mensah et al. (2018) found that renewable energy consumption mitigates CO2 emissions in OECD countries. According to Hu et al. (2018), a rising share of renewable energy consumption decreases CO2 emissions. Zhang, Ajide, and Ridwan (2021) suggest that the use of non-renewable energy should be limited in pursuit of environmental sustainability, and that human capital development should be carried out across all levels of education to promote environmental quality. A negative effect of renewable energy use on CO2 emissions is also found by Ișik et al. (2020), and a later study (2023) finds that renewable energy promotion plays an important role in addressing the growth of oil dependence.
In addition, much research has also been done to explore the relationship between FDI and CO2 emissions. For instance, Khan et al. (2021) indicated that FDI positively affects CO2 emissions in developed countries, while the opposite holds in developing countries; they suggest that the use of renewable energy should be encouraged to enhance environmental quality in such countries. A positive impact of foreign direct investment on CO2 emissions is also found by Muhammad et al. (2021) and Teng et al. (2021). In contrast, Muhammad et al. (2021) also report that foreign direct investment decreases environmental degradation in developed countries, while it increases environmental degradation in the BRICS and developing countries. Besides, economic growth also leads to an increase in environmental degradation, as shown by applying fixed effects and GMM models for the period 1991-2018. A negative impact of FDI on CO2 emissions is also found in the study of Zhang et al. (2022). Energy consumption and industrialisation have a positive impact on CO2 emissions (Raihan and Tuspekova 2022c), and these authors suggest that the use of renewable energy sources, green industry, and sustainable forest management should be pursued to ensure environmental sustainability.
Tourism and carbon dioxide emissions
Many studies have found that tourism development reduces environmental degradation. For example, Balogh and Jámbor (2017) indicate that tourism arrivals and international trade help improve environmental quality. A similar result was found in the study of Alam and Paramati (2017), who showed that trade openness and tourism help reduce carbon dioxide emissions using the FMOLS model. A negative association between tourist arrivals and carbon dioxide emissions was also found by Lee and Brahmasrene (2013), Aïssa, Jebli, and Youssef (2014), Katircioglu, Feridun, and Kilinc (2014), Leitão and Shahbaz (2016), Ben Jebli, Youssef, and Apergis (2019), and Balsalobre-Lorente et al. (2020). According to Ben Jebli, Youssef, and Apergis (2019), tourism, renewable energy, and FDI reduce carbon emissions. Besides, Dogru et al. (2020) find that tourism development reduces carbon dioxide emissions in Canada, Czechia, and Turkey.
On the other hand, Khan et al. (2021) found tourism to be one of the factors that increase CO2 emissions. In addition, according to Muhammad et al. (2021), tourism and foreign direct investment positively affect CO2 emissions, while governance helps improve the quality of the environment. A positive effect of tourism on carbon emissions was also found by Shakouri, Yazdi, and Ghorchebigi (2017), Işik, Kasımatı, and Ongan (2017), Sharif, Afshan, and Nisha (2017), Ișik et al. (2020), and Ibrahim and Mohammed (2022). Ișik et al. (2020) show that international tourism receipts positively impact CO2 emissions in Italy, and they suggest that renewable energy consumption should be considered to ensure tourism and environmental sustainability.
Trade openness and carbon emission
Some studies have analysed the impact of trade openness on CO2 emissions. Some of them found that trade openness helps reduce CO2 emissions, but others stated that trade openness is harmful to environmental quality. For example, Ali, Law, and Zannah (2016) show that trade openness negatively affects carbon emissions, while economic growth and energy consumption positively affect carbon emissions. Similarly, Bernard and Mandal (2016) indicate that trade openness enhances the quality of the environment. According to Zhang et al. (2017), trade openness has a negative impact on CO2 emissions; their study was conducted in 10 industrialised countries from 1971 to 2013. According to Khan et al. (2021), trade openness plays a vital role in economic growth and reduces CO2 emissions in developed countries. Similar results are also found in the studies of Cole, Elliott, and Okubo (2010); Roy (2017); Yu et al. (2019); Sun et al. (2019); Du, Li, and Yan (2019); Leitão and Balogh (2020); Adebayo et al. (2022); Li and Haneklaus (2022a); and Wang, Zhang, and Li (2023).
On the other hand, according to Le, Chang, and Park (2016), trade openness reduces the quality of the environment, although the results differ between high-income countries and middle- and low-income countries: trade openness reduces environmental degradation in high-income countries, while it is harmful to the environment in middle- and low-income countries. Likewise, Ertugrul et al. (2016) show a similar result for Turkey, India, China, and Indonesia, finding that trade openness leads to a rise in carbon emissions in these countries. Furthermore, Mahrinasari, Haseeb, and Ammar (2019) show a positive impact of trade liberalisation on carbon emissions by employing FMOLS and DOLS models. A positive impact of trade openness on CO2 emissions is also indicated in the findings of Li and Haneklaus (2022b) for the G7 countries; Usman et al. (2022) for Pakistan; Chhabra, Giri, and Kumar (2022) for selected middle-income countries; Li and Haneklaus (2022c) for China; and Azam, Rehman, and Ibrahim (2022) for six OPEC countries.
Innovation and carbon dioxide emissions
Many studies have investigated the nexus between innovation and environmental degradation. Some researchers found that innovation has no impact on CO2 emissions (Álvarez-Herránz et al. 2017; Amri, Bélaïd, and Roubaud 2018; Khattak et al. 2020). Others found that innovation decreases CO2 emissions. For instance, Irandoust (2016) indicates the important role of renewable energy and technological innovation in improving the quality of the environment. According to Zhang et al. (2017), resource innovation, knowledge innovation, and environmental innovation negatively affect CO2 emissions; the same result was found in the research of Samargandi (2017). Similarly, Mensah et al. (2018) indicated that innovation can reduce CO2 emissions in 28 OECD countries. Likewise, Danish (2019) has shown that ICT reduces carbon emissions in the 59 countries along the Belt and Road for the period 1990-2016. Moreover, Du, Li, and Yan (2019) found that green technology innovations do not decrease CO2 emissions in countries whose income level is below a threshold, whereas there is a reducing effect on carbon emissions for economies whose income level is above the threshold. Shahbaz et al. (2020) showed that technological innovations negatively affect carbon emissions in China. According to Nguyen, Pham, and Tram (2020), who studied 13 selected G-20 countries over the 15 years from 2000 to 2014, energy prices, FDI, trade openness, technology, and spending on innovation decrease CO2 emissions. A similar result is also found in the study by Wen et al. (2020).
On the other hand, some studies show a positive relationship between innovation and CO2 emissions. According to Su and Moaniba (2017), rising carbon emissions lead to more innovations linked to climate change; they suggest that public funds should be used for innovative activities to combat climate change. Moreover, Demircan Çakar et al. (2021) examined the effect of innovation on CO2 emissions in 8 developing and 6 developed Mediterranean countries by applying the PMG and DFE methods for the periods 1997-2017 and 2003-2017, respectively, and found a positive relationship between the level of innovation and CO2 emissions in both developed and developing countries. In addition, Weili, Bibi, and Khan (2022) show that CO2 emissions and economic growth increase technological innovations, while FDI reduces innovations.
Urban population growth and carbon dioxide emissions
Some studies have analysed the impact of urbanisation on CO2 emissions. Some of them found that urbanisation helps reduce CO2 emissions, but others stated that urbanisation increases CO2 emissions. For instance, Ali, Abdul-Rahim, and Ribadu (2017) show that urbanisation decreases CO2 emissions in Singapore. Furthermore, Li et al. (2018) show that urbanisation negatively affects carbon dioxide emissions efficiency. Wang et al. (2021) investigated the effect of urbanisation on CO2 emissions in OECD countries by employing the autoregressive distributed lag (ARDL) model and indicated that urbanisation negatively affects carbon emissions in developed countries.
On the contrary, Anwar, Younis, and Ullah (2020) indicate that urbanisation, GDP, and trade openness have a positive impact on CO2 emissions. Their study suggests the government should support green and sustainable urbanisation and encourage the use of renewable energy to support economic development and reduce environmental degradation. The same result is also found in the research of Raihan and Voumik (2022). Furthermore, Lee et al. (2022) evaluated the effect of urbanisation on carbon dioxide emissions in China from 1996 to 2018 and showed that increasing urbanisation causes carbon dioxide emissions to rise. However, they also found that after a certain level of foreign capital is reached, this negative effect weakens: when technology, finance, and government become more developed, urbanisation can be accelerated to reduce CO2 emissions.
A review of existing studies on CO2 emissions reveals several research gaps that this study seeks to fill. First, previous studies on factors affecting CO2 emissions focused only on areas such as the OECD countries, BRICS, Mediterranean countries, G-20 countries, South Asian countries, ASEAN countries, Sub-Saharan African (SSA) countries, Gulf countries, China and India, and China and Indonesia. Second, little is known from the available studies about the factors affecting CO2 emissions for countries worldwide; the few previous studies are limited in the factors they consider. For example, Raihan and Voumik (2022) focused only on financial development, renewable energy use, technological innovation, economic growth, and urbanisation in relation to carbon dioxide (CO2) emissions in India; Adebayo et al. (2022) focused on renewable energy consumption, trade openness, and carbon emissions in Sweden; Azam, Rehman, and Ibrahim (2022) concentrated only on the relationship between industrialisation, urbanisation, trade openness, and carbon emissions for OPEC economies; Chhabra, Giri, and Kumar (2022) focused only on the nexus between technological innovations, trade openness, and CO2 emissions for middle-income countries; Wang, Zhang, and Li (2023) concentrated on trade openness, human capital, renewable energy, natural resource rent, and CO2 emissions in 208 countries; while Weili, Bibi, and Khan (2022) investigated the effect of carbon dioxide, energy consumption, and economic growth on innovations in 181 countries worldwide. Finally, in this study we examine the difference in the factors affecting CO2 emissions between two groups of developed and developing countries. Therefore, this research comprises an all-inclusive analysis considering a range of factors that contribute to CO2 emissions, including renewable energy consumption, trade openness, urban population growth, international tourism, foreign direct investment, economic growth, and innovation, for 53 countries worldwide.
Data and variables
This study investigates the effect of renewable energy consumption, trade openness, urban population growth, international tourism, foreign direct investment, economic growth, and innovation on carbon dioxide emissions in 53 countries worldwide for the period 1990-2019. The data for all variables have been collected from the World Bank. In this research, carbon dioxide emissions are the dependent variable, while renewable energy consumption, trade openness, urban population growth, international tourism, foreign direct investment, economic growth, and innovation are the explanatory variables. Table 1 displays the variables, their measurement, and the data sources for this study.
Empirical model
Before analysing the impact of the above variables on carbon dioxide emissions, we use different panel unit root tests, namely Levin-Lin-Chu, Im-Pesaran-Shin, Breitung, and Fisher, to check the stationarity of all variables at level or first difference for the period 1990-2019. If the variables have a unit root (are not stationary) at level, we then confirm whether they are integrated of order one, I(1), in first differences. Next, the long-run cointegration relationship among the variables is confirmed with the Westerlund, Kao, and Pedroni tests. If there is a cointegration relationship between the variables in the model, FMOLS (fully modified ordinary least squares) and DOLS (dynamic ordinary least squares) models are employed to evaluate the impact of all variables on carbon dioxide emissions in the 53 countries from 1990 to 2019. Moreover, estimation results of OLS (ordinary least squares), fixed effects, and random effects models often suffer from serial correlation and endogeneity of the independent variables (Leitao 2012; Leitão 2013; Jambor and Carlos Leitão 2016). The system generalised method of moments (GMM-System) estimator allows these econometric problems, serial correlation and the endogeneity of the independent variables, to be addressed.
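As a rough illustration of the first step, the sketch below implements a Fisher-type panel unit root check in Python: an ADF test is run country by country and the p-values are combined via the chi-square statistic −2Σln(p_i) with 2N degrees of freedom. This is a minimal sketch, not the exact routine behind the paper's tables, and the column names are hypothetical placeholders.

```python
# Sketch: a Fisher-type panel unit root test -- run an ADF test per country
# and combine the p-values with the chi-square statistic -2 * sum(ln p_i),
# which has 2N degrees of freedom under the null of a unit root everywhere.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.tsa.stattools import adfuller

def fisher_panel_adf(panel: pd.DataFrame, value_col: str = "co2") -> tuple:
    """panel: long format with columns ['country', 'year', value_col]."""
    pvals = []
    for _, g in panel.groupby("country"):
        series = g.sort_values("year")[value_col].dropna()
        if len(series) > 10:                      # need enough observations
            pvals.append(adfuller(series, autolag="AIC")[1])
    stat = -2.0 * np.sum(np.log(pvals))           # Fisher combination
    p_value = stats.chi2.sf(stat, df=2 * len(pvals))
    return stat, p_value

# Usage with a hypothetical long-format DataFrame `df`:
# stat, p = fisher_panel_adf(df, "co2")
# print(f"Fisher chi2 = {stat:.2f}, p = {p:.4f}")  # small p => stationary
```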
Research by Adebayo et al. (2022), Azam, Rehman, and Ibrahim (2022), Chhabra, Giri, and Kumar (2022), Raihan and Tuspekova (2022a, 2022b), Cao et al. (2022), Raihan and Voumik (2022), Lee et al. (2022), Ibrahim et al. (2022), Ibrahim and Mohammed (2022), and Zhang et al. (2022) constitutes the theoretical foundation of this research, which focuses mainly on the relationship between renewable energy consumption, trade openness, urbanisation, international tourism, foreign direct investment, economic growth, innovation, and carbon dioxide emissions. Renewable energy consumption plays an important role in reducing environmental degradation: if renewable energy use is stressed, then renewable energy consumption may have an impact on the quality of the environment (Raihan and Tuspekova 2022c). Moreover, trade openness, urbanisation, and economic growth are interrelated; urbanisation and trade openness may contribute to economic growth, which can affect CO2 emissions (Anwar, Younis, and Ullah 2020; Raihan and Tuspekova 2022c). Furthermore, tourism development plays an important role in promoting economic growth; if tourism development is done properly, it will promote economic development and may improve environmental quality (Işık, Sirakaya-Turk, and Ongan 2020). Besides, attracting foreign direct investment is one way to support economic development, but the quality of the environment may be affected as many industrial parks and manufacturing enterprises are established. Technological innovation may help promote environmental sustainability: climate change is one of the major problems that countries around the world are facing, and technological innovation is one of the effective ways to reduce its impact (Zhang et al. 2022). The CO2 emission model built in this study is as follows:

CO2it = f(RECit, UPGit, TOit, FDIit, ITit, GDPit, INNOVAit)   (1)

where CO2 refers to carbon dioxide emissions, and REC, UPG, TO, FDI, IT, GDP, and INNOVA are the explanatory variables, representing renewable energy consumption, urban population growth, trade openness, foreign direct investment, international tourism, gross domestic product per capita (constant 2015 US$), and the number of patent applications, respectively.
The subscript i indicates the country (i = 1, … , N) and t indicates time (t = 1990, … , 2019). The trade openness variable is determined as follows:

TOit = (Xit + Mit) / GDPit   (2)

where X signifies exports of goods and services (constant 2015 US$), M represents imports of goods and services (constant 2015 US$), and GDP is gross domestic product (constant 2015 US$).
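In code, Eq. (2) is a one-line transformation on a long-format panel. The sketch below shows this with a hypothetical toy DataFrame; the column names are placeholders, not the paper's actual variable labels.

```python
# Sketch: computing the trade openness ratio TO = (X + M) / GDP from
# World Bank-style series. The toy values and column names are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "country": ["A", "A", "B", "B"],
    "year":    [2018, 2019, 2018, 2019],
    "exports": [3.0e11, 3.2e11, 1.1e11, 1.2e11],   # constant 2015 US$
    "imports": [2.8e11, 2.9e11, 1.3e11, 1.3e11],   # constant 2015 US$
    "gdp":     [9.0e11, 9.4e11, 4.0e11, 4.2e11],   # constant 2015 US$
})
df["to"] = (df["exports"] + df["imports"]) / df["gdp"]   # Eq. (2)
print(df[["country", "year", "to"]])
```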
Substituting Equation (2) into Equation (1), the econometric model is presented as follows:

CO2it = a0 + a1 RECit + a2 UPGit + a3 TOit + a4 FDIit + a5 ITit + a6 GDPit + a7 INNOVAit + єit   (3)

where a0 and єit are the intercept and error term, and a1, a2, a3, a4, a5, a6, and a7 are the coefficients. The econometric model of the system GMM, used to explore the dynamic influence of renewable energy consumption, trade openness, urban population growth, international tourism, foreign direct investment, economic growth, and innovation on carbon dioxide emissions, is as follows:

CO2it = a0 + γ CO2i,t−1 + a1 RECit + a2 UPGit + a3 TOit + a4 FDIit + a5 ITit + a6 INNOVAit + θ Xit + єit   (4)

where CO2i,t−1 is the lagged dependent variable and gross domestic product per capita (constant 2015 US$) is the control variable of the study, denoted by Xit.
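Of the estimators named above, DOLS is the most transparent to sketch: the long-run regression is augmented with leads and lags of the differenced regressors to absorb endogeneity. The following minimal Python sketch shows the idea for a single country and a single regressor; the variable names are hypothetical, and a production implementation would pool across countries and include all regressors of Eq. (3).

```python
# Sketch: a dynamic OLS (DOLS) regression for one regressor -- the long-run
# equation augmented with leads and lags of the differenced regressor.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def dols(y: pd.Series, x: pd.Series, leads_lags: int = 2) -> float:
    dx = x.diff()
    X = pd.DataFrame({"x": x})
    for k in range(-leads_lags, leads_lags + 1):
        X[f"dx_{k}"] = dx.shift(-k)               # leads (k>0) and lags (k<0)
    data = pd.concat([y.rename("y"), X], axis=1).dropna()
    model = sm.OLS(data["y"], sm.add_constant(data.drop(columns="y"))).fit()
    return model.params["x"]                       # long-run coefficient

# Usage with hypothetical log-level series for one country:
# beta = dols(np.log(co2_series), np.log(renewables_series), leads_lags=2)
```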
Based on the literature review, we construct the following hypotheses: Hypothesis 1 (H1): Renewable energy consumption reduces environmental degradation.
Renewable energy consumption plays an important role in reducing environmental degradation: if renewable energy use is stressed, then renewable energy consumption may have a positive impact on the quality of the environment (Cao et al. 2022; Raihan and Voumik 2022; Raihan and Tuspekova 2022c). Hypothesis 4 (H4): Innovation has an impact on CO2 emissions.
Technological innovation may help promote environmental sustainability: climate change is one of the major problems that countries around the world are facing, and technological innovation is one of the effective ways to reduce its impact (Wen et al. 2020; Shahbaz et al. 2020; Zhang et al. 2022).
Results and discussion
In this part, we present the results on the relationship between renewable energy consumption, urban population growth, trade openness, foreign direct investment, international tourism, economic growth, innovation, and carbon dioxide emissions. Descriptive statistics for all variables in this empirical study are presented in Table 2; the mean values of the variables show large differences, and the maximum values of international tourism (IT), economic growth (GDP), and innovation (INNOVA) are higher than those of the other variables. Table 3 shows the outcome of the panel unit root tests at level and first difference for all variables; the Levin-Lin-Chu, Im-Pesaran-Shin, Breitung, and Fisher tests are applied to assess whether the variables are stationary at level or at first difference. Next, we examine the cointegration relationship between the variables by applying the Kao, Pedroni, and Westerlund tests; the outcome in Table 4 indicates that there is a cointegration relationship between the variables in this research. Therefore, panel fully modified least squares (FMOLS) and panel dynamic least squares (DOLS) are used to estimate the model, and the generalised method of moments (GMM) is then applied to check the robustness of the results.
Table 5 presents the correlations between the variables in the model: renewable energy (REC), urban population growth (UPG), and innovation are negatively correlated with CO2 emissions. In contrast, trade openness (TO), foreign direct investment (FDI), international tourism (IT), and economic growth (GDP) are positively correlated with CO2 emissions.
The outcome of the panel unit root tests, using the Levin-Lin-Chu, Im-Pesaran-Shin, Breitung, and ADF-Fisher chi-square tests, is reported in Table 3. The results show that some variables, such as carbon dioxide emissions, renewable energy consumption, and economic growth, are not stationary at level I(0), but all variables (renewable energy consumption, urban population growth, trade openness, foreign direct investment, international tourism, economic growth, innovation, and carbon dioxide emissions) are stationary at first difference I(1).
The Westerlund, Kao, and Pedroni panel cointegration tests are illustrated in Table 4. The results show that there is a cointegration association between the variables in the model, confirming a long-run relationship among the study variables. These values are converted into US$ trillion for ease of interpretation.
The data are collected from the World Bank dataset. The outcomes of fully modified least squares (FMOLS) and dynamic least squares (DOLS) are shown in Table 6. The coefficients estimated by the FMOLS and DOLS models are mostly the same. Based on the outcomes of the FMOLS and DOLS estimators, using renewable energy helps reduce carbon dioxide emissions, with a 1% rise in renewable energy consumption decreasing emissions by 0.112% and 0.120%, respectively. Our outcomes match the findings of Cao et al. (2022) for 36 OECD countries as well as those of Raihan and Voumik (2022) for India. These outcomes suggest that the use of renewable energy should be considered a beneficial policy tool to reduce CO2 emissions and combat climate change in the 53 countries studied. Furthermore, as in previous studies (Ali, Abdul-Rahim, and Ribadu (2017) for Singapore; Li et al. (2018) for China; Wang et al. (2021) for OECD countries), the outcome obtained for urban population growth shows a negative influence on carbon emissions. Wang et al. (2021) suggest that it is necessary to accelerate the process of urbanisation in developing countries, combined with improving energy efficiency and applying scientific and technological advances, which may help reduce carbon emissions. In the FMOLS and DOLS estimators of this study, a 1% increase in urban population growth decreases CO2 emissions by 0.491% and 0.341%, respectively, significant at the 1% level. This result differs from that of Raihan and Voumik (2022) for India, who show that UPG has a positive impact on environmental degradation there: the increase in the number of urban people leads to an increase in the demand for cars and personal vehicles, which raises the demand for fossil fuels and negatively affects the quality of the environment in India. Furthermore, Lee et al. (2022) illustrate that increasing urbanisation causes a rise in carbon dioxide emissions, but when technology, finance, and government become more developed, urbanisation can be accelerated to reduce CO2 emissions. Therefore, to develop the nation's economy while minimising negative impacts on environmental quality, urbanisation should be promoted in combination with technological innovation and environmentally friendly energy use.
In addition, it has also been found that an increase in trade openness reduces CO2 emissions in both the FMOLS and DOLS estimators. This result can be explained by environment-related trade policies, in line with Wang, Rehman, and Fahad (2022) for the G-7 economies. The management authorities need to control environmental pollution to achieve the goal of sustainable economic development through environmentally friendly policies. Moreover, our results show that economic growth positively affects carbon dioxide emissions at the 1% and 5% significance levels for the FMOLS and DOLS estimators, respectively. The explanation is that fossil fuels are used in production activities, which damages the quality of the environment. Moreover, urbanisation causes a shift from rural to urban activities, which also affects the quality of the environment in some countries. The finding is similar to the results of Muhammad et al. (2021), Teng et al. (2021), Raihan and Voumik (2022), and Raihan and Tuspekova (2022a, 2022b), who indicated a positive nexus between GDP and CO2 emissions. Therefore, an important task for countries around the world is economic development accompanied by improvement of environmental quality. To achieve these goals, governments should have appropriate policies to reduce dependence on fossil fuel sources in production and living activities.
Finally, we also found that innovation has a negative relationship with CO2 emissions: an increase in the number of patent applications reduces CO2 emissions, because countries have increased spending on research and development (R&D), which improves production efficiency and resource-use efficiency. This outcome is similar to the findings of Zhang et al. (2022) for China; Ibrahim et al. (2022) for the BRICS countries; Ibrahim and Mohammed (2022) for the Gulf countries; and Raihan and Voumik (2022) for India, and contrasts with Demircan Çakar et al. (2021). The outcomes of this research suggest that developing technological innovation can be an effective policy for mitigating environmental degradation.
To validate the research results, we also used OLS, FEM, and GMM-System estimators to evaluate the effect of renewable energy consumption, urban population growth, trade openness, foreign direct investment, international tourism, economic growth, and innovation on carbon dioxide emissions. The results of the OLS and FEM models, which suffer from heteroskedasticity and autocorrelation, are shown in Table 7. The p-values of the Hansen test and the AR(1) and AR(2) tests show the suitability of the GMM model. According to the results, all explanatory variables except UPG are significant at the 1% level. The lagged dependent variable captures the long-run dynamics of carbon dioxide emissions; its coefficient is positive and significant at the 1% level, in line with the study of Leitão and Lorente (2020). Furthermore, consistent with the DOLS and FMOLS estimates, renewable energy consumption, trade openness, and innovation reduce CO2 emissions, while GDP has a positive effect on environmental degradation. Moreover, FDI has a negative impact on CO2 emissions, a result similar to the findings of Zhang et al. (2022), who also indicate that inflows of foreign direct investment reduce carbon emissions, possibly because a large share of such projects use renewable energy, which benefits environmental quality. Finally, the coefficient of international tourism (IT) shows a positive effect on carbon emissions, as confirmed by previous research such as Ișik et al. (2020) for the G7 countries and Ibrahim and Mohammed (2022) for the Gulf countries. This result can be explained by the fact that attracting tourists contributes to economic development, yet some tourism-related activities have not paid sufficient attention to environmental quality. Increasing the number of international tourists improves incomes in middle- and low-income countries but reduces environmental quality. In future tourism development, therefore, governments need to promote environmentally friendly activities.
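As a hedged illustration of the fixed-effects (FEM) benchmark mentioned above, the sketch below fits a within estimator with country fixed effects using the linearmodels package; the data file and column names are hypothetical, and the authors' exact specification (and their GMM implementation) may differ.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical long-format panel: one row per country-year
df = pd.read_csv("panel.csv").set_index(["country", "year"])
y = df["lnCO2"]
X = df[["lnREC", "UPG", "lnTO", "lnFDI", "lnIT", "lnGDP", "lnINN"]]

# Within (fixed-effects) estimator with cluster-robust standard errors,
# a common response to the heteroskedasticity/autocorrelation noted above
fem = PanelOLS(y, X, entity_effects=True).fit(cov_type="clustered",
                                              cluster_entity=True)
print(fem.summary)
```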
Results in Table 8 show the impact of renewable energy consumption, trade openness, international tourism, and innovation on carbon dioxide emissions in high-income countries and middle-low-income countries.
The outcomes of the system GMM estimator for high-income countries and middle- and low-income countries in Table 8 indicate the applicability of the model, because the lag of the dependent variable is statistically significant and the p-values of the AR(1), AR(2), and Hansen tests also support it. Results in Table 8 show a negative relationship between renewable energy consumption and carbon emissions in both groups: the consumption of renewable energy reduces harmful effects on the environment in both high-income and middle- and low-income countries. Likewise, international tourism has a positive impact on carbon emissions at the 1% significance level in both groups. Our outcome matches the findings of Muhammad et al. (2021) and Ibrahim and Mohammed (2022), who also indicate that a rising number of international tourists reduces environmental quality. This can be explained by the fact that attracting tourists contributes to economic development, while some tourism-related activities have not paid sufficient attention to environmental quality. Similarly, trade openness in high-income countries reduces carbon emissions at the 1% significance level.
The results show that trade openness helps to improve environmental quality in high-income countries. Our results are similar to the findings of Yu et al. (2019) for CIS countries; Adebayo et al. (2022) for Sweden; Li and Haneklaus (2022a) for India; and Wang, Zhang, and Li (2023) for 208 countries worldwide, who confirmed that trade openness decreases carbon emissions and helps alleviate environmental degradation; this can be explained by well-implemented environment-related policies in high-income countries. On the contrary, trade openness in middle- and low-income countries has a positive impact on carbon emissions, because energy consumption related to the import and export of goods, and the greater resource use of production and trade activities, negatively affect environmental quality. This outcome is supported by the findings of Li and Haneklaus (2022b); Usman et al. (2022); Chhabra, Giri, and Kumar (2022); Li and Haneklaus (2022c); Azam, Rehman, and Ibrahim (2022); and Wang, Rehman, and Fahad (2022). Management authorities need to control environmental pollution to achieve the goal of sustainable economic development through environmentally friendly policies. Similarly, innovation negatively impacts carbon emissions at the 1% and 10% significance levels in high-income and middle- and low-income countries, respectively. The results show that an increasing number of patent applications by residents helps to improve environmental quality. This is supported by the research of Zhang et al. (2022) for China; Ibrahim et al. (2022) for the BRICS countries; Ibrahim and Mohammed (2022) for the Gulf countries; and Raihan and Voumik (2022) for India, and contrasts with Demircan Çakar et al. (2021). The outcomes of this research suggest that developing technological innovation can be an effective policy for mitigating environmental degradation.
Conclusion
This study investigates the nexus between renewable energy consumption, trade openness, international tourism, innovation, and carbon dioxide emissions in 53 countries worldwide over the period 1990-2019, employing the FMOLS model, the DOLS model, and the GMM-System estimator. The empirical outcomes indicated that the variables are integrated of order one, I(1), based on the Levin-Lin-Chu, Im-Pesaran-Shin, Breitung, and Fisher unit root tests. The Kao, Pedroni, and Westerlund cointegration tests showed that the variables in this study are cointegrated in the long run. The results of the FMOLS model, the DOLS model, and the GMM-System estimator are similar. The Hansen test and the AR(1) and AR(2) p-values confirmed the suitability of the GMM model. The lagged CO2 emissions variable has a positive effect, indicating that emissions, and thus climate change pressure, rise in the long run.
The empirical results of the study indicate that renewable energy consumption, trade openness, urban population growth, foreign direct investment, and innovation negatively affect carbon emissions, while international tourism and economic growth positively influence CO2 emissions. Increasing the consumption of renewable energy, trade openness, urban population growth, foreign direct investment, and the number of patent applications may thus improve the quality of the environment, whereas rising international tourism and economic growth reduce environmental quality. Further, we compared the results for high-income countries and middle- and low-income countries. The consumption of renewable energy and innovation show a negative effect on CO2 emissions in both groups, whereas international tourism has a positive impact on carbon emissions in both high-income and middle- and low-income countries. Besides, trade openness negatively affects CO2 emissions in high-income countries, but the opposite is true in middle- and low-income countries.
From the research results, we offer several suggestions. Governments should invest in renewable energy resources and encourage the use of renewable energy; this can be a vital factor in improving environmental quality in all countries. Additionally, the negative connection between innovation and carbon emissions suggests that countries should encourage investment in innovation development, such as generating more patent applications, to promote economic growth as well as to reduce environmental degradation and combat climate change. Besides, the negative relationship between CO2 emissions and trade openness for high-income countries suggests that environment-related policies have been implemented well in such countries, partly as a result of more efficient transportation industries. In contrast, for middle- and low-income countries, trade openness has a positive influence on environmental damage because environmental policies have not been effectively implemented; in these countries, it is therefore necessary to encourage the use of renewable energy as an alternative to fossil fuels in commercial development activities. The efficient use of natural resources and the promotion of efficient energy use are essential for sustainable economic development and climate change reduction in these countries. Lastly, the findings indicated a positive impact of tourism on environmental degradation. Therefore, tourism development policies could encourage wider use of renewable energy in tourist-related activities, which can benefit both environmental quality and national economic development. In addition, countries should reorient their tourism development and environmental protection policies in the post-COVID-19 era; promoting sustainable tourism is essential for achieving sustainable economic growth. In short, economic growth and attracting foreign direct investment are necessary for each country's economic development, but to achieve sustainable economic development, policymakers need to promote energy efficiency and renewable energy consumption to meet the COP26 agreements. Besides, promoting technological innovation should be encouraged for climate change reduction.
Although this study has found empirical evidence on the impact of renewable energy consumption, trade openness, international tourism, and innovation on carbon dioxide emissions in 53 countries worldwide, it still has certain limitations. This research covers only 53 countries; further studies can be done for more countries. In addition, future studies may repeat this study to determine whether a causal relationship exists between the variables for the two groups of high- and middle- to low-income countries. Furthermore, future research should also investigate the impact of institutional quality and human capital on greenhouse gas emissions. Assessing two-way effects between variables can also be considered in future studies. Finally, as the COVID-19 epidemic remains unpredictable, the economic policy uncertainty (EPU) index has a certain effect on the number of tourists, so future studies may also investigate the impact of the EPU index on CO2 emissions through tourism demand.
Notes: Teng et al. (2021); Muhammad et al. (2021); Khan et al. (2021); Cao et al. (2022); Raihan and Voumik (2022); and Yu et al. (2019).
Hypothesis 2 (H2): Trade openness promotes environmental quality. Research by Adebayo et al. (2022), Li and Haneklaus (2022a), and Wang, Zhang, and Li (2023) found a negative effect of trade openness on environmental degradation; trade openness encourages environmental sustainability. In some countries, standard environmental policy and common commercial policy encourage sustainable practices that decrease CO2 emissions and mitigate climate change.
Hypothesis 3 (H3): International tourism affects CO2 emissions. Many studies have explored the impact of international tourism on environmental quality. Ben Jebli, Youssef, and Apergis (2019), Balsalobre-Lorente et al. (2020), and Dogru et al. (2020) found a negative nexus between international tourism and CO2 emissions, while others support a positive relationship (Ișik et al. 2020; Ibrahim and Mohammed 2022).
Table 1. Variables, measurement, and data sources.
Table 2. Description of the variables.
Table 3. Test of panel unit root.
Table 4. Note: Azam, Rehman, and Ibrahim (2022). The standard environmental policy and the common commercial policy encourage sustainable practices to decrease CO2 emissions and mitigate climate change. The result is similar to the findings of Yu et al. (2019) for CIS countries; Adebayo et al. (2022) for Sweden; Li and Haneklaus (2022a) for India; and Wang, Zhang, and Li (2023) for 208 countries worldwide, and contrasts with Li and Haneklaus (2022b) for the G7 countries; Usman et al. (2022) for Pakistan; Chhabra, Giri, and Kumar (2022) for selected middle-income countries; Li and Haneklaus (2022c) for China; and Azam, Rehman, and Ibrahim (2022) for six OPEC countries.
Table 6. Long-run estimation of FMOLS and DOLS models.
Table 8. Generalised Method of Moments (GMM)-System estimator for high-income and middle- and low-income countries.
"Environmental Science",
"Economics"
] |
Ablation of Dicer from Murine Schwann Cells Increases Their Proliferation while Blocking Myelination
The myelin sheaths that surround the thick axons of the peripheral nervous system are produced by the highly specialized Schwann cells. Differentiation of Schwann cells and myelination occur in discrete steps. Each of these requires coordinated expression of specific proteins in a precise sequence, yet the regulatory mechanisms controlling protein expression during these events are incompletely understood. Here we report that Schwann cell-specific ablation of the enzyme Dicer1, which is required for the production of small non-coding regulatory microRNAs, fully arrests Schwann cell differentiation, resulting in early postnatal lethality. Dicer−/− Schwann cells had lost their ability to myelinate, yet were still capable of sorting axons. Both cell death and, paradoxically, proliferation of immature Schwann cells were markedly enhanced, suggesting that their terminal differentiation is triggered by growth-arresting regulatory microRNAs. Using microRNA microarrays, we identified 16 microRNAs that are upregulated upon myelination and whose expression is controlled by Dicer in Schwann cells. This set of microRNAs appears to drive Schwann cell differentiation and myelination of peripheral nerves, thereby fulfilling a crucial function for survival of the organism.
Introduction
Proper myelination is essential for the efficient saltatory conduction of action potentials, the trophic support of axons, and the maintenance of axonal integrity in the peripheral nervous system (PNS). Defective PNS myelination occurs in hereditary peripheral neuropathies [1]. Sporadic peripheral neuropathies can arise due to a wide variety of factors, including metabolic disorders (e.g. diabetes mellitus), intoxication (e.g. alcohol), and autoimmune disorders (e.g. Guillain-Barré syndrome) [2]. Treatment of peripheral neuropathies is still unsatisfactory in most cases, and is likely to benefit from increased knowledge about peripheral myelin development and maintenance.
The mature myelin sheaths wrapped around the large-diameter axons of peripheral neurons arise from neural crest cells which subsequently develop into Schwann cell precursors (SCPs), immature Schwann cells, and finally mature myelinating or nonmyelinating Schwann cells. Each stage of Schwann cell development is associated with a set of specific protein markers, the expression of which is thought to be driven primarily by axon-derived signals at the SCP stage and by the secretion of autocrine survival factors at the Schwann cell stage (reviewed in [3]). This precise developmental program requires tightly regulated transcriptional and post-transcriptional control of protein expression, the details of which are still incompletely understood.
One post-transcriptional mechanism that appears to be critical for the proper development of numerous tissues is the microRNA (miRNA) system [4]. miRNAs are short (20-30 nucleotides) noncoding RNAs which are processed from endogenously expressed pri-miRNAs by the enzyme Drosha. The resulting stem-loop pre-miRNAs are exported to the cytoplasm where they are further processed by the enzyme Dicer [5], unwound into single-stranded miRNAs, and loaded into the RNA-induced silencing complex (RISC). The miRNA-loaded RISC then binds to complementary miRNA recognition sequences in the 3′-untranslated regions (UTRs) of specific target mRNAs. The primary function of the miRNA system appears to consist of mRNA silencing. Target mRNAs are either degraded, or their translation is inhibited. A single miRNA can have multiple mRNA targets, which allows for broad miRNA-mediated regulation of expression programs (reviewed in [6]). Genetic ablation of Dicer in mice is embryonic lethal, illustrating the indispensable role that miRNAs play during development [7]. Furthermore, studies utilizing tissue-specific expression systems have revealed a vital role for miRNAs in the development of specific organ systems (reviewed in [8]). In the nervous system, miRNAs appear to be important for the development of Purkinje [9] and forebrain [10] neurons, oligodendrocyte differentiation and central nervous system (CNS) myelination [11,12,13], and as shown more recently, also for peripheral myelination by Schwann cells [14].
In order to determine which miRNAs might be required for peripheral myelination, we created a mouse line undergoing Schwann cell-specific deletion of Dicer1, by crossing mutants whose endogenous Dicer1 sequences were flanked by loxP sites (Dicer1 fl/fl ) to mice expressing the Cre recombinase under the control of the desert hedgehog promoter (Dhh-Cre).
Dicer depletion in Schwann cells leads to arrest at the pro-myelin stage and impairs myelination
In order to achieve Schwann cell-specific deletion of the enzyme Dicer, we bred Dicer fl/fl mice to mice expressing Cre recombinase specifically in Schwann cells (tgDhh-Cre, henceforth termed Dhh-Cre +). The Cre recombinase in these mice is already active in Schwann cells of the precursor stage at embryonic day 12/13 (E12/13) [15]. In contrast to their littermates, Dicer fl/fl Dhh-Cre + mice lacking Dicer expression in Schwann cells exhibited a severe behavioral phenotype characterized by ataxia and hind limb paresis. In compliance with animal welfare regulations, mice were euthanized at the age of 25 days. Electron microscopy (EM) of 18-day-old Dicer fl/fl Dhh-Cre + sciatic nerves revealed a severe myelination defect when compared to control littermates Dicer wt/fl Dhh-Cre + and Dicer fl/fl Dhh-Cre −. In Dicer fl/fl Dhh-Cre + sciatic nerves, most fibers remained unmyelinated; the few myelin sheaths present were abnormally thin (Fig. 1). Most Dicer-depleted Schwann cells properly sorted axons, resulting in the typical 1:1 Schwann cell to axon ratio. However, some bundles of Dicer mutant nerves containing axons >1 µm, which would normally be sorted and myelinated, remained unsorted (Fig. 1c and 1f).
We did not observe normal Remak bundle formation in Dicer mutant nerves. Small-caliber axons remained in groups that also contained large-caliber axons. In contrast to normal Remak bundle formation [16], groups of small-caliber axons were engulfed by Schwann cells as a whole, and axons were not individually ensheathed and separated from each other by Schwann cell processes (mesaxons). In addition, the number of these immature axon bundles was far lower than the number of Remak bundles in control nerves, indicating that unmyelinated nerve fiber development was also severely disturbed in these nerves. To determine at which stage myelin development was blocked in Dicer fl/fl Dhh-Cre + mice, we compared myelin markers between Dicer fl/fl Dhh-Cre + and Dicer wt/fl Dhh-Cre + nerves with immunohistochemical and biochemical techniques (Fig. 2). Components of mature non-compact myelin, like 2′,3′-cyclic nucleotide 3′-phosphodiesterase (CNPase), and of compact myelin, like myelin basic protein (MBP) and peripheral myelin protein 22 (PMP22), were nearly undetectable in Dicer fl/fl Dhh-Cre + nerves by immunoblot (Fig. 2d). Schwann cells were not absent in Dicer fl/fl Dhh-Cre + nerves, as evidenced by EM and by positive S100 immunoreactivity in nerves of 22-day-old mice (Fig. 2a). Furthermore, EM analysis of Schwann cells in Dicer mutant nerves showed basal lamina formation by Schwann cells (Fig. 1f). These findings indicate that Schwann cells in Dicer mutant nerves developed at least until the stage of immature Schwann cells. Proper sorting of the majority of axons suggested an arrest at the pro-myelin stage. Development of myelinating Schwann cells is known to be accompanied by cessation of Schwann cell proliferation. In contrast to Dicer wt/fl Dhh-Cre + nerves, Dicer mutant nerves showed evidence of mitotic events and a significantly increased percentage of Schwann cells expressing the proliferation marker MIB1 (Ki67; 11.8±0.9% in Dicer mutant Schwann cells versus 1.22±0.07% in controls, p = 0.0003; Fig. 2b). Based on immunohistochemistry, we determined that sciatic nerves of Dicer mutant mice contained no B-cells (B220) and only rare T-cells (CD3; 1.6±0.6% positive cells per total number of nuclei in Dicer fl/fl Dhh-Cre + nerves versus 1.6±0.2% in controls, p = 0.96, Fig. 2c). An increased prevalence of macrophages was detected in Dicer mutant nerves (CD68; 11.3±3.5% positive cells per total number of nuclei in Dicer fl/fl Dhh-Cre + nerves versus 7.2±2.4% in controls, p = 0.04, Fig. 2c). Although we observed an increased number of macrophages in Dicer mutant nerves, the percentage was in a similar range in Dicer mutants and controls. Based on this and on their morphology with elongated nuclei, we conclude that the proliferating cells were indeed Schwann cells. In parallel to increased proliferation, Dicer mutant nerves at p22 showed increased cell death as determined by TUNEL staining (<0.05% TUNEL-positive nuclei in all control nerves and ca. 2% in Dicer mutant nerves). Erk and Akt signal transduction pathways, which are known to regulate myelination, were significantly altered in 18-day-old mutant nerves. Both Akt and Erk phosphorylation were significantly increased. Furthermore, Ras and NFkB protein expression was significantly lower in Dicer mutant nerves compared to controls (Fig. 2d).
Figure 2 legend (excerpt): Error bars indicate standard deviation; p = 0.0003, determined using an unpaired two-tailed Student's t-test (B).
Few CD3-positive T cells and an increased percentage of CD68-positive macrophages infiltrated the nerves of Dicer fl/fl Dhh-Cre + mice (C). Biochemical analysis of signal transduction pathways and myelin components by Western blot (D). Compared to control Dicer wt/fl Dhh-Cre + littermates, phospho-Akt and phospho-Erk were significantly increased in sciatic nerves of 18-day-old mice lacking Dicer in Schwann cells, while total Akt and Erk protein levels were unchanged compared to controls, and NFkB was significantly decreased. In agreement with the histological findings, components of non-compact (CNPase) and compact myelin (MBP, PMP22) were nearly absent from Dicer mutant nerves. In addition, Ras levels were significantly lower in Dicer mutant nerves. GAPDH and β-actin served as loading controls. doi:10.1371/journal.pone.0012450.g002
Next, we performed a time course analysis of changes in Dicer mutant nerves (Fig. 3). At four days of age (p4), EM analysis of sciatic nerves showed myelin formation in control Dicer wt/fl Dhh-Cre + mice. In contrast, no myelinating Schwann cells were observed in Dicer mutant nerves of age-matched Dicer fl/fl Dhh-Cre + littermates. As in the 18-day-old mice, proper radial sorting with a 1:1 Schwann cell-to-axon ratio was observed in most fibers at p4 (Fig. 3a). In contrast, EM analysis of sciatic nerves from Dicer fl/fl Dhh-Cre + versus Dicer wt/fl Dhh-Cre + mice at the age of 17 embryonic days (E17) revealed no structural difference (Fig. 3a). Dicer expression is upregulated upon myelination in control p4 nerves compared to control E17 nerves. In Dicer fl/fl Dhh-Cre + mice, Dicer depletion was confirmed by quantitative RT-PCR at E17 and at p4 (Fig. 3e). Furthermore, the presence of the inactivated Dicer flox allele was shown by PCR (Fig. 3f).
The expression of specific miRNAs is altered in Dicer-deficient peripheral nerves
To identify specific miRNAs involved in peripheral nerve myelination, we performed a differential microarray analysis of miRNA extracted from Dicer fl/fl Dhh-Cre + and Dicer wt/fl Dhh-Cre + nerves. Since the Dhh promoter is specifically active in Schwann cells, we would expect Dicer to be ablated exclusively in this cell type (not neurons or other cell types in the analyzed nerve) and the microarray to identify miRNAs specifically expressed in Schwann cells. We chose two different developmental time points for analysis. The first time point was before the onset of myelination and before structural differences could be observed between Dicer fl/fl Dhh-Cre + and Dicer wt/fl Dhh-Cre + nerves (at E17). The second time point was p4, when myelination had already started in control mice and was already evidently impaired in Dicer fl/fl Dhh-Cre + mice.
Figure 3. Time course analysis of defective myelination following Schwann cell-specific Dicer depletion. Sciatic nerves of embryos (E17) and newborn mice (p4) were analyzed by electron microscopy for morphological effects of Schwann cell-specific Dicer depletion (A), and by quantitative RT-PCR for mRNA expression of factors known to regulate myelination. Number of biological replicates used: Dicer fl/fl Dhh-Cre + E17 n = 4, Dicer wt/fl Dhh-Cre + E17 n = 3, Dicer fl/fl Dhh-Cre + p4 n = 4, and Dicer wt/fl Dhh-Cre + p4 n = 4 (B-D). No morphological difference was observed before the onset of myelination at E17 between Dicer wt/fl Dhh-Cre + and Dicer fl/fl Dhh-Cre + nerves. By p4, myelination had begun in Dicer wt/fl Dhh-Cre + mice, but not in Dicer fl/fl Dhh-Cre + nerves. Some unsorted fibers were visible in Dicer fl/fl Dhh-Cre + nerves. Scale bars = 2 µm (A). Quantitative RT-PCR for mRNA expression in Dicer mutant nerves showed a lack of developmental upregulation of activators of myelination; p values for comparisons between p4 controls and mutants were: p = 0.0026 (Oct6), p = 0.0028 (Egr2), p = 0.0003 (Brn2), p = 0.0028 (Sox10; B). There was no significant difference in expression of the suppressors of myelination Sox2 and c-Jun (C), and altered expression of Notch signaling components and p75NTR was observed; p values for comparisons between p4 controls and mutants were: p = 0.061 (Delta1), p = 0.043 (Jagged1), p = 0.0001 (Jagged2), p = 0.02 (Notch3), p = 0.0061 (p75NTR; D). Dicer mRNA expression was significantly upregulated upon myelination in Dicer wt/fl Dhh-Cre + sciatic nerves (p4 in comparison with E17, p = 0.0004). At both time points, E17 and p4, significant Dicer mRNA depletion in Dicer fl/fl Dhh-Cre + nerves was shown (E17: p = 0.0007; p4: p < 0.0001; E). Presence of the recombined Dicer allele in E17 and p4 mice was demonstrated by PCR using primers that differentiate between wild-type Dicer and the recombined allele (F). The wild-type allele is 1.3 kb in length, and the recombined allele is approx. 500 bp (G). Akt phosphorylation and expression were analyzed by Western blot in Dicer wt/fl Dhh-Cre + compared to Dicer fl/fl Dhh-Cre + mice at p4. P-Akt in relation to total Akt was reduced to 62±14% of the control level in Dicer fl/fl Dhh-Cre + at p4. All p values were determined using an unpaired two-tailed Student's t-test. All error bars indicate standard deviation. doi:10.1371/journal.pone.0012450.g003
Among the 216 miRNAs expressed in peripheral nerves, we identified a total of 109 miRNAs which were either significantly developmentally up- or downregulated (p4 compared to E17) or significantly different between controls and mutants (Fig. 4). For the observed phenotype, however, the miRNAs of major interest were those which were upregulated upon the onset of myelination and also significantly downregulated in p4 Dicer mutant nerves compared to controls. The sixteen miRNAs which fulfilled these two criteria are listed in Table 1. For nine of these miRNAs we could confirm the differential expression by real-time PCR using miRNA-specific TaqMan probes (Fig. 5). Since downregulation of Schwann cell miRNAs might also control important steps of proper myelination, we analyzed the microarray dataset for those miRNAs that were significantly downregulated upon myelination and significantly reduced by ablation of Dicer from Schwann cells (either at E17 or at p4). Only three microRNAs met these criteria: miR-9, miR-455, and miR-1224.
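As a hedged sketch of this selection step, the snippet below filters a table of microarray results for miRNAs that are (1) significantly upregulated from E17 to p4 and (2) significantly downregulated in p4 mutants versus controls, using thresholds of the kind reported here (p ≤ 0.05, |log2 ratio| ≥ 0.5). The file and column names are hypothetical, and the original analysis was performed in R/Bioconductor.

```python
import pandas as pd

# Hypothetical table: one row per miRNA with log2 ratios and p-values
res = pd.read_csv("mirna_results.csv")

# Criterion 1: developmental upregulation in controls (p4 vs E17)
dev_up = (res["log2_p4_vs_E17_ctrl"] >= 0.5) & (res["p_dev"] <= 0.05)
# Criterion 2: downregulation in p4 mutants vs p4 controls
mut_dn = (res["log2_mut_vs_ctrl_p4"] <= -0.5) & (res["p_mut"] <= 0.05)

candidates = res[dev_up & mut_dn]        # the 16 candidates of Table 1
print(candidates["miRNA"].tolist())
```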
Altered myelination signals in Dicer-deficient peripheral nerves
In order to determine the effect of miRNA depletion from Schwann cells on the expression of molecules involved in peripheral nerve myelination, we performed quantitative RT-PCR on mRNA isolated from E17 and p4 Dicer fl/fl Dhh-Cre + and Dicer wt/fl Dhh-Cre + nerves. The mRNA expression of several transcription factors, cell surface receptors, and other molecules known to be involved in peripheral nerve myelination was significantly altered in p4 Dicer fl/fl Dhh-Cre + nerves compared to Dicer wt/fl Dhh-Cre + nerves (Fig. 3b-d). The factors known to promote myelination, Oct6, Egr2, Brn2, and Sox10, were all significantly downregulated in Dicer mutant nerves. Apart from Brn2, dicerless nerves from 4-day-old mice expressed these factors at levels similar to embryonic nerves from E17 mice (Fig. 3b). Inhibitors of myelination, including Sox2 and c-Jun, were not altered (Fig. 3c). We also observed dysregulation of components of the Notch signaling pathway, including significant downregulation of Notch3 and Jagged2, as well as somewhat lower Delta1 expression (which did not attain statistical significance) in Dicer mutant nerves. In contrast, Jagged1 was upregulated in Dicer mutant nerves. Therefore, loss of miRNA expression in peripheral nerves dramatically alters the balance between pro- and anti-myelin signals.
Discussion
Here we show that Dicer expression in developing Schwann cells is crucially involved in peripheral myelination. By using a different, independently generated conditional Dicer knockout mouse strain [17,18], we confirm recently published data [14]. The morphological and ultrastructural changes in dicerless peripheral nerves point to a crucial role for miRNAs in the transition of Schwann cells from the pro-myelin stage to the myelinating stage. Although Dicer is depleted at earlier developmental stages in Schwann cell precursors of Dicer fl/fl Dhh-Cre + mice, Dicer-deficient Schwann cells are nevertheless able to reach the immature Schwann cell stage, as evidenced by positive S100 expression and basal lamina formation in Dicer fl/fl Dhh-Cre + nerves, despite an altered microRNA expression profile already evident at E17. Proper sorting of the majority of axons indicates that Schwann cell differentiation into the myelinating phenotype is mainly arrested at the pro-myelin stage, at the time when Dicer expression begins to increase in control sciatic nerves (Fig. 6). The reduction in mature myelin by ultrastructural or biochemical analysis (CNPase, MBP, PMP22) supports this conclusion. Some fibers appeared to overcome the myelination block, possibly due to incomplete recombination or to residual Dicer protein persisting after genetic ablation. However, the myelin sheaths formed around these nerves are abnormally thin, confirming previous findings [14]. Dicer-deficient Schwann cells not only failed to myelinate, but were also unable to form normal Remak bundles of unmyelinated small-caliber axons. Cre expression itself has been shown to be toxic to certain cell types [19]. To exclude that toxicity of Cre expression in Schwann cells induced myelination defects or miRNA expression changes, we used Dicer wt/fl Dhh-Cre + littermates as controls. We did not see any evidence for spurious effects on myelination due to Cre expression.
Myelin formation is known to be associated with cessation of Schwann cell proliferation. In 22-day-old Dicer fl/fl Dhh-Cre + nerves, we observed mitotic events within Schwann cell nuclei and an increased proliferation rate as determined by Ki67 staining. The increased cell proliferation observed in arrested immature Schwann cells was not seen by others, who determined BrdU incorporation rates in younger mice only [14]. This difference may reflect the age difference of the analyzed mice. In contrast to the result of the previous study, our data suggest that increased Schwann cell proliferation is indeed a consequence of Dicer ablation from Schwann cells and accompanies the defect in myelination, at least at an older age. Consistent with Pereira et al., we also observed a slight increase in the number of TUNEL-positive cells in sciatic nerves of Dicer mutant mice.
How can the above findings be mechanistically explained? In Dicer-less Schwann cells, a global reduction of all miRNAs may directly lead to both Schwann cell proliferation and death, perhaps because certain miRNAs are necessary for exiting the cell cycle, terminal differentiation, and cell survival (Fig. 6). It is not necessarily contradictory that both cell death and cell proliferation are stimulated in the absence of Dicer, since its global effect on all miRNAs is expected to produce pleiotropic phenotypes. Alternatively, in the absence of Dicer, arrested pro-myelin Schwann cells may become prone to degeneration. Cell death might induce compensatory proliferation, either of Schwann cells lacking Dicer or of Schwann cells in which recombination of the Dicer flox allele has failed. However, it is not obvious which signaling pathway might trigger such a hypothetical compensation. Also, the lower percentage of dying cells (2%) compared to the higher percentage of proliferating cells (11%) argues against compensatory proliferation.
By microRNA microarray, we identified numerous miRNAs that are expressed in peripheral nerves during development at E17 and p4. Among the 216 expressed miRNAs, 109 were either up- or downregulated upon differentiation of immature Schwann cells at E17 into myelinating Schwann cells at p4, or differentially expressed as a consequence of Dicer depletion at E17 or at p4. Unexpectedly, for a number of miRNAs, Dicer mutant Dicer fl/fl Dhh-Cre + nerves showed higher expression when compared to control Dicer wt/fl Dhh-Cre + nerves. This suggests that other endoneural cells, like fibroblasts and endothelial cells, or axonally transported miRNAs may contribute to the overall pool of miRNAs which are regulated in response to the Schwann cell-specific lack of miRNA expression.
It is plausible to assume that miRNAs crucially involved in myelination should be upregulated as Schwann cell development progresses towards the myelinating phenotype. We therefore selected those miRNAs that were both (1) significantly upregulated upon myelination and (2) significantly decreased upon Dicer depletion in Schwann cells. Only 16 miRNAs met these criteria (Table 1). Furthermore, we identified miR-9, miR-455, and miR-1224 as microRNAs downregulated upon myelination and reduced following Dicer ablation from Schwann cells. MiR-9 has previously been shown to regulate PMP22 expression in oligodendrocytes [13]. Assuming that PMP22 is also regulated by miR-9 in Schwann cells, the downregulation of this negative regulator of the major myelin protein PMP22 upon myelination should promote myelin formation. Although downregulation of miRNAs might also be crucial during the development of peripheral myelin, such miRNAs are unlikely to be responsible for the observed phenotype following Dicer depletion in our study.
Figure 4. Heat map of developmentally and/or Dicer-dependently regulated miRNAs in sciatic nerves. Expression of miRNAs in sciatic nerves of E17 and p4 Dicer fl/fl Dhh-Cre + and control Dicer wt/fl Dhh-Cre + littermates was analyzed by miRNA microarray. Four biological replicates for each group were analyzed on separate arrays. Of the 216 miRNAs expressed, 109 miRNAs (listed on the right side of the heat map) were differentially expressed, either in an age-dependent manner or in a manner dependent on the expression of Dicer in Schwann cells (p ≤ 0.05, log2 ratio ≥ 0.5). Differential expression in log2 ratio is color coded as indicated in the legend below the heat map (red = upregulation, green = downregulation). Based on hierarchical clustering, five groups, each containing miRNAs of similar expression pattern, are indicated by gray bars on the left side. Group 1 contains miRNAs which are upregulated in Dicer fl/fl Dhh-Cre + compared to Dicer wt/fl Dhh-Cre + nerves at E17 and at p4. Group 2 contains miRNAs that are downregulated in both genotypes at p4 compared to E17. Group 3 includes miRNAs that are downregulated in Dicer wt/fl Dhh-Cre + nerves at p4 when compared to E17 and downregulated in Dicer fl/fl Dhh-Cre + compared to Dicer wt/fl Dhh-Cre + nerves at E17. Group 4 contains miRNAs which are downregulated in Dicer fl/fl Dhh-Cre + compared to Dicer wt/fl Dhh-Cre + nerves at p4 and are abundantly expressed also in Dicer wt/fl Dhh-Cre + nerves at E17. Group 5 includes miRNAs that are upregulated in Dicer wt/fl Dhh-Cre + nerves at p4 compared to both Dicer fl/fl Dhh-Cre + and Dicer wt/fl Dhh-Cre + nerves at E17. doi:10.1371/journal.pone.0012450.g004
Table 1. miRNAs significantly upregulated in sciatic nerves during myelination (log2 ratio ≥ 0.5) and significantly decreased as a consequence of Schwann cell-specific Dicer depletion (log2 ratio ≤ -0.5).
Another study analyzed and compared the miRNA expression pattern in proliferating and differentiated rat Schwann cells in vitro [21]. This study focused on downregulation of rno-miRNA-29a expression in differentiated Schwann cells and the negative regulatory effect of miRNA-29a on PMP22 expression. In our study, miRNA-29a was slightly but not significantly upregulated upon myelination, indicating that its inhibitory effect on PMP22 expression plays no major role in mice in vivo at this time point of development. The miRNAs identified by Verrier et al. to be upregulated in differentiated Schwann cells showed no overlap with our main candidates in vivo [21]. This is most likely caused by the differences in the experimental design. Under specific conditions, e.g. in nerve regeneration or at later time points of development, other miRNAs might be involved.
The process of myelination requires the specific upregulation of pro-myelinating proteins and the coordinated downregulation of anti-myelinating proteins at specific stages of myelin maturation.
Interestingly, we found that several transcripts (Oct6, Egr2, Brn2, Sox10) encoding pro-myelinating proteins failed to be upregulated in Dicer fl/fl Dhh-Cre + nerves, with expression levels of Oct6, Egr2, and Sox10 in Dicer fl/fl Dhh-Cre + p4 nerves closely matching those in Dicer wt/fl Dhh-Cre + E17 nerves. The only exception was Brn2, which was downregulated even further in Dicer fl/fl Dhh-Cre + p4 nerves than in E17 nerves. This is in contrast to the recent study by Pereira et al., in which Sox10 expression was not significantly reduced and Oct6 expression was only marginally reduced. The dramatic downregulation of Egr2 reported by Pereira et al., on the other hand, was consistent with our study [14]. In addition, we found elevated levels of p75NTR mRNA in Dicer fl/fl Dhh-Cre + nerves; p75NTR is normally downregulated after the onset of myelination.
miRNA expression is normally associated with silencing of target mRNAs, either through enhanced mRNA degradation or translational inhibition of target transcripts. Therefore, the Schwann cell differentiation defect observed in nerves of Dicer fl/fl Dhh-Cre + mice might reflect the failure of Dicer fl/fl Dhh-Cre + Schwann cells to downregulate anti-myelin signaling molecules. We tested the mRNA expression of the anti-myelinating factors Sox2 and c-Jun in Dicer fl/fl Dhh-Cre + versus Dicer wt/fl Dhh-Cre + nerves. We did not detect any significant difference in Sox2 or c-Jun expression between these two groups at the mRNA level; however, Pereira et al. reported elevated Sox2 in Dicer fl/fl Dhh-Cre + nerves [14]. miRNA-34a was recently shown to act as a tumor suppressor in human glioma cells by inhibiting cell proliferation [22,23]. It has also recently been shown that miRNA-34a is downregulated in tumors of peripheral nerves called malignant peripheral nerve sheath tumors (MPNST), where it may act as a tumor suppressor as well [24]. MiRNA-34a was a main candidate in our screen, and a major histological observation in Dicer mutant nerves was increased Schwann cell proliferation. It is likely that miRNA-34a drives Schwann cell differentiation by shutting down their proliferation during development; failure of this regulatory circuit may be involved in the histogenesis of schwannomas. Besides miR-34a, other miRNAs identified in our screen were previously found to be associated with an inhibitory effect on the proliferation of non-neural tumor cells, including miR-24 in HeLa cells [25] and miR-100 in oral squamous cell carcinoma [26].
Furthermore, we observed an increased phosphorylation of Erk in Dicer mutant nerves at p20. Activation of the Erk pathway is known to induce dedifferentiation of Schwann cells and Schwann cell proliferation, and might contribute to the observed failure of Schwann cells to myelinate [27,28]. Also, overexpression of Ras protein can induce Schwann cell differentiation and proliferation arrest, in contrast to its proliferation-promoting effects on other cell types [29,30]. In Dicer mutant nerves, we observed a significantly lower Ras expression compared to control nerves. Therefore, low levels of Ras might also contribute to the observed phenotype.
We also found altered mRNA expression of transcripts involved in Notch signaling, including Jagged1, Jagged2, and Notch3. In addition, Pereira et al. observed an increased expression of Notch1 and Hes1, as well as a reduced level of ErbB2, in Dicer fl/fl Dhh-Cre + nerves [14]. Of note, Jagged1 and Notch1, which were upregulated at the mRNA level in Dicer fl/fl Dhh-Cre + nerves compared to Dicer wt/fl Dhh-Cre + nerves, were previously identified as targets of miR-34a, which we identified in our miRNA screen [22,31]. Deregulated Notch and/or neuregulin signaling may therefore partly explain the failure of immature Schwann cells to upregulate some pro-myelin transcripts, such as Egr2. Consistent with Pereira et al., we observed impaired Akt phosphorylation at Ser-473 in Dicer mutant nerves at the onset of myelination (at p4/p5) [14]. In contrast, at p18 we observed increased Akt phosphorylation in Dicer mutant nerves. Hence, in young mice, deletion of Dicer led to a reduction of the pro-myelinating phosphorylation of Akt. Since Akt activation promotes myelination [32], the increased Akt phosphorylation at an older age may reflect a compensatory upregulation of pro-myelinating signals in response to abnormally high levels of anti-myelinating factors in the absence of miRNA regulation. In any case, it seems as if the compensatory response in Dicer fl/fl Dhh-Cre + nerves is unable to override anti-myelination signals, as the nerves nevertheless fail to myelinate.
It will be interesting to determine in the future whether miRNAs that are upregulated in both CNS and PNS myelination play analogous roles and/or target the same proteins in these cell types. Conversely, miRNAs that are distinctly upregulated in either oligodendrocytes or Schwann cells may target proteins and/or control processes that are specific to either PNS or CNS myelination.
Clearly, miRNAs play a crucial role in the myelination process both in the CNS and the PNS. An important task that remains for future studies will be to positively identify the Schwann cell-specific target transcripts of, and the mode of regulation by, miRNAs that are specifically upregulated in peripheral nerves during PNS myelination.
Mice and ethical statement
We housed mice and performed animal experiments in accordance with the Swiss Animal Protection Law and in compliance with the animal welfare regulations of the Canton of Zurich. The Committee on Animal Experimentation of the Cantonal Veterinary Office of the Canton of Zurich has specifically approved this study under license number 200/2007. Dicer fl/fl mice were obtained from Jackson Laboratory (strain name: Dicer1 tm1Bdh /J; stock number: 006001) [17]. Dhh-Cre mice were kindly provided by Dr. Dies Meijer [33]. Dicer fl/fl mice were crossed to Dhh-Cre, and subsequently, F1 mice were bred again to Dicer fl/fl mice to obtain Dicer fl/fl Dhh-Cre + mice. For identifying Dhh-Cre transgene positive mice, the following primers were used: Cre fw: ACC CTG TTA CGT ATA GCC GA, Cre rev: CTC CGG TAT TGA AAC TCC AG. For distinguishing Dicer floxed and Dicer wild-type alleles, the following primers were used: DF1: CCT GAC AGT GAC GGT CCA AAG and DR1: CAT GAC TCT TCA ACT CAA ACT, producing a wild-type allele-specific product of 350bp and a floxed allele-specific product of 420bp. The recombined allele was amplified using DF1 primer and Ddel primer: CCT GAG CAA GGC AAG TCA TTC, the same primer set recognized also the wild-type allele (product size 1.3 kb).
Electron microscopy
Mice at the age of p4 or older were anesthetized and transcardially perfused with PBS followed by 3.9% glutaraldehyde in 0.1 M phosphate buffer, pH 7.4. Sciatic nerves of embryos (E17) were fixed with glutaraldehyde in situ for at least 5 minutes. At least n = 4 mice of each genotype and age group (E17, p4, and p18, mixed gender) were analyzed. All nerves were then postfixed in glutaraldehyde in a test tube. Tissues were embedded in Epon using standard procedures. Semithin sections were stained with toluidine blue. Ultrathin sections were mounted on copper grids coated with Formvar membrane and contrasted with uranyl acetate/lead citrate. We examined the specimens using a Hitachi H-7650 transmission electron microscope operating at 80 kV. We took pictures with a digital CCD camera.
miRNA microarray
miRNA was extracted from sciatic nerves as described in the next section. Sciatic nerves of embryos at the gestational age of 17 days (E17) or newborns at the age of 4 days (p4) were used. For each time point, 4 pairs of Dicer fl/fl Dhh-Cre + and Dicer wt/fl Dhh-Cre + littermates were used and analyzed separately on individual arrays (Dicer fl/fl Dhh-Cre + E17 n = 4: 1 male, 3 females; Dicer +/fl Dhh-Cre + E17 n = 4: 1 male, 3 females; Dicer fl/fl Dhh-Cre + p4 n = 4: 1 male, 3 females; and Dicer +/fl Dhh-Cre + p4 n = 4: 2 males, 2 females). Purity and quality of the isolated total RNA were determined using a NanoDrop ND 1000 (NanoDrop Technologies) and a Bioanalyzer 2100 (Agilent), respectively. Only those samples with a 260 nm/280 nm ratio between 1.6 and 2.1 and a 28S/18S ratio between 1.5 and 2 were further processed. Fluorescent miRNA was generated with a sample input of 100 ng of total RNA. This method involves the ligation of one Cyanine 3-pCp molecule to the 3′ end of an RNA molecule using a miRNA Complete Labeling and Hyb Kit (Agilent). The quality of the Cy3-RNA was determined using a NanoDrop ND 1000. Only RNA samples with a dye incorporation rate >2 pmol/µg were considered for hybridization. Cy3-labeled RNA samples were mixed with Agilent Blocking Solution and resuspended in Hybridization Buffer using a miRNA Complete Labeling and Hyb Kit (Agilent). Target RNA samples (45 µl) were hybridized to Mouse miRNA 8x15k OligoMicroarrays (Agilent P/N G4472A, Design ID 019119) for 20 h at 55°C. Arrays were then washed using Agilent GE Wash Buffers 1 and 2 (Agilent), according to the manufacturer's instructions. An Agilent Microarray Scanner (Agilent P/N G2565BA) was used to measure the fluorescence intensity emitted by the labeled target. Raw data processing was performed using the Agilent Scan Control and Agilent Feature Extraction Software Version 10.5.1.1. Quality control measures were considered before performing the statistical analysis. These included inspection of the array hybridization pattern (absence of scratches, bubbles, and areas of non-hybridization), proper grid alignment, and the number of green feature non-uniformity outliers (below 100 for all samples). Expression data were analyzed using R/Bioconductor. Briefly, median spot signals were log2-transformed and normalized using quantile normalization. Differential expression was assessed using t-tests and fold-change analysis. miRNAs flagged as absent by the Feature Extraction Software in more than 50% of the samples in each of the 4 conditions were excluded from the results. All data is MIAME compliant, and the raw data is available in the GEO archive under accession GSE22023.
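As a hedged illustration of the normalization and testing steps just described (log2 transform, quantile normalization, t-test, fold change), the sketch below uses Python/pandas rather than the R/Bioconductor pipeline actually employed; the file and column names are hypothetical, and complete data (no missing values) are assumed.

```python
import numpy as np
import pandas as pd
from scipy.stats import ttest_ind

# Hypothetical (miRNA x array) matrix of median spot signals
signals = pd.read_csv("spot_signals.csv", index_col=0)
log2sig = np.log2(signals)

# Quantile normalization: replace each value by the mean of all values
# sharing its within-column rank
rank_mean = log2sig.stack().groupby(
    log2sig.rank(method="first").stack().astype(int)).mean()
normed = log2sig.rank(method="min").stack().astype(int) \
                .map(rank_mean).unstack()

# Differential expression: mutant vs control arrays (hypothetical names)
mut = normed[["mut1", "mut2", "mut3", "mut4"]]
ctl = normed[["ctl1", "ctl2", "ctl3", "ctl4"]]
t_stat, p_val = ttest_ind(mut, ctl, axis=1)
log2_ratio = mut.mean(axis=1) - ctl.mean(axis=1)   # fold change in log2
```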
For hierarchical clustering of the heat map, we used the Euclidean distance between the normalized expression profiles of the miRNAs as the distance measure. Clusters were linked using Ward's linkage rule, which minimizes intra-cluster variance. A simplified version of the clustering tree is visualized on the left side of the heat map, including five groups of miRNAs.
RNA extraction and real time PCR
RNA was extracted from sciatic nerves using miRNeasy (Qiagen) as described by the manufacturer, using a Polytron PT 3100 (Kinematica). cDNA from mRNA was synthesized with the QuantiTect Reverse Transcription kit (Qiagen) and analyzed by real-time PCR using the QuantiFast SYBR Green PCR kit (Qiagen). cDNA from miRNA was synthesized with the TaqMan MicroRNA RT kit (Applied Biosystems) and analyzed using TaqMan MicroRNA Assays (Applied Biosystems) and TaqMan Universal Master Mix II (Applied Biosystems). All samples were analyzed using a 7900HT Fast Real-Time PCR system (Applied Biosystems). Three to four biological replicates were used for each group analyzed (Dicer fl/fl Dhh-Cre + E17 n = 4: 3 males, 1 female; Dicer +/fl Dhh-Cre + E17 n = 3: 3 males; Dicer fl/fl Dhh-Cre + p4 n = 4: 3 males, 1 female; and Dicer +/fl Dhh-Cre + p4 n = 4: 2 males, 2 females). Of each biological replicate, two (mRNA) or three (miRNA) technical replicates were analyzed.
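The section does not spell out the quantification scheme; a common approach for such relative real-time PCR data is the 2^(-ΔΔCt) method, sketched below purely as an illustration with hypothetical Ct values.

```python
# Hypothetical relative quantification by the 2^(-ΔΔCt) method; the paper
# does not state its exact quantification scheme, so this is illustrative.
def rel_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    d_ct_sample = ct_target - ct_ref            # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_ref_ctrl
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# e.g. Dicer mRNA in a mutant nerve relative to a control nerve
print(rel_expression(ct_target=27.1, ct_ref=18.3,
                     ct_target_ctrl=24.0, ct_ref_ctrl=18.1))
```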
Immunohistochemistry
Sciatic nerves were fixed in 4% formalin and embedded in paraffin. For each genotype, at least four nerves were analyzed (22-day-old Dicer fl/fl Dhh-Cre + and control Dicer wt/fl Dhh-Cre + littermates). Longitudinal paraffin or frozen sections were incubated with the following antibodies: anti-S100 (Dako), anti-MIB1 (Dako), B220/CD45R for B-cells (Pharmingen), CD3 for T-cells (clone SP7, NeoMarkers), CD68 for macrophages (Serotec), or stained with haematoxylin-eosin. For detection of primary antibodies, a Ventana machine was used according to the manufacturer's protocol. Mounted slides were analyzed on an Axiophot microscope (Zeiss), equipped with a JVC digital camera (KY-F70; 3CCD). Rabbit immunoglobulin fraction (Dako) served as negative control for S100 staining (data not shown). For assessing proliferation, 940-1400 nuclei per mouse were counted and the "MIB1 index" was determined as the percentage of nuclei positive for MIB1 immunohistochemistry. CD3- and CD68-positive cells were quantified in relation to the total number of nuclei.
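As a hedged sketch of this quantification, the snippet below computes the MIB1 index from per-animal nuclei counts and compares genotypes with an unpaired two-tailed t-test, matching the analysis described above; the counts themselves are hypothetical.

```python
from scipy.stats import ttest_ind

# Hypothetical per-mouse counts: (MIB1-positive nuclei, total nuclei counted)
mutant_counts = [(140, 1180), (151, 1250), (112, 960), (160, 1390)]
control_counts = [(13, 1100), (15, 1210), (12, 980), (17, 1400)]

def mib1_index(counts):
    # MIB1 index = percentage of nuclei positive for MIB1 staining
    return [100.0 * pos / total for pos, total in counts]

t, p = ttest_ind(mib1_index(mutant_counts), mib1_index(control_counts))
print(f"t = {t:.2f}, p = {p:.5f}")   # unpaired, two-tailed by default
```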
"Biology"
] |
Characterization of eucalyptus clones subject to wind damage
The objective of this work was to test a new methodology to assess the resistance of trees to wind damage and determine the characteristics that increase clone resistance to winds. Tree resistance to breakage, basic density, ultrastructure, anatomy, mechanical properties, and wood growth stress have been evaluated in seven Eucalyptus grandis × Eucalyptus urophylla clones, collected from a region with a high incidence of wind damage. The Pearson correlation coefficient between the tree resistance to breakage and the ratio between the area damaged by the winds and the total planted area was -0.839, showing the efficiency of the methodology adopted and that high breaking strength results in a smaller area affected by wind damage. Trees with a high basic density, cell wall fraction, modulus of elasticity of the middle lamella and fibers, fiber hardness, modulus of rupture, growth stress and low microfibril angle and height and width of the rays showed greater resistance to wind damage. Therefore, the selection of clones with these features may reduce the incidence of damage by winds in Eucalyptus plantations.
Introduction
The forestry segment is important for the Brazilian economy (IBÁ, 2014). Eucalyptus wood is the main raw material in this industry and can be used to produce pulp, energy, panels, and lumber. In Brazil, planted Eucalyptus forests produce an average of 39 m³ ha⁻¹ year⁻¹ in a cutting cycle of seven years (IBÁ, 2015). These results are due to climate conditions and investment in research. Despite the favorable outlook, environmental factors, such as wind damage, can limit or restrict Eucalyptus wood production (Braz et al., 2014; Boschetti et al., 2015).
Wind is a phenomenon that causes disturbances in natural and planted forests, with damage recorded since 1940 (Mitchell, 2013) and reports worldwide (Allen et al., 2012; Mitchell, 2013; Moore et al., 2013). In Brazil, the damage caused by wind in forest plantations of Eucalyptus spp. occurs mainly between 24 and 36 months after planting, and depending on the material, this damage can exceed 20% of the planted area (Braz et al., 2014; Boschetti et al., 2015). The lack of alternatives for this material leads to its use for energy (Guerra et al., 2014).
The winds can bend or break the trees (Mitchell, 2013). In the first case, bending results in the loss of apical dominance (Panshin & Zeew, 1980), reducing wood production. In the second case, breaking trees affects the entire supply chain, as harvesting trees with smaller diameters reduces the efficiency of this operation and increases costs (Spinelli et al., 2009); in addition, a new plantation has to be established. The trees broken by wind need to be removed from planted forests; thus, it is possible to obtain this wood at a low price, allowing its use in the production of small objects (Vieira et al., 2010) and in the furniture industry (Lopes et al., 2011).
The objective of this work was to test a new methodology for assessing the resistance of trees to breakage, and to evaluate the properties of Eucalyptus wood, relating them to tree resistance to wind damage, and assisting in management of eucalyptus wind damage.
Materials and Methods
Seven two-year-old Eucalyptus grandis × Eucalyptus urophylla clones from the municipality of Belo Oriente (19º15'00"S, 42º22'30"W), state of Minas Gerais, Brazil, were selected in February 2014. This region and this Eucalyptus age were chosen because of their high incidence of wind damage.
Four trees per clone were cut, and two discs were removed at 1.3 m height to evaluate the anatomy, basic density, and ultrastructure of the wood; a three-meter log was removed above these discs for characterization of the wood's mechanical properties. Resistance to breakage and growth stress were evaluated in four other trees per clone. The selected trees had their height and diameter at 1.3 m measured, and grew under the same weather and soil conditions. The area broken by the wind and the total planted area were catalogued for each clone (Table 1).
A rope was tied at 85% of the total height of the tree to evaluate the force required to break the tree in the field. A pulley was attached to a rope between two nearby trees, 12 meters away from each other. The rope tied to the tree to be tested passed through this pulley, forming an angle of approximately 45°. Another pulley was coupled with a dynamometer to measure the force necessary to break the tree. At the end of the rope, a motor was used to pull the tree, according to Braz et al. (2014) (Figure 1).
The force required to break the tree and the ratio between the damaged area and the total planted area per clone were analyzed using the Pearson correlation coefficient, to assess the quality of the tree resistance test.
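A minimal sketch of this check is shown below, assuming hypothetical per-clone values; scipy.stats.pearsonr returns the coefficient and its two-sided p-value (the paper reports r = -0.839 between breaking force and the damaged-to-planted area ratio).

```python
from scipy.stats import pearsonr

# Hypothetical per-clone data: force to break the tree and the ratio of
# wind-damaged area to total planted area (%)
breaking_force = [410, 350, 390, 300, 460, 330, 370]
damage_ratio   = [4.2, 12.5, 6.1, 18.3, 2.0, 15.0, 9.4]

r, p = pearsonr(breaking_force, damage_ratio)
print(f"Pearson r = {r:.3f}, p = {p:.3f}")  # a strong negative r is expected
```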
The wood basic density was determined as the ratio between the wood dry mass and the wood green volume in one of the 5-cm discs removed from the tree at 1.3 m height, according to standard NBR 11941 (ABNT, 2003).
For anatomical characterization, the sample was obtained from the intermediate position, from pith to bark, of one of the 5-cm discs removed 1.3 m above ground level. Histological sections were made, and the macerated material was prepared. The microscopic description of the wood was done according to the International Association of Wood Anatomists - IAWA (Wheeler, 1989). The fiber cell wall thickness was obtained as the difference between the fiber width and the lumen diameter, divided by two. The cell wall fraction was calculated according to the following equation (Wheeler, 1989): CWF = [(2CWT)/FW]×100, in which: CWT, cell wall thickness (µm); FW, fiber width (µm); CWF, cell wall fraction (%).
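A minimal sketch of these two derived measures, with illustrative inputs (not values from the paper):

```python
def cell_wall_thickness(fiber_width_um, lumen_diameter_um):
    # CWT = (fiber width - lumen diameter) / 2
    return (fiber_width_um - lumen_diameter_um) / 2.0

def cell_wall_fraction(fiber_width_um, lumen_diameter_um):
    # CWF = (2 * CWT / FW) * 100, following Wheeler (1989)
    cwt = cell_wall_thickness(fiber_width_um, lumen_diameter_um)
    return (2.0 * cwt / fiber_width_um) * 100.0

# Illustrative fiber measurements in micrometers
print(cell_wall_fraction(18.0, 10.0))  # -> 44.4% of the fiber cross-section is wall
```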
The microfibril angle (MFA) of the S2 layer was determined in the sample used for anatomical characterization. After saturation, the blocks were cut into 10-µm thick sections with a microtome, in the tangential plane, and were macerated with a hydrogen peroxide and glacial acetic acid solution in a 2:1 ratio at 55°C for 24 hours (Leney, 1981). Next, the fibers were washed in distilled water, and temporary slides were prepared to measure the microfibril angle.
The measurement of the microfibril angle was performed by polarized light microscopy (Leney, 1981), using an Olympus BX 51 microscope (Olympus Corporation, Shinjuku, Tokyo, Japan) adapted with a rotary stage, graduated from 0° to 360°, connected to the image analysis program Image Pro-Plus (Media Cybernetics Inc., Rockville, MD, USA). The microfibril angle was measured in 30 fibers per sample. The ratio between the basic density and the microfibril angle was calculated according to Hein et al. (2013).
For nanoindentation, the sample was removed from the position opposite to that used for anatomical characterization. A 3×3×3 mm specimen was made from this sample and embedded in epoxy resin to determine the modulus of elasticity and hardness of the S2 layer of the fiber and of the middle lamella. The nanoindentation was performed using the TriboIndenter Hysitron TI-900®. The maximum applied load was 100 μN for 60 seconds, with unloading performed at 20 μN s⁻¹ (Muñoz et al., 2012). The elastic modulus was determined according to the equation: 1/Er = (1 − vi²)/Ei + (1 − vm²)/Em, in which: MOE = Em, modulus of elasticity (GPa); and, according to instructions from the device manufacturer, vi = 0.07; vm = 0.35; and Ei = 1,140 GPa. The reduced modulus (Er) was obtained from the initial slope of the unloading segment of the load-displacement curve, wherein the elastic response is generated (Muñoz et al., 2012).
The hardness (H) was determined as the maximum load supported by the specimen divided by the contact area (Muñoz et al., 2012), according to the equation: H = Pmax/A, in which: Pmax, maximum load of indenter penetration; A, projected contact area at maximum load.
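A small sketch of both nanoindentation quantities; the unloading relation for Em is the standard reduced-modulus equation reconstructed above, and the numeric inputs are illustrative only.

```python
def specimen_moe_gpa(er_gpa, v_m=0.35, v_i=0.07, e_i=1140.0):
    # 1/Er = (1 - v_i^2)/E_i + (1 - v_m^2)/E_m  =>  solve for E_m (the MOE)
    return (1.0 - v_m**2) / (1.0 / er_gpa - (1.0 - v_i**2) / e_i)

def hardness_gpa(p_max_un, contact_area_nm2):
    # H = Pmax / A, with 1 uN/nm^2 = 1,000 GPa
    return p_max_un / contact_area_nm2 * 1.0e3

# Illustrative: Er = 18 GPa on the S2 layer; Pmax = 100 uN over 4.0e5 nm^2
print(specimen_moe_gpa(18.0))      # ~16 GPa fiber-wall MOE
print(hardness_gpa(100.0, 4.0e5))  # 0.25 GPa hardness
```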
A central plank was removed from the three-meter log taken above 1.3 m height. With a saw blade, samples were made from the peripheral region of the plank for evaluation of the wood's mechanical properties. The compression parallel to the grain, modulus of elasticity (MOE), and modulus of rupture (MOR) were determined according to the American Society for Testing and Materials (ASTM, 1997).
The growth stress was evaluated in the standing trees. The longitudinal displacement (LD) was obtained with the CIRAD sensor. This method consists of installing two nails, separated by 45 mm, in the longitudinal direction of the debarked wood (Dassot et al., 2015). These nails were connected to a sensor to record the longitudinal displacement, and a hole was made between the two nails. The longitudinal displacement was recorded, and the growth stresses were calculated according to Trugilho et al. (2002), using the equation: GS = (LD × MOE)/45, where: GS, growth stresses (MPa); LD, longitudinal displacement (mm); and MOE, modulus of elasticity (MPa).
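The growth-stress formula is a one-liner; the inputs below are illustrative:

```python
def growth_stress_mpa(ld_mm, moe_mpa, gauge_mm=45.0):
    # GS = (LD x MOE) / 45, after Trugilho et al. (2002); LD in mm, MOE in MPa
    return ld_mm * moe_mpa / gauge_mm

# Illustrative: 0.08 mm displacement over the 45 mm gauge, wood MOE of 9,000 MPa
print(growth_stress_mpa(0.08, 9000.0))  # -> 16.0 MPa
```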
Variance homogeneity (Bartlett's test at 5% probability) and normality (Shapiro-Wilk test at 5% probability) were verified. The means of the treatments were compared with the Scott-Knott test at 5% probability. The Pearson correlation coefficient between the wood properties and the ratio between the damaged and the total planted area, per clone, was computed to identify the characteristics that best related to clone resistance to wind damage.
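The two preliminary tests and the correlation step map directly onto SciPy (the Scott-Knott grouping has no standard SciPy routine and is omitted here); the replicate data below are synthetic.

```python
import numpy as np
from scipy.stats import bartlett, shapiro, pearsonr

rng = np.random.default_rng(0)
# Synthetic basic-density replicates (kg m^-3) for three hypothetical clones
clones = [rng.normal(mu, 15.0, size=4) for mu in (420.0, 450.0, 480.0)]

print("Bartlett p =", bartlett(*clones).pvalue)                # variance homogeneity
print("Shapiro p  =", shapiro(np.concatenate(clones)).pvalue)  # normality

# Correlation of clone-mean density with a hypothetical damage ratio (%)
damage = [9.5, 3.1, 0.4]
print("Pearson r  =", pearsonr([c.mean() for c in clones], damage)[0])
```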
Results and Discussion
The area of wind damage per clone varied between 0.1 and 15.1% of the total planted area, and the force required to break the trees varied between 16.2 and 45.6 kgf (Figure 2). Clones B, C, and D had a smaller area affected by winds and a higher force required to break their trees, resulting in a correlation coefficient between these variables of -0.839. This value shows that a higher force needed to break the trees corresponds to a lower damaged area, so this technique can be used to evaluate the resistance of Eucalyptus clones to wind damage. Braz et al. (2014) reported values between 39.76 and 196.36 kgf to break three-year-old Eucalyptus trees, whose diameters at 1.3 m height varied between 10.82 and 13.34 cm. This difference occurred due to the age of the trees, because the older ones had larger diameters and greater resistance to breaking.
There was no relationship between the diameter at 1.3 m height and the area damaged by wind. Trees with larger diameters have greater resistance to this type of phenomenon; however, due to the small variation among the diameters of the evaluated clones, this effect could not be detected in this work.
There was a relationship between wind damage and the basic density, the microfibril angle, and the ratio between these two parameters, with correlation coefficients of -0.8475, 0.7089, and -0.869, respectively (Table 2). Higher basic density resulted in more wood material per unit volume and greater resistance to breakage (Niklas & ...).
Figure 2. Relationship between the force needed to break the trees and the ratio between the area damaged by the winds and the total planted area, evaluated in seven two-year-old Eucalyptus grandis × Eucalyptus urophylla clones from the municipality of Belo Oriente, state of Minas Gerais, Brazil.
(1) Means followed by the same letter in the column do not differ by the Scott-Knott test at 5% probability. (2) Bd/Ma, ratio between basic density and microfibril angle. (3) Pearson correlation coefficient (r) between the variable and the ratio between the wind-damaged area and the total planted area. Values between parentheses represent the coefficient of variation.
Trees with high basic density and a low microfibril angle showed greater resistance and, therefore, should be recommended for areas with a high incidence of wind damage. The basic density/MFA ratio has the best relationship with the area damaged by the wind; however, for practical purposes, the basic density alone can be used to evaluate the resistance of clones to wind, because its determination is quicker and easier than that of the MFA (Hein et al., 2013).
A high basic density hampers wood processing (Moura et al., 2011) and the impregnation of white liquor during the cellulosic pulp manufacturing process, reducing the yield and increasing the rejects (Severo et al., 2013). In general, this parameter increases with age (Wassenberg et al., 2015); so, besides the resistance to wind, the end use of the wood should also be considered when selecting clones for areas with a high incidence of wind damage.
All the evaluated anatomical parameters varied between the clones (Table 3). Among the fiber classification parameters, the lumen diameter, cell wall thickness, and cell wall fraction had the highest coefficients of variation. In the evaluation of the histological sections, the highest values of the coefficient of variation were found for the height and width of the rays, identifying the anatomical constituents with the highest variation in the wood. In the nanoindentation tests, the fibers showed higher values for the modulus of elasticity, while the middle lamella had higher hardness values (Table 3). The same tendency was found for the anatomical characteristics in Eucalyptus spp. (Longui et al., 2014) and in nanoindentation tests in Ricinus communis (Li et al., 2014).
Table 3. Anatomical analysis, and modulus of elasticity (MOE) and hardness (H) of the fibers and middle lamella (ML) obtained by nanoindentation of seven two-year-old Eucalyptus grandis × Eucalyptus urophylla clones from the municipality of Belo Oriente, state of Minas Gerais, Brazil (1).
(9) r = Pearson correlation coefficient between the variable and the ratio between the wind-damaged area and the total area planted. CV, coefficient of variation.
Materials with a lower lumen diameter and a higher cell wall thickness corresponded to clones with smaller areas damaged by winds. The fibers have a structural function in hardwoods, and their morphology influences the mechanical properties of wood (Slater & Ennos, 2013). The lumen diameter and cell wall thickness had different influences on the mechanical properties of wood. These two parameters can be combined into one variable, the cell wall fraction, the anatomical variable that had the best relationship with the area damaged by winds, with a Pearson correlation coefficient of -0.783.
The vessels conduct water in plants and do not have a structural function, which explains the lack of relationship between this structure and wind damage. The ray cells have thin walls, resulting in low mechanical resistance (Panshin & De Zeeuw, 1980; Longui et al., 2014), and, for this reason, they offer little resistance to wind. This was evidenced by the Pearson correlation coefficients of 0.572 and 0.587 between the area damaged by winds and the height and width of the rays, respectively.
Thus, among the anatomical characteristics of the wood, the cell wall fraction showed a better relationship with the resistance of trees to winds and should be considered when selecting clones for areas with high incidence of wind damage.
A higher cell wall fraction and a lower microfibril angle resulted in a higher modulus of elasticity of the fiber (Gindl et al., 2004; Borrega & Gibson, 2015), which increased its resistance to winds. A smaller microfibril angle resulted in a better arrangement of these structures (Hein et al., 2013), which increased the fiber resistance per unit area and, thus, its hardness (Li et al., 2014). The middle lamella connects the adjacent cells (Panshin & De Zeeuw, 1980), being important in the tree structure; thus, there was a relationship between the modulus of elasticity of this structure and the resistance to wind damage. Finally, there was no relationship between the hardness of the middle lamella and the resistance to winds.
The mechanical properties of wood and the growth stress of trees showed a relationship with the area of wind damage (Table 4). The modulus of rupture showed the best relationship with the area damaged by the winds per clone, followed by the modulus of elasticity and the compression parallel to grain, respectively. The modulus of rupture is related to the cell wall fraction and basic density (Longui et al., 2014; Dixon et al., 2015), parameters that also showed high correlation with the area damaged by the winds per clone. This shows that wind resistance results from a set of wood characteristics.
The Pearson correlation coefficient between the growth stress and the area damaged by winds was -0.625. Growth stresses result from the internal forces that keep the trees standing (Jullien et al., 2013), being important in areas with wind damage. However, a high growth stress increases the incidence of defects such as cracks and warping, reducing the wood value (Chauhan & Walker, 2011). Thus, plants with high growth stress are suitable for areas with high wind damage, but this may compromise the use of the wood for sawmilling.
The fact that the clones B, C, and D had the smallest area damaged by winds is due to the greater cell wall fraction of these materials (Table 2). This results in the higher basic density and better mechanical properties of the fiber (Vincent et al., 2014) and of the wood (Slater & Ennos, 2013) and, together with a smaller microfibril angle, improves the resistance of the wood to breakage and of the tree to wind. On account of these characteristics, these materials are more suited for areas with a high incidence of wind damage.
Conclusions
1. The methodology used is adequate to evaluate the resistance of Eucalyptus clones to winds, with the clones B, C, and D showing a smaller damaged area and requiring a higher force to break their trees.
2. Higher basic density, cell wall fraction, modulus of elasticity of the middle lamella and fibers, fiber hardness, modulus of rupture, and growth stress, together with a lower microfibril angle and lower height and width of the rays, make trees more resistant to breakage; clones with these characteristics are, therefore, suitable for areas with a high incidence of wind damage.
Table 1. Total planted area, average diameter and height, and average wind-broken area for each Eucalyptus grandis × Eucalyptus urophylla clone from the municipality of Belo Oriente, state of Minas Gerais, Brazil.
Figure 1. Representation of the test for resistance of the trees to breakage.
Table 2. Basic density, microfibril angle, and the ratio between these parameters in the seven two-year-old Eucalyptus grandis × Eucalyptus urophylla clones from the municipality of Belo Oriente, state of Minas Gerais, Brazil (1). | 4,186 | 2017-12-18T00:00:00.000 | [
"Environmental Science",
"Materials Science"
] |
Towards More Reliable Deep Learning-Based Link Adaptation for WiFi 6
The problem of selecting the modulation and coding scheme (MCS) that maximizes the system throughput, known as link adaptation, has been investigated extensively, especially for IEEE 802.11 (WiFi) standards. Recently, deep learning has widely been adopted as an efficient solution to this problem. However, in failure cases, predicting a higher-rate MCS can result in a failed transmission. In this case, a retransmission is required, which largely degrades the system throughput. To address this issue, we model the adaptive modulation and coding (AMC) problem as a multi-label multi-class classification problem. The proposed modeling allows more control over what the model predicts in failure cases. We also design a simple, yet powerful, loss function to reduce the number of retransmissions due to higher-rate MCS classification errors. Since wireless channels change significantly due to the surrounding environment, a huge dataset has been generated to cover all possible propagation conditions. However, to reduce training complexity, we train the CNN model using part of the dataset. The effect of different subdataset selection criteria on the classification accuracy is studied. The proposed model adapts the IEEE 802.11ax communications standard in outdoor scenarios. The simulation results show the proposed loss function reduces up to 50% of retransmissions compared to traditional loss functions.
Index Terms-Link adaptation, IEEE 802.11ax, Machine learning, Deep learning, WiFi 6
I. Introduction
Nowadays, dynamic resource allocation and link adaptation techniques have been incorporated into different wireless standards to support the quality of service (QoS) requirements while serving the increased number of users [1]. Link adaptation represents a key element in determining the system's latency and throughput performance [2]. Fortunately, machine learning is anticipated to provide viable solutions to the link adaptation challenges in wireless systems [3].
In the literature, the link adaptation problem has been modeled either as a reinforcement learning problem [4], [5], or as a multi-class classification problem where the class labels represent different modulation and coding scheme (MCS) combinations [6]-[10]. According to this modeling, each data point can belong to a single class, and a supervised machine learning model can be trained to select the ideal MCS based on the training data. However, supervised models generally have a certain level of accuracy [11]. In this case, failing to predict the ideal MCS has unpredictable implications on the system throughput. In fact, predicting a higher-rate MCS will result in a failed transmission and, consequently, a retransmission is required, which largely degrades the system throughput. These problems come from the fact that modeling the problem as a multi-class classification gives no control over what the model can predict in failure cases. Now the question is: if the model fails to predict the optimal MCS, can we train it to predict a suboptimal one?
To answer this question, we model the link adaptation problem, for the first time, as a multi-label multi-class classification. In this modeling, a data point is allowed to belong to more than one class at the same time (all the successful MCSs in the AMC problem). Therefore, the model learns to predict not only the optimal MCS, but also all suboptimal ones. Such a modeling approach gives more control over what the model learns during the training phase and what it can predict in the failure cases. However, we need to force the model to avoid predicting higher-rate MCSs that may produce retransmissions. To solve this issue, we propose a new loss function that adds more penalization to such cases. The proposed loss function reduces the number of retransmissions compared to the traditional crossentropy loss function, which is widely employed in the literature. Fig. 1 shows an overview of the proposed system.
As wireless channels vary significantly according to the surrounding environment, a huge dataset is required to cover all possible channel variations. However, it is computationally expensive to utilize all the samples for training. In this work, we examine different selection criteria for the training dataset. The selection criteria are based on domain knowledge and our understanding of the nature of wireless channels. For orthogonal frequency-division multiplexing (OFDM)-based systems, we assume an interference-free, noise-free, single-user, and single-input single-output setup. In this case, the delay dispersion of the channel is the decisive factor in the MCS selection. Hence, instead of randomly selecting the training subdataset, we select the subdataset that comprises a uniform (or as close as possible to uniform) distribution of the channels' delay dispersion behaviors. Given that the channel dispersion behavior is not easy to fully characterize, for such a selection to take place, we employ well-known criteria characterizing the delay dispersion, such as the root-mean-square delay spread and the window delay spread. The contributions of this work can be summarized as follows:
• We modeled the AMC problem as a multi-label multi-class classification problem. The model is trained to predict all the possible labels for successful transmission (including the optimal MCS and the suboptimal ones).
• We employed a convolutional neural network (CNN) with an innovative loss function. The proposed model allows control over which transmission parameter combination to predict when failing to predict the optimal one.
• We studied the impact of the training subdataset selection criteria on the AMC problem and highlighted the corresponding effect on the classification accuracy.
A. Problem Formulation
Assume we have C different combinations of MCS and guard interval (GI), each of them called a transmission mode (TM). The TMs are indexed as i ∈ I ⊂ N, where the cardinality of I is the number of available combinations. The index i, thereafter referred to as the class, distinctly maps to a combination of MCS and GI. We adopt the IEEE 802.11ax standard for a single-input single-output system at 0.8 and 3.2 µs guard intervals with a fixed bandwidth of 20 MHz, as shown in Table I. Therefore, in terms of multi-label multi-class classification, link adaptation is the problem of selecting all the class labels, i, to which a certain channel realization belongs. Thus, for a certain channel realization ch_n, the classifier selects all the labels, i, corresponding to all valid transmission modes TM_i. Then, we can express the classifier as a function F that maps a channel realization ch_n to a set of labels y ⊂ {1, 2, . . . , C} as:

y = F(ch_n) = {i ∈ I | TX(ch_n, TM_i) = 1},   (1)

where TX(ch_n, TM_i) = 1 when transmitting a packet through a channel given by ch_n with the transmission configuration given by TM_i is successful, and zero otherwise. From the predicted TMs, we select the TM corresponding to the highest data rate. As shown in Fig. 1, a user station (STA) sends the estimated channel state information (CSI) to the access point (AP). The AP then uses the received CSI to adapt the transmission parameters for the next transmission.
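A sketch of this labeling and of the highest-rate selection rule; `tx_success` is a hypothetical stand-in for the packet-level 802.11ax simulation.

```python
import numpy as np

def build_label_vector(channel, tms, tx_success):
    # y[i] = 1 iff transmitting over `channel` with transmission mode tms[i] succeeds
    return np.array([1 if tx_success(channel, tm) else 0 for tm in tms])

def select_tm(y_pred, rates, threshold=0.5):
    # Among TMs predicted successful, pick the one with the highest data rate;
    # fall back to the lowest-rate TM if none clears the threshold.
    ok = [i for i, p in enumerate(y_pred) if p >= threshold]
    return max(ok, key=lambda i: rates[i]) if ok else int(np.argmin(rates))
```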
B. Datasets Generation
We selected four scenarios with diverse delay dispersion characteristics: urban micro-cell, suburban macro-cell, urban macro-cell, and rural macro-cell. Using the Matlab WINNER II toolbox [12], 50,000 channels are generated for each scenario. For each channel, we use the Matlab IEEE 802.11ax toolbox to simulate transmitting a packet using all available TMs. We split the generated channels into 80% for training and 20% for testing.
C. Selection of Training Subdatasets using Different Delay Dispersion Criteria
The training subdatasets are constructed using two approaches: random selection criteria and different delay-spread-based selection criteria. Based on the random approach, Cases 1 & 2 are identified, and based on the delay-spread approach, Cases 3, 4, & 5 are identified.
1) The random selection criteria (Cases 1 & 2): The random approach is applied in the following two ways:
• Case 1, Random Full Dataset (RandomFD): all data points (i.e., a total of 160,000 data points; 40,000 points from each of the four scenarios) are used for training.
• Case 2, Random Partial Dataset (RandomPD): the training subdataset is composed of data points selected randomly and equally from each scenario.
RandomFD represents a reference case where all data points are used for training, and RandomPD is the typical, widely used way of reducing the number of data points through random selection.
2) The delay-spread-based criteria (Cases 3, 4, & 5): The delay-spread-based selection approach is applied to select different training subdatasets, each of which has the same number of data points as RandomPD. Unlike RandomPD, the data points of the built subdatasets are selected to represent the full delay dispersion behaviour of RandomFD. Using this approach, from the total of 160,000 available data points, we select the subdataset points such that the distribution of the delay dispersion metric is as close as possible to uniform.
Let us assume RandomFD_i to be the i-th data point in the RandomFD dataset; S(RandomFD_i) is its corresponding delay dispersion evaluated based on a specific metric of interest, S; i = 1, 2, ..., I (where I is the total number of data points in RandomFD); and min S(RandomFD) & max S(RandomFD) are the minimum and maximum obtained delay dispersion values, respectively, among all the data points of RandomFD. We assume the interval [min S(RandomFD), max S(RandomFD)] to be divided into Z equal disjoint sub-intervals. We define the histogram of S(RandomFD) as the function that counts the number of delay-spread observations, n_z, that fall into the z-th sub-interval, where z = 1, 2, ..., Z, and n_min & n_max are the minimum and maximum number of observations, respectively, obtained per sub-interval using the full dataset, i.e., RandomFD.
Our proposed delay-spread-based approach to select a subdataset from RandomFD is to build a capped histogram, m_z, as follows:

m_z = min(n_z, x), z = 1, 2, ..., Z, with x chosen as the smallest value satisfying Σ_{z=1}^{Z} m_z ≥ T,   (2)

where T is the total number of data points in the selected subdataset.
The value of x determines the maximum number of data points in each of the Z intervals, which results in selecting a subdataset with a histogram that exhibits a tendency toward a uniform distribution of the delay dispersion behaviour over the [min S(RandomFD), max S(RandomFD)] range. The possibility of ending up with a perfectly uniform distribution increases as the number of data points in RandomFD increases.
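A sketch of this capped-histogram selection, following the reconstruction in (2); the bin count, the seed, and the per-bin search are our assumptions.

```python
import numpy as np

def select_uniform_subdataset(metric_values, T, Z=50, seed=0):
    """Pick T indices so the histogram of a delay-dispersion metric
    is capped per bin: m_z = min(n_z, x), with the smallest x whose
    capped total reaches T."""
    rng = np.random.default_rng(seed)
    edges = np.linspace(metric_values.min(), metric_values.max(), Z + 1)
    bin_idx = np.clip(np.digitize(metric_values, edges) - 1, 0, Z - 1)
    counts = np.bincount(bin_idx, minlength=Z)

    # Smallest per-bin cap x whose capped total reaches T
    x = next(c for c in range(1, counts.max() + 1)
             if np.minimum(counts, c).sum() >= T)

    chosen = []
    for z in range(Z):
        members = np.flatnonzero(bin_idx == z)
        chosen.extend(rng.choice(members, size=min(len(members), x), replace=False))
    return np.asarray(chosen[:T])
```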
Based on the applied delay-spread metric (i.e., S), which is our design criterion, we can now define the differences among Case 3, Case 4, and Case 5 of the studied cases.
• Case 3, root-mean-square delay spread Partial Dataset (rmsPD). In this case, the training dataset is selected using the delay-spread metric defined as the normalized second-order moment of the delay profile of the channels.
• Case 4, window (40%) delay spread Partial Dataset (W40%PD). In this case, we characterize the delay dispersion using the delay window parameter, which is defined as "the length of the middle portion of the power delay profile containing a certain percentage of the total power found in that impulse response" (p. 4, [13]). Here we use 40% as our design criterion.
• Case 5, window (70%) delay spread Partial Dataset (W70%PD). In this case, we use the same definition of the delay dispersion metric as in Case 4; however, here we use the window that contains 70% of the power of the delay profile.
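Both metrics can be computed from a discrete power delay profile; the delay-window routine below is a simple discrete approximation of the definition quoted above, offered as our sketch rather than the exact routine used in the paper.

```python
import numpy as np

def rms_delay_spread(delays_s, powers):
    # Normalized second-order moment of the power delay profile
    p = powers / powers.sum()
    mean = (p * delays_s).sum()
    return np.sqrt((p * (delays_s - mean) ** 2).sum())

def delay_window(delays_s, powers, fraction=0.4):
    # Length of the middle portion of the PDP holding `fraction` of total power
    cum = np.cumsum(powers / powers.sum())
    lo = np.searchsorted(cum, (1.0 - fraction) / 2.0)
    hi = np.searchsorted(cum, (1.0 + fraction) / 2.0)
    return delays_s[hi] - delays_s[lo]
```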
III. Proposed Deep Learning Approach for AMC
Convolutional neural networks (CNNs) have shown superior performance in different domains, including computer vision, natural language processing, speech synthesis, etc. [3]. One main advantage of CNNs is their proven capability of processing raw data. This advantage eliminates the burden of data pre-processing. Inspired by this, we propose a CNN-based approach for AMC in IEEE 802.11ax.
A. CNN Model
The proposed deep convolutional neural network (DCNN) includes convolutional layers, average pooling layers, and fully-connected layers. The first hidden layer is a convolutional layer with 20 filters. The second hidden layer is a convolutional layer of 32 filters, followed by an average pooling layer with a pool size of 4. Then, another convolutional layer is added with 64 filters, followed by an average pooling layer with a pool size of 2. A convolutional layer consisting of 32 filters is added, followed by an average pooling layer with a pool size of 2. For all convolutional layers, every filter has a size of 10 × 2, with ReLU activation, F(x) = max(x, 0). After the 4 convolutional layers, there are 2 fully-connected layers. The fully-connected layers contain 50 and C neurons, respectively, where C is the number of available TMs. Since one channel can belong to many classes at the same time, we used the sigmoid activation function, σ(x) = 1/(1 + e⁻ˣ) (3), in the output layer to approximate the multinomial distribution of the class labels. To relieve the effect of overfitting, an l2 regularizer is added to the last two layers.
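The stack above can be sketched in Keras as follows; the input CSI shape (256 subcarriers × 2 real/imaginary components), the 'same' padding, pooling along the subcarrier axis only, and the l2 factor are our assumptions, as the paper does not specify them.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

C = 24  # available transmission modes (MCS x GI combinations)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(256, 2, 1)),                      # CSI: subcarriers x (Re, Im)
    layers.Conv2D(20, (10, 2), padding="same", activation="relu"),
    layers.Conv2D(32, (10, 2), padding="same", activation="relu"),
    layers.AveragePooling2D(pool_size=(4, 1)),
    layers.Conv2D(64, (10, 2), padding="same", activation="relu"),
    layers.AveragePooling2D(pool_size=(2, 1)),
    layers.Conv2D(32, (10, 2), padding="same", activation="relu"),
    layers.AveragePooling2D(pool_size=(2, 1)),
    layers.Flatten(),
    layers.Dense(50, activation="relu",
                 kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dense(C, activation="sigmoid",                   # multi-label output
                 kernel_regularizer=regularizers.l2(1e-4)),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```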
For training the model, an Adam optimizer [14] is adopted along with our customized loss function (section IV). The DCNN is trained for 1000 epochs with batch size of 128. After training the DCNN, it is deployed for predicting the appropriate TMs.
B. Dataset Description
Consider a labeled dataset consisting of pairs of x and y, where x represents the CSI under the different selection cases described in subsection II-C. The label vector y is a vector in {0, 1}^C, where C is the number of available TMs (i.e., the number of classes). If the i-th position in the label vector of the j-th data instance is set to one, this indicates that a transmission over a channel with CSI equal to the j-th CSI in the dataset, using the i-th transmission mode, will result in a successful transmission. In the same way, 0 indicates a failed transmission. In our experiments, the label vector is a 24-dimensional vector representing the different available combinations of MCS and GI.
C. Evaluation Metrics
To evaluate the proposed model in the context of communication system efficiency, we applied two system-specific evaluation metrics, namely, data-rate loss (DRL) and number of retransmissions (NR). We define δ as

δ = R(TM̂_i) − R(TM_i),   (4)

where R(·) is a function that maps a TM to the data rate associated with this TM, TM_i is the optimal TM given in the dataset, and TM̂_i is the predicted TM. A positive value of δ means predicting a TM with a rate higher than the optimal one. This implicitly incurs a retransmission. The number of retransmissions is given by the NR metric. A negative value of δ implies that the model predicts a suboptimal TM, which leads to a rate loss. The difference between the data rates of TM_i and TM̂_i is given by DRL.
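A compact sketch of the two metrics; `rates` is a hypothetical mapping from TM index to data rate.

```python
import numpy as np

def nr_and_drl(pred_tm, opt_tm, rates):
    # delta > 0: predicted rate above the optimum -> retransmission (counts toward NR)
    # delta < 0: suboptimal TM -> data-rate loss (accumulates into DRL)
    delta = np.array([rates[p] - rates[o] for p, o in zip(pred_tm, opt_tm)])
    nr = int((delta > 0).sum())
    drl = float(-delta[delta < 0].sum())
    return nr, drl
```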
IV. Proposed Customized Loss
A. Why we need a customized loss
The traditional loss function used in multi-label multi-class classification problems is the crossentropy in (5):

CE(y, ŷ) = −Σ_{c=1}^{C} [y_c log(ŷ_c) + (1 − y_c) log(1 − ŷ_c)],   (5)

where C is the total number of classes, which equals the dimension of y. The function in (5) treats all wrong predictions equally, which is not appropriate for the considered AMC problem. In the problem under consideration, a false positive in a higher-rate MCS may lead to a retransmission, which is very costly in terms of bandwidth resources. However, a false negative indicates selecting a lower-rate TM, which can be tolerated more readily than a retransmission. For this reason, we aim to design a loss function that emphasizes false positives more than false negatives.
B. Proposed Loss
We propose a new customized loss function that adds more penalization on false positive predictions. Since the proposed loss function emphasizes false positives, we name it Crossentropy+, CE+. The new loss is given by

CE+(y, ŷ) = CE(y, ŷ) + β φ(y, ŷ),   (6)

where CE(y, ŷ) is the traditional crossentropy given in (5) and φ(y, ŷ) is an extra penalization term for false positive predictions, given by

φ(y, ŷ) = −Σ_{c=1}^{C} (1 − y_c) log(1 − ŷ_c),   (7)

where C is the total number of classes and β is a weight term added to control the credit assigned to the traditional crossentropy term and the newly added term. Setting β to a large value may lead the model to predict the ŷ = {0}^C vector, which minimizes the second term and completely ignores the first term. On the other hand, if we set β ≤ 1, the model may ignore it and learn parameters that minimize only the first term of (6). We set β = 1.3 for all the experiments in this work. However, in the future, a value for β could be learned to meet different QoS requirements (which may differ between a WiFi public network and a 5G URLLC network).
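A sketch of CE+ as a Keras-compatible loss; the exact form of φ follows our reconstruction in (7), and β = 1.3 matches the value used in the paper.

```python
import tensorflow as tf

def crossentropy_plus(beta=1.3, eps=1e-7):
    # CE+ = CE + beta * phi, with phi penalizing probability mass on negative labels
    def loss(y_true, y_pred):
        y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
        ce = -tf.reduce_sum(
            y_true * tf.math.log(y_pred)
            + (1.0 - y_true) * tf.math.log(1.0 - y_pred), axis=-1)
        phi = -tf.reduce_sum((1.0 - y_true) * tf.math.log(1.0 - y_pred), axis=-1)
        return ce + beta * phi
    return loss

# Usage: model.compile(optimizer="adam", loss=crossentropy_plus(beta=1.3))
```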
V. Experimental Results
We organize this section into two subsections: the prediction results of the DCNN model using the different proposed delay-spread-based subdataset selection criteria, and the improved prediction results achieved by adopting the proposed loss function.
A. Results of AMC using DCNN Model
To evaluate the effect of the training set size, we trained the model with varying set sizes, namely, 10K, 20K, 30K, 40K, and 50K channels, for each selection criterion. We also consider the larger RandomFD dataset. For each training set, we test the model using three different scenarios, namely, suburban macro-cell (C1), urban macro-cell (C2), and rural macro-cell (D1). Fig. 2 shows the percentage of retransmissions relative to the total data points in each test scenario. We can see that, among the different selection criteria, W40%PD obtained the best performance in all the test scenarios. Also note that, for all criteria, scenario D1 obtained a higher retransmission rate compared to both C1 and C2. This figure also shows that the RandomPD and rmsPD training subdatasets always obtain a higher retransmission percentage compared to W40%PD and W70%PD. We observe that the performance is largely improved by increasing the size of the training dataset. However, little or no improvement has been recorded when the size increases from 40K to 50K. According to the VC-dimension theorem [15], this saturation happens when the number of training data points reaches a threshold, N_vc, after which adding more data points does not improve the learning anymore. Fig. 3 shows the percentage of data rate loss obtained using the DCNN model with the different training subdataset selection criteria. As explained in section IV, a data rate loss happens when the model predicts a false negative at the index of the ideal TM. The figure shows an inverse trend between the retransmission rate and the data rate loss. However, it is worth noting that, since the overall system performance is decided by both the rate loss and the retransmission rate, a reasonable rate loss is more likely to be tolerated than repeated retransmissions. We can see that W40%PD, which results in the best performance in terms of retransmissions, obtained around -3.1% rate loss in the worst case (scenario C2). Based on these observations, we can conclude that training a model based on W40%PD gives the best retransmission performance with an acceptable rate loss. The proposed DCNN approach also obtained near-optimal TM selection. However, we can further improve the reliability with the proposed loss function, as shown next.
B. The Performance of the Proposed Loss-Function
To evaluate the performance of the proposed loss function, we trained a model with the traditional crossentropy and with our proposed loss function. To obtain a fair comparison, we used the same model capacity in the two cases. We also fixed all other hyperparameters (e.g., the same number of epochs, initialization, activation, regularizer, optimizer, and learning rate).
The results of training the model using the two loss functions are shown in Table II. The table shows the number of retransmissions in scenario C2. We selected this test scenario since it has the largest percentage of retransmissions compared to the other scenarios, as shown in Fig. 2. We can see that the proposed loss function largely reduced the number of retransmissions under all selection criteria and dataset sizes. The proposed loss function obtained more than 50% improvement over the traditional crossentropy in some cases. Table II also shows the percentage of rate loss for each training set size. We can see that the rate loss using our proposed loss function is larger than that of the traditional crossentropy. Given that the model capacity is the same, this can be explained by the fact that reducing the false positives may result in increased false negatives. However, depending on the specifications of the used communication system (specifically the cost of retransmissions compared to rate loss), varying the value of β in (6) provides a wide range of fine-tuning to meet different performance requirements.
VI. Conclusion
A convolutional neural network framework for adaptive modulation and coding (AMC) in IEEE 802.11ax has been presented. We modeled the AMC problem as a multi-label multi-class problem. The results showed that traditional loss functions are limited in solving such a problem. We proposed a new loss function that increases the reliability of the adaptation framework. The proposed loss function proved to outperform the traditional crossentropy function. We also studied the impact of subdataset selection on the model performance. Empirically, we concluded that the window (40%) delay spread subdataset selection criterion, along with the proposed loss function, gives the best throughput/reliability compromise.
Acknowledgment
The authors thank Mitacs and Ciena for supporting this research in the IT13947 grant. | 5,244.4 | 2021-06-01T00:00:00.000 | [
"Computer Science",
"Engineering"
] |
Connectivity of Drones in FANETs Using Biologically Inspired Dragonfly Algorithm (DA) through Machine Learning
Department of Electronics, Quaid-i-Azam University, Islamabad, Pakistan Department of Computer Science, Iqra National University, Peshawar, Pakistan Department of Computer Science, Sarhad University of Science and Information Technology, Peshawar, Pakistan Institute of Computer Sciences and Information Technology, The University of Agriculture Peshawar, Peshawar, Pakistan Department of Computer Science, Islamia College Peshawar, Peshawar, Pakistan Institute of Computing, Kohat University of Science and Technology, Kohat, Pakistan Computer School, Hubei University of Arts and Science, Xiangyang 441000, China
Introduction
A UAV (unmanned aerial vehicle) is an aircraft without a human pilot onboard, popularly known as a flying drone.
They are equipped with a variety of additional equipment such as cameras, global positioning systems (GPSs), GPS-guided missiles, navigation systems, and sensors. They have ultra-stable flight and can hover and perform different acrobatics in the air. Their versatility is what truly makes them popular. Multiple drones are connected directly or through intermediate nodes in FANETs. These drones act as wireless relays in ad hoc networks, providing coverage for wirelessly connected devices. Formations of small drones are now being introduced in military expeditions, civilian applications, disaster management, forest fire detection, agricultural management, border surveillance, and telecommunications [1]. FANETs can be used as mobile radio stations or WLAN transmitters in regions lacking infrastructure [2][3][4][5][6][7][8][9][10][11][12].
Connectivity of a network is of utmost importance in critical fields involving uncertain flying units in FANETs. If a drone is destroyed by an enemy, it is important to offload data wirelessly to other neighboring drones. Therefore, FANETs can address pre-disaster and post-disaster calamities in real-time applications. In the pre-disaster situation, drones collect the location information of all vulnerable zones and update that information periodically, for example upon the occurrence of a disaster. This can measure the destroyed area for the rescue operation and strengthen the response capability available to the end user. Moreover, drones deployed in the post-disaster situation can help to establish the necessary communication service. The application of FANETs in a calamity situation is shown in Figure 1.
Previously, traditional approaches were adopted to analyze the performance of FANETs and to model connectivity. The performance of biologically inspired algorithms is an important development in capturing the scalability and reliability patterns of wireless ad hoc networks. We therefore propose a machine-learning-based DA algorithm for the connectivity of drones in FANETs. The quick deployment of small flying drones is similar to the behavior of dragonflies. Instead of food, the drones are searching for neighbors to enable a wireless communication network. The choice of the dragonfly technique is inspired by their light weight, rapid flying adjustment, and ability to find neighbors within a communication range. They are able to maintain mobility, reduce isolation, and search for targets consistently. Unpredictable FANET scenarios require the organization of crucial flight factors such as altitude, speed, and direction. Organizing flying drones is a key challenge in establishing a wireless network, and we attempt to make it possible through the socially inspired behavior of the dragonfly. To the best of our knowledge, the proposed work is the first solution for connectivity in the field of FANETs using machine learning. Our proposed scheme is valid for ad hoc applications and other wireless development technologies. The rest of the paper is organized as follows: Section 2 briefly explains the existing work. Section 3 presents the proposed working architecture of the dragonfly algorithm. The simulation results of the proposed work are presented in Section 4. Finally, the conclusion of this paper is presented in Section 5.
Related Work
FANETs are sparsely connected networks because of the low density and high mobility of nodes. This causes link fluctuation, loss of connectivity, and performance degradation. In He et al.'s study [13], the concepts of relay chain and relay tree are presented. When nodes are unable to establish a connection with existing infrastructures on the ground, they can still communicate through other nodes. In Rautu et al.'s study [14], air-to-ground communication is investigated to overcome the loss of connectivity. The results showed that the network stays connected when node replacement is performed. Optimal replacement of a flying drone is a challenging task during flight missions. However, it is not feasible to replace the drones in an existing network.
Replacing a drone is not a simple task due to ever-changing location information.
In Zhao et al.'s study [15], an emergency communication system is established with the help of UAVs, relying on a mesh network, with the objective of ensuring connectivity between the ground station and the UAV. Yu et al. [16] developed the UAVNet framework and established a flying wireless mesh network. These studies focused on infrastructure-based ad hoc networks, which is not the case for a pure ad hoc network, and may influence the quality of communication due to interference and time delay. In Oubbati et al.'s study [17], an algorithm that considers changes in network topology was constructed under the assumption that UAVs have full knowledge of the location of devices. It was found that optimal movement of UAVs can improve the connectivity of ad hoc networks.
In Cicek et al.'s study [18], a semi-centralized framework is proposed to establish ad hoc communication between UAVs while conserving a centralized organization. In this research, movement planning and reliability are addressed within a structure of multiple groups. This requires a governing framework including control and motion planning, and it can utilize the UAVs to play the role of gateways to connect the groups to the base station (BS) and communicate further.
This framework presents improved performance as compared to a purely centralized framework. However, certain data transfer routes through the ground BS still exist, which can cause network partition due to BS failure.
This failure can isolate a group of UAVs from the rest of the network. Popescu et al. [19] present the use of a UAV as a relay to support a wireless sensor network and guarantee the delivery of data generated by wireless nodes on the ground. Regarding mobility, the significance of possibly isolated drones is not studied in most research works. There is a need for intra-network connectivity for transmitting information with recently deployed drones.
Many problems in networking can take inspiration from the biological world for their solution. The biological world demonstrates algorithms that propose different models of networking behavior for optimal solutions. Unlike conventional networks, the study of swarm organization helps to develop ideas from the natural world in this research field.
There are several swarming techniques in which researchers have tried to figure out the principles of interaction between individuals. The study that mimics the behavior of individuals and yields social intelligence is called swarm intelligence (SI) [20]. It deals with artificial implementation or simulation because there is no centralized unit to control and guide the individuals. The basic principles of interaction between some of them can easily simulate the social behavior of an entire population.
Ant colony optimization (ACO) is the first SI technique, which simulates the social intelligence of ants [21]. Based on the natural ability of pheromone deposition, each ant in this algorithm draws its own path from nest to food with the help of pheromones. Another popular SI model is the particle swarm optimization (PSO) algorithm [22], which mimics the foraging and navigation behavior of flocking birds. It is based on three rules of interaction between birds:
(i) Fly and maintain their direction towards the current direction
(ii) The best food location obtained so far
(iii) The best food location the swarm has found so far
These rules guide each individual, and simultaneously the swarm, towards the optimal solution. Artificial bee colony (ABC) is another recent and well-regarded SI-based algorithm [23], mimicking the social behavior of honey bees when foraging nectar. In this algorithm, bees are categorized in three different ways:
(i) The employed bee
(ii) The onlooker bee
(iii) The scout bee
Implementations of PSO [24, 25], ABC [26, 27], and ACO [28, 29] have been applied to different problems to improve the existing algorithms. However, these optimization techniques do not capture static and dynamic swarming behaviors. DA is a recent development in swarm optimization which improved the diversity of solutions and made exploration algorithms more effective. The exploration and exploitation of DA are mainly determined by five primitive principles, and significant research applications of DA in the applied sciences have been conducted, for example, in image processing [30][31][32], machine learning [33][34][35], wireless networks [36, 37], cooperative diversity [38, 39], etc. However, no study in the literature addresses connectivity in ad hoc networks by simulating individuals and applying the social intelligence of dragonfly swarming.
The social behaviors of animals captured in Reynolds' Boids swarm intelligence introduced three primitive principles: separation, alignment, and cohesion [40]. The dragonfly algorithm [41] is an extension of Boids with the novel objectives of static and dynamic swarming behavior of dragonflies. Therefore, no scientific procedures have previously pursued the objective of maintaining high-performance connectivity in FANETs, and insufficient work has been reported to provide and maintain network connectivity. Moreover, the literature has numerous SI algorithms for the applied sciences; however, there is no study that analyzes DA for FANETs. We summarize our contributions for this research as follows:
(i) Biological species are innately intelligent, and they have a strong learning ability. Instead of searching for food as biological species do, the proposed learning-based approach supports the isolated drones in searching for a neighbor to ensure connectivity.
(ii) To construct a valid solution, our proposed work follows the nature-inspired flying principles of DA through machine learning.
(iii) When a drone is isolated, it flies in a random flight termed levy flight. This situation provides an important feature in the learning contribution.
(iv) Only an isolated drone should go for levy flight to search for a possible neighbor, while learning helps it find neighbors earlier. The rest of the drones retain their mobility as per the DA rules.
(v) Learning supports the isolated drone in moving in the direction experienced in its last isolation.
(vi) Connectivity is a key challenge in dynamic topology networks. However, when a drone is isolated during a flight mission, it strives to become a part of the network again.
(vii) The maximum number of drones stay connected using DA to ensure connectivity in a minimum number of iterations.
Machine Learning.
Machine learning (ML) enables computer systems to learn with minimal human intervention and teaches a machine how to learn and find better solutions from practice. Basically, ML is an application of artificial intelligence (AI) and comprises data analytics techniques. This technique not only educates computer systems to do what comes naturally to human individuals but also to biological species. This strategy permits computers to learn autonomously, or with assistance, and adjust actions accordingly. Algorithms based on ML employ computational techniques to "learn" information directly, without depending on a defined equation as a model. The learning process commences with different observations such as paradigms, direct behavior, experience, or commands; such actions are learnt from regular practice. The iterative feature of ML is significant: as models are exposed to fresh data, they are able to autonomously adjust. They learn from computations that have been done earlier and are able to generate efficient and frequent judgments to improve accuracy.
The Proposed Dragonfly Algorithm (DA) Using Machine Learning
The dragonfly algorithm is an emerging SI algorithm which mimics the behavior of dragonflies. Logically, DA divides the search process into two phases, namely the exploration phase and the exploitation phase. Dragonflies get into small groups in the exploitation phase, which enables them to forage over different areas to find their food repeatedly, whereas they form large groups in the exploration phase when migrating in a certain direction to one destination. The pentagon representation of the basic concept of DA consists of five primitive principles, as shown in Figure 2; these are vital in finding the weight solutions. Due to their natural learning and intelligent decision capabilities, our work assumes that the dragonflies in a swarm are similar to drones in a FANET. The search range of the dragonfly defines the communication range of the drone, with the potential to allow accurate area marking and unambiguous identification of drones. Based on learning, the search-agent feature of the dragonfly assists in the identification of neighbors within the predefined range of each dragonfly. The quick deployment of drones, inspired by DA, and the establishment of small groups in static swarming may assist in situations where drones fly over disaster areas. Furthermore, a subswarm, or the interaction of a few drones within a network, marks the presence of another subgroup in the existing network, which reflects the property of static swarming. Flying drones need to remain separated from each other within a defined range to avoid collision. Similarly, alignment in FANETs can control the flying speed and direction, which ensures reliable data transfer. Finally, cohesion brings each drone toward the center of the swarm, to the best position for achieving better connectivity. Hence, DA is the only algorithm that fulfils the requirements to establish FANETs through ML and simultaneously search for neighbors to ensure connectivity.
System Model.
Our system model details the features of the dragonfly through ML in FANETs. It aims to provide an efficient neighboring-search solution and connectivity. Assuming similar mobility behavior of drones (i.e., speed and range), proper neighbor selection ensures the connectivity of the network. The drones are deployed randomly and connected within a fixed communication range R at distance d, and each drone is equipped with ad hoc communication capability. Each drone tries to sustain connectivity within its vicinity; however, due to the scarcity of the network, some drones are isolated and lie outside the communication range of other drones, as shown in Figure 3. The maximum step size (Delta_max) is defined for all the drones and is based on the network dimension. This step size determines the mobility of the drone towards the partial network for joining or rejoining during flight operations. In contrast, isolated drones are those that have no neighbors within their communication range. These drones need to survive and randomly search for possible neighbors by adopting levy flight, which forces the isolated drone to search for neighbors so as to become part of a connected network. Thus, once a drone is recognized as isolated, ML supports the drone in making an efficient decision based on previous experience and obtaining a neighboring solution. Only those drones that have no neighbor opt for random flights. The rest of the drones retain their mobility as per the DA rules.
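A minimal sketch of identifying isolated drones by checking the communication range R; the positions and parameter values are hypothetical.

```python
import numpy as np

def find_isolated(positions, R):
    """Indices of drones with no neighbor within range R.
    `positions` is an (N, 3) array of 3-D drone coordinates."""
    diff = positions[:, None, :] - positions[None, :, :]
    d = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(d, np.inf)              # ignore self-distances
    return np.flatnonzero((d > R).all(axis=1))

positions = np.random.default_rng(1).uniform(0.0, 500.0, size=(30, 3))
print(find_isolated(positions, R=120.0))     # drones that must take a levy flight
```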
Stability and control are much more complex for flying drones, which can move freely in three-dimensional space as compared to static or vehicular networks. At present, there is an increased need to establish a mechanism which could define the collaborative steps among the uncertain movement of drones for its applicability in the FANETs.
Mathematical Model.
This section details the mathematical modeling of the proposed scheme. In this scheme, every drone can sense its communication range to determine any possible neighbor drones. The separation rule ensures a minimum distance between drones when they are close to each other, or movement towards a drone located far away. The key separation among flying drones helps in avoiding collision by maintaining a minimum distance between the drones. Mathematically, separation can be expressed as

S_i = −Σ_{j=1}^{k} (p_i − p_j),   (1)

where S_i is the gap between the drones defined for the ith drone, p_i is the current position of the drone, and p_j is the position of the jth point in the neighborhood. The variable k is defined as the number of neighbors situated within the communication range of the ith drone. The ith drone moves with an average velocity which depends on the speed of other drones, whether it is in searching mode or connecting mode. The flying movement of drones is matched in velocity to other nearby drones. This average velocity refers to the particles not exceeding the speed of other neighbors in a unit space. The tendency of an individual to match its velocity with neighboring drones can be mathematically calculated as

A_i = (Σ_{j=1}^{k} v_j) / k,   (2)

where v_j is the velocity of the jth neighbor of the ith drone. Every drone in FANETs adopting the proposed architecture tends to move to the center of the radius for the best position to achieve better results. In other words, the cohesion step refers the drone to the center point of the space that contains other drones close to its position. The tendency of an individual to move toward the center of mass of neighboring drones can be mathematically calculated as

C_i = (Σ_{j=1}^{k} p_j) / k − p_i,   (3)

where p_i is the position of the ith drone, k is the number of neighbors, and p_j shows the position of the jth neighboring individual. Whenever a drone is isolated, it strives to find a neighboring drone within the communication range to ensure connectivity. Let d_ij(t) be the Euclidean distance at time t between position p_i and position p_j, given as

d_ij(t) = ||p_i(t) − p_j(t)||.   (4)

In view of the aforementioned mathematical model, the disciplinary instructions are assumed for drone operation in the flying zone. Thus, a newly linked-up drone in a swarm learns to follow these primitive rules to maintain the network connectivity and conserve its resources. As long as the drone is connected, cooperation among the drones would be sustainable, which prolongs the network life. A drone attaining at least one neighboring solution learns to update its position by a couple of defined vectors, that is, the step vector (ΔQ_i) and the position vector (Q_i), according to the following mathematical expressions:

ΔQ_i(t + 1) = sS_i + vA_i + cC_i + fF_i + eE_i + wΔQ_i(t),   (5)

where ΔQ_i(t + 1) is a step vector describing the movement of the drone at the next time step t + 1. The product sS_i comprises the separation weight and the separation of the ith individual, respectively. Similarly, v is the velocity (alignment) weight, A_i is the alignment of the ith individual, c is the cohesion weight, C_i is the cohesion of the ith individual, f is the food factor, F_i is the food source of the ith individual, e is the enemy (attacker) factor, E_i is the location of the enemy of the ith individual, w denotes the weight of inertia, and t is the iteration counter. The position is then updated as

Q_i(t + 1) = Q_i(t) + ΔQ_i(t + 1),   (6)

where Q_i(t + 1) is the position vector at the next time step t + 1. Different exploitative and exploratory behaviors can be obtained during flight through the separation, velocity, cohesion, food, and enemy attacker factors s, v, c, f, and e.
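A sketch of one connected-drone update following (1)-(6); the weight values and the Delta_max clipping are our assumptions, since the paper does not list numeric settings.

```python
import numpy as np

def da_step(p, vel, dq, i, nbrs, food, enemy,
            s=0.1, v=0.1, c=0.7, f=1.0, e=1.0, w=0.9, delta_max=10.0):
    """One DA position update for drone i given its neighbor indices `nbrs`.
    p: (N, 3) positions, vel: (N, 3) velocities, dq: (N, 3) previous steps."""
    S = -np.sum(p[i] - p[nbrs], axis=0)        # separation, eq. (1)
    A = vel[nbrs].mean(axis=0)                 # alignment, eq. (2)
    Co = p[nbrs].mean(axis=0) - p[i]           # cohesion, eq. (3)
    F = food - p[i]                            # attraction toward the food/target
    E = enemy + p[i]                           # distraction away from the enemy
    step = s * S + v * A + c * Co + f * F + e * E + w * dq[i]   # eq. (5)
    step = np.clip(step, -delta_max, delta_max)                 # bounded by Delta_max
    return p[i] + step, step                                    # eq. (6)
```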
Finally, a drone having no neighboring solution updates its position using the following mathematical function:

Q_i(t + 1) = Q_i(t) + ζ × Q_i(t),   (7)

where ζ is the levy flight, calculated as follows [42]:

ζ = 0.01 × (r_1 × σ) / |r_2|^(1/β),   (8)

where r_1 and r_2 denote two random numbers in [0, 1], β is an assumed constant, and σ is computed as follows:

σ = [Γ(1 + β) × sin(πβ/2) / (Γ((1 + β)/2) × β × 2^((β−1)/2))]^(1/β),   (9)

where Γ(x) = (x − 1)!. Solo flight reduces the lifetime of a drone and disrupts connectivity. This mathematical expression assists in learning the nearest neighbor when there is no neighbor and the drone is isolated.
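A sketch of the levy-flight update in (7)-(9); β = 1.5 is an illustrative choice.

```python
import math
import numpy as np

def levy(dim, beta=1.5, rng=np.random.default_rng()):
    # sigma from eq. (9); note Gamma(x) = (x - 1)! for integer x
    sigma = (math.gamma(1.0 + beta) * math.sin(math.pi * beta / 2.0)
             / (math.gamma((1.0 + beta) / 2.0) * beta
                * 2.0 ** ((beta - 1.0) / 2.0))) ** (1.0 / beta)
    r1, r2 = rng.random(dim), rng.random(dim)   # two random numbers in [0, 1]
    return 0.01 * (r1 * sigma) / np.abs(r2) ** (1.0 / beta)    # eq. (8)

def isolated_update(position):
    # Q_i(t+1) = Q_i(t) + levy(d) * Q_i(t): random walk of an isolated drone, eq. (7)
    return position + levy(position.size) * position
```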
In order to ensure connectivity over a large area, every drone in the FANET acts as a relay, transferring information to the other drones and/or the base station. A link is established between nodes i and j such that j ∈ {k | d_ij < R}. As soon as these individuals are members of a group, they maintain the minimal separation and contribute to the connectivity of the entire network. If a drone is isolated, connectivity is compromised for that particular region. Moreover, along with task-oriented sensors, drones are also equipped with GPS, a radar mechanism, and height sensors. Frequent topology changes as drones leave or join the network are another complex challenge in FANETs. This situation benefits from machine learning to accomplish communication in a highly dynamic topology. In order to keep the maximum number of drones connected across the entire flying network, we present a solution to the connectivity problem. After applying the primitive principles of natural species (separation, alignment, and cohesion), a better communication path can be achieved by maintaining links between drones so that they can share data easily. Improved path stability is captured by a metric μ, where a greater μ indicates a better connection opportunity. From the above discussion, it is established that drones do not remain isolated when following the devised strategy: iteratively, isolated drones join the swarm by using levy flight, cover a larger area with maximum connectivity, and reduce the number of isolated drones accordingly. In this learning scheme, the maximum number of flying drones stays connected in the network within a minimum number of iterations, and the resulting connections are preserved for the duration of network communication. A range index factor is taken into account when measuring the connectivity between drones. Connectivity decreases as distance increases, until a drone is no longer able to communicate with any other drone. The range index allows selection of an appropriate drone for communication-path stability; it is modeled as a range index η_i of the ith individual that decreases with an increase in distance, where α_i is a constant with 0 < α_i ≤ 1.
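A short sketch of the link rule j ∈ {k | d_ij < R} and an isolated-drone count, assuming positions stored in a NumPy array. The exponential-decay form of the range index shown here is an illustrative assumption, since the closed form is not reproduced in this excerpt.

```python
import numpy as np

def adjacency(P, R):
    """Boolean link matrix: a link exists where inter-drone distance < R."""
    D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=2)
    A = D < R
    np.fill_diagonal(A, False)
    return A, D

def isolated_count(P, R):
    """Number of drones with no neighbor in communication range."""
    A, _ = adjacency(P, R)
    return int(np.sum(A.sum(axis=1) == 0))

def range_index(D, alpha=0.9):
    """Illustrative range index: decays as distance grows (assumed form)."""
    return alpha ** D
```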
In order to determine the suitability of a drone for relaying as part of the swarm, a fitness function is considered which summarizes suitability in a single figure of merit for a given design solution to achieve connectivity. The fitness is devised so that the number of isolated drones in the network is reduced; it is computed from λ_i, the fitness value of the ith drone, k_i, the number of neighbors of the ith drone at any given time t, c_i, the remaining energy of the ith flying drone, and d_iB(t), the distance of the ith drone from the BS at time t. The distance from the BS is incorporated to accommodate drones that may not have a neighbor but are within transmission range of the BS. These drones are not isolated; rather, they are a good option for relaying the data of other flying drones to the BS. The pseudocode for the proposed solution is given in Algorithm 1. The main stages of the proposed scheme are as follows. Lines 1 to 4 show the basic network initialization: the number of drones is initialized with a random deployment, the solution set for the drones is initialized within restricted boundaries, the communication range is the same for each drone, and the step vector is initialized for the necessary flight operation. Lines 5 to 12 show the computation and update stage: based on the initial deployment, each drone computes its position values and the available neighboring solutions within the assigned communication range. The weights s, v, c, f, e, and w are updated in this stage, distances are calculated between the drones and the BS, and S, V, C, F, and E are also computed in this stage.
Thus, each drone should update its position values and neighboring solution using the first three primitive flight rules. In this course of action, each isolated drone learns to locate neighbors and updates its actions accordingly. Lines 13 to 22 show the proposed neighboring solution for isolated drones. If at least one neighbor is available within the drone's vicinity, it retains its flight as per the DA principle. However, if there is no neighbor, the drone opts for learning and updates its position using levy flight to search for a possible neighbor, as sketched below. ML supports the isolated drone by moving it in the direction experienced during its last isolation. Hence, the isolated-drone count is reduced adaptively.
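The closed form of the fitness function is not reproduced in this excerpt; the following is a hedged sketch of one reading consistent with the description (more neighbors k_i and more remaining energy c_i raise fitness, while a larger distance d_iB to the BS lowers it). The weights w1..w3 and the functional form are assumptions.

```python
def fitness(k_i, c_i, d_iB, w1=1.0, w2=1.0, w3=1.0, eps=1e-9):
    """Illustrative figure of merit for relaying suitability (assumed form):
    neighbor count and residual energy increase fitness; distance to the
    base station decreases it."""
    return w1 * k_i + w2 * c_i + w3 / (d_iB + eps)
```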
Simulation and Results
Simulations are conducted to evaluate the performance of the proposed scheme using the MATLAB simulator. The evaluation observes the flight of the drones, their connectivity, and the reduction of isolated drones using the DA technique for machine learning. The basic set of parameters used for the simulations is presented in Table 1.
The initial network deployment of nodes is shown in Figure 4. In this model, the DA technique is implemented for learning in the proposed scheme. The network is initialized using DA principles: nodes are deployed randomly, and the defined parameters for maintaining separation, velocity, and cohesion are set. The flight of the nodes evolves over the iterations based on learning. The wireless communication range for a node in a 100 m³ grid is set to 20 m for 10 homogeneous nodes. During flight, the network nodes detect their neighbors within the 20 m communication range.
Six nodes are isolated in the initial deployment of the network, namely, Nodes 4, 5, 6, 7, 8, and 9. It can be seen that there is no neighbor within the 20 m communication range for these nodes, while the rest of the nodes have at least one neighbor within their vicinity. If a node finds a neighbor, it retains its mobility under the DA principle; however, if there is no neighbor in the node's vicinity, it is considered isolated and opts for ML, flying in a levy flight. This helps the node search for prospective neighbors.
This search is repeated iteratively until a neighboring solution is found. If there is at least one neighboring solution for an isolated node, it updates its defined vectors, the velocity step vector and the position vector, according to Equations (5) and (6), respectively. Since the nodes are dynamic, they gain experience through repeatedly finding neighboring solutions. Such actions help an existing isolated node learn for its next period of isolation, which updates the node position more efficiently in future iterations. Drones having no neighbor update their positions according to Equation (7).

Algorithm 1: Proposed DA algorithm using machine learning.
(1) Initialize the random position of drones (flying nodes)
(2) Initialize the communication range and step size for all drones
(3) for iteration 1 to max
(4)   Compute the position values of all the drones based on mobility
(5)   Determine the nodes in the communication range of each node
(6)   Determine the learning stack for the isolated nodes
(7)   Compute the neighboring solution
(8)   Compute the network parameters by Equations (1) to (4)
(9)   Update the position values
(10)  Update the neighboring solution
(11)  Update the learning solution
(12)  if (a drone has at least one neighboring drone)
(13)    Learn the velocity vector by Equation (5)
(14)    Learn the position vector by Equation (6)
(15)  else
(16)    Declare the node as isolated
(17)    Calculate the isolated drones
(18)    Update the position of the isolated drone by flying randomly by Equation (7)
(19)    Update the neighboring connectivity by Equation (8)
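A condensed, runnable Python rendering of Algorithm 1, assuming the helper functions from the sketches above (da_step, isolated_update, isolated_count) are in scope; the deployment bounds, iteration count, and placeholder food/enemy positions are illustrative assumptions.

```python
import numpy as np

def run_swarm(n=10, dim=3, R=20.0, bound=100.0, iters=50, seed=0):
    """Main loop of Algorithm 1 (condensed): connected drones follow the
    DA update, isolated drones take a levy flight."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(0, bound, (n, dim))          # (1) random deployment
    V = np.zeros((n, dim))                       # drone velocities
    dQ = np.zeros((n, dim))                      # (2) step vectors
    food, enemy = P.mean(axis=0), np.zeros(dim)  # placeholder DA factors
    for _ in range(iters):                       # (3) main iteration loop
        for i in range(n):                       # (4)-(11) compute/update
            upd = da_step(i, P, V, dQ, food, enemy, R)
            if upd is not None:                  # (12)-(14) connected drone
                dQ[i], P[i] = upd
                V[i] = dQ[i]                     # velocity follows the step
            else:                                # (15)-(18) isolated drone
                P[i] = isolated_update(P[i])
        P = np.clip(P, 0, bound)                 # keep drones in the zone
    return isolated_count(P, R)
```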
When the simulation begins, the three important flight factors of separation, velocity, and cohesion are applied for sustainable network operation. To avoid collisions, the distance between the nodes is maintained; nodes also match the velocity of their neighboring nodes and maintain cohesion among them. All nodes adopting this disciplined behavior create a group for future cooperation. It is important that members of the group be within neighboring communication range. In DA, the group of nodes plays a key role in selecting a networking architecture for effective performance. As soon as a neighboring solution exists for a node, the network becomes connected. Furthermore, as explained, the two important DA vectors, the velocity step vector and the position vector, are incorporated to store and update the positions of nodes having at least one neighboring node.
These nodes can now update their positions by adding the step vectors to the position vectors as given in Equation (6). Keeping track of the updated vectors of all nodes enables the simulation of the next iteration based on the existing node positions, while the learning performs levy flight for isolated nodes; the resulting deployment is shown in Figure 5. Although six nodes are isolated in the initial deployment, they become part of the cooperative nodes, which reduces the isolated-node count significantly. It can be seen that after completion of the course of iterations, only one node remains isolated, namely, Node 7. This improvement is achieved by the DA technique together with the previous experiences gathered by isolated nodes during isolation using ML. Moreover, all the other nodes have one or more neighbors and are considered connected to each other and/or the BS. Hence, the maximum number of nodes stays connected using DA, ensuring connectivity within a minimum number of iterations. Based on learning-based neighbor finding, this scheme reduces time, provides the best neighboring solution, maintains node connectivity, and updates the flying position of the drone.
This scheme reduces the spatial complexity associated with possible isolated drones.
The significance of the DA algorithm for the proposed scheme is assessed by comparison against the same scheme without DA. The isolated-node count without DA is shown in Figure 6; this result highlights the importance of the DA algorithm for a dynamic network. As mentioned earlier, six nodes are isolated at the start of the network deployment. The plot of the isolated-node count without DA shows that isolated nodes are not reduced as effectively as with DA: a higher number of nodes remain isolated over the course of the iterations, since there is no learning or discipline to force an isolated node to join the cooperative nodes. Consequently, only Node 4 and Node 10, which lie within each other's vicinity, are connected, while the rest of the nodes are isolated. The significance of DA can be gauged from this implementation and comparison. This challenging FANET problem is overcome by drawing on the biological nature of DA and especially the learning-based levy flight.
The significance of DA for ML is clearly visible in the comparison of the isolated-node count with and without DA, shown in Figure 7. Simulations are performed iteratively, and results are obtained for the isolated-node count. Both schemes start with six isolated nodes. However, over the course of the iterations, DA reduces the isolated nodes most effectively due to learning. Consequently, it overcomes the connectivity problem of FANETs by reducing the isolated nodes, which enlarges the communication area. On the contrary, in the scheme without DA, where no learning exists, most nodes stay isolated, which degrades network performance. Hence, the neighboring solution minimizes the isolated nodes adaptively, and the connections are preserved for the duration of network communication.
Conclusion
In this paper, we have attempted to design a well-connected FANET via biologically inspired learning. The rapid mobility of drones leads to drone isolation, which is a main challenge in FANETs. We present a scheme to minimize the number of isolated drones, based on the biologically inspired DA technique combined with machine learning.
Thus, connectivity is achieved by adopting the primitive principles of DA and ML. DA is preferred for FANETs in particular because of the distinctive SI behavior of dragonflies, namely, static and dynamic swarming. The social behavior of DA is investigated in this paper, and an ML-based solution is proposed to find efficient neighboring solutions for drones isolated during the flight mission. In this scheme, the maximum number of flying drones stays connected in the network through effective learning in a minimum number of iterations, reducing the number of isolated drones during flight. Hence, the neighboring solution minimizes the isolated drones adaptively, and the connections are preserved for the duration of network communication. Furthermore, adopting the concept of a biological step walk for neighbor searching mitigates the energy problem in FANETs. We propose a fitness function for drones situated within communication range, which assists the proposed learning scheme. Simulation results show that the proposed fitness function maximizes the network stability period, improves connectivity for routing, and updates the flying positions of drones. Thus, the proposed scheme benefits from the intelligence of machine learning and the strategic learning of the dragonfly to reduce energy consumption and ensure network connectivity.
Data Availability
The simulation code and data are provided separately in the supplementary materials.
Conflicts of Interest
The authors declare that there are no conflicts of interest regarding the publication of this paper.
"Computer Science",
"Engineering"
] |
HIV restriction in quiescent CD4+ T cells
The restriction of the Human Immunodeficiency Virus (HIV) infection in quiescent CD4+ T cells has been an area of active investigation. Early studies have suggested that this T cell subset is refractory to infection by the virus. Subsequently it was demonstrated that quiescent cells could be infected at low levels; nevertheless these observations supported the earlier assertions of debilitating defects in the viral life cycle. This phenomenon raised hopes that identification of the block in quiescent cells could lead to the development of new therapies against HIV. As limiting levels of raw cellular factors such as nucleotides did not account for the block to infection, a number of groups pursued the identification of cellular proteins whose presence or absence may impact the permissiveness of quiescent T cells to HIV infection. A series of studies in the past few years have identified a number of host factors implicated in the block to infection. In this review, we will present the progress made, other avenues of investigation and the potential impact these studies have in the development of more effective therapies against HIV.
Introduction
Quiescence is a unique feature of our immune system, as T lymphocytes can remain in a non-dividing state for prolonged periods of time. The majority of circulating T cells in blood are in a quiescent state. This state is characterized by low metabolic rates, low levels of transcription, small size, and very long periods of survival [1,2]. It was long thought that T cell quiescence was a default state. A recent series of studies reversed this notion by demonstrating that a number of transcription factors actively maintain this state [1][2][3][4][5][6][7][8][9][10]. To date, LKLF [3,4,8], FOXO1, 3, and 4 [7,[11][12][13][14][15][16][17][18], and Tob [6,10,19,20] have been identified as key factors that maintain T cell quiescence. Loss of expression of any of the above proteins resulted in aberrant T cell proliferation, cellular damage due to higher metabolism, and cell death. CD4+ T cell quiescence and its effect on HIV infection have been a topic of intense investigation, as early studies indicated that these cells are resistant to HIV infection. As a result, strong interest developed in identifying the cellular factors that mediate this block and could potentially be the basis for effective therapeutic approaches against HIV. None of the factors regulating T cell quiescence have been implicated in influencing HIV infection.
In this review, we will discuss the steps of the viral life cycle inhibited in quiescent CD4 + T cells, the factors involved and the impact these studies have in understanding HIV infection in quiescent T cells as well as the development of better targets against the virus.
HIV replication is defective in quiescent CD4+ T cells
For the past two decades, the infection of quiescent CD4 T cells by HIV has been an area of intense investigation. Unlike other retroviruses, HIV replication is not dependent on cell cycle. Nevertheless, HIV and other lentiviruses more efficiently infect non-dividing cells and establish a latent infection [21][22][23]. While early reports supported the notion that only pre-activated T cells can be infected by HIV [24][25][26], subsequent studies showed that quiescent T cells could be infected by the virus [27][28][29][30]. Yet, key differences arose relating to the degree and levels of infection efficiency.
On the one hand it was shown that HIV viral entry and initiation of reverse transcription were not affected. However, completion of reverse transcription was inefficient resulting in the accumulation of labile, intermediate viral cDNA species [28,29]. Rescue of infection was possible with stimulation but it was temporally sensitive as production of viral progeny decreased at later reactivation timepoints [29]. Additional work focusing on the CD25-(non-activated) and CD25+ (activated) T cell populations lent more support to the notion that quiescent T cells are resistant to HIV infection [31][32][33]. In the absence of any stimulation, HIV infection of CD25-T cells failed while that of CD25+ was successful. Furthermore, when total human peripheral blood monocytes were infected, the CD25-population did carry viral cDNA suggesting either bystander activation of the non-activated population or more efficient infection via cell-cell contact. Finally, Tang and colleagues further supported the above observations by demonstrating that infection of quiescent cells with HIV did not result in the production of virus [34].
On the other hand, other studies showed that HIV infection of quiescent T cells could be productive. More specifically, they demonstrated that the viral cDNA was fully reverse transcribed and stably localized in the cytosol. This linear cDNA following T cell activation would then integrate and result in the production of viral progeny [27,30]. Thus, the block was not seen at the early stages of infection such as reverse transcription but later either in nuclear transport or integration [27,30]. However, the key conclusion from these studies was that the block could be easily alleviated at any time after infection with T cell activation, a notion not shared by the studies outlined above [29].
Despite the divergent opinions, this early work clearly demonstrated that the life cycle of HIV in quiescent CD4+ T cells is quite distinct from that in activated T cells and warranted further investigation. As technologies evolved, our knowledge of the characteristics of the HIV life cycle in quiescent T cells expanded further. Studies by Korin et al. utilized a cell cycle progression assay that could assess the levels of both RNA and DNA synthesis and demonstrated that nondividing T cells can be classified into two categories: (1) cells in the G0/G1a phase, which is characterized by undetectable levels of DNA and RNA synthesis (truly quiescent), and (2) cells in the G1b phase, which is characterized by high levels of RNA expression but not DNA synthesis [35]. Following infection of these two subpopulations of non-dividing T cells, it was shown that cells in the G1b stage were susceptible to infection while the truly quiescent G0/G1a cells were resistant [35]. Thus, the data lent a justification for the disagreement raised in the earlier studies: it is possible that the rescue seen after stimulation was due to the infection of G1b phase cells. More importantly, this study underscored the fact that partly activated but non-dividing T cells can be productively infected by HIV and that quiescent T cells are indeed resistant to infection.
Overall, these early studies established that HIV replication in quiescent cells is defective. As new and more sensitive technologies developed, groups were able to further dissect and examine in more detail the stages of the viral life cycle that are impacted in quiescent T cells. These studies focused mostly on the events leading up to and including integration, with a growing number interested in post-integration events.
Pre-integration blocks to HIV infection in quiescent T cells
A series of studies using more sensitive PCR techniques further supported the opinion that quiescent T cells are resistant to infection and shed more light on which stages of the HIV life cycle are impacted. The Siliciano group, using a linker-mediated PCR assay, determined that in quiescent T cells reverse transcription occurred at a slower rate, over 2-3 days, and produced viral cDNA with a half-life of approximately one day [36]. Despite the formation of full-length viral cDNA, the infection was not productive. In a follow-up study, the same group found that the linear non-integrated cDNA was integration competent [37]. Thus, these studies supported and further characterized the presence of labile viral cDNA that was not able to support a productive HIV infection.
Moreover, the development of a sensitive and quantitative assay allowed for the detection of low levels of integration in HIV-infected cells [38] and proved to be very useful in the study of HIV infection in quiescent T cells. Using this assay, the O'Doherty group demonstrated that quiescent CD4+ T cells could be infected by HIV, resulting in accumulation of viral cDNA over a three-day period and subsequent integration [39][40][41]. Furthermore, the authors were able to induce expression of virus following stimulation with IL-7 and anti-CD3/anti-CD28. These studies demonstrated that a productive and latent infection could be established in quiescent cells. However, despite these promising results, the major deficiencies previously seen in quiescent T cells still persisted and were potentially masked by the use of spinoculation [42] as a method of infection.
Studies done by our group using quantitative real-time PCR assays and the integration assay developed by the O'Doherty group analyzed in more detail the kinetics of HIV infection in quiescent CD4 T cells and compared them with those of stimulated T cells [43]. Based on our results, we did not observe any defects in viral entry. However, we did see a significant difference in reverse transcription. Unlike the earlier studies, initiation of reverse transcription was severely decreased (30-fold lower) in quiescent T cells. Interestingly, reverse transcription was completed, but with a delay of 16 hours. The newly synthesized viral cDNA did integrate in quiescent cells with efficiency similar to that of activated T cells; however, the process was completed 24 hours later than in activated T cells. The integrated provirus found in quiescent T cells did express low levels of multiply spliced viral mRNA; however, this did not translate into the expression of detectable viral protein. Interestingly, activation immediately after infection did not rescue this inefficient infection process in quiescent T cells [43]. The results from our studies revealed debilitating blocks in the early stages of the viral life cycle as well as delays leading up to viral integration.
HIV integration and viral expression defects in quiescent T cells
The finding that there is proviral DNA in quiescent T cells raised the possibility that quiescent T cells can be a reservoir that supports a spreading infection. Integrated virus was previously found in resting cells of HIV-infected patients, but this was attributed to the infection of previously activated T cells that returned to a resting state [44]. Furthermore, the presence of viral mRNA in our studies, coupled with the lack of detectable viral protein [43], raised the possibility that HIV integration site selection in quiescent T cells may be distinct from that in activated ones. Since T cell quiescence is an actively maintained state and HIV preferentially integrates into transcriptionally active units, a distinct distribution of integration sites could explain our observations. We and others examined integration site selection in quiescent CD4+ T cells [45,46]. Based on our data, integration in both activated and quiescent CD4+ T cells occurred in transcriptionally active units, such as housekeeping genes, that were not affected by cell state [45]. The orientation of integrants was similar between the two cell types, as were the chromosomal locations. Yet, despite the observed similarities, proviral DNA in quiescent cells exhibited higher levels of abnormal LTR-host junctions [45]. Furthermore, we observed higher levels of 2-LTR circles with both normal and abnormal junctions [45]. These patterns suggest that the delays prior to integration had a severely detrimental effect on the ends of the viral cDNA. On the other hand, in the studies by Brady et al., HIV integration patterns were somewhat different between stimulated and quiescent T cells [46]: HIV integrated into less transcriptionally active regions in quiescent cells compared to stimulated cells, but the observed differences were modest. Yet, despite the differing conclusions, both studies identified additional potential blocks to HIV infection: (i) LTR attrition that can lead to the integration of defective virions and (ii) integration into transcriptionally repressed regions.
The integration site analysis outlined above, however, suggested that quiescent T cells might be a source of viral release. To date, only a handful of studies have examined the post-integration events of the HIV life cycle in quiescent cells in the absence of any stimulation. As quiescent T cells are transcriptionally less active, and given the defects in the early stages of infection resulting in mutations of the viral cDNA as well as the potential integration into transcriptionally repressive regions, spontaneous viral release from HIV-infected quiescent T cells can also be impaired. Recent studies using the SIV rhesus macaque model suggested that infected resting T cells can spontaneously release virions [47]. However, the transcriptional state of these cells was not fully examined. Our data, as well as recent work, have shown that multiply spliced tat/rev mRNAs are lower in HIV-infected quiescent and resting CD4 T cells [43,[48][49][50][51]. This, coupled with data from HIV patients on HAART showing elevated levels of unspliced viral mRNA compared to spliced mRNA, suggests that defects in splicing can impact the release of virions from quiescent T cells [48,[52][53][54]. Furthermore, low levels of multiply spliced HIV RNA would result in lower levels of Tat protein, which has been shown to play a crucial role in transcriptional elongation [55][56][57][58][59][60][61][62] and recently in RNA splicing [63]. Such an outcome could have detrimental effects on the generation of higher levels of multiply spliced viral RNA. Yet, even if adequate levels of multiply spliced HIV RNA are produced in quiescent T cells, their effect is further blocked by reduced nuclear export. This is due to the low levels of the polypyrimidine tract binding protein (PTB) in resting T cells: low levels of PTB result in nuclear retention of multiply spliced viral RNA, thus limiting the production of virions [49,51]. Despite these observed post-integration defects, recent work by Pace and colleagues demonstrated that there is observable but low Gag expression in HIV-infected resting T cells [50]. However, this expression of Gag could not support a spreading infection, as the levels of Env protein were very low.
Restriction factors
While the above studies identified and further refined the stages of HIV life cycle impacted in quiescent T cells, they did not address the mechanisms behind the block. As quiescent T cells are characterized by low transcriptional and metabolic activity, it was reasonable to infer that the lack of cellular substrates or raw materials can have a detrimental effect on viral replication. While pretreatment of quiescent T cells with nucleosides improved reverse transcription in these cells, it failed to rescue infection [64,65]. This suggested that the presence of inhibitory factors or the absence of other supportive processes were responsible for this phenotype.
a. Murr1
Murr1 is involved in copper regulation and inhibits NFκB activity. This inhibition is mediated by blocking proteasomal degradation of IκB, resulting in decreased NFκB activity [115]. Studies by Ganesh and colleagues found that the protein is highly expressed in T cells [115]. This, in conjunction with the role of NFκB in HIV expression, made Murr1 a strong candidate for a host restriction factor.

b. JNK and Pin1

Recent studies highlighted the lack of a cellular protein, rather than the presence of a restriction factor, as a potential block to HIV infection in quiescent T cells. More specifically, c-Jun N-terminal kinase (JNK) phosphorylates viral integrase, which in turn interacts with the peptidyl prolyl isomerase enzyme Pin1, causing a conformational change in integrase [116]. This combined effect increases the stability of integrase, allowing viral integration to occur. In these studies, quiescent T cells were found not to express JNK, thus abrogating the role of Pin1 in facilitating HIV integration [116]. These results lend support to earlier studies demonstrating the presence of preintegrated viral cDNA in resting cells that can act as an inducible reservoir [27,30]. However, these studies did not address the major defects identified by us and others in the early stages of the HIV life cycle, nor the fact that the efficiency of HIV integration in quiescent cells is similar to that of activated cells [39,43,45,46].

c. Glut1

Glut1 has recently been implicated as a potential cellular factor that could facilitate HIV infection. Like JNK and Pin1, the absence of this protein seems to impact HIV infection [117]. Interestingly, its role is linked to the metabolic processes of T cells. More specifically, Glut1 is a major glucose transporter found in both mature T cells and thymocytes [117]. Protein expression is upregulated by IL-7 treatment or conventional T cell activation. When Glut1 expression was knocked down in activated T cells, HIV infection of these cells decreased [117]. Expression levels of the protein were further correlated with permissiveness to HIV infection, as double-positive thymocytes expressing high levels of Glut1 were more likely to be infected by HIV than their low-expressing counterparts [117]. This study is quite intriguing, as it is the first one linking cell metabolism to HIV replication.

d. Cytoskeleton

The cytoskeleton has been shown to have a key role in HIV replication [118]. This cell structure plays a key role in cell shape, motility, organelle organization, and intracellular trafficking [118]. This section covers multiple factors that have recently been identified to facilitate or block HIV infection. Early work showed that the HIV reverse transcription complex interacted with actin and that disruption of this interaction blocked infection [119]. These studies suggested that the cytoskeleton is crucial for a productive HIV infection. Subsequent studies explored the molecular mechanisms of this phenomenon. More specifically, Yoder and colleagues showed that cross-linking of CXCR4, one of the co-receptors for the virus, results in activation of cofilin, an actin depolymerization factor that allows HIV to rearrange actin and, consequently, facilitates infection [120]. This was further supported by patient studies showing that resting T cells isolated from HIV-infected individuals had elevated levels of active cofilin, thus facilitating the spread of infection [121].
In addition to CXCR4, cross-linking of CCR7, CXCR3, and CCR6 has been shown to activate cofilin and mediate the establishment of a latent HIV infection in resting T cells [122]. However, cofilin is not the sole factor involved in the interplay between HIV and actin. LIM domain kinase 1 (LIMK1), which phosphorylates cofilin and inactivates it, becomes activated following cross-linking of gp120 and CXCR4 [123]. This leads to actin polymerization and stabilization of the CD4/CXCR4 cluster, allowing for efficient viral entry and uncoating. The stable complex then activates cofilin to further facilitate infection. This pathway was recently shown to be disrupted by the N-terminal fragment of Slit2, a secreted glycoprotein, which reversed the HIV-mediated changes in actin, thus inhibiting infection [124]. These studies underscore the importance of the cytoskeleton in HIV infection, which has become an exciting area of HIV research as it can lead to the development of new therapies against the virus.

e. SAMHD1

The Sterile Alpha Motif (SAM) domain and HD domain-containing protein 1 (SAMHD1) has recently been identified as a potential restriction factor in quiescent T cells. SAMHD1, like APOBEC3G, seems to target the early stages of the HIV life cycle, more specifically reverse transcription. SAMHD1 is mutated in a subset of patients suffering from the Aicardi-Goutieres syndrome (AGS), an early-onset encephalopathy that mimics a congenital infection and is associated with increased levels of IFN-α production [125]. Studies suggested that the protein may be involved in negatively regulating innate immune responses [125]. With regard to HIV restriction, studies showed that SAMHD1 mediated the restriction of HIV infection in dendritic cells and monocytes [126][127][128]. The observed restriction by SAMHD1 was alleviated by the lentiviral protein Vpx, which is expressed in SIV [127,129]. When Vpx, a relative of the HIV-1 accessory protein Vpr, was introduced into macrophages and monocyte-derived dendritic cells, it significantly enhanced their infection by HIV [130][131][132]. Additional studies revealed that SAMHD1 is a strong dGTP-stimulated dNTP triphosphohydrolase, thus impacting total nucleotide pools in cells [133,134]. By depleting these pools, SAMHD1 inhibits reverse transcription, thus restricting HIV replication [135]. While a number of studies employed gene knockdown to further elucidate the role of SAMHD1 and other restriction factors in HIV infection, the use of cell samples from AGS patients has proven particularly beneficial, as it eliminated the variable of cell manipulation. Monocytes and dendritic cells from AGS patients were susceptible to HIV infection [127,128]. With respect to quiescent T cells, two studies have independently shown that the protein is abundantly expressed in them [136,137]. Both SAMHD1-depleted and AGS patient-derived CD4 T cells demonstrated improved HIV infection due to increased reverse transcription [136,137]. However, the expression of viral progeny was still defective in quiescent cells, as suggested by both studies. In addition, even though SAMHD1 is also highly expressed in activated T cells, its inhibitory effects are only seen in quiescent T cells [127,135,136]. Thus, the limited endogenous nucleotide pools of quiescent T cells and the presence of SAMHD1 have a combined inhibitory effect on viral replication.
As the field further explores the role of SAMHD1, it is clear that the protein limits the available nucleotide pools in quiescent cells, thus restricting efficient reverse transcription. However, as previous studies have shown, the mere addition of nucleosides, while improving reverse transcription, does not remedy the block seen in quiescent T cells [64,65].
Conclusions
In conclusion, the mechanisms and/or cellular factors mediating the block to HIV infection of quiescent CD4 T cells are not yet fully understood. While a number of cellular factors have been implicated, it is clear that the block to HIV infection is mediated by multiple events owing to the physiology of quiescent T cells. Cellular size and transcriptional and metabolic activities are all important cell functions that intracellular parasites such as viruses exploit to successfully infect and replicate in host cells.
Based on the early and subsequent work, the characterization of the HIV life cycle in quiescent T cells strongly indicates that the major impact on infection occurs very early, immediately following viral entry, at the initiation of reverse transcription. While limited raw materials such as nucleotides, affected both by the nature of quiescent cells and by SAMHD1, can result in decreased levels of reverse transcription, it is clear that downstream events prior to or even at integration are quite important. In addition, uncoating, a process widely bypassed in these analyses due to technical challenges, can be impacted in quiescent cells and be detrimental to infection [138,139].
Therefore, further studies are needed to understand the block in quiescent T cells. To date, based on what we know and the nature of the cellular factors identified, it is not clear how the mechanisms of resistance in quiescent cells can translate into future therapies. Nevertheless, these studies will allow us to better understand the relationship between HIV and its various target cells, which can ultimately lead to more effective interventions.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions

JAZ, SGK and DNV wrote and edited the manuscript; SGK prepared the manuscript figure. All authors read and approved the final manuscript.
"Biology"
] |
Robustification of Naïve Bayes Classifier and Its Application for Microarray Gene Expression Data Analysis
The naïve Bayes classifier (NBC) is one of the most popular classifiers for class prediction or pattern recognition from microarray gene expression data (MGED). However, with the classical estimates of the location and scale parameters it is highly sensitive to outliers, which is one of its most important drawbacks for gene expression data analysis. The gene expression dataset is often contaminated by outliers due to the several steps involved in the data generating process, from hybridization of DNA samples to image analysis. Therefore, in this paper, an attempt is made to robustify the Gaussian NBC by the minimum β-divergence method. The role of the minimum β-divergence method in this article is to produce robust estimators for the location and scale parameters based on the training dataset and to detect and modify outliers in the test dataset. The performance of the proposed method depends on the tuning parameter β, and it reduces to the traditional naïve Bayes classifier when β → 0. We investigated the performance of the proposed beta naïve Bayes classifier (β-NBC) in comparison with some popular existing classifiers (NBC, KNN, SVM, and AdaBoost) using both simulated and real gene expression datasets. We observed that the proposed method improves the performance over the others in presence of outliers; otherwise, it keeps almost equal performance.
Introduction
Classification is a supervised learning approach for separating multivariate data into various source populations. It has been playing significant roles in bioinformatics through class prediction and pattern recognition from molecular OMICS datasets. Microarray gene expression data analysis is one of the most important OMICS research wings of bioinformatics [1]. Several classification and clustering approaches have been addressed previously for analyzing MGED [2][3][4][5][6][7][8][9][10][11]. The Gaussian linear Bayes classifier (LBC) is one of the most popular classifiers for class prediction or pattern recognition. However, it is not so popular for microarray gene expression data analysis, since it suffers from the inverse problem of its covariance matrix in presence of a large number of genes (p) with a small number of patients/samples (n) in the training dataset. The Gaussian naïve Bayes classifier (NBC) overcomes this difficulty of the Gaussian LBC by taking the normality and independence assumptions on the variables. If these two assumptions are violated, then the nonparametric version of NBC is suggested in [12]. In this case the nonparametric classification methods work well, but they produce poor performance for small sample sizes or in presence of outliers. In MGED, small samples are common because of cost and limited specimen availability [13]. There are some other versions of NBC as well [14,15]. However, none of them is robust against outliers, which is one of the most important drawbacks for gene expression data analysis by the existing NBC. The gene expression dataset is often contaminated by outliers due to the several steps involved in the data generating process, from hybridization of DNA samples to image analysis. Therefore, in this paper, an attempt is made to robustify the Gaussian NBC by the minimum β-divergence method within two steps. At step 1, the minimum β-divergence method [16][17][18] estimates the parameters for the Gaussian NBC based on the training dataset. At step 2, an attempt is made to detect outlying data vectors from the test dataset using the β-weight function; criteria are then proposed to detect the outlying components in a test data vector and to modify the outlying components by reasonable values. It will be observed that the performance of the proposed method depends on the tuning parameter β and that it reduces to the traditional Gaussian NBC when β → 0. Therefore, we call the proposed classifier the β-NBC.
An attempt is made to investigate the robustness performance of the proposed β-NBC in comparison with several versions of robust linear classifiers based on the M-estimator [19,20], the MCD (Minimum Covariance Determinant) and MVE (Minimum Volume Ellipsoid) estimators [21,22], and the Orthogonalized Gnanadesikan-Kettenring (OGK) estimator, including MCD-A, MCD-B, and MCD-C [23], as well as Feasible Solution Algorithm (FSA) classifiers [24][25][26]. We observed that the proposed β-NBC outperforms the existing robust linear classifiers mentioned above. We then investigate the performance of the proposed method in comparison with some popular classifiers, including Support Vector Machine (SVM), k-nearest neighbors (KNN), and AdaBoost, which are widely used in gene expression data analysis [27][28][29]. We observed that the proposed method improves the performance over the others in presence of outliers; otherwise, it keeps almost equal performance.
Methodology
2.1. Naïve Bayes Classifier. The naïve Bayes classifiers (NBCs) [30] are a family of probabilistic classifiers based on Bayes' theorem with independence and normality assumptions among the variables. The common rule of NBCs is to pick the hypothesis that is most probable; this is known as the maximum a posteriori (MAP) decision rule. Assume that we have a training sample of vectors $\{\mathbf{x}_{jk} = (x_{1jk}, x_{2jk}, \ldots, x_{pjk})^T;\ j = 1, 2, \ldots, n_k\}$ of size $n_k$ for $k = 1, 2, \ldots, c$, where $x_{ijk}$ denotes the $j$th observation of the $i$th variable in the $k$th population/class ($\Pi_k$). Then the NBCs assign a class label $\hat{y} = C_k$ for some $k$ as follows:

$$\hat{y} = C_k, \quad k = \arg\max_{k'} \; \pi(\Pi_{k'})\, f(\mathbf{x} \mid \mu_{k'}, \Lambda_{k'}). \qquad (1)$$

For the Gaussian NBC, the density function $f(\mathbf{x} \mid \mu_k, \Lambda_k)$ of the $k$th population/class ($\Pi_k$) can be written as

$$f(\mathbf{x} \mid \theta_k) = \prod_{i=1}^{p} \frac{1}{\sqrt{2\pi\sigma_{ik}^2}} \exp\left\{-\frac{(x_i - \mu_{ik})^2}{2\sigma_{ik}^2}\right\}, \qquad (2)$$

where $\theta_k = \{\mu_k, \Lambda_k\}$, $\mu_k = (\mu_{1k}, \mu_{2k}, \ldots, \mu_{pk})^T$ is the mean vector, and the diagonal covariance matrix is

$$\Lambda_k = \operatorname{diag}\left(\sigma_{1k}^2, \sigma_{2k}^2, \ldots, \sigma_{pk}^2\right). \qquad (3)$$
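A compact Python sketch of the Gaussian NBC decision rule (1)-(2) with diagonal covariance, taking the class priors, means, and variances as given:

```python
import numpy as np

def nbc_predict(x, priors, means, variances):
    """MAP class label under the Gaussian naive Bayes model.
    means[k], variances[k]: per-class vectors of length p."""
    scores = []
    for pi_k, mu, var in zip(priors, means, variances):
        # log posterior (up to a constant): log prior + sum of log densities
        log_lik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        scores.append(np.log(pi_k) + log_lik)
    return int(np.argmax(scores))
```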
Maximum Likelihood Estimators (MLEs) for the Gaussian NBC.
We assume that the prior probabilities $\pi(\Pi_k)$ are known, and the maximum likelihood estimators (MLEs) $\hat{\mu}_k$ and $\hat{\Lambda}_k$ of $\mu_k$ and $\Lambda_k$ are obtained based on the training dataset as follows:

$$\hat{\mu}_{ik} = \frac{1}{n_k}\sum_{j=1}^{n_k} x_{ijk}, \qquad (4)$$

$$\hat{\sigma}_{ik}^2 = \frac{1}{n_k}\sum_{j=1}^{n_k}\left(x_{ijk} - \hat{\mu}_{ik}\right)^2, \qquad (5)$$

$$\hat{\Lambda}_k = \operatorname{diag}\left(\hat{\sigma}_{1k}^2, \hat{\sigma}_{2k}^2, \ldots, \hat{\sigma}_{pk}^2\right). \qquad (6)$$

It is obvious from (1)-(2) that the Gaussian NBC depends on the mean vectors ($\mu_k$) and the diagonal covariance matrices ($\Lambda_k$); these are estimated by the maximum likelihood estimators (MLEs) given in (4)-(6) based on the training dataset. Therefore, the MLE-based Gaussian NBC produces misleading results in presence of outliers in the dataset. To get rid of this problem, an attempt is made to robustify the Gaussian NBC by the minimum β-divergence method [16][17][18].
The minimum β-divergence estimator is defined by

$$\hat{\theta}_\beta = \arg\min_{\theta}\; D_\beta\!\left(\tilde{g}, f_\theta\right), \qquad (7)$$

where $\tilde{g}$ is the empirical distribution of the data and the β-divergence between two densities $g$ and $f$ is

$$D_\beta(g, f) = \int \left\{\frac{1}{\beta}\left(g^\beta(\mathbf{x}) - f^\beta(\mathbf{x})\right) g(\mathbf{x}) - \frac{1}{1+\beta}\left(g^{1+\beta}(\mathbf{x}) - f^{1+\beta}(\mathbf{x})\right)\right\} d\mathbf{x}, \qquad (8)$$

which is equivalent to minimizing the empirical β-loss

$$L_\beta(\theta) = -\frac{1}{\beta}\,\frac{1}{n_k}\sum_{j=1}^{n_k} f^\beta\!\left(\mathbf{x}_{jk} \mid \theta\right) + \frac{1}{1+\beta}\int f^{1+\beta}(\mathbf{x} \mid \theta)\, d\mathbf{x}. \qquad (9)$$

For the Gaussian density with $\theta_k = \{\mu_k, \Lambda_k\}$, the minimum β-divergence estimators $\hat{\mu}_{k,\beta}$ and $\hat{\Lambda}_{k,\beta}$ for the mean vector $\mu_k$ and the diagonal covariance matrix $\Lambda_k$, respectively, are obtained iteratively as follows:

$$\hat{\mu}_{k,t+1} = \frac{\sum_{j=1}^{n_k} W_\beta\!\left(\mathbf{x}_{jk} \mid \hat{\mu}_{k,t}, \hat{\Lambda}_{k,t}\right)\mathbf{x}_{jk}}{\sum_{j=1}^{n_k} W_\beta\!\left(\mathbf{x}_{jk} \mid \hat{\mu}_{k,t}, \hat{\Lambda}_{k,t}\right)}, \qquad (10)$$

$$\hat{\sigma}_{ik,t+1}^2 = (1+\beta)\,\frac{\sum_{j=1}^{n_k} W_\beta\!\left(\mathbf{x}_{jk} \mid \hat{\mu}_{k,t}, \hat{\Lambda}_{k,t}\right)\left(x_{ijk} - \hat{\mu}_{ik,t}\right)^2}{\sum_{j=1}^{n_k} W_\beta\!\left(\mathbf{x}_{jk} \mid \hat{\mu}_{k,t}, \hat{\Lambda}_{k,t}\right)}, \qquad (11)$$

where

$$W_\beta\!\left(\mathbf{x} \mid \mu, \Lambda\right) = \exp\left\{-\frac{\beta}{2}\left(\mathbf{x} - \mu\right)^T \Lambda^{-1}\left(\mathbf{x} - \mu\right)\right\}. \qquad (12)$$

The formulation of (10)-(12) is straightforward, as described in previous works [17,18]. The function in (12) is called the β-weight function, which plays the key role in robust estimation of the parameters. If β tends to 0, then (10)-(11) reduce to the classical noniterative estimates of the mean and diagonal covariance matrix as given in (4)-(6), respectively. The performance of the proposed method depends on the value of the tuning parameter β and on the initialization of the Gaussian parameters $\theta_k = \{\mu_k, \Lambda_k\}$.
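A minimal Python sketch of the iterative minimum β-divergence estimation for one class, assuming diagonal covariance and the weight/update forms reconstructed in (10)-(12) above (including the (1+β) correction factor in the variance update); the tolerance and iteration cap are illustrative.

```python
import numpy as np

def beta_weights(X, mu, var, beta):
    """W_beta(x | mu, Lambda) for each row of X (diagonal covariance)."""
    z = (X - mu) ** 2 / var
    return np.exp(-0.5 * beta * z.sum(axis=1))

def beta_estimates(X, beta, tol=1e-6, max_iter=100):
    """Robust mean/variance per variable by the minimum beta-divergence
    method. Initialization: median vector and unit variances (identity)."""
    mu = np.median(X, axis=0)
    var = np.ones(X.shape[1])
    for _ in range(max_iter):
        w = beta_weights(X, mu, var, beta)
        mu_new = (w[:, None] * X).sum(axis=0) / w.sum()            # Eq. (10)
        var_new = (1 + beta) * (w[:, None] * (X - mu_new) ** 2).sum(axis=0) / w.sum()  # Eq. (11)
        converged = np.max(np.abs(mu_new - mu)) < tol
        mu, var = mu_new, var_new
        if converged:
            break
    return mu, var
```

As a consistency check, when β → 0 every weight tends to 1 and the updates collapse to the classical sample mean and variance, matching the reduction to the MLEs noted in the text.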
2.3.2. Parameter Initialization and Breakdown Points of the Estimates. The mean vector is initialized by the median vector, since the mean and median coincide for the normal distribution and the median (Me) is highly robust against outliers, with a 50% breakdown point, for estimating the central value of a distribution. The median vector of the $k$th class/population is defined as

$$\tilde{\mu}_k = \left(\operatorname{Me}_j\!\left(x_{1jk}\right), \operatorname{Me}_j\!\left(x_{2jk}\right), \ldots, \operatorname{Me}_j\!\left(x_{pjk}\right)\right)^T. \qquad (13)$$

The diagonal covariance matrix $\Lambda_k$ is initialized by the identity matrix (I). The iterative procedure converges to the optimal point of the parameters, since the initial mean vector belongs to the center of the dataset with a 50% breakdown point. The proposed estimators can resist the effect of more than 50% breakdown points if we can initialize the mean vector by a vector that belongs to the good part of the dataset and the variance-covariance matrix $\Lambda_k$ by the identity matrix (I). More discussion about high breakdown points for the minimum β-divergence estimators can be found in [18].
2.3.3. β-Selection Using T-Fold Cross Validation (CV) for Parameter Estimation. To select an appropriate β by CV, we first fix the tuning parameter to an initial value β₀. The computation steps for selecting an appropriate β by T-fold cross validation are summarized below.
Outlier Identification Using the β-Weight Function. The performance of the NBC for classifying an unlabeled data vector x using (1) depends not only on robust estimation of the parameters but also on whether x itself is contaminated. The data vector x is said to be contaminated if at least one component of $\mathbf{x} = (x_1, x_2, \ldots, x_p)^T$ is contaminated by an outlier. To derive a criterion for whether the unlabeled data vector x is contaminated, we consider the β-weight function (12) and rewrite it as

$$W_\beta\!\left(\mathbf{x} \mid \hat{\mu}_{k,\beta}, \hat{\Lambda}_{k,\beta}\right) = \exp\left\{-\frac{\beta}{2}\left(\mathbf{x} - \hat{\mu}_{k,\beta}\right)^T \hat{\Lambda}_{k,\beta}^{-1}\left(\mathbf{x} - \hat{\mu}_{k,\beta}\right)\right\}. \qquad (15)$$

The values of this weight function lie between 0 and 1. It produces a larger weight (but less than 1) if $\mathbf{x} \in \Pi_k$ and a smaller weight (but greater than 0) if $\mathbf{x} \notin \Pi_k$ or x is contaminated by outliers. Therefore, the β-weight function (15) can be characterized as follows: $\mathbf{x} \in \Pi_k$ if $W_\beta(\mathbf{x} \mid \hat{\mu}_{k,\beta}, \hat{\Lambda}_{k,\beta}) \ge \delta_k$, and x is treated as an outlying vector otherwise. The threshold value $\delta_k$ can be determined from the empirical distribution of the β-weight function, as discussed in [31], using the quantile values of $W_\beta(\mathbf{x}_{jk} \mid \hat{\mu}_{k,\beta}, \hat{\Lambda}_{k,\beta})$ for $j = 1, 2, \ldots, n_k$ with probability α:

$$\delta_k = q_\alpha\!\left\{W_\beta\!\left(\mathbf{x}_{jk} \mid \hat{\mu}_{k,\beta}, \hat{\Lambda}_{k,\beta}\right);\ j = 1, 2, \ldots, n_k\right\}, \qquad (18)$$

where α is the probability for selecting the cut-off value; its value should lie between 0.00 and 0.05. In this paper, we heuristically choose α = 0.03 to fix the cut-off value for detecting an outlying data vector using (18). This idea was first introduced in [31]. The criterion for whether the unlabeled data vector x is contaminated can then be defined accordingly. However, in this paper, we directly choose the threshold value δ as in (19), heuristically with the constant set to 0.10, where D is the training dataset including the unclassified data vector x; (19) was also used in the previous works [16,18] to choose the threshold value for outlier detection.
Classification by the Proposed β-NBC. When the unlabeled data vector x is usual, the appropriate label/class of x can be determined by (1) using the minimum β-divergence estimators $\hat{\theta}_{k,\beta} = \{\hat{\mu}_{k,\beta}, \hat{\Lambda}_{k,\beta}\}$.
When the unlabeled data vector x is usual, the appropriate label/class of x can be determined using the minimum -divergence estimators Table 1: Gene expression data generating model.
Gene group Individual Normal
If the unlabeled data vector x is unusual/contaminated by outliers, then we propose a classification rule as follows. We compute the absolute difference between the outlying vector and each of the mean vectors as

$$\mathbf{d}_k = \left|\mathbf{x} - \hat{\mu}_{k,\beta}\right|, \quad k = 1, 2, \ldots, c,$$

and compute the sum of the $r$ smallest components of $\mathbf{d}_k$ as

$$u_k = \sum_{i=1}^{r} d_{(i)k},$$

where $d_{(i)k}$ denotes the $i$th smallest component of $\mathbf{d}_k$. Then the unlabeled test data vector x is classified as

$$\hat{y} = C_k, \quad k = \arg\min_{k'} u_{k'}.$$

If the outlying test vector x is classified into class $\Pi_k$, then its $i$th component is said to be outlying if its deviation $d_{ik}$ exceeds the corresponding cut-off ($i = 1, 2, \ldots, p$). We then update x by replacing its outlying components with the corresponding components of the mean vector $\hat{\mu}_{k,\beta}$ of the $k$th population. Let x* be the updated vector of x. Then we use x* instead of x to confirm the label/class of x using (1).
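A sketch of this rule for contaminated vectors: rank the classes by the sum of the r smallest absolute deviations, then repair the outlying components from the winning class mean. The per-component cut-off shown here (a z-score against the robust scale) is an assumption, since the excerpt does not reproduce it.

```python
import numpy as np

def classify_contaminated(x, means, variances, r=5, z_cut=3.0):
    """Assign a class to an outlying vector and repair its components."""
    devs = [np.abs(x - mu) for mu in means]          # d_k = |x - mu_k|
    scores = [np.sort(d)[:r].sum() for d in devs]    # sum of r smallest
    k = int(np.argmin(scores))
    mu, sd = means[k], np.sqrt(variances[k])
    x_star = x.copy()
    bad = devs[k] > z_cut * sd                       # assumed cut-off rule
    x_star[bad] = mu[bad]                            # replace outlying parts
    return k, x_star
```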
Simulated Dataset 1.
To investigate the performance of our proposed classifier (β-NBC) in comparison with four popular classifiers (KNN, NBC, SVM, and AdaBoost), we generated both training and test datasets from c = 2 multivariate normal distributions with different mean vectors (μ_k, k = 1, 2) of length p = 10 but a common covariance matrix (Λ_k = Λ; k = 1, 2). In this simulation study, we generated N₁ = 40 samples from the first population and N₂ = 42 samples from the second population for both the training and test datasets. We computed the training and test error rates for all five classifiers using both original and contaminated datasets with different mean vectors {(μ₁, μ₂ = μ₁ + t); t = 0, ..., 9}, where the other parameters remain the same for each dataset. For convenience of presentation, we distinguish the two mean vectors such that the second mean vector is generated by adding t to each component of the first mean vector.
Simulated Dataset 2.
To investigate the performance of the proposed classifier (β-NBC) in comparison with the classical NBC for the classification of objects into two groups, let us consider a model for generating gene expression datasets as displayed in Table 1, which was also used by Nowak and Tibshirani [32]. In Table 1, the first column represents the gene expressions of normal individuals and the second column represents the gene expressions of patient individuals. The first row represents the genes from group A and the second row represents the genes from group B. To randomize the gene expression, Gaussian noise from N(0, σ²) is added. First we generate a training gene-set using the data generating model (Table 1), where the scalar Ω is the common difference between corresponding mean components of μ₁ and μ₂. Similarly, for generating the training and test datasets, we consider n₁ = 30, n₂ = 30, and n₃ = 30 (n = n₁ + n₂ + n₃) samples from c = 3 classes, with different means and a common variance-covariance matrix for the multivariate normal populations (μ₁, Λ₁), (μ₂, Λ₂), and (μ₃, Λ₃). In this case we consider μ_k = μ + kΩ with Ω = 0, 1, ..., 10 and k = 1, 2, 3, such that μ₁ = μ₂ = μ₃ for Ω = 0 and μ₁ ≠ μ₂ ≠ μ₃ otherwise, where the scalar Ω is the common difference among the corresponding mean components of μ₁, μ₂, and μ₃.
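An illustrative Python generator in the spirit of Table 1. The exact baseline means of the table are not recoverable from this excerpt, so the layout (group-A genes shifted by Ω in patients, group-B genes in normals) and all default values are assumptions.

```python
import numpy as np

def simulate_geneset(n_genes=100, n_samples=20, omega=2.0, sigma=1.0, seed=0):
    """Two-group gene expression matrix: columns = normals then patients,
    rows = group A then group B; N(0, sigma^2) noise randomizes expression."""
    rng = np.random.default_rng(seed)
    base = np.zeros((n_genes, 2 * n_samples))   # normals | patients
    half = n_genes // 2
    base[:half, n_samples:] += omega            # group A elevated in patients
    base[half:, :n_samples] += omega            # group B elevated in normals
    return base + rng.normal(0.0, sigma, base.shape)
```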
Head and Neck Cancer Gene Expression Dataset.
To demonstrate the performance of the proposed classifier (β-NBC) in comparison with four popular classifiers (KNN, NBC, SVM, and AdaBoost) on a real gene expression dataset, we considered the head and neck cancer (HNC) gene expression dataset from previous work [33]. The term head and neck cancer denotes a group of biologically comparable cancers originating from the upper aerodigestive tract, including the following parts of the human body: lip, oral cavity (mouth), nasal cavity, pharynx, larynx, and paranasal sinuses. This microarray gene expression dataset contains 12,626 genes, of which 594 are differentially expressed and the rest are equally expressed.
Simulation Results of Dataset 1.
We used simulated dataset 1 to compare the performance of the proposed method with that of the other popular classifiers, namely the classical NBC, SVM, KNN, and AdaBoost. Figures 1(a)-1(f) show the test error rates estimated by these five classifiers against the common mean difference in presence of 5%, 10%, 15%, 20%, and 25% outliers in the test dataset and in absence of outliers (original dataset), respectively. From Figure 1(f) it is evident that in absence of outliers every method produces almost the same result, whereas in presence of different levels of outliers (see Figures 1(a)-1(e)) the proposed method outperformed the other methods by producing low test error rates. Table 2 summarizes different performance measures (accuracy, sensitivity, specificity, positive predicted value (PPV), negative predicted value (NPV), prevalence, detection rate, detection prevalence, Matthews correlation coefficient (MCC), and misclassification error rate). All these performance measures are computed for the five methods (NBC, KNN, SVM, AdaBoost, and proposed).
From Table 2 we observe that the proposed method produces better results than the other classifiers (NBC, SVM, KNN, and AdaBoost), since it produces higher values of accuracy (>97%), sensitivity (>95%), specificity (>94%), PPV (>94%), NPV (>94%), and MCC (>94%) and lower values of prevalence and MER (<4%). The proportion test statistic [34] was used to test the significance of the proportions produced by the five classifiers for each of the performance measures. Column 7 of Table 2 gives the p values of this test. Since all the p values except that for MER are less than 0.01, we can conclude that the performance results are highly statistically significant. The MER (p value < 0.05) is also statistically significant at the 5% level of significance. So we may conclude from simulated dataset 1 that our proposed method performs better than the other classical methods on the contaminated dataset, while keeping equal performance in absence of outliers on the original dataset.
Simulation Results of Dataset 2.
To investigate the performance of the proposed classifier (β-NBC) in comparison with the classical NBC for the classification of objects into two groups, we considered simulated dataset 2. Figures 2(a) and 2(b) show the training and test datasets in absence of outliers, respectively. Here genes are randomly allocated in the test dataset. Figures 3(a) and 3(b) show the results of classifying the test dataset by the classical and proposed NBC, respectively.
From the classification results we observed that both the classical naïve Bayes procedure and the proposed method produce almost the same results, with low misclassification error rates, in absence of outliers. To investigate the robustness of our proposed method in comparison with the conventional naïve Bayes procedure, we randomly contaminated 30% of the genes with outliers in the test gene-sets (Figures 4(a)-4(c)).
Head and Neck Cancer Gene Expression Data Analysis.
We also investigated the performance of the proposed method on a real microarray gene expression dataset, the normalized head and neck cancer (HNC) dataset [33]. RNA samples were extracted from 22 normal and 22 cancer tissues to generate the HNC dataset. The Affymetrix GeneChip was used for processing the RNA samples, yielding quantified CEL files. The Robust Multichip Analysis (RMA) and quantile normalization methods were used for processing the CEL files. The HNC dataset comprised 12,642 probe sets and 44 samples, with 42 significantly differentially expressed probe sets; a detailed discussion of the preprocessing of the HNC dataset is given in [33]. We first selected the differentially expressed (DE) genes whose posterior probability is more than 0.9 (otherwise the genes are considered equally expressed (EE)) using the bridge R package [35]; as shown in Figure 6, this yields 594 differentially expressed genes out of the 12,626 genes. We performed the Anderson-Darling (A-D) normality test [36,37] on the HNC dataset. The results show that only a few DE genes (5%) in the normal and cancer groups break the normality assumption at the 1% level of significance. We also checked the independence assumption of the DE genes using mutual information [38]. We found that the mutual information for the HNC dataset is 0.044, which is close to zero for both the normal and cancer groups, so we may conclude that the DE genes almost satisfy the independence assumption. Therefore, we may assume that the HNC dataset almost satisfies the normality and independence assumptions of the NBC for a given class/group.
For the classification problem, we considered half of the differentially expressed genes (594/2 = 297) as the training gene-set and identified their groups using hierarchical clustering (HC). Figure 7 represents the dendrogram of the HC of this half of the differentially expressed genes for the training data. The remaining 297 differentially expressed genes are considered as the test gene-set. Then we employed both the classical NBC and the robust NBC (β-NBC) on this dataset to classify cancer genes (see Figures 8(a)-8(d)). From Figure 8 we observed that the traditional naïve Bayes procedure cannot recover the gene groups properly, whereas our proposed method (β-NBC) performs better at identifying the gene groups in the HNC dataset. Figure 8(d) shows that the proposed classifier also classifies the samples better than the classical method (Figure 8(c)).
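As an illustration of this classification step, the following Python sketch mirrors the split into training and test gene-sets, with scikit-learn's GaussianNB standing in for the classical (non-robust) NBC; the expression matrix and the clustering settings are hypothetical placeholders, not our actual R implementation:

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(594, 44))          # hypothetical DE-gene expression matrix
train, test = X[:297], X[297:]          # half for training, half for testing

# Assign training-group labels by hierarchical clustering into two groups.
labels = fcluster(linkage(train, method="average"), t=2, criterion="maxclust")

clf = GaussianNB().fit(train, labels)   # classical NBC on the training gene-set
pred = clf.predict(test)                # predicted gene groups for the test set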
We also computed the different performance measures (accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), prevalence, detection rate, detection prevalence, Matthews correlation coefficient (MCC), and misclassification error rate) for the five classification methods (NBC, KNN, SVM, AdaBoost, and proposed) using the HNC dataset (Table 5). From Table 5 we observed that the proposed classifier produces better results than the other classifiers (NBC, SVM, KNN, and AdaBoost). The proportion test [34] yields p values < 0.01 for the different performance measures excluding MCC and MER, so these results are highly statistically significant. The MCC and MER are statistically significant at the 5% level of significance, since their p values are < 0.05. Hence, the performance of the proposed method in the real HNC data analysis is better than that of the classical and other methods. This dataset is also contaminated by outliers, as reported in [31], which is why we chose it to investigate the performance of the proposed method in comparison with some popular existing classifiers. We observed that the proposed method outperforms the others for this HNC dataset.
Discussion
In this paper, we discussed the robustification of the Gaussian NBC using the minimum β-divergence method in two steps. For both the simulated and real data analysis, at first, the mean vectors and the diagonal covariance matrices were computed by the minimum β-divergence estimators for the Gaussian NBC based on the training dataset. Then outlying test data vectors were detected in the test dataset using the β-weight function, and the outlying components in each such test data vector were replaced by the corresponding values of their estimated mean vectors. The modified test data vectors were then used as input data vectors in the proposed β-NBC for class prediction or pattern recognition. The rest of the data vectors from the test dataset were directly used as input data vectors in the proposed β-NBC. We observed that the performance of the proposed method depends on the tuning parameter β and on the initialization of the Gaussian parameters. Therefore, in this paper, we also discussed the initialization procedure for the Gaussian parameters and the β-selection procedure using cross validation in Sections 2.3.2 and 2.3.3, respectively. The classifier reduces to the traditional Gaussian NBC when β → 0; therefore, we call the proposed classifier β-NBC. We investigated the robustness of the proposed β-NBC in comparison with several robust versions of linear classifiers based on MCD, MVE, and OGK estimators, taking a smaller number of variables/genes (p) with a larger number of patients/samples (n) in the training dataset, since these types of robust classifiers also suffer from the inversion problem of the covariance matrix in the presence of a large number of variables/genes (p) with a small number of patients/samples (n) in the training dataset. We observed that the proposed β-NBC outperforms the existing robust linear classifiers mentioned earlier in the presence of outliers; otherwise, it keeps almost equal performance. Then we investigated the performance of the proposed method in comparison with some popular classifiers including Support Vector Machine (SVM), K-Nearest Neighbors (KNN), and AdaBoost, which are widely used for gene expression data analysis [27-29].
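A minimal sketch of this two-step robustification is given below, under the assumption that the β-weight takes the usual exponential-of-Mahalanobis form of minimum β-divergence estimation with a diagonal covariance; the flagging threshold and the per-component replacement rule are illustrative choices, not the exact ones used in our implementation:

import numpy as np

def beta_weight(x, mu, var, beta):
    # Gaussian beta-weight (diagonal covariance): small for outlying vectors.
    return np.exp(-0.5 * beta * np.sum((x - mu) ** 2 / var))

def robustify_test_vector(x, mu, var, beta=0.2, tau=0.1):
    # Replace outlying components of a flagged test vector by the
    # corresponding estimated class means (illustrative thresholds).
    if beta_weight(x, mu, var, beta) >= tau:
        return x                                     # not flagged as outlying
    x = x.copy()
    outlying = np.abs(x - mu) / np.sqrt(var) > 3.0   # illustrative rule
    x[outlying] = mu[outlying]
    return x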
In that comparison, we used both simulated and real gene expression datasets. We observed that the proposed method improves the performance over the others in the presence of outliers; otherwise, it keeps almost equal performance as before. The main advantage of the proposed classifier over the others is that it works well for both conditions (i) p < n and (ii) p > n, and it can resist contamination up to the 50% breakdown point. If a dataset does not satisfy the normality assumptions, then the proposed method may show weaker performance than the others in the absence of outliers. However, a nonnormal dataset can be transformed towards normality by a suitable transformation such as the Box-Cox transformation [39]; the proposed method would then be useful to tackle the outlying problems. The proposed method may also suffer from correlated observations. In that case, correlated observations can be transformed into uncorrelated observations using standard principal component analysis (PCA) or singular value decomposition (SVD) based PCA, after which the proposed method would again be useful to tackle the outlying problems. In the current study, we investigated the performance of the proposed classifier (β-NBC) in comparison with some popular existing classifiers (NBC, KNN, SVM, and AdaBoost), including some robust linear classifiers (MCD, MVE, OGK, MCD-A, MCD-B, MCD-C, and FSA), using both simulated and real gene expression datasets, where the simulated datasets satisfied the normality and independence assumptions. We observed that the proposed method improved the performance over the others in the presence of outliers; otherwise, it keeps almost equal performance. Gene expression datasets are often contaminated by outliers due to the several steps involved in the data generating process, from hybridization to image analysis. Therefore, the proposed method should be well suited to gene expression data analysis.
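Since Box-Cox and PCA are named as remedies above, a short Python sketch of both preprocessing steps (on synthetic data) is:

import numpy as np
from scipy import stats
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
x = rng.lognormal(size=500)                  # skewed, non-normal feature
x_bc, lam = stats.boxcox(x)                  # Box-Cox transform towards normality

X = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=500)
X_pca = PCA().fit_transform(X)               # decorrelated component scores
print(lam, np.corrcoef(X_pca.T)[0, 1])       # correlation is near zero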
Conclusion
Accurate sample class prediction or pattern recognition is one of the most significant issues in MGED analysis. The naïve Bayes classifier is an important and widely used method for class prediction in bioinformatics. However, this method suffers from outlying problems when estimating the location parameters in MGED analysis. To overcome this, we proposed the β-NBC, which estimates robust location and scale parameters. In simulation studies 1 and 2, we showed that, in the presence of outliers, the proposed β-NBC outperforms other popular classifiers when datasets are generated from multivariate and univariate normal distributions, respectively, and that it keeps equal performance with the other classifiers in the absence of outliers. We also investigated the robustness of the proposed β-NBC in comparison with linear classifiers based on some popular robust estimators in simulation study 3, from which we observed that the proposed β-NBC outperforms the existing robust linear classifiers. Finally, we applied it to the real HNC dataset, where our proposed β-NBC showed better performance than the other traditional classifiers. Therefore, we may conclude that, in the presence of outliers, our proposed β-NBC outperforms the other methods on both simulated and real datasets.
Additional Points
Supplementary Materials. The source code, written in R, is available in the Supplementary Material online at https://doi.org/10.1155/2017/3020627.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
"Computer Science"
] |
Carrier-induced refractive index change observed by a whispering gallery mode shift in GaN microrods
Vertically oriented GaN microrods were grown by metal-organic vapor phase epitaxy with four sections of different n-type carrier concentration above 10^19 cm^-3 along the c-axis. In cathodoluminescence investigations carried out on each section of the microrod, whispering gallery modes can be observed due to the hexagonal symmetry. Comparison of the spectral positions of the modes from each section shows the presence of an energy dependent mode shift, which suggests a carrier-induced refractive index change. The shift of the high energy edge of the near band edge emission indicates that the band gap parameter in the analytical expression of the refractive index has to be modified. A proper adjustment of the band gap parameter explains the observed whispering gallery mode shift.
Introduction
The presence of whispering gallery modes (WGMs) with high quality (Q-)factors as well as lasing activity in GaN microrods enables their use as optical microcavities in applications such as polariton and photon lasers and ultrasensitive optical sensors [1-7]. Besides the size and shape of the microrods, the refractive index determines the optical properties of microcavities, such as the spectral positions of the WGMs. Therefore, it is important to identify processes that influence the refractive index. The analysis of WGMs from microrod structures has already been used to determine the refractive index of GaN [8]. In such microrods, high carrier concentrations above 2 × 10^20 cm^-3 have been reported [9]. High carrier concentrations might modify the refractive index, as has been proposed in [10]. The current-induced refractive index change for carriers injected into a GaN based laser diode has already been reported [11]. In this letter, the influence of different high carrier concentrations on the refractive index of GaN is investigated. GaN microrods were grown with four sections, each having a different doping concentration. Cathodoluminescence (CL) measurements on each section show WGMs and a spectral energy dependent WGM shift as well as a shift of the near band edge (NBE) emission. The band gap parameter in the analytical expression of the refractive index can be modified to explain the WGM shift.

The growth procedure includes a stabilization and antisurfactant layer [13]. The samples are grown at a pressure of 100 mbar and a susceptor temperature of 1150 °C. A V/III ratio of 6-25 was adjusted. All further details concerning growth can be found in [6, 12]. Scanning electron microscopy (SEM) and room temperature (RT) CL measurements were performed utilizing a Hitachi S4800 in combination with a Gatan MonoCL setup. For the SEM images and CL measurements, an acceleration voltage of 5 keV and a sample tilt of 60° were used. Raman measurements were performed at RT in backscattering configuration using a LabRam HR800 spectrometer from Horiba Scientific. The linearly polarized laser emitting at 457 nm was focused by a 100× objective (numerical aperture 0.9), resulting in a diameter of the normally incident probing beam of 0.7 μm and a laser power on the sample surface of ∼582 μW using a filter.
Theoretical background
WGMs in a hexagonal structure are based on reflections of light at either six or three sidewall facets, designated as hexagonal or triangular WGMs, respectively [14]. In hexagonal cavities consisting of materials having a refractive index n > 2 (with n = 1 outside the cavity), triangular WGMs are dominant, because Q-factors up to two orders of magnitude higher than those of hexagonal WGMs are possible due to strong coupling of superscar modes [15]. The coupling of triangular WGMs results in a suppressed field distribution at the corners, thus reducing the scattering losses there. Recalculations of the spectral positions of WGMs from microrods presented in a previous work [1] show much better agreement if triangular WGMs are used instead of hexagonal WGMs, together with a multiplication of the refractive index by a factor of 0.9103.
Spectral positions λ_WGM of the triangular WGMs observed for the investigated microrod can be calculated by applying a simple plane wave model (equation (1)) [14, 16], where N is the mode number and β equals n_GaN for transversal electric (TE) and n_GaN^-1 for transversal magnetic (TM) polarization, respectively. The inner diameter d_i can be measured by SEM. The refractive index n_GaN of GaN in the transparent region below the band gap can be determined from the real part of the dielectric function ε_1, measured on strain-free and undoped GaN by spectroscopic ellipsometry, using ε_1 = n_GaN^2 [17, 18]. The experimental data for the ordinary and extraordinary refractive index are shown in figure 1. An analytical expression derived from the Kramers-Kronig relation (equation (2)) can be used to describe the experimental data [19]. The E_0 parameter represents the effective band gap energy of GaN, A_0 and A_1 are magnitude parameters, and E_1 accounts for the contribution of all high-energy optical transitions [19]. Using the parameters summarized in table 1, good agreement is achieved in a range between 1-3.37 eV for the ordinary and 1-3.36 eV for the extraordinary refractive index. It has already been shown that the NBE emission peak position depends on the carrier concentration [20, 21]. A shift towards higher (lower) energy is expected for increasing carrier concentrations above (below) ∼8 × 10^18 cm^-3 due to the Burstein-Moss effect (band gap narrowing) [10, 20, 22, 23]. Therefore, it is necessary to check the influence of the band gap parameter E_0 in the analytical expression of the refractive index. The ordinary and extraordinary refractive indices have been calculated with the band gap parameter modified by +25 and +50 meV, respectively. The upper graph of figure 1 shows that an increase of E_0 slightly decreases the refractive index. The lower graph in figure 1 displays the difference between the initial refractive index and the modified one. In the lower energy range there is only a small deviation; however, in the energy range close to the band gap the deviation exceeds 2%. Increasing E_0 leads to a decreased differential change of the refractive index.

Figure 1. Ordinary and extraordinary refractive index of GaN [17, 18]. Each solid line is a best fit using equation (2) together with the parameters in table 1. Each dashed and dotted line is calculated with the E_0 parameter modified by plus 25 and 50 meV, respectively. The lower plot shows the corresponding energy dependent deviation Δn_GaN,ord (difference between the dashed/dotted and solid lines).

Table 1. Best fit parameters for equation (2) of the experimental data shown in figure 1 for the ordinary and extraordinary refractive index of GaN, valid in a range between 1.00-3.37 eV and 1.00-3.36 eV, respectively.

For the microrod growth, the TMGa flux was reduced to 10 sccm within 50 s [6, 12, 24]. Afterwards, four microrod sections, each with a growth time of 5 min and with a different Si doping concentration, were deposited.
CL investigations of a single GaN microrod
Among an ensemble of differently sized microrods, a long GaN microrod with a height of 9.21 ± 0.04 μm and an inner diameter of 1.75 ± 0.01 μm is chosen for further investigation. The diameter along the microrod axis is constant within the error range, i.e., no tapering such as in previously reported GaN wires is present [25] (for the details of the diameter measurements see the supplementary material). The SEM image of the microrod and the corresponding panchromatic CL image are shown in figures 3(a), (b). The basis grown with a high TMGa flux has poor optical properties (strong yellow defect luminescence and weak GaN NBE emission) and is not considered in the following investigations. Details on the optical properties can be found in [6]. The upper part of the microrod, grown with a low TMGa flux and having improved optical properties, can be separated into the four sections defined in figure 2: between each section there is a thin layer of different contrast visible in the panchromatic CL map in figure 3(b). From each section a CL spectrum was recorded while fixing the focused electron beam at the center of the section (see figure 3(c)). Dominant NBE emission as well as weak yellow luminescence centered at 2.2 eV and surface related emission at 2.7 eV are visible [26].
Observation of WGMs in each section of the microrod
Independent of the respective section of the microrod, WGMs are observed in the energy range from the NBE emission down to the yellow defect luminescence. The spectral positions of the WGMs in the blue spectrum obtained from section 1 have been calculated using equation (1). The inner diameter used for the calculation was set to 1779.8 nm, which is in good agreement with the measured inner diameter of 1.75 ± 0.01 μm. The band gap parameter E_0 was modified in order to get the best agreement between experimental and calculated data. For the ordinary and extraordinary refractive index, E_0 was set to 3.386 and 3.429 eV to fit the TE and TM WGMs, respectively. The calculated spectral positions of the WGMs are also in agreement with the black spectrum from section 4, but not in good agreement with the green and red spectra from sections 2 and 3, respectively, especially in the high energy range. Figure 3(d) displays the energy shift of four selected WGMs at different spectral positions. There is a WGM shift towards higher energy with increasing silane flow, and the shift is more pronounced at higher energies. The TM_19 WGM at ∼2.15 eV shows only a small shift of up to 6 meV, whereas the TM_34 WGM at ∼3.34 eV is shifted by 30 meV with respect to the spectrum from section 1.
Due to the superposition of the NBE emission with WGMs, the NBE emission peak position cannot be unambiguously determined. Compared to section 1, an NBE emission peak shift of ∼10 and ∼20 meV for sections 2 and 3, respectively, can roughly be estimated. However, these values are not very reliable due to the presence of WGMs at the peak and at the low energy edge of the NBE emission. A more precise estimate is possible when considering only the high energy edge of the NBE emission at half maximum, which is not superimposed by WGMs. The shift is displayed in figure 3(d) (black line). With increasing silane flow, a shift of up to 34 meV towards higher energy is observed, which can be explained by the Burstein-Moss effect.
Determination of the carrier concentration by analysis of the FWHM of the GaN NBE emission
The FWHM of the NBE emission of each spectrum was determined and is shown in figure 3(d) (right axis). Values in a range between 100-160 meV are present. The carrier concentration of each section was estimated from the FWHM of the NBE emission according to [27]; the values are summarized in table 2. High n-type carrier concentrations up to 1.2 × 10^20 cm^-3 were found. The estimated carrier concentrations reveal that not all of the supplied Si atoms are incorporated into GaN as dopants. An increase of the silane supply by a factor of 4 and 7 enhances the carrier concentration only by a factor of ∼2 and ∼3, respectively. This is attributed to the self-catalytic VLS growth of the microrods and the low solubility of Si atoms in the Ga droplet on top of the rod, leading to enhanced formation of SiN on the sapphire surface and on the sidewalls of the rods, which acts as a sink for Si [12, 13, 28]. The high carrier concentrations above 1 × 10^19 cm^-3 in all four sections of the microrod are in agreement with the trend showing a shift of the high energy edge of the NBE emission towards higher energies [21].
Determination of the carrier concentration by analysis of the Raman spectra
Micro-Raman measurements were used as an additional method to determine the carrier concentration in the microrod [29, 30]. The micro-Raman spectra in figure 4, recorded on a single microrod, show the A_1(TO), E_1(TO) and E_2^H modes at 531.3, 558.2 and 567.2 cm^-1, respectively. These values are comparable to Raman measurements on high-quality nonpolar and strain-free GaN substrates grown by hydride vapor phase epitaxy [31]. The spectral position of the longitudinal optical phonon plasmon coupled mode (LOPPCM) provides information about the carrier concentration. A LOPPCM- and a LOPPCM+ peak show up below the transversal optical (TO) phonon frequency of the uncoupled mode ω_T (= 531.3 cm^-1) and above the LO phonon frequency of the uncoupled mode ω_L (= 734 cm^-1) for high and low carrier concentrations, respectively. The N dependent LOPPCM± frequencies ω_± are reproduced by the following equation derived from the dielectric function [32]:

ω_±^2 = (1/2) [ (ω_L^2 + ω_p^2) ± √( (ω_L^2 + ω_p^2)^2 − 4 ω_p^2 ω_T^2 ) ],   (3)

with the plasma frequency ω_p^2 = Ne^2/(ε_0 ε_∞ m*) [33].

Table 2. Summary of the carrier concentrations N obtained from the FWHM of the NBE emission (see figures 3(c), (d)) and from the LOPPCM- position in the Raman spectra (see figure 4). The LOPPCM- measured at 515 cm^-1 corresponds to contributions from sections 1, 2 and 4.
Two measurement configurations have been used for the micro-Raman investigations, as illustrated in the inset of figure 4 [34]. In the x(z, −)x̄ configuration, i.e., laser excitation and detection perpendicular to the microrod sidewall facet, the intense A_1(TO) mode at ∼531.3 cm^-1 covers the LOPPCM- peak. The latter is expected to be located between 500-531.4 cm^-1 for high carrier concentrations above 3 × 10^19 cm^-3, as for the microrod in the present study. Therefore, the individual sections of the microrod cannot be analyzed by micro-Raman with respect to the LOPPCM- peak. The A_1(TO) mode is not allowed in the z(x, −)z̄ configuration, i.e., laser excitation and detection perpendicular to the microrod top facet; however, all sections are then probed at once. Two LOPPCM- peaks are visible at 515 and 528 cm^-1, corresponding to 5.4 × 10^19 and 2.7 × 10^20 cm^-3, respectively, using equation (3). The LOPPCM- at 528 cm^-1 originates from section 3 of the microrod, while the LOPPCM- at 515 cm^-1 is a superposition of all other sections. The carrier concentrations obtained from the micro-Raman measurements and from the FWHM of the NBE emission are in good agreement, as can be seen in table 2. Finally, there is a peak at 735.5 cm^-1 attributed to the A_1(LO)/LOPPCM+, which is expected to stem from unintentionally doped GaN located at the base of the microrod, formed during the initial microrod growth. Such a peak at 735.5 cm^-1 (corresponding to a carrier concentration of 6 × 10^16 cm^-3 using equation (3)) can also be observed in unintentionally doped GaN layers grown in the MOVPE system used here (not shown).
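The coupled-mode relation in equation (3) can be inverted numerically for N; a Python sketch assuming the plasma frequency ω_p^2 = Ne^2/(ε_0 ε_∞ m*) and nominal GaN parameters (which should be checked against [32, 33]) is:

import numpy as np
from scipy.optimize import brentq

# Nominal GaN parameters; values should be checked against [32, 33].
wT, wL = 531.3, 734.0                 # uncoupled TO/LO frequencies (cm^-1)
eps_inf = 5.35                        # high-frequency dielectric constant
m_eff = 0.2 * 9.109e-31               # effective electron mass (kg)
e, eps0 = 1.602e-19, 8.854e-12        # SI constants
c_cm = 2.998e10                       # speed of light (cm/s)

def w_minus(N_cm3):
    # Lower coupled-mode branch of equation (3), in cm^-1.
    wp2 = N_cm3 * 1e6 * e**2 / (eps0 * eps_inf * m_eff) / (2*np.pi*c_cm)**2
    s = wL**2 + wp2
    return np.sqrt(0.5 * (s - np.sqrt(s**2 - 4.0 * wp2 * wT**2)))

# Invert a measured LOPPCM- position, e.g. 515 cm^-1, for N:
N = brentq(lambda n: w_minus(n) - 515.0, 1e17, 1e21)
print(f"N ~ {N:.2e} cm^-3")           # of order 5e19 cm^-3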
Adjusting the refractive index band gap parameter towards fitting of the spectral energy dependent WGM shift

Concerning the WGM shift, the NBE emission shift, and the FWHM, there is no significant difference between the spectra from sections 1 and 4, i.e., there is still some Si incorporation taking place after switching off the silane supply during growth of section 4. High temperature decomposition of the surface SiN might act as a source leading to a background doping effect. The new spectral positions of the WGMs for the green and red spectra obtained from sections 2 and 3, respectively (figure 3(c)), were also calculated. For that purpose, the band gap parameter E_0 was set to 3.457 eV for the green spectrum and to 3.480 eV for the red spectrum, i.e., the initial E_0 used for the blue spectrum was modified by +28 and +51 meV, respectively. The trend of these values is in agreement with the high energy edge shift of the NBE emission. The green dashed-dotted and red dotted lines in figure 5 are in good agreement with the new spectral positions of the WGMs in the green and red spectra. The same modification of the ordinary refractive index is also in good agreement with the observed shift of the TE WGMs (not shown).
It was already stated that the diameter along the microrod is constant; however, a small variation of a few nm might not be measurable via SEM. Using equations (1) and (2), it can be calculated that a diameter reduction of 10 nm for a microrod with a diameter of 1780 nm would lead to a blue shift of 11, 13, 13 and 7 meV for the TM_19, TM_26, TM_30, and TM_34 WGMs, respectively (for the details of the diameter dependent WGM shift see the supplementary material). Comparing these values with the observed WGM shift in figure 3(d), which shows an increasing WGM shift with increasing N, it is clear that a diameter variation cannot be responsible for the observed WGM shift.
Si is known to induce tensile strain in a GaN layer, leading to a reduction of the band gap [35-37]. This is in contrast with the data presented in figure 3(d), which show a blue shift with increasing Si doping concentration, and can therefore not be considered as a reason for the observed WGM shift.
It is clear that the carrier concentration affects the refractive index, which leads to a WGM shift. The high carrier concentration induces a Burstein-Moss shift, i.e., an increase of the band gap. A modification of the band gap parameter E_0 in the refractive index equation (2) is therefore a proper tool to explain the WGM shift caused by different carrier concentrations.
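To make the link between an index change and a WGM shift concrete: in any plane-wave resonance condition of the form N·λ ≈ n(λ)·L with fixed path length L and mode number N, a small index change δn shifts the mode energy by δE/E ≈ −δn/n to first order (neglecting the dispersion of n). A rough Python sketch of this estimate, where the numbers are placeholders rather than fit values from this work:

E = 3.3      # photon energy of the considered mode (eV), placeholder
n0 = 2.55    # unperturbed refractive index near E, placeholder
dn = -0.02   # carrier-induced index change; negative for an increased E_0

dE_over_E = -dn / n0              # relative blue shift for dn < 0
print(f"mode shift: {dE_over_E * E * 1e3:.1f} meV")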
The present study is not limited to the GaN material system, but can also be applied to other material systems such as ZnO [38].
Conclusion
In conclusion, GaN microrods with four sections of different doping concentration were grown by MOVPE. The carrier concentration in the range between 10^19-10^20 cm^-3 was determined from the FWHM of the NBE emission and confirmed by an analysis of the micro-Raman spectrum. Spatially and spectrally resolved CL investigations have been performed on the microrods. The four sections of the microrod can clearly be identified in the panchromatic CL map, and from each section a CL spectrum was recorded. Independent of the respective section of the microrod, WGMs are observed, and a spectral energy dependent shift is observed for the sections with increasing carrier concentration. The new positions of the WGMs can be calculated by modification of the band gap parameter of the analytical expression of the refractive index, which is consistent with the observed shift of the high energy edge of the NBE emission.
"Physics"
] |
Disambiguating Seesaw Models using Invariant Mass Variables at Hadron Colliders
We propose ways to distinguish between different mechanisms behind the collider signals of TeV-scale seesaw models for neutrino masses using kinematic endpoints of invariant mass variables. We particularly focus on two classes of such models widely discussed in the literature: (i) the Standard Model extended by the addition of singlet neutrinos and (ii) Left-Right Symmetric Models. Relevant scenarios involving the same "smoking-gun" collider signature of dilepton plus dijet with no missing transverse energy differ from one another by their event topology, resulting in distinctive relationships among the kinematic endpoints to be used for discerning them at hadron colliders. These kinematic endpoints are readily translated to the mass parameters of the on-shell particles through simple analytic expressions which can be used for measuring the masses of the new particles. A Monte Carlo simulation with detector effects is conducted to test the viability of the proposed strategy in a realistic environment. Finally, we discuss the future prospects of testing these scenarios at the $\sqrt s=14$ and 100 TeV hadron colliders.
Introduction
Despite the spectacular experimental progress in the past two decades in determining the neutrino oscillation parameters [1], the nature of the new physics beyond the Standard Model (SM) responsible for nonzero neutrino masses is still unknown. A widely discussed paradigm for neutrino masses is the so-called type-I seesaw mechanism [2-6], which postulates the existence of heavy right-handed (RH) neutrinos with Majorana masses. The masses of the RH neutrinos, synonymous with the seesaw scale, are a priori unknown, and their determination would play a big role in vindicating the seesaw mechanism as the new physics responsible for neutrino mass generation. There are a variety of opinions as to where this scale could be [7], ranging from the left-handed (LH) neutrino mass scale of sub-eV all the way up to the grand unification theory (GUT) scale. The GUT approach, which is quite attractive, is mainly motivated by the straightforward embedding of the seesaw mechanism in SO(10) GUTs [3]. The seesaw scale in simple SO(10) models is near 10^14 GeV [7], making it quite hard to test in any foreseeable laboratory experiments. There are also arguments based on naturalness of the Higgs mass which suggest the seesaw scale to be below 10^7 GeV or so [20, 21]. It is therefore of interest to focus on the seesaw scale being in the TeV range, since we have the Large Hadron Collider (LHC) searching for many kinds of TeV-scale new physics, which could also probe TeV-scale seesaw through the "smoking gun" lepton number violating (LNV) signal of same-sign dilepton plus dijet final states ℓ^±ℓ^±jj [22] and other related processes [23-25]. In addition, there are many kinds of complementary low energy searches for rare processes, such as neutrinoless double beta decay (0νββ) [26], lepton flavor violation (LFV) [27], anomalous Higgs decays [28-31] and so on, which are sensitive to TeV-scale models of neutrino mass. For reviews on various phenomenological aspects of TeV-scale seesaw, see e.g. [32-35].
As far as the collider signals are concerned, in the case of SM seesaw, the Yukawa part of the model Lagrangian is given by

L_Y = h_ν ψ̄_L H̃ N + (1/2) M_N N^T C N + h.c.,   (1.1)

where ψ_L, H, N denote respectively the SM lepton doublet, the Higgs doublet, and the RH neutrino fields singlet under SU(2)_L, and C denotes the charge conjugation matrix. We have dropped the generation indices. When H picks up a vacuum expectation value v to break the electroweak gauge symmetry, it leads to the usual type-I seesaw [2-6]. On the other hand, in the LRSM, based on the gauge group SU(2)_L × SU(2)_R × U(1)_{B-L}, we have the additional gauge bosons W_R, Z′ which play an important role in collider phenomenology, if the W_R mass is in the accessible range. The part of the Lagrangian relevant to our discussion is that involving the charged-current and Yukawa interactions responsible for the seesaw matrix after the SU(2)_R × U(1)_{B-L} symmetry is broken (equation (1.2)). There exist examples [40] where the small neutrino masses via TeV-scale type-I seesaw can arise without excessive fine-tuning of the LRSM parameters. As in most collider studies of seesaw, we will assume later that a single heavy neutrino gives the dominant contribution to the ℓℓjj signal, for simplicity.
where we have again omitted the flavor structure in the RH sector. Note that there is now a direct coupling of the RH neutrinos to the W_R gauge boson, which distinguishes their production mode at the LHC from that of the SM seesaw case. Assuming m_N < m_{W_R}, as suggested by vacuum stability arguments [68], the basic collider signal arises from the on-shell production of the RH neutrino accompanied by a charged lepton in the first stage, and the subsequent decay of N to ℓjj final states (with ℓ = e, µ, τ, depending on the initial flavor of N). The latter can go via a virtual intermediate W_R or a physical on-shell W, assuming that m_N > m_W. (For m_N < m_W, one can look for other interesting signals like displaced vertices at the LHC [69] or decay products of charged mesons [70-72], which we do not discuss here.) It is worth emphasizing that in the LRSM, the Majorana nature of the RH neutrinos inevitably leads to the LNV signature of ℓ^±ℓ^±jj [22], irrespective of the Dirac Yukawa couplings in Eq. (1.2). On the other hand, in the case of the SM seesaw, the strength of the dilepton signal crucially depends on the size of the heavy-light neutrino mixing parameters V_{ℓN} ∼ v h_ν m_N^{-1}, m_N being the corresponding eigenvalues of the RH neutrino mass matrix M_N in Eq. (1.1). In fact, the constraints of small neutrino masses from type-I seesaw have severe implications for the LNV ℓ^±ℓ^±jj signals [73-75]. Whether the dilepton signal is of same sign or mostly of opposite sign depends on how degenerate the RH neutrinos are and to what extent they satisfy the coherence condition [76]. (For instance, if the pattern of RH neutrino masses is hierarchical while maintaining large |V_{ℓN}|^2 [77] and satisfying the neutrino oscillation data, one can in principle have an ℓ^±ℓ^±jj signal [23-25, 78-81].) Nevertheless, the general kinematic strategy presented in this paper is equally applicable to both same- and opposite-sign dilepton signals. In this sense, its efficacy is not limited to the type-I seesaw model, but extends to many of its variants, such as the inverse [82, 83], linear [8, 84, 85] and generalized [17, 86, 87] seesaw models, which typically predict a dominant opposite-sign dilepton signal. In fact, in light of the recent observations from both CMS [88] and ATLAS [89] indicating a paucity of ℓ^±ℓ^±jj events, the ℓ^±ℓ^∓jj signal might turn out to be more relevant [17, 18, 90, 91] in the potential discovery of parity restoration in future LHC data. Therefore, we will not specify the lepton charge in our subsequent discussion, and the signal will be simply referred to as the ℓℓjj signal.
For m_{W_R} > m_N > m_W, there are four different sources in the LRSM for the origin of the ℓℓjj signal at the LHC [47, 92] (see Figure 1): the LL, RR, RL, and LR modes of Eqs. (1.3)-(1.6), labeled by the gauge bosons mediating the production and the decay of N, respectively. The first (LL) mode is the only one that arises in the SM seesaw, via s-channel exchange of the SM W boson (denoted by W_L to justify the name LL), whereas all four modes can arise in L-R models. These signals are uniquely suited to probe the Majorana and Dirac flavor structure of the neutrino seesaw and are therefore an important probe of the detailed nature of the seesaw mechanism. To this end, it is important to be able to tell the different modes apart. The main point of this paper is to show that determining the kinematic endpoints of different invariant mass distributions, e.g., m_ℓℓ, m_jj, etc., provides a unique way to distinguish these scenarios, irrespective of the dynamical details. Note that, given the possible scenarios listed above, the invariant mass variables involving a single lepton suffer from a combinatorial ambiguity, because the ordering of the two leptons is unknown. A special prescription is adopted for those variables. We then provide a systematic way of determining the kinematic endpoints of the modified variables resulting from such a prescription. Various ways of distinguishing the underlying new physics scenarios by making use of sharp features of kinematic variables have been studied in several recent works, e.g., in [94-97] in the context of dark matter stabilization symmetries. In particular, kinematic endpoints are robust against the detailed dynamics of the underlying physics such as spin correlations, i.e., only specific and extreme kinematic configurations are relevant to the kinematic endpoints, without the need to know the full dynamical details of the process. Moreover, they are typically protected from event selection cuts unless such cuts are customized to reject the events corresponding to the kinematic endpoints. These observations motivate us to utilize the endpoints of various invariant mass variables to distinguish between the seesaw signals (1.3)-(1.6). We emphasize that different scenarios typically give rise to different event topologies, resulting in kinematic endpoints with distinctive dependences on the underlying mass parameters. The identification of specific relationships among them will enable us to uncover the relevant model parameter space and may eventually lead to a measurement of the masses of the involved heavy particles. Hence, we argue that this method is more robust than dynamical variables like spin or angular correlations, proposed earlier to distinguish the seesaw signals [47, 98].
The rest of the paper is organized as follows. We first define our notations to be used in subsequent sections, followed by a discussion about the general strategy, in Section 2. We then give detailed derivations for the kinematic endpoints of various invariant mass variables case-by-case in Section 3. In Section 4, we discuss various ways for the topology disambiguation and mass measurements of the new particles. In Section 5, we illustrate the utility of the endpoint technique for the L-R seesaw signals by performing a Monte Carlo simulation including detector effects. In Section 6, we discuss the L-R seesaw phase diagram for collider studies and the future prospects of probing the LRSM parameter space at the √ s = 14 TeV LHC, as well as its complementarity with other low-energy probes. We also derive the first LRSM sensitivity contours for the planned √ s = 100 TeV pp collider. Our conclusions are given in Section 7. In Appendix A, we illustrate some invariant mass distributions for the LRSM.
Notations and general strategy
To accommodate all possibilities for an ℓℓjj final state in a model-independent manner, we generalize the scenarios introduced in Eqs. (1.3)-(1.6) according to their decay topologies, in conjunction with symbolic notations. We begin with a three-step cascade decay sequence of a heavy resonance C,

C → ℓ_n B, B → ℓ_f A, A → jj,   (2.1)

where ℓ_n and ℓ_f are correspondingly identified as the "near" and "far" lepton with respect to the particle C, to specify their relative location. The associated event topology is explicitly diagrammed in Figure 2(i), being henceforth denoted as Case (i). If particle A is heavier than particle B, then the latter directly decays into ℓ_f jj via a 3-body decay, as shown in Figure 2(ii), which is denoted as Case (ii). In an analogous manner, one can imagine the situation where particle B is heavier than particle C, so that the latter decays into two leptons and particle A via a 3-body decay. The relevant diagram is shown in Figure 2(iii) and denoted as Case (iii). Another possibility is the situation where particle C directly decays into two leptons and two jets via a 4-body decay, as shown in Figure 2(iv) and labelled as Case (iv). Finally, we consider the situations where particle C is off-shell (denoted as C*) for Cases (i) through (iv); their counterparts are respectively exhibited in Figures 2(v) to 2(viii) and labelled as Cases (v) to (viii). Note that Case (viii) is a very unlikely scenario, as C is always assumed to be heavier than the ℓℓjj system, so we do not consider it any further.

Table 1. Relations of the event topologies (i)-(vii) shown in Figure 2 with the relevant kinematic regions labelled as R_1-R_7 (columns: Topology, Region(s), Model Scenario(s)), as well as their mapping onto the possible scenarios in the LRSM. The subregions in parentheses, e.g., R_1(1) and R_1(2), correspond to the same event topology but with different kinematic endpoints, as explained later in Section 3.

Provided with the L-R seesaw models described in the previous section, one can easily correlate these event topologies with the various scenarios arising therein by identifying N as particle B and the various combinations of gauge bosons (W_L, W_L), (W_R, W_R), (W_R, W_L), and (W_L, W_R) as particles (C, A), respectively, depending on the underlying scenario. To accommodate all possible mass hierarchies, we introduce a subscript h (l) meaning that N is heavier (lighter) than the subscripted gauge boson; e.g., R_l L_h means m_{W_R} > m_N > m_W. For LL, RR, RL, and LR, each letter accepts either an l or an h subscript, so that (naively) 16 scenarios would be possible. However, the subscripts in four scenarios, namely L_l L_h, L_h L_l, R_l R_h, and R_h R_l, imply a contradictory mass spectrum, i.e., the same gauge boson (either W_L or W_R) cannot be simultaneously heavier and lighter than N. In addition, we have only assumed m_{W_R} > m_W, as required to satisfy the experimental constraints from direct searches at the LHC [88, 89], as well as the indirect constraints from the K_L − K_S mass difference [99-101]. Therefore, the scenarios implying m_W > m_{W_R}, i.e., R_h L_l and L_l R_h, are not allowed. As a result, only 10 different combinations are possible. In principle, the particle identified as C can be forced to be on-shell (assuming that it is within the kinematic reach of the collider experiment), so that all 10 possibilities can be assigned to the first four event topologies, as listed in Table 1.
Depending on the underlying model details, particle C can also be off-shell, if m_C < m_B, or even N can be off-shell. The relevant identifications with the last three topologies are also tabulated in Table 1. Note that the scenarios listed in Eqs. (1.3)-(1.6) can be obtained readily by imposing the additional assumption m_{W_R} > m_N > m_W, as shown by the colored text in Table 1. A detailed topology identification of these scenarios with respect to the kinematic regions R_1 to R_7 will be discussed in Sections 4 and 5.
With the final state ℓ_n ℓ_f jj, one can come up with eight non-trivial invariant mass variables, namely, m_ℓℓ, m_{ℓ_n j}, m_{ℓ_f j}, m_jj, m_{ℓℓj}, m_{ℓ_n jj}, m_{ℓ_f jj}, and m_{ℓℓjj}, where we omit the subscripts on the leptons for the variables involving both leptons. We remark that the two jets are emitted from the same particle in all eight cases, and thus we do not distinguish one from the other. An immediate experimental challenge is that one does not know a priori which lepton comes first with respect to the C particle, although we theoretically label them as ℓ_n and ℓ_f. This innate combinatorial issue becomes a major hurdle particularly in forming invariant mass variables involving a single lepton. For example, m_{ℓ_n j} and m_{ℓ_f j} are not valid experimental observables, again because ℓ_n and ℓ_f are not discernible event-by-event.
Given an invariant mass variable having such a combinatorial ambiguity, a possible prescription to resolve it is to evaluate both invariant mass values for a fixed set of jets and divide them into the bigger and the smaller one, labelled by the superscripts ">" and "<", respectively. Taking the example of the invariant masses formed by a lepton and a jet, we compute m_{ℓ_n j} and m_{ℓ_f j} for a given j and assign the bigger (smaller) one to m^>_{ℓj} (m^<_{ℓj}). A similar rule is applied to the invariant mass variables constructed from a lepton and two jets, yielding m^>_{ℓjj} and m^<_{ℓjj}. With this prescription, we end up with the following eight experimentally valid observables: m_ℓℓ, m_jj, m^>_{ℓj}, m^<_{ℓj}, m_{ℓℓj}, m^>_{ℓjj}, m^<_{ℓjj}, and m_{ℓℓjj} (Eqs. (2.2)-(2.4)). We basically measure the kinematic endpoints of these eight variables to identify the underlying decay topology by examining their interrelationships and kinematic features, which are the main subjects of the next section.
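A sketch of this ordering prescription acting on per-event four-vectors (plain numpy; the event content is hypothetical):

import numpy as np

def inv_mass(*p4s):
    # Invariant mass of a sum of four-vectors given as (E, px, py, pz).
    tot = np.sum(p4s, axis=0)
    return np.sqrt(max(tot[0]**2 - tot[1]**2 - tot[2]**2 - tot[3]**2, 0.0))

def ordered_lj_masses(l1, l2, j):
    # Return (m>_lj, m<_lj) for one jet, resolving the near/far ambiguity.
    m1, m2 = inv_mass(l1, j), inv_mass(l2, j)
    return max(m1, m2), min(m1, m2)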
Derivation of kinematic endpoints
In this section we derive the analytic expressions for the kinematic endpoints of the invariant mass variables listed in Eqs. (2.2)-(2.4). We first elaborate the detailed steps in obtaining the final expressions when particle C is on-shell, and later we briefly mention how the relevant results can be applied to the case of off-shell C. Our main focus is on the kinematic endpoints that are insensitive to the details of the relevant decays, and thus we do not hypothesize any specific matrix element, i.e., we deal only with the phase-space structure of the associated decay kinematics.

Figure 3. The event topology for Case (i) (left panel) and the relevant kinematics described in the rest frame of particle B (right panel). Here, α is the polar angle of ℓ_f with respect to the direction of ℓ_n (or equivalently, particle C), while θ is the polar angle of j with respect to the direction of B in the rest frame of particle A. β and φ are azimuthal angles for ℓ_n and j around the horizontal axis defined by ℓ_f.
Case (i): m_C > m_B > m_A

This scenario can be represented by a three-step on-shell cascade decay of a heavy resonance C, as shown in Figure 3(a). We emphasize that it is rather convenient to describe the associated kinematic configuration in the rest frame of particle B, as shown in Figure 3(b). Naively, the total number of degrees of freedom is four, but one angle, β, can be dropped by taking into account the azimuthal symmetry of the system. The mass hierarchy in this case determines the baseline inequalities defining the region of interest:

0 < R_AB < 1 and 0 < R_BC < 1,   (3.1)

where R_ij denotes the mass ratio between the massive states i and j: R_ij ≡ m_i^2/m_j^2.
2-body invariant mass variables
We begin with the invariant mass of the two leptons, m_ℓℓ, for which the kinematic endpoint is well known:

(m_ℓℓ^max)^2 = m_C^2 (1 − R_BC)(1 − R_AB).   (3.2)

Although the angle θ is defined with respect to one jet of the two, one could develop a parallel argument with respect to the other jet. Since θ is a quantity measured in the rest frame of particle A, exactly the same argument goes through with cos θ → − cos θ. However, this does not provide additional information, due to the fact that cos θ (− cos θ) spans +1 (−1) to −1 (+1) as θ increases; in turn, any similar prescription applied to the jet-induced combinatorics for a fixed set of leptons does not yield any further independent information.
Trivially, m_jj^2 is given as a resonance at m_A^2 (equation (3.3)). In order to derive the expressions for m^{>,max}_{ℓj} and m^{<,max}_{ℓj}, we first rewrite the two relevant invariant masses in terms of the angular variables introduced in Figure 3(b) (equations (3.4)-(3.7)), with η, x_m, and z_m defined accordingly. Note that the invariant mass variables x and z in Eqs. (3.4) and (3.7) are normalized by m_C^2 for later convenience.
Since the angular variables α and φ are irrelevant to the variable x, we first maximize the variable z over them, ending up with an expression for z as a function of cos θ only, which is nothing but a straight line in cos θ with a negative slope. To obtain the maxima of m^>_{ℓj} and m^<_{ℓj}, we compare Eqs. (3.4) and (3.7) while varying cos θ. Since x|_{cos θ=1} = 0 while z|_{cos θ=1} ≠ 0, two topologically different cases arise, as shown in Figure 4. Each line describes the maximum x or z value for a given cos θ. For x_m ≥ z_m (left panel), one can identify which invariant mass is greater or smaller for any given event, i.e., for a fixed cos θ. Such a comparison can be conducted for all values of cos θ, and the trajectories for the greater and the smaller are explicitly delineated by the blue-solid and red-dashed arrows, respectively. The kinematic endpoint can be obtained simply by reading off the maximum value of each line. For the current example (left panel), they are x_m and z_m, and vice versa for x_m < z_m (right panel). Thus, the endpoints of the invariant masses can be formulated as in Eq. (3.8), with R_AB ∈ (0, 1]. We denote the first and the second regions in Eq. (3.8) as R_1(1) and R_1(2), respectively; their link to specific model scenarios is tabulated in Table 1.
3-body invariant mass variables
For the kinematic endpoint of m_{ℓℓj}^2, we simply use the relevant expression from Ref. [103] (equation (3.9)). Considering the other two variables, m^>_{ℓjj} and m^<_{ℓjj}, we remind the reader that the two jets originate from a common mother particle A, and thus these variables are equivalent to m^>_{ℓA} and m^<_{ℓA}, i.e., they reduce to 2-body invariant mass variables involving a massive visible particle. To obtain their analytic expressions, we first find the formulae for m_{ℓ_f A}^2 and m_{ℓ_n A}^2. The former is nothing but the resonance of B, i.e., m_{ℓ_f A}^2 = m_C^2 R_BC. The maximum of the latter can be easily obtained by going through the steps leading to (m_ℓℓ^max)^2 [cf. Eq. (3.2)]. Since m_{ℓ_f A}^2 is fixed regardless of the value of m_{ℓ_n A}^2, the kinematic endpoints for the variables of interest follow as in Eq. (3.10), where the first and the second regions correspond to R_1(1) and R_1(2), respectively.
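As a numerical cross-check of the Case (i) structure, the resonance position and the well-known two-step dilepton edge can be coded directly; the remaining endpoints should be taken from Eqs. (3.8)-(3.10). A Python sketch with illustrative masses:

import numpy as np

def case_i_selected_endpoints(mC, mB, mA):
    # Selected Case (i) endpoints for C -> l B, B -> l A, A -> jj.
    R_BC, R_AB = (mB / mC) ** 2, (mA / mB) ** 2
    m_ll_max = mC * np.sqrt((1.0 - R_BC) * (1.0 - R_AB))  # dilepton edge, Eq. (3.2)
    m_jj_peak = mA                                        # jj resonance at m_A
    return m_ll_max, m_jj_peak

print(case_i_selected_endpoints(mC=3000.0, mB=1000.0, mA=80.4))  # GeV, illustrative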
4-body invariant mass variable
The 4-body invariant mass variable m_ℓℓjj is trivially given by the mass of the resonance C (equation (3.11)).

Figure 5. The event topology of Case (ii) (left panel) and the relevant kinematics described in the rest frame of particle B (right panel). Here α is the polar angle of ℓ_n with respect to the direction of the composite system of ℓ_f and j, while θ is the polar angle of particle ℓ_f with respect to the direction of the composite system in the rest frame of particle B. β and φ are azimuthal angles for particles ℓ_n and ℓ_f about the axis extended by j and the composite system.
Applying the formulae to Case (v)

Once particle C is off-shell, as in Case (v) shown in Figure 2, one can interpret its mass as being given by the center of mass energy √ŝ. Therefore, it is possible to reuse all the analytic expressions derived thus far by replacing m_C^2 with ŝ_max = s. The chance of reaching ŝ_max is, however, so small that any associated endpoints depending on s are not typically saturated. Therefore, only the variables having no dependence on s are reliable: m_jj^2, one of the m_{ℓj}^2's, and one of the m_{ℓjj}^2's. Since s is assumed to be much larger than the mass of particle B, R_BC → 0, so that the corresponding region (denoted by R_5 in Table 1) is defined by

R_BC = 0 and 0 < R_AB < 1.   (3.12)

Case (ii): m_A ≥ m_B

This scenario can be represented by a two-step cascade decay of a heavy resonance C, where the second step proceeds via a 3-body decay. The corresponding diagram and kinematic configuration are delineated in Figure 5. It turns out to be convenient to carry out the analysis in the rest frame of particle B, as shown in Figure 5(b). Just like Case (i), the angle β is irrelevant to the description of the system due to the azimuthal symmetry. The mass hierarchy for this case defines the baseline inequalities determining the corresponding region:

R_AB ≥ 1 and 0 < R_BC < 1.   (3.13)
2-body invariant mass variables
Since particle B decays into three final states via an off-shell A, m_jj^2 is given by a distribution, not a resonance as in Case (i); the analytic expression for its endpoint is given in Eq. (3.14). The kinematic endpoint for m_ℓℓ^2 can be found in Ref. [104] (equation (3.15)). When it comes to the analytic expressions for m^{>,max}_{ℓj} and m^{<,max}_{ℓj}, we go through an argument similar to that in Case (i).
We observe that x_m is simply given by R_BC. Unlike the previous case, cos θ is irrelevant to x, so that one can maximize z over cos θ as well as over φ and α, yielding z = z_m, which is nothing but a horizontal line. Clearly, only two cases are available, and the respective expressions are given in Eq. (3.16); we denote the first and the second regions as R_2(1) and R_2(2), respectively.
3-body invariant mass variables
To obtain the expression for (m_{ℓℓj}^max)^2, the result in Ref. [104] can be reused (equation (3.17)). For the other two variables, we again follow an argument similar to the previous case (equation (3.18)).

Applying the formulae to Case (vi)

As in Case (i), we can simply reuse all the expressions derived thus far, along with the replacement of m_C^2 by s for an off-shell C. Again, the chance of having ŝ_max = s is so small that the kinematic endpoints depending on s are not typically saturated. Therefore, only the variables having no dependence on s are reliable: m_jj^2, one of the m_{ℓj}^2's, and (m^<_{ℓjj})^2. As in the previous case, s is assumed to be much larger than the mass of particle B, and thus the inequalities defining the corresponding region (denoted by R_6) are given by

R_BC = 0 and R_AB ≥ 1.   (3.19)

Figure 6. The event topology of Case (iii) (left panel) and the relevant kinematics described in the rest frame of particle C (right panel). α is the polar angle of ℓ_n with respect to the direction of the composite system of ℓ_n and ℓ_f, while θ is the polar angle of one jet with respect to the direction of this composite system in the rest frame of particle C. β and φ are azimuthal angles for ℓ_n and that jet about the axis extended by the other jet and the composite system.
Case (iii): m_B ≥ m_C
This scenario can be represented by a two-step cascade decay of a heavy resonance C, where the first step proceeds via a 3-body decay. The corresponding diagram and kinematic configuration are delineated in Figure 6. It turns out to be convenient to carry out the analysis in the rest frame of particle C, as shown in Figure 6(b). Just like the previous two cases, the azimuthal angle β is irrelevant to the description of the system due to the azimuthal symmetry. Though particle B is heavy enough to be integrated out, R_AC should still be less than 1, i.e., R_AC = R_AB R_BC < 1. Therefore, the baseline inequalities defining the corresponding region (denoted by R_3) are

R_BC ≥ 1 and R_AC = R_AB R_BC < 1.   (3.20)
2-body invariant mass variables
Since particle A undergoes a 2-body decay, m_jj^2 is simply given by the resonance of particle A, i.e., m_jj^2 = m_C^2 R_AC. Due to the 3-body decay of C, the endpoint of m_ℓℓ^2 is given in Eq. (3.21). For the purpose of deriving the analytic expressions of the kinematic endpoints for m^>_{ℓj} and m^<_{ℓj}, we first express the momenta of all visible particles in the rest frame of particle C (equations (3.22)-(3.24)); for example, the jet momentum reads

p_j^μ = (m_A/2) (cosh η_2 + sinh η_2 cos θ, sinh η_2 + cosh η_2 cos θ, sin θ cos φ, sin θ sin φ),   (3.24)

where η_1 and η_2 are defined in Eq. (3.25). With these definitions, one can prove a set of useful relations (equation (3.26)), where λ denotes the kinematic triangular function defined as

λ(x, y, z) ≡ x^2 + y^2 + z^2 − 2(xy + yz + zx).   (3.27)

The symmetry under ℓ_n ↔ ℓ_f enables us to write Eq. (3.28), which is maximized at cos θ = −1 and y = 0 (equation (3.29)). From this constraint, we find that m^>_{ℓj} can be maximized when either x or z vanishes, and m^<_{ℓj} can be maximized when x = z. The resulting endpoints are given in Eq. (3.30).
3-body invariant mass variables
To obtain the kinematic endpoint for m_{ℓℓj}^2, the relevant result in Ref. [105] can be reused (equations (3.31) and (3.32)). Note that the second expression corresponds to the kinematic configuration where the two visible particles move in the same direction with equal energy while particle A moves in the opposite direction.
Applying the formulae to Case (vii)

As in the previous two cases, we again reuse all the expressions derived thus far with the simple replacement of m_C^2 by s. In this case, the only variable having no dependence on s is m_jj; all other variables are unreliable, as the chance of having ŝ_max = s is so small that the kinematic endpoints depending on s are not typically saturated. With the assumption that s is much larger than any of the on-shell particle masses, we have R_AC → 0. As R_BC ≥ 1, R_AB should be close to 0, giving the inequalities defining the corresponding region (denoted by R_7) as follows:

R_BC ≥ 1 and R_AB = 0.   (3.33)
Case (iv): m_A, m_B ≥ m_C
This scenario can be represented by a single 4-body decay of a heavy resonance C. A convenient frame for the relevant analysis is the rest frame of particle C. The mass hierarchy of this case leads to the following inequalities defining the corresponding region (denoted by R_4):

R_BC ≥ 1 and R_AC = R_AB R_BC ≥ 1.   (3.34)
2-body invariant mass variables
Due to the 4-body decay nature of this process, m_jj^2 and m_ℓℓ^2 have identical distributions, whose kinematic endpoint is given by m_C^2 (equation (3.35)). For the maximum of m^>_{ℓj}, we imagine the situation where one of the two leptons and one of the two jets are so soft in the rest frame of particle C that all the mass-energy of C is split between the other lepton and the other jet. Similarly, for the maximum of m^<_{ℓj}, we imagine the situation where one of the jets is extremely soft, the two leptons are emitted in the same direction, and the other jet is emitted in the opposite direction, so that m_{ℓ_n j}^2 = m_{ℓ_f j}^2 and m_ℓℓ^2 = 0. The resulting endpoints are given in Eq. (3.36).
3-body invariant mass variables
The expression for the m_{ℓℓj}^2 kinematic endpoint is likewise simply given by m_C^2 (equation (3.37)).
Summary
Let us collect the formulae we have derived so far for the kinematic endpoints and rearrange them with respect to the invariant mass variables in Eqs. (2.2)-(2.4). To connect the formulae with specific LRSM scenarios, we again refer the reader to Table 1.
where the expressions in the square brackets indicate that the associated distributions appear as resonance peaks. Since the mass parameter m_C^2 merely determines the overall scale, the relevant parameter space can be divided in the (R_AB, R_BC) plane, as illustrated in Figure 7.

Figure 7. Division of the (R_AB, R_BC) parameter space into the regions R_1-R_7 defined in Table 1. Additional subscripts in parentheses denote subdivisions of the associated topology.

As introduced earlier, R_1-R_7 correspond to the different decay topologies diagrammed in Figure 2 and tabulated in Table 1 [see also the region definitions given above]. Note that the regions R_5, R_6, and R_7 are represented by one-dimensional strips, unlike the others. This is because the basic assumption that the center of mass energy s is much larger than any of the masses of the on-shell particles forces either R_AB or R_BC to vanish [cf. Eqs. (3.12), (3.19), and (3.33)].
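The region boundaries collected above can be wrapped into a small classifier over the mass-ratio plane; a Python sketch (the strip and boundary conventions are simplified for illustration):

def region(R_AB, R_BC, tol=1e-6):
    # Classify a point of the (R_AB, R_BC) plane into R1-R7 (sketch).
    if R_BC < tol:                         # off-shell C strips
        return "R5" if R_AB < 1.0 else "R6"
    if R_BC < 1.0:
        return "R1" if R_AB < 1.0 else "R2"
    if R_AB < tol:                         # off-shell C with B heavier than C
        return "R7"
    return "R3" if R_AB * R_BC < 1.0 else "R4"

print(region(0.5, 0.3), region(2.0, 0.5), region(0.2, 3.0))  # R1 R2 R3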
Topology disambiguation and mass measurement
In the busy environment of a real-life hadron collider experiment, leptons are much better reconstructed than jets. Thus, from the precision measurement point of view, the best among the eight invariant mass variables listed in Eqs. (2.2)-(2.4) is m_ℓℓ. It therefore follows that, as a minimal choice, the dilepton invariant mass variable can be used for topology disambiguation. For example, Ref. [47] examined the shape of the dilepton invariant mass distribution for the decay topologies (1.3)-(1.5). This was based on the observation that different decay topologies involve different spin correlations [98], which affect the m_ℓℓ spectrum. However, the typical challenges in a shape analysis are that (relatively) large statistics is required and, more importantly, that the relevant shape can be severely distorted by the detailed dynamics and by the hard cuts essential to suppress the relevant SM background. On the contrary, the kinematic endpoint suffers neither from the details of the associated decay nor from the cuts used to suppress backgrounds, because it depends only on specific kinematic configurations. Also, the need for only "local" information near the endpoint typically demands less statistics than a shape analysis. In any case, we lose the useful information encoded in the shape of the distribution, and the measurement of the m_ℓℓ endpoint by itself does not enable us to distinguish a certain topology from the others. Hence, we need to supplement it with other observables.
Constraints and correlation matrix
Following the basic idea of involving as few jets as possible in the observables to be used, the next best ones are the invariant mass variables containing only a single jet, i.e., m_{ℓj}^>, m_{ℓj}^<, and m_{ℓℓj}. However, together with m_ℓℓ, this makes the number of observables greater than the number of unknown mass parameters (m_C, m_B, and m_A); that is, the system is now over-constrained. 12 We find several characteristic sum rules/constraints among the four invariant mass variables mentioned above, which are listed below in terms of the notation introduced earlier. C6: Only c is a well-defined endpoint for R_5, R_6, and R_7.
C7: For R_{1(2)} and R_5, m_{ℓj}^> has a "foot" structure, i.e., a cusp develops in the middle of the distribution. The cusp position should coincide with c.
We also provide a correlation matrix between the regions and the sum rules/constraints given above in Table 2. Here the symbol "√" implies that the relevant sum rule/constraint should be obeyed for the region of interest. We see that, in principle, it is possible to discern all regions but R_6 and R_7 using C1-C7.
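As a schematic illustration of how this correlation-matrix logic can be automated, the sketch below encodes each region's expected constraint set and returns the regions compatible with the constraints observed to hold. The particular constraint assignments are hypothetical placeholders; the actual assignments follow Table 2 of the original text.

```python
# Each region is associated with the subset of constraints it must satisfy;
# a candidate region survives only if all of its required constraints hold.
# The sets below are illustrative stand-ins, not the paper's Table 2.
EXPECTED = {
    "R1(1)": {"C1", "C3"},
    "R1(2)": {"C1", "C7"},
    "R5":    {"C6", "C7"},
    "R6":    {"C6"},
    "R7":    {"C6"},
}

def compatible_regions(observed):
    """Return regions whose required constraints are all satisfied by the data."""
    return [r for r, req in EXPECTED.items() if req <= observed]

# If only C6 is observed to hold, R6 and R7 remain degenerate, mirroring the
# statement that C1-C7 alone cannot separate R6 from R7:
print(compatible_regions({"C6"}))  # ['R6', 'R7']
```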
Of course, more sum rules/constraints can be utilized once we allow invariant mass variables involving more than one jet, which enables us to distinguish even R_6 and R_7. Instead of exhausting all possibilities, we enumerate some of them below. C8: d appears as a resonance peak for R_{1(1)}, R_{1(2)}, R_3, R_5, and R_7.
11 Despite such potential difficulties, the invariant mass shape can in some cases be sufficient to completely determine the new particle mass spectrum, including the overall mass scale [96].
12 For some specific cases of the LRSM, either A or C, or both, can be the SM W boson. One would then expect the number of unknown mass parameters to be reduced to two or one. However, since we do not know which underlying scenario governs the channel of our current interest, we generically treat the masses of A, B, and C as unknown parameters in our discussion.
Table 2. Correlation matrix between regions and sum rules/constraints. "√" implies that the relevant sum rule/constraint should be satisfied for the region of interest. C1-C7 involve only the invariant mass variables having 0 or 1 jet, while the others involve ≥ 2 jets.
It may be noted here that Refs. [106,107] studied a way of distinguishing event topologies (i) and (ii) in Figure 2 by the existence of resonance peaks, which is relevant to C8 and C9.
Mass measurement of new particles
Once the different regions are determined with the methods explicated in the previous section, the mass measurement of the on-shell particles, or equivalently the measurement of the R_ij's together with the mass of the first on-shell particle, can be readily performed in terms of the kinematic endpoints of various invariant mass distributions. We first provide the inverse formulae using only a, b, and c for all regions but R_5 and R_7.
For R_5 and R_7, the relevant mass parameters can be extracted by using other observables involving more than one jet. We should remark that Eqs. (4.1)-(4.9) are not the unique ways of writing the inverse formulae. For most of the regions, the interrelationships among the eight available kinematic endpoints are over-constrained, because there are fewer unknown mass parameters than employed variables. Although the inverse formulae listed above are expressed with a certain set of kinematic endpoints, others can be utilized as well for cross-checks.
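Since the closed-form inverse formulae of Eqs. (4.1)-(4.9) are not reproduced here, the sketch below shows a generic numerical alternative: treat the endpoint formulae as a forward model and fit the mass spectrum to the measured endpoints. The `predict_endpoints` expressions and the numerical values are illustrative stand-ins, not the paper's formulae.

```python
import numpy as np
from scipy.optimize import least_squares

def predict_endpoints(masses):
    """Placeholder forward model: endpoint values as functions of (mC, mB, mA).
    In practice this implements the region-specific formulae of Section 3;
    the expressions below are illustrative cascade-like stand-ins only."""
    mC, mB, mA = masses
    a = mC * np.sqrt((1 - (mB / mC) ** 2) * (1 - (mA / mB) ** 2))
    b = mC * (1 - (mB / mC) ** 2)
    c = mC * (1 - (mA / mB) ** 2)
    return np.array([a, b, c])

measured = np.array([800.0, 900.0, 700.0])  # hypothetical endpoints in GeV

fit = least_squares(
    lambda m: predict_endpoints(m) - measured,
    x0=np.array([2000.0, 1000.0, 500.0]),   # starting guess, mC > mB > mA
    bounds=([0, 0, 0], [np.inf, np.inf, np.inf]),
)
print(fit.x)  # best-fit (mC, mB, mA); remaining endpoints serve as cross-checks
```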
Application to the L-R seesaw
In this section, we apply the general kinematic endpoint technique discussed in the preceding sections to the specific case of the L-R seesaw. Our aim is to illustrate the distinction between the invariant mass distributions for the scenarios mentioned earlier. In terms of the model scenarios listed in Table 1, they correspond to L_h L_h, R_l R_l, and R_l L_h, respectively. One can therefore relate them to the event topologies (v), (ii), and (i), and hence to regions R_5, R_2, and R_1, respectively, in Figure 7. Parton-level events are generated with MadGraph5_aMC@NLO [108], and the parton distribution functions (PDFs) of the protons are evaluated with the default NNPDF2.3 set [109]. To describe parton showering and hadronization, the events are passed to Pythia6.4 [110]. The output is subsequently fed into Delphes3 [111] interfaced with FastJet [112] for describing detector effects and finding jets. All the simulations are conducted for a √s = 14 TeV pp collider at leading order. In the first two benchmark scenarios, jets are formed using the anti-k_t algorithm [113] with a radius parameter R = 0.4. For the last benchmark scenario, we point out that the on-shell W bosons tend to be highly boosted due to the large mass gap between N and W. The large boost of the W leads to a high collimation of the jets from its decay, thus requiring jet substructure techniques to resolve the subjets in each two-prong "W-jet". To this end, jets are initially clustered by the Cambridge-Aachen algorithm [114,115] with a jet radius R = 1.2, and the resulting "fat" jets are further processed with the Mass Drop Tagger method [116].
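A minimal sketch of the two clustering configurations, assuming the fastjet Python bindings; the `event_particles` list is a hypothetical stand-in for the actual event record.

```python
import fastjet

# Hypothetical event: a list of fastjet.PseudoJet(px, py, pz, E) objects
# built from the generator output (only one placeholder particle shown).
event_particles = [fastjet.PseudoJet(10.2, -3.1, 25.0, 27.2)]

# BS1/BS2-style clustering: anti-kt with R = 0.4.
akt = fastjet.JetDefinition(fastjet.antikt_algorithm, 0.4)
jets = fastjet.sorted_by_pt(
    fastjet.ClusterSequence(event_particles, akt).inclusive_jets()
)

# BS3-style clustering: Cambridge-Aachen fat jets (R = 1.2) for boosted,
# two-prong W-jets; a substructure step (e.g., mass-drop tagging) would
# then act on each fat jet to resolve the subjets.
ca = fastjet.JetDefinition(fastjet.cambridge_algorithm, 1.2)
fat_jets = fastjet.sorted_by_pt(
    fastjet.ClusterSequence(event_particles, ca).inclusive_jets()
)
```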
We emphasize that the main purpose of this simulation study is to see whether or not the proposed endpoint strategies are viable in the presence of detector effects. In this sense, including any potential backgrounds is beyond the scope of this paper, so we simply consider the signal component. Also, the precise measurement of kinematic endpoints typically requires large statistics; therefore, an arbitrarily large number of signal events are generated to minimize any statistical fluctuation. Finally, event selection is executed with a minimal set of selection criteria for simplicity. Of course, in the presence of backgrounds, one would impose a more stringent set of cuts to suppress them, but considering the fact that kinematic endpoints are typically least affected by cuts, we expect that the endpoint behaviors in our scheme will be similar to those under more sophisticated selection criteria.
Particle acceptance/isolation and detector geometry are handled according to the default parametrization in Delphes3 [111]. We then choose the events having exactly two leptons (of either e or µ flavor) and ≥ 2 jets in the final state. For the second scenario (BS2), the two hardest jets are used to construct the relevant invariant masses, while for the other two scenarios (BS1 and BS3), a W mass window is employed because the relevant jets are emitted from an on-shell W boson. More specifically, we evaluate the dijet invariant mass for all possible combinations among the observed jets and choose the two jets satisfying a fairly tight condition of |m_jj − m_W| < 12 GeV; if multiple combinations qualify, we simply take the one with the smallest difference (see the sketch below). Although some events contain more than two reconstructed jets, we evaluate the jet-involving invariant mass values using the two jets identified in this way, so that every single event yields the same number of invariant mass values.
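The jet-pairing step can be summarized by a short sketch; the `invariant_mass_with` method is a hypothetical interface standing in for the underlying four-vector algebra.

```python
import itertools

M_W = 80.4  # GeV

def pick_w_jet_pair(jets, window=12.0):
    """Among all jet pairs, keep those with |m_jj - m_W| < window (GeV) and
    return the pair closest to the W mass; None if no pair qualifies.
    `jets` holds objects with a hypothetical .invariant_mass_with(other)."""
    best, best_diff = None, window
    for j1, j2 in itertools.combinations(jets, 2):
        diff = abs(j1.invariant_mass_with(j2) - M_W)
        if diff < best_diff:
            best, best_diff = (j1, j2), diff
    return best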
We show various 2- and 3-body invariant mass distributions for the three benchmark scenarios in Figure 8. 13 Top, middle, and bottom panels correspond to BS1, BS2, and BS3, respectively. In each row of panels, the left panel shows the 2-body invariant mass distributions m_ℓℓ (red dashed), m_{ℓj}^> (blue dot-dashed), and m_{ℓj}^< (green solid), while the right panel shows the 3-body invariant mass distributions m_{ℓℓj} (red dashed), m_{ℓjj}^> (blue dot-dashed), and m_{ℓjj}^< (green solid). Black dashed lines indicate the relevant theory predictions for the kinematic endpoints derived in Section 3. Note that we provide unit-normalized distributions for illustration, since our focus is the structure of the kinematic endpoints; based on the analysis scheme described above, one can easily infer the corresponding relative weights. We observe that for BS1 and BS2 the invariant mass variables involving ≤ 1 jet, i.e., m_ℓℓ, m_{ℓj}^>, m_{ℓj}^<, and m_{ℓℓj}, are reasonably well-matched to the associated theoretical predictions in spite of detector effects such as jet energy resolution, smearing, and contamination from initial and final state radiation (ISR/FSR). 14 In contrast, once more jets are involved, i.e., for m_{ℓjj}^> and m_{ℓjj}^<, the kinematic endpoints are more smeared, so that identifying their correct positions would be rather challenging. When it comes to BS3, the situation becomes more challenging still: even for the 2-body invariant mass distributions, the endpoints either suffer from relatively large smearing (green and red histograms) or are unsaturated (blue histogram). In particular, the information on the subjets in the W-jet is typically less accurate than that of regular jets, thus rendering the distributions more smeared. Certainly, these effects can be mitigated by better jet energy measurement and ISR/FSR identification (see, for example, Ref. [117], which studies the impact of various detector effects upon the distributions of mass variables). Moreover, boosted jet techniques form a research field that is being actively investigated and developed. We therefore expect that locating the true kinematic endpoints will improve in the future, which will help in determining the new particle masses more accurately.
13 Dijet invariant mass distributions are not displayed in the figure, as they develop a trivial distribution, i.e., a resonance peak, for BS1 and BS3.
14 Note that for BS1 the kinematic endpoints for m_ℓℓ, m_{ℓj}^>, and m_{ℓℓj} are ill-defined because their values can be arbitrarily large, up to the kinematic limit determined by the available center of mass energy.
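For completeness, a crude sketch of how an endpoint position might be extracted from a smeared histogram is given below: fit a straight line to the falling edge and take its zero crossing. Real analyses use dedicated edge templates and systematic studies; this only illustrates the "local" nature of the endpoint measurement.

```python
import numpy as np

def estimate_endpoint(values, bins=100, tail_bins=8):
    """Crude endpoint estimator: fit a straight line to the last few populated
    bins of the distribution and return its x-intercept."""
    counts, edges = np.histogram(values, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    last = np.nonzero(counts)[0][-1]              # last populated bin
    sel = slice(max(last - tail_bins, 0), last + 1)
    slope, intercept = np.polyfit(centers[sel], counts[sel], 1)
    return -intercept / slope                     # x where the fitted line hits zero
```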
L-R seesaw phase diagram
In this section, we present some future prospects for probing the L-R seesaw parameter space using the different collider signals mentioned earlier. To be concrete, we focus only on the mass hierarchy m_{W_R} > m_N > m_W, which can be effectively probed at the LHC via the ℓℓjj final state. In this case, as mentioned in Eqs. (1.3)-(1.6), there are four classes of Feynman diagrams for the ℓℓjj signal (Figure 1) [47]. The LL channel is a clear probe of the seesaw matrix in both SM seesaw and L-R seesaw models. However, its effectiveness relies solely on the heavy-light neutrino mixing parameter |V_ℓN|^2, and is limited to heavy neutrino masses M_N only up to a few hundred GeV [23-25, 78-81]. Experimentally, the mass range M_N = 100-500 GeV has been explored at the √s = 8 TeV LHC for ℓ = e, µ [89,118], and direct upper limits on |V_ℓN|^2 of the order of 10^-2 to 10^-1 have been set. We note here that the current indirect limits from a global fit of electroweak precision data (EWPD), lepton flavor violation, and lepton universality constraints [119] are roughly one to two orders of magnitude stronger. For a bird's-eye view of other complementary limits and future sensitivities, see e.g., [34].
In the RR channel [22], the RH neutrino is produced on-shell through its gauge interaction with W_R [cf. Eq. (1.2)] and subsequently decays into a 3-body final state via an off-shell W_R. This diagram relies only on the gauge coupling g_R and is independent of V_ℓN; therefore, it gives the dominant contribution for small V_ℓN, which is of course the naive expectation in the "vanilla" type-I seesaw case. Using this channel, LHC exclusion limits have been derived in the (m_N, m_{W_R}) plane, and currently exclude m_{W_R} up to 3 TeV, assuming equality between the SU(2)_R and SU(2)_L gauge couplings, i.e., g_R = g_L [88,89]. 15 In general, the signal cross section scales by a factor of (g_R/g_L)^4, and hence the corresponding limit on m_{W_R} could be weaker if g_R < g_L. In any case, being independent of the Dirac neutrino Yukawa coupling, the RR process does not probe the complete seesaw matrix.
The RL and LR contributions, on the other hand, necessarily involve the heavy-light neutrino mixing. In fact, the RL diagram could give the dominant contribution to the ℓℓjj signal if the mixing |V_ℓN|^2 is not negligible and/or the W_R gauge boson is not too heavy [47]. There are two reasons for this dominance: (i) it leads to a production rate σ(pp → W_R → N ℓ±) that is independent of the mixing and suppressed only by (g_R/g_L)^4 (m_W/m_{W_R})^4 (as in the RR case), and can therefore dominate over the LL contribution, which depends on |V_ℓN|^2 g_L^4; (ii) the RH neutrino in this case has a 2-body decay, N → ℓ± W_L → ℓ± jj (as in the LL case), which is not phase-space suppressed, unlike the 3-body decay of N in the RR case. Hence, for a sizable range of the mixing and the RH gauge boson mass, the RL mode is expected to dominate the ℓℓjj signal at the LHC and could constitute a clear probe of the full seesaw matrix. There exist classes of L-R seesaw models where such large mixing parameters can be realized without much fine-tuning [40]. The remaining possibility, the LR contribution, is doubly suppressed by the heavy-light mixing as well as by phase space, and is hence always smaller than one of the other three contributions discussed above, so it is not relevant for experimental searches of the L-R seesaw.
15 Similar lower limits on m_{W_R} are also obtained from low-energy flavor-changing neutral current observables [102].
The regions of dominance of the different contributions discussed above are shown in Figure 9 by the various shaded regions (blue for LL, green for RL, and red for RR) as a function of the mixing parameter in the electron sector, for a fixed heavy neutrino mass m_N = 1 TeV and assuming g_R = g_L. The vertical (brown) solid line in Figure 9 shows the 95% CL direct limit on m_{W_R} from the √s = 8 TeV LHC [118], whereas the vertical dashed line shows the projected lower limit from the √s = 14 TeV LHC with 300 fb^-1 of integrated luminosity [41]. For comparison, we also show the current 90% CL upper limit on the mixing parameter from a recent global fit to the EWPD [119] (horizontal dotted line). All the signal cross sections for LL, RL, and RR used in Figure 9 are computed within this setup. For simplicity, we assume all the non-standard Higgs bosons in the LRSM to be heavier than W_R, so that the total width of W_R is governed mostly by the masses of W_R and N.
Figure 10. L-R seesaw phase diagram for the ℓℓjj signal at the LHC with |V_eN|^2 = 10^-6. The labels are the same as in Figure 9. In addition, we show the exclusion region due to LFV constraints from MEG (solid orange) and the future limit from MEG2 (dashed orange). The grey shaded region (upper left corner) corresponds to m_N > m_{W_R}, which cannot be probed by the ℓℓjj signal at the LHC but is accessible to low-energy searches.
The presence of Majorana neutrinos in the LRSM also gives several additional contributions to the low-energy LNV process of 0νββ [3,40,50-62]. In the limit of large m_{W_R}, the dominant contributions are due to LH current exchange, which gives an upper limit on the mixing parameter, independent of m_{W_R}, as shown by the solid magenta line in Figure 9. On the other hand, for very light RH gauge bosons, the purely RH current exchange diagram becomes dominant over the rest and puts a lower bound on m_{W_R} for a given m_N from the non-observation of 0νββ, irrespective of the value of the mixing parameter, as shown by the vertical portion of the solid magenta curve. Here we have used the 76Ge isotope and the corresponding combined 90% CL limit on the 0νββ half-life from GERDA phase-I, T^{0ν}_{1/2} > 3 × 10^25 yr [120], for illustration. For the nuclear matrix elements, we use the maximum values from a recent SRQRPA calculation [121], so as to obtain the minimum half-life predictions. The dashed magenta curve shows the future sensitivity of multi-ton-scale 0νββ experiments with the 76Ge isotope, such as the proposed Majorana+GERDA experiment with an ultimate limit of T^{0ν}_{1/2} > 10^28 yr [122]. The phase diagram shown in Figure 9 can easily be translated to the more familiar (m_{W_R}, m_N) parameter space, as shown in Figure 10. The current 95% CL exclusion contour from the √s = 8 TeV LHC [118] (LHC8) and the future sensitivity of the √s = 14 TeV LHC with 300 fb^-1 of luminosity (LHC14) [41] are also shown. Here we have fixed the mixing parameter |V_eN|^2 = 10^-6 for illustration. Due to the relatively small mixing, the LL channel is no longer relevant, and either the RL or the RR channel is dominant in the entire L-R seesaw parameter space of interest. In particular, for smaller m_N values, the RR channel becomes less efficient than the RL channel, due to the phase-space suppression factor of m_N^5/m_{W_R}^4 in the 3-body decay rate Γ(N → ℓjj). Increasing the value of the mixing parameter will further enlarge the RL dominance region. Thus, a combination of the RR and RL modes provides a better probe of the L-R seesaw than the RR mode alone. It is worth noting that the 0νββ searches provide a complementary probe of the L-R seesaw parameter space, especially in the m_N > m_{W_R} regime, which is kinematically inaccessible in the ℓℓjj channel at the LHC. We should also note that the 0νββ analogs of the LL, RL, and RR modes give rise to different angular distributions, which can in principle be measured in the proposed SuperNEMO experiment [123]. Similarly, one can use polarized beams at a linear collider to distinguish between the different contributions in the L-R seesaw [124]. These are complementary to the kinematic endpoint method proposed here for a hadron collider.
Another complementary low-energy probe of the LRSM is through LFV processes [45,56,62,66,67]. In particular, the µ → eγ decay rate receives an additional contribution from the purely RH current and is currently constrained to Br(µ → eγ) < 5.7 × 10^-13 at 90% CL by the MEG experiment [125]. Assuming maximal mixing between the RH electron and muon sectors of the heavy neutrino mass matrix, we translate this limit into an exclusion region in the (m_{W_R}, m_N) parameter space, shown as the shaded region above the solid orange curve in Figure 10. The projected limit of Br(µ → eγ) < 10^-14 from the future upgrade of the MEG experiment [126] could probe most of the remaining parameter space shown in Figure 10. This clearly illustrates the importance of a synergistic approach at both the energy and intensity frontiers in testing the L-R seesaw paradigm in the future.
Since the high energy physics community has started seriously considering a next-generation √s = 100 TeV hadron collider [127], we are motivated to present the sensitivity reach of such a machine in the context of the L-R seesaw. As before, we generate the signal cross sections at the parton level using MadGraph5_aMC@NLO [108] with the default NNPDF2.3 [109] PDF sets, applying a set of conservative selection cuts in the event simulation. Our results for the projected 2σ exclusion region with 1 ab^-1 of integrated luminosity at √s = 100 TeV are shown in Figure 11 (blue dotted curve), which suggests that one could probe RH gauge boson masses up to 32 TeV, in good agreement with previous studies [48]. This sensitivity reach extends to regions of the LRSM parameter space not accessible even to future low-energy searches at the intensity frontier, as demonstrated by a comparison with the MEG-2 projected limit (orange dashed curve) in Figure 11. Moreover, one can access even smaller mixing parameters at the collider through the RL mode, as illustrated here for |V_ℓN|^2 = 10^-8, which still gives a dominant RL contribution for relatively small m_N values. Note that the shape of our exclusion contour for the √s = 100 TeV case does not exactly match the √s = 8 or 14 TeV contours in the low-m_N region, mainly because we have not applied any specialized selection cuts on the invariant masses and have taken the tagging efficiencies to be 100%, which is usually not the case in a realistic hadron collider environment. Nevertheless, our parton-level estimates should serve as a rough guideline for more sophisticated studies in the future.
Figure 11. Sensitivity reach of a futuristic √s = 100 TeV collider for probing the L-R seesaw parameter space through the ℓℓjj signal. Other labels are the same as in Figure 10.
Conclusion
In summary, we have pointed out a clean and robust way to distinguish between different mechanisms for the production of dilepton plus dijet final states at a hadron collider, using the kinematic endpoints of various invariant mass distributions. We derived analytic expressions for these kinematic endpoints under a minimal set of assumptions: (i) no invisible particles are involved in the relevant process (i.e., no missing transverse momentum at the parton level), (ii) all the final state particles originate from a common mother particle, and (iii) the two jets are the decay products of the same particle. We then provided various criteria to distinguish the possible scenarios yielding the same ℓℓjj collider signature from one another. As a "spin-off", the potential determination of the masses of the heavy resonances in the associated process was also discussed. We emphasize that the relevant derivations, prescriptions, and strategy are rather general and can be straightforwardly extended to other event topologies, even with invisible particles (see e.g., Ref. [105]).
As a proof of principle, we applied this general method to study the distinction between the seesaw models based on the Standard Model and on the Left-Right symmetric model. Once there is statistically significant evidence for such an ℓℓjj signal, we can pinpoint the diagram(s) responsible for it using our kinematic endpoint method and also measure the masses of the new particles involved. This can be a powerful way to study the origin of neutrino masses at the LHC and beyond. Along this line, we examined the signal sensitivity for some well-motivated channels at the LHC and at a 100 TeV future collider, and found that a W_R gauge boson mass up to ∼5.5 TeV can be probed with 300 fb^-1 of data at the √s = 14 TeV LHC, or up to ∼32 TeV with 1 ab^-1 of data at a √s = 100 TeV collider. This has important consequences for other new physics observables, such as neutrinoless double beta decay, lepton flavor violation, and even the matter-antimatter asymmetry [128,129].
A Invariant mass distributions
We illustrate the invariant mass distributions for the regions discussed in Section 3 in Figure 12. The relevant distributions for region R_7 are not shown, because all of its kinematic endpoints (in squared mass) are simply given by either s or s/2, and are hence less informative than the others. Among the eight possible invariant mass variables (2.2)-(2.4), two are trivial: m_{ℓℓjj} always peaks at m_{W_R}, and m_jj peaks at m_W in some cases; hence they are not considered here. For every event, the decay of an on-shell resonance is performed through pure phase space. The events are generated at √s = 14 TeV for all regions. As mentioned in Section 3, the mass of particle C merely determines the overall scale (for the on-shell C-initiated processes), while all the details are completely governed by the mass ratios R_ij. In this sense, the exact values of the mass parameters are irrelevant. Every distribution is plotted in units of m_C, which is fixed to 1 TeV.
Figure 12. Invariant mass distributions in regions R_{1(1)} (first two panels in the 1st row), R_{1(2)} (last two panels in the 1st row), R_{2(1)} (first two panels in the 2nd row), R_{2(2)} (last two panels in the 2nd row), R_3 (first two panels in the 3rd row), R_4 (last two panels in the 3rd row), R_5 (first two panels in the 4th row), and R_6 (last two panels in the 4th row). In each pair of panels, the first shows the 2-body invariant mass variables m_ℓℓ (red dashed), m_{ℓj}^> (blue dot-dashed), and m_{ℓj}^< (green solid), while the second shows the 3-body invariant mass variables m_{ℓℓj} (red dashed), m_{ℓjj}^> (blue dot-dashed), and m_{ℓjj}^< (green solid). Black dashed lines denote the relevant theoretical kinematic endpoints formulated in Section 3.
"Physics"
] |
CancellationTools: All-in-one software for administration and analysis of cancellation tasks
In a cancellation task, a participant is required to search for and cross out (“cancel”) targets, which are usually embedded among distractor stimuli. The number of cancelled targets and their location can be used to diagnose the neglect syndrome after stroke. In addition, the organization of search provides a potentially useful way to measure executive control over multitarget search. Although many useful cancellation measures have been introduced, most fail to make their way into research studies and clinical practice due to the practical difficulty of acquiring such parameters from traditional pen-and-paper measures. Here we present new, open-source software that is freely available to all. It allows researchers and clinicians to flexibly administer computerized cancellation tasks using stimuli of their choice, and to directly analyze the data in a convenient manner. The automated analysis suite provides output that includes almost all of the currently existing measures, as well as several new ones introduced here. All tasks can be performed using either a computer mouse or a touchscreen as an input device, and an online version of the task runtime is available for tablet devices. A summary of the results is produced in a single A4-sized PDF document, including high quality data visualizations. For research purposes, batch analysis of large datasets is possible. In sum, CancellationTools allows users to employ a flexible, computerized cancellation task, which provides extensive benefits and ease of use.
Introduction
Almost half of all stroke patients initially suffer from impaired attention (Lesniak et al., 2008). One of the most severe stroke-induced attention deficits is hemispatial neglect, a syndrome in which patients disregard what happens toward contralesional space. It occurs in 25-50% of stroke victims (Appelros et al., 2002; Buxbaum et al., 2004; Nijboer et al., 2013a), predominantly after damage to the right hemisphere (Ringman et al., 2004). Stroke patients suffering from neglect are hospitalized longer and face profound problems in daily life (Nijboer et al., 2013b; Nys et al., 2005). Although spontaneous recovery occurs, about 30-40% of individuals with neglect still suffer from the syndrome after a year (Nijboer et al., 2013a, b). Importantly, neglect is associated with many negative factors; for example, it appears to have a suppressive effect on upper-limb motor recovery (both synergism and strength), especially over the first ten weeks post-stroke (Nijboer et al., 2014).
Because of its severity, it is important that good tools are available to diagnose the neglect syndrome and to support research on potential rehabilitation methods. One type of test that is widely used for assessment measures multitarget visual search. Such cancellation tasks require participants to cross out ("cancel") all stimuli of a certain type, often while ignoring stimuli of all other types (distractors). These search tasks have gained immense popularity in cognitive neuropsychology, and have proven their worth in both clinical and research environments.
Cancellation performance is of interest not only in patient groups, but in other sets of participants as well. For example, a recent study on a wide age range of healthy adults described search patterns on cancellation tasks in a qualitative manner (e.g., "horizontal left-to-right"), and concluded that no significant differences exist between age groups (Warren et al., 2008). However, this investigation lacked the more sensitive measures of search organization that have been shown to improve with age in children (Woods et al., 2013). Healthy elderly people tested two years before a dementia diagnosis require significantly more time to complete a cancellation task than elderly individuals who do not go on to develop dementia (Fabrigoule et al., 1998). Differences in performance within demented patients became apparent when tests with a higher attentional load were deployed: patients with Alzheimer's disease performed as accurately as patients with multi-infarct dementia on a low-load cancellation task, but were both less accurate and faster on a cancellation task that required more selective and divided attention (Gainotti et al., 2001). Principal component analysis of a range of neuropsychological tests, including cancellation, indicates there might be a common factor underlying performance deterioration in the pre-clinical stage of Alzheimer's disease, perhaps associated with a general ability to control cognitive processes (Fabrigoule et al., 1998).
All of the findings summarized above could profitably be extended with more sensitive measures of cancellation performance and search organisation. When diagnosing neglect, the primary measures of cancellation tasks are usually the amount and spatial spread of omissions (non-cancelled targets). However, there is emerging evidence that the neglect syndrome constitutes more than just lateralized deficits (Husain & Rorden, 2003), and deficits of spatial working memory or sustained attention might contribute, for which additional indices of cancellation performance might be helpful.
Numerous measures of general performance, timing, and search strategy that can be derived from cancellation tasks have been suggested in the literature (for an overview, see the section Supported Measures). However, data collection for these measures is often performed using labor-intensive and perhaps suboptimal procedures, e.g., frame-by-frame video analysis (Mark et al., 2004;Woods & Mark, 2007), monitoring of "verbal cancellation" (Samuelsson et al., 2002), "observing and recording the predominant search pattern" during a task by a human observer (Warren et al., 2008), or asking patients to change the color of their pencil every 10-15 cancellations (Weintraub & Mesulam, 1988). A more efficient way of analyzing search patterns would be to use a computerized cancellation task, with which cancellation positions and order can be recorded without the risk of human error.
Although the first reports of computerized cancellation software date back 15 years (Donnelly et al., 1999), the currently available packages are very limited in either the number of supported tasks (Donnelly et al., 1999;Wang et al., 2006), or the supported measures (Rorden & Karnath, 2010;Wang et al., 2006), and none of them provide both task presentation and data analysis (CACTS by Wang et al. is reported to be able to do both, but is not available for download). Therefore, most laboratories use custom software and most clinicians still prefer pen-and-paper tests.
Due to the lack of practically useful software, the field is currently in a situation in which ample theoretically valid measures exist (Donnelly et al., 1999; Hills & Geldmacher, 1998; Malhotra et al., 2006; Mark et al., 2004; Rorden & Karnath, 2010; Samuelsson et al., 2002; Warren et al., 2008; Weintraub & Mesulam, 1988), most of which have been validated on a small scale in research studies, but very few of which can be applied on a large scale in clinical practice or research due to the aforementioned practical issues.
In the current paper, we present a potential solution: CancellationTools, a package that combines the administration and the analysis of cancellation tasks, supporting almost all types of cancellation tests, and outputting almost all of the currently available research measures. The software is designed to be as user-friendly as possible, by using a very straightforward interface, and the option to import a scanned task that allows users to use their preferred cancellation task type. Additionally, CancellationTools supports touchscreen input, which is very comparable to pen-and-paper cancellation, for example in the sense that it allows bedside testing.
Our package is open source, and is available to download for free. An online version of the task software is available to provide support for tablet devices.
Software characteristics
Open source
CancellationTools has been written completely in Python (Van Rossum & Drake, 2011), using as few dependencies as possible. The graphical user interface (GUI) has been written from scratch using the PyGame toolbox, and the software to analyze and visualize data has been written using the NumPy (Oliphant, 2007) and Matplotlib (Hunter, 2007) packages. All of these are open-source projects that are maintained by a large community of volunteers.
The software can be downloaded for free from www.cancellationtools.org. It is released under the GNU General Public License version 3 (Free Software Foundation, 2007), which ensures that it can be used, shared, and modified by anyone. The source code is publicly available and managed via GitHub, which stimulates programming with frequent feedback, version control, and collaboration on a large scale, all according to the best practices for scientific computing as formulated by Wilson et al. (2014).
Supported systems
A simplified version of the application can be used online. Due to copyright issues, we cannot allow users to upload their own tasks to our website. We do provide different versions of the Landolt C cancellation task. After online completion of a task, a raw data file can be downloaded, which can later be analyzed via the offline version. No data will be permanently stored or accessed by the authors of CancellationTools, or any third party. An advantage of the online runtime is that it can be accessed from computers that do not allow installation of new software (e.g., in most hospitals), or via tablet devices (e.g., Apple's iPad) that are gaining increasing popularity in neuropsychological testing.
Currently, the standalone version of CancellationTools is only available on Windows. Users of other operating systems can choose between running the application from source via Python, or using the online runtime to test participants and a PC for data analysis. We are currently working on standalone versions for other platforms, e.g., Macintosh OS X and Android, and will release these in the future.
Interface
We have aimed to keep the software as user-friendly as possible, without constraining functionality. The graphical user interface (GUI) is tailored to be operated smoothly via touchscreen devices and traditional PCs, and is both visually appealing and intuitive (Fig. 1). Tasks can be set up and started within a minute. Analyzing data can be done with a minimum of two mouse clicks.
Landolt C cancellation task
CancellationTools' default cancellation task is a Landolt C cancellation task, as described by Parton et al. (2006). The stimuli are circles with or without a gap, displayed in rows and columns with a random spatial jitter for each stimulus (Fig. 2). A user is free to choose the types and numbers of targets and distractors, the foreground and background color, the input type (mouse or touchscreen), and whether cancellation marks should be visible or not. The optimal placement of the stimuli (i.e., the number of rows and columns) is automatically calculated based on the display resolution. The placement of targets is pseudo-random, as they are spread evenly over the width of the screen. In the example task depicted in Fig. 2, this means that four targets are present in every column.
Importing scanned tasks
For researchers and clinicians who prefer to work with a different cancellation task, CancellationTools has an option to import scanned tasks. If users select this option, they are asked to provide an image file. The image is automatically scaled to the display resolution, and a user can proceed to manually indicate where the targets and distractors are. The task is then saved, and is available for future use in task administration and analysis.
Supported measures
We have attempted to include all of the currently existing measures that can be derived from cancellation tasks, which can be broadly divided into three categories: measures of biases in spatial attention, of search organization, and of general performance. Furthermore, to complement or improve on existing measures, we have devised a few of our own (e.g., the standardized angle, see below). We have not included qualitative descriptions of cancellation path structure (Samuelsson et al., 2002; Warren et al., 2008; Weintraub & Mesulam, 1988), or an algorithm to categorize search organization (Huang & Wang, 2008), as in our view these do not lend themselves well to standardized, automated analysis.
Omissions
CancellationTools reports the total number of omissions and the omissions per half of the search array, which have traditionally been used to diagnose neglect. These values are to be interpreted using standardized scores, depending on the task employed. Traditionally, a relatively large number of omissions has been used as one index of neglect, but the left:right omissions ratio is potentially more informative and has been used widely. For example, a recent study on a particularly large sample (55 neglect patients, 138 non-neglect patients, and 119 controls) by Rabuffetti et al. (2012) reported that neglect patients show a large directional (left vs. right) imbalance in omissions, compared to healthy controls and patients with left or right lesions without neglect.
Revisits
A revisit is a cancellation of a previously cancelled target. Some authors refer to this kind of response in the cancellation literature as 'perseveration'. However, 'perseveration' is often used as a term associated with a (frontal) inability to inhibit. In neglect research, there is evidence that while some patients might have a problem inhibiting the re-cancellation of a previously visited item, others re-cancel because of a deficit in spatial working memory (Mannan et al., 2005). Therefore, we prefer the empirically descriptive term 'revisit'.
Revisits can occur immediately, when a participant cancels the same target twice in a row, analogous perhaps to perseveration. A delayed revisit occurs when a participant goes back to a previously cancelled target after cancelling other targets (Mannan et al., 2005). The number of revisits correlates with measures of disorganized search, such as the best R (see below), the inter-cancellation distance, and the number of cancellation path intersections (Mark et al., 2004). Parton et al. (2006) reported that neglect patients demonstrated a higher number of revisits than non-neglect patients, an effect that was especially apparent when no cancellation marks were visible, i.e., when patients had to remember which targets they had previously visited. In this touchscreen study, the median number of intervening targets was 8. The authors argued that a possible underlying mechanism for such revisiting behaviour might therefore be a deficit in spatial working memory. Our software provides the option of an invisible cancellation condition, should users wish to use this type of search display, which can provide a more sensitive measure of left:right biases in neglect and allows investigation of the role of spatial working memory in cancellation tasks (Wojciulik et al., 2004).
Standardized inter-cancellation distance
Inter-cancellation distance refers to the Euclidean distance between two consecutively cancelled targets (sometimes divided by the number of targets) and has been used to assess search behavior (Huang & Wang, 2008; Mark et al., 2004; Wang et al., 2006; Woods & Mark, 2007). We introduce a new measure that originates from the inter-cancellation distance but is comparable across different tasks: the standardized inter-cancellation distance (Fig. 3). This is the mean inter-cancellation distance divided by the mean distance between each target and its nearest neighboring target. A low standardized inter-cancellation distance arises from cancelling targets that are in close proximity to each other, and reflects an organized search pattern. Both the average and standardized inter-cancellation distances are included in the output.
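A minimal numpy sketch of this definition, assuming cancellations and targets are given as (n, 2) coordinate arrays:

```python
import numpy as np
from scipy.spatial.distance import cdist

def standardized_intercancellation_distance(cancel_xy, target_xy):
    """Mean distance between consecutive cancellations, divided by the mean
    nearest-neighbour distance over all targets."""
    steps = np.linalg.norm(np.diff(cancel_xy, axis=0), axis=1)
    d = cdist(target_xy, target_xy)
    np.fill_diagonal(d, np.inf)        # exclude each target's self-distance
    nearest = d.min(axis=1)            # nearest-neighbour distance per target
    return steps.mean() / nearest.mean()
```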
Center of cancellation
The center of cancellation (CoC), introduced by Binder et al. (1992) and popularized by Rorden and Karnath (2010), is the average horizontal position of all cancelled targets, standardized so that a value of −1 corresponds to the leftmost and 1 to the rightmost target. The CoC is a very elegant measure of neglect severity, as it captures an attentional gradient rather than a bimodal decision (i.e., the left field is or is not impaired). In addition to the horizontal CoC, CancellationTools provides the vertical CoC, where −1 corresponds to the topmost target and 1 to the target closest to the bottom of the task.
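A sketch of the horizontal CoC, assuming the leftmost and rightmost target x-positions are known; the vertical CoC is analogous with top mapped to −1 and bottom to +1.

```python
import numpy as np

def center_of_cancellation(cancel_x, x_left, x_right):
    """Horizontal CoC: mean cancelled-target x-position rescaled so the
    leftmost target maps to -1 and the rightmost to +1."""
    mid = 0.5 * (x_left + x_right)
    half = 0.5 * (x_right - x_left)
    return (np.mean(cancel_x) - mid) / half
```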
Timing
The total amount of time a participant spends on a cancellation task might be an indication of the participant's sustained attention to the task. Preliminary reports indicate that this measure is potentially influenced by pharmacological intervention, and it could therefore be used in diagnostics and rehabilitation. The average inter-cancellation time (sometimes dubbed the latency index) differs between healthy controls and brain-damaged patients, but also between neglect and non-neglect patients (Rabuffetti et al., 2012). It could hypothetically serve as a measure of executive functioning, as it reflects how much processing time a participant needs to find and cancel a new target.
Search speed
The search speed is the average of all inter-cancellation distances divided by the corresponding inter-cancellation times (Eq. 1), and was introduced and validated by Rabuffetti et al. (2012), who show that controls are slightly faster than brain-damaged patients. This is not surprising, as the same study reports higher inter-cancellation times for patients than for controls.
Where:
n is the number of cancellations
s is the distance between two consecutive cancellations
t is the time between two consecutive cancellations
Quality of search (Q) score
A measure of the quality of search is the Q score, introduced by Hills and Geldmacher (1998). The Q score combines speed and accuracy in a single measure, and is calculated using Eq. 2. A high Q score reflects a combination of a high number of cancelled targets and a high cancellation speed. This index does not seem to be task-independent: Huang & Wang (2008) found that Q scores in healthy undergraduates were higher for unstructured arrays than for structured arrays. The number of correct responses did not differ between the two task types, meaning that the difference in Q scores was driven by a higher time-on-task for the structured array. However, one should be careful when interpreting these results, as the terms 'structured' and 'unstructured' applied only to the distractors in this study: the target locations were the same for both tasks, and only the distractors (the noise) were distributed either with or without equal spacing.
Where:
N_cor is the number of cancelled targets (correct responses)
N_tar is the total number of targets
t_tot is the total time spent on the task
Intersections rate
Donnelly et al. (1999) counted the total number of cancellation path intersections: the number of times a cancellation path crosses itself (Fig. 4). Mark et al. (2004) and Rabuffetti et al. (2012) divided the intersections total by the number of produced markings to correct for search path length, resulting in the intersections rate. Rabuffetti et al. use the term crossing index, which differs from the intersections rate in one aspect: the total number of intersections is divided by the total number of markings, whereas the intersections rate of Mark et al. is calculated by dividing the number of intersections by the total number of markings excluding immediate revisits. As the cancellation path is determined only by cancelled targets, we define the intersections rate as the total number of path intersections divided by the number of cancellations that are not immediate revisits (Eqs. 3-8).
Fig. 3 Two examples of a cancellation path. The top path was obtained from a target grid with a 100-pixel interspacing, the bottom path from a task with 250-pixel interspacing. The search organization is identical for both paths, yet the mean inter-cancellation distances are not. Other measures of search organisation (best R, standardized angle, and standardized inter-cancellation distance) are independent of inter-target distance.
An efficient search pattern includes as few intersections as possible; in other words, a high rate of intersections is indicative of unsystematic exploration. Rabuffetti et al. (2012) have shown that this measure can differentiate between different groups of participants: controls < non-neglect right-brain damage < non-neglect left-brain damage < neglect right-brain damage.
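A sketch of the intersections-rate computation is given below; it implements the standard segment-crossing test corresponding to the determinant formulae spelled out next, and assumes the path is an ordered list of cancellation coordinates with immediate revisits already removed.

```python
import numpy as np

def segments_intersect(p1, p2, q1, q2):
    """True if segments p1-p2 and q1-q2 properly cross; consecutive path
    segments (which share an endpoint) are excluded by the caller."""
    d = (p2[0] - p1[0]) * (q2[1] - q1[1]) - (p2[1] - p1[1]) * (q2[0] - q1[0])
    if d == 0:
        return False  # parallel segments never count as a crossing here
    t = ((q1[0] - p1[0]) * (q2[1] - q1[1]) - (q1[1] - p1[1]) * (q2[0] - q1[0])) / d
    u = ((q1[0] - p1[0]) * (p2[1] - p1[1]) - (q1[1] - p1[1]) * (p2[0] - p1[0])) / d
    return 0 < t < 1 and 0 < u < 1

def intersections_rate(path):
    """Self-intersections of the cancellation path divided by the number of
    cancellations (path given as an ordered (n, 2) array of coordinates)."""
    path = np.asarray(path, dtype=float)
    segs = list(zip(path[:-1], path[1:]))
    crossings = sum(
        segments_intersect(*segs[i], *segs[j])
        for i in range(len(segs))
        for j in range(i + 2, len(segs))  # skip the adjacent segment
    )
    return crossings / len(path)
```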
The intersection point (P_x, P_y) of two inter-cancellation line segments i and j is obtained from the standard determinant formulae:

D = (x_{1,i} − x_{2,i}) (y_{1,j} − y_{2,j}) − (y_{1,i} − y_{2,i}) (x_{1,j} − x_{2,j})
D_x = (x_{1,i} y_{2,i} − y_{1,i} x_{2,i}) (x_{1,j} − x_{2,j}) − (x_{1,i} − x_{2,i}) (x_{1,j} y_{2,j} − y_{1,j} x_{2,j})
D_y = (x_{1,i} y_{2,i} − y_{1,i} x_{2,i}) (y_{1,j} − y_{2,j}) − (y_{1,i} − y_{2,i}) (x_{1,j} y_{2,j} − y_{1,j} x_{2,j})
P_x = D_x / D, P_y = D_y / D

Where:
(x_1, y_1) is the starting coordinate of the line between two consecutive cancellations (cancellation n)
(x_2, y_2) is the ending coordinate of the line between two consecutive cancellations (cancellation n+1)
(P_x, P_y) is the coordinate of the intersection between two inter-cancellation lines
n is the number of inter-cancellation lines (not to be confused with the number of cancellations)
Best R
Mark et al. (2004) coined a quantitative measure for assessing cancellation strategy, which can be viewed as a formalization of the qualitative ways in which some researchers have tried to describe cancellation paths (Samuelsson et al., 2002; Warren et al., 2008; Weintraub & Mesulam, 1988). The best R is defined as the highest absolute value of the Pearson correlation between cancellation rank number and either the horizontal or the vertical cancellation position (Eq. 9, Fig. 5), and should increase with search efficiency. The most efficient way of performing a cancellation task is to start searching at an extremity (e.g., the left) and proceed in one general direction (e.g., rightward or downward), alternating movements along the perpendicular direction (e.g., upward and downward, or leftward and rightward), as depicted in Fig. 5a.
Where:
R_hor is the Pearson correlation coefficient of the horizontal positions of all cancellations and their rank numbers
R_ver is the Pearson correlation coefficient of the vertical positions of all cancellations and their rank numbers
Standardized angle
One possible cancellation path that is efficient but will nonetheless result in a relatively low best R is a circular path that starts at the extremes of the cancellation task and gradually moves inward, or spirals (Fig. 5c). What characterizes this kind of path, as well as the paths that do result in a high best R (e.g., Fig. 5a), is the occurrence of predominantly horizontal and vertical lines between cancellation locations. We introduce a measure that can differentiate between horizontal and vertical path segments (associated with an optimal search strategy) on the one hand, and diagonal segments (associated with a suboptimal strategy) on the other (Eqs. 10 and 11). As the inter-cancellation angle approaches 45°, the standardized angle approaches 0. In contrast, inter-cancellation angles approaching either 90° or 0° result in a standardized angle that approaches 1 (Fig. 6). Therefore, a high standardized angle is potentially an indication of an efficient cancellation process.
Where:
γ is the angle between two consecutive cancellations
Δy is the vertical distance between two consecutive cancellations
d is the Euclidean distance between two consecutive cancellations
n is the total number of inter-cancellation angles between consecutive cancellations that are not immediate revisits
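The best R follows directly from its definition. For the standardized angle, the exact Eqs. 10-11 are not reproduced in this text, so the sketch below assumes the mapping |cos(2γ)|, which reproduces the behavior described above (1 for horizontal or vertical steps, 0 at 45°); the true equations may differ in form.

```python
import numpy as np

def best_r(cancel_xy):
    """Best R: the larger absolute Pearson correlation between cancellation
    rank and either the horizontal or the vertical cancellation position."""
    xy = np.asarray(cancel_xy, dtype=float)
    rank = np.arange(len(xy))
    r_hor = np.corrcoef(rank, xy[:, 0])[0, 1]
    r_ver = np.corrcoef(rank, xy[:, 1])[0, 1]
    return max(abs(r_hor), abs(r_ver))

def standardized_angle(cancel_xy):
    """Assumed mapping |cos(2*gamma)| averaged over the path, where gamma is
    the inter-cancellation angle; immediate revisits (zero distance) dropped."""
    xy = np.asarray(cancel_xy, dtype=float)
    steps = np.diff(xy, axis=0)
    d = np.linalg.norm(steps, axis=1)
    keep = d > 0
    gamma = np.arcsin(np.abs(steps[keep, 1]) / d[keep])  # angle in [0, pi/2]
    return float(np.mean(np.abs(np.cos(2 * gamma))))
```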
First marking
Age has a significant influence on measures of search organisation. Specifically, the mean inter-cancellation distance and the number of intersections decrease with age in children, while the best R increases, demonstrating an improvement in search organisation over time (Woods et al., 2013). Another index that increases with age is the likelihood of the first cancellation falling in the top-left quadrant of the search array. CancellationTools provides the location of the first marking in standardized space, so that the top left of the search array is (0,0) and the bottom right (1,1). These standardized locations are comparable between different task types and sizes. A qualitative description (e.g., "top-left") of the quadrant in which the first cancellation occurred is also available.
Overview
To give a preliminary indication of the ranges of the summarized cancellation measures, we tested small samples of healthy adults (N = 10) and right-hemisphere patients with leftward neglect (N = 10). They were tested on Landolt C cancellation tasks that consisted of 64 targets (opening on top) and 128 distractors (50% without an opening and 50% with an opening on the bottom), on which cancellation markings were invisible and the time limit was 2 min. The averages, standard deviations, and 95% confidence intervals of all of CancellationTools' quantitative measures are listed in Table 1. These values should not be regarded as norm scores. More elaborate studies on larger samples include Rabuffetti et al. (2012) (omissions, revisits, inter-cancellation distance and time, cancellation speed, and number of path intersections in healthy controls and stroke patients with and without neglect), Woods & Mark (2007) (inter-cancellation distance, intersection rate, and best R in a healthy and a non-neglect stroke patient sample), Parton et al. (2006) (immediate and delayed revisits in stroke patients with and without neglect), and Rorden & Karnath (2010) (center of cancellation in neglect and non-neglect patients with right-hemisphere damage).
Fig. 6 Illustration of the standardized angle. C1 is a cancelled target, C2a-c are potential consecutive cancellations. The standardized angle is 1 for vertical (between C1 and C2a) or horizontal (between C1 and C2c) inter-cancellation angles, and approaches 0 for diagonal angles (between C1 and C2b). Paths containing predominantly horizontal and vertical lines are considered to be more efficient; therefore, a high standardized angle is an indication of an efficient search pattern.
Theoretically task-independent measures (provided there is a relatively even spread of targets over the search array) are the left:right omission ratio, the standardized inter-cancellation distance, the center of cancellation, the average inter-cancellation speed, the intersections rate, and the location of the first cancellation in standardized space. Whether this theoretical task-independence holds up in practice might be determined in future research.
Summarized measures
Two kinds of summarized results are produced. The first is a single A4-sized, high-quality PDF document that contains an overview of all outcome measures, as well as a plot of the cancellation path ( Fig. 7a-b) and a heatmap of the cancelled targets ( Fig. 7c-d). This kind of output is potentially useful in a clinical setting, where a medical professional can consecutively run a task and an analysis, and add a print of the results to a patient's file. Furthermore, a simple text file is created, which can be opened with spreadsheet software (e.g., OpenOffice Calc or Microsoft Excel) and statistics packages (e.g., PSPP or SPSS), and can easily be processed using custom analysis scripts (e.g., using Python, R, Matlab, or any other programming language). Using the text files, researchers can easily extract data from a large group of participants for further analysis. Also available is an image of the cancellation task with all cancellation markings that a participant made (i.e., as the participant saw the task upon finishing), and a text file of all raw click (or touch) times and coordinates.
Data visualization
Several plots are created by each CancellationTools analysis. These give further insight into the performance of individual participants, and can be used in addition to the measures described above. These plots include the aforementioned cancellation path and heatmap. The cancellation path (Fig. 7a-b) gives a clear view of a participant's cancellation behavior, e.g., to help with the interpretation of measures of disorganized search. A plot of the relation between the cancellation rank number and either the horizontal or vertical position of the cancelled target (Fig. 5d-f) gives an indication of how organized a participant's search was (Mark et al., 2004; Woods & Mark, 2007). Heatmaps of fixation locations illustrate the deployment of attention in 2D space, as demonstrated by Bays et al. (2010). With our cancellation heatmaps we aim to create a similar visualization of spatial attention. Our pilot testing is promising on both the individual level (Fig. 7c-d) and the group level (Fig. 8). Additional heatmaps are provided based on the locations of omissions and on the locations of path intersections, to give an indication of the spatial properties of these measures.
For the cancellation and omission heatmaps, a Gaussian kernel is added at the location of each cancelled or missed target. The resulting field is then scaled to the heatmap that would result from optimal performance on the cancellation task in question, which means that heatmaps are comparable between individuals and tasks. Heatmaps of individual data from a healthy individual and a neglect patient are displayed in Fig. 7c-d. Averaged heatmaps of a healthy and a neglect sample are shown in Fig. 8, and show an even spread of cancellations across the search array in healthy people (Fig. 8a), whereas neglect patients show a rightward bias (Fig. 8b). Neglect patients also display a leftward bias in omissions (Fig. 8d), whereas our healthy sample shows a near-absence of omissions (Fig. 8c).
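A sketch of the heatmap construction follows; the grid resolution, kernel width, and the omitted ideal-performance normalization are illustrative choices, not the package's exact parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def cancellation_heatmap(points_xy, display=(1280, 1024), sigma_px=40, bins=128):
    """Accumulate cancelled (or omitted) target locations on a grid and smooth
    with a Gaussian kernel. Scaling against an 'ideal performance' map (all
    targets hit) would make maps comparable across tasks; omitted here."""
    w, h = display
    grid, _, _ = np.histogram2d(
        [p[0] for p in points_xy], [p[1] for p in points_xy],
        bins=bins, range=[[0, w], [0, h]],
    )
    # convert the pixel-space kernel width into bin units before smoothing
    return gaussian_filter(grid, sigma=sigma_px * bins / max(w, h))
```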
Discussion
There is a need to quantify multitarget visual search performance on cancellation tasks. We have made an effort to summarize all of the currently available measures that can be derived from cancellation task data. In the new software introduced here, we included all relevant measures from the currently available literature in an application that can be used to administer a computerized cancellation task and to analyze the resulting data with the click of a button. We have aimed to make this software as flexible as possible, e.g., by allowing users to incorporate their own scanned tasks, whilst keeping an eye on simplicity. The result is a user-friendly interface that can be employed in both clinical and research settings. Our software is open source, and free for anyone to download and use.
We have introduced two new measures of search organisation: the standardized inter-cancellation distance and the standardized angle. The former is an improvement on the existing mean inter-cancellation distance that takes into account the distances between targets within a search array, thereby allowing comparisons of cancellation performance across different tasks. The standardized inter-cancellation angle can be viewed as complementary to the best R, as it is robust in situations where the best R does not reflect search organisation optimally (Fig. 5c). Even though the best R and the standardized inter-cancellation angle seem to differentiate between our small test groups, a much larger difference between healthy people and leftward neglect patients is observed in the intersections rate, suggesting that this might be the clearest measure of search organisation.
CancellationTools is already useful to clinicians, as it provides quantitative data on established measures of neglect (e.g., the number of omissions), as well as qualitative data that provide better insight into patient behavior than pen-and-paper cancellation tests (e.g., cancellation path plots). However, for the majority of the measures summarized above, there are currently no norm scores to compare individual test results against. The value ranges that we provide based on our pilot testing (Table 1) serve as a preliminary indication of how neglect patients and healthy controls differ on the various measures, and should not be treated as a clinical directive.
Fig. 8 Averaged heatmaps of a healthy sample (N=10, a and c) and a sample of right-hemisphere, leftward neglect patients (N=10, b and d). The data were collected using 1280×1024-pixel Landolt C cancellation tasks with 64 targets and 128 distractors, invisible cancellation markings, and a time limit of 2 min.
Apart from our newly introduced standardized angle measure, all of the indices we report have been validated on a small scale in the articles in which they were coined. A few have been validated on a larger scale in the study of Rabuffetti et al. (2012), but it is arguable whether this provides enough data on which to base norm scores. We aim to facilitate the rapid testing of the summarized measures by providing a unified tool that helps gather cancellation task data as easily as possible. Our hope is that this will help establish norm scores for the measures that prove to have diagnostic value.
By making CancellationTools publicly available, we hope to inspire large-scale international collaborations to pool data, from healthy people and patient groups, on all of the measures we summarize in the current article. By removing practical boundaries that previously prevented large-scale testing, our software opens up exciting new research possibilities. The availability of CancellationTools creates a situation in which analysis of cancellation task data can be performed at a high level across different clinical and research settings. | 7,260.4 | 2014-11-08T00:00:00.000 | [
"Computer Science",
"Psychology"
] |
Methods for Estimating the Parameters of the Power Function Distribution
In this paper, we present some methods for estimating the parameters of the two-parameter Power function distribution. We use the least squares method (L.S.M), relative least squares method (R.L.S.M) and ridge regression method (R.R.M). Sampling behavior of the estimates is examined by a Monte Carlo simulation. We use total deviation (T.D) and mean square error (M.S.E) to identify the best estimator among them. We determine the best method of estimation using different values of the parameters and different sample sizes.
Introduction
Numerous parametric models are used in the analysis of lifetime data and in problems related to the modeling of failure processes. Among univariate models, a few particular distributions occupy a central role because of their demonstrated usefulness in a wide range of situations. Foremost in this category are the exponential, Weibull, gamma and lognormal distributions.
The Power function distribution is also a flexible lifetime distribution model that may offer a good fit to some sets of failure data. Theoretically, the Power function distribution is a special case of the Pareto distribution. Meniconi and Barry (1995) discussed the application of the Power function distribution; comparing it with the exponential, lognormal and Weibull distributions in terms of the reliability and hazard functions, they showed that the power function distribution is the best of these for checking the reliability of electrical components.
The probability density function of the power function distribution is f(x) = γx^(γ−1)/β^γ, with shape parameter γ and scale parameter β, on the interval 0 ≤ x ≤ β. Rider (1964) derived distributions of the products and quotients of the order statistics from a power function distribution. Moments of order statistics for a power function distribution were calculated by Malik (1967). Lwin (1972) and Arnold and Press (1983) discussed Bayesian estimation for the scale parameter of the Pareto distribution using a power function prior. Ahsanullah and Kabir (1975) discussed the estimation of the location and scale parameters of a Power function distribution.
Cohen and Whitten (1982) used the moment and modified moment estimators for the Weibull distribution. Samia and Mohamed (1993). In this paper, we use the least squares method, relative least squares and ridge regression to estimate the two parameters of the power function distribution. The present paper introduces the ridge regression estimators by taking different values of the ridge coefficient k. We also compare these methods on the two-parameter power function distribution to find the most accurate method (the method with the least M.S.E).
Least Squares Method (L.S.M)
The least squares method (LSM) is extensively used in reliability engineering, in mathematical problems, and in the estimation of probability distribution parameters.
The cumulative distribution function of the Power function distribution is given by F(x) = (x/β)^γ, 0 ≤ x ≤ β. To get a linear relation between the two parameters, we take the logarithm of the above equation: ln F(x_i) = γ ln x_i − γ ln β (2.1.2), where i = 1, 2, …, n and n is the sample size.
Let x_(1) ≤ x_(2) ≤ … ≤ x_(n) be an ordered random sample; F(x_(i)) is estimated and replaced by the median rank method as F̂(x_(i)) = (i − 0.3)/(n + 0.4). Because the mean rank estimate i/(n + 1) may give a larger value for smaller i and a smaller value for larger i, we use the median rank method.
Thus, equation (2.1.2) is a linear equation and can be expressed as y_i = a + d t_i, where y_i = ln F̂(x_(i)), t_i = ln x_(i), d = γ and a = −γ ln β. To compute a and d by simple linear regression we proceed as follows.
Let S(a, d) = Σ_{i=1}^{n} (y_i − a − d t_i)².
Minimizing S with respect to a and d, we obtain the least squares estimates (LSE) of a and d as d̂ = Σ(t_i − t̄)(y_i − ȳ) / Σ(t_i − t̄)² and â = ȳ − d̂ t̄, where t̄ = (1/n)Σ t_i and ȳ = (1/n)Σ y_i. The parameter estimates follow as γ̂ = d̂ and β̂ = exp(−â/d̂).
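A compact sketch of the L.S.M procedure described above, assuming the CDF F(x) = (x/β)^γ and the median rank approximation F̂(x_(i)) = (i − 0.3)/(n + 0.4):

```python
import numpy as np

def lsm_power_function(x):
    """Least squares estimates of (beta, gamma) for the power function
    distribution F(x) = (x / beta) ** gamma, using median ranks."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    F = (i - 0.3) / (n + 0.4)       # median rank estimate of F(x_(i))
    t, y = np.log(x), np.log(F)     # y = a + d*t, d = gamma, a = -gamma*ln(beta)
    d_hat = np.sum((t - t.mean()) * (y - y.mean())) / np.sum((t - t.mean()) ** 2)
    a_hat = y.mean() - d_hat * t.mean()
    return np.exp(-a_hat / d_hat), d_hat   # (beta_hat, gamma_hat)
```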
Relative Least Squares Method (R.L.S.M)
The relative least squares estimators of a and d can be obtained by minimizing the sum of squares of the relative residuals, S_R(a, d) = Σ_{i=1}^{n} ((y_i − a − d t_i)/y_i)², with respect to a and d (Pablo and Bruce, 1992).
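A sketch of the relative least squares fit, assuming the relative residuals are taken on the linearized model as stated above; the objective is minimized numerically here, whereas the paper may use closed-form normal equations.

```python
import numpy as np
from scipy.optimize import minimize

def rlsm_power_function(x):
    """Relative least squares: minimize sum(((y - a - d*t) / y)**2)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median ranks
    t, y = np.log(x), np.log(F)

    def obj(p):
        a, d = p
        return np.sum(((y - a - d * t) / y) ** 2)

    # Start from the ordinary least squares solution.
    d0 = np.sum((t - t.mean()) * (y - y.mean())) / np.sum((t - t.mean()) ** 2)
    a0 = y.mean() - d0 * t.mean()
    a_hat, d_hat = minimize(obj, [a0, d0], method="Nelder-Mead").x
    return np.exp(-a_hat / d_hat), d_hat   # (beta_hat, gamma_hat)
```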
Ridge Regression Method (R.R.M)
The ridge regression estimators are given by θ̂(k) = (X′X + kI_p)^(−1) X′y, where k ≥ 0 is the ridge coefficient, I_p is the p×p identity matrix, and p is the number of parameters; here X is the design matrix of the linearized model and y the vector of responses. For background on ridge regression the reader may see Ronald and Raymond (1978). If k = 0, we obtain the least squares estimates.
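A minimal sketch of the ridge estimator for the linearized model; note that, for illustration, the intercept is penalized together with the slope, which may differ from the paper's exact formulation.

```python
import numpy as np

def ridge_fit(t, y, k=0.1):
    """Ridge regression (X'X + kI)^(-1) X'y for the model y = a + d*t;
    k = 0 reproduces ordinary least squares."""
    X = np.column_stack([np.ones_like(t), t])   # p = 2 parameters (a, d)
    p = X.shape[1]
    coef = np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)
    return coef  # [a_hat, d_hat]
```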
Performance Indices (Goodness of Fit Analysis)
Some methods of goodness-of-fit analysis are employed here. The mean square error (MSE) and total deviation (TD) are two measurements that give an indication of the accuracy of parameter estimation. AL-Fawzan (2000) described the use of MSE and TD.
Mean Square Error (MSE)
The MSE of an estimator θ̂ is MSE(θ̂) = E[(θ̂ − θ)²], estimated over the N simulation replications as (1/N) Σ_{j=1}^{N} (θ̂_j − θ)².
Total Deviation (TD)
The total deviation TD, calculated for each method, is TD = |β − β̂|/β + |γ − γ̂|/γ, where β and γ are the known parameters and β̂ and γ̂ are the parameters estimated by any method. These measures quantify the variability of the parameter estimates for each simulation and are used to determine the overall "best" parameter estimation method.
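The two performance indices can be written directly; a sketch, using the (β, γ) parameterization above:

```python
import numpy as np

def mse(estimates, true_value):
    """Mean square error of a set of Monte Carlo estimates."""
    estimates = np.asarray(estimates)
    return np.mean((estimates - true_value) ** 2)

def total_deviation(beta_hat, gamma_hat, beta, gamma):
    # Sum of absolute relative deviations of both parameter estimates.
    return abs(beta - beta_hat) / beta + abs(gamma - gamma_hat) / gamma
```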
Application
A simulation study is used in order to compare the performance of the proposed estimation methods. We carry out this comparison by taking samples of sizes n = 20, 60 and 100 with parameter pairs (β, γ) = {(1, 2), (3, 2), (4, 3)}. We generated random samples of different sizes by observing that if R is uniform on (0, 1), then x = βR^(1/γ) is a random draw from the power function distribution with parameters (β, γ). All results are based on 10,000 replications. It is observed that the MSEs and TDs of all estimators of the scale parameter, as well as of all estimators of the shape parameter, decrease as the sample size increases.
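A sketch of the Monte Carlo loop, reusing the estimator and index functions sketched above; the replication count is reduced here for speed, whereas the paper uses 10,000.

```python
import numpy as np

rng = np.random.default_rng(2013)

def sample_power_function(beta, gamma, n, rng):
    # Inverse-transform sampling: if R ~ Uniform(0, 1), then
    # x = beta * R**(1/gamma) follows the power function distribution.
    return beta * rng.uniform(size=n) ** (1.0 / gamma)

for beta, gamma in [(1, 2), (3, 2), (4, 3)]:
    for n in (20, 60, 100):
        est = [lsm_power_function(sample_power_function(beta, gamma, n, rng))
               for _ in range(1000)]   # reduced replications for speed
        b, g = np.array(est).T
        print(n, beta, gamma, mse(b, beta), mse(g, gamma))
```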
Consequently, we recommend using the L.S.M method for parameter estimation of the power function distribution. After L.S.M, the R.R.M (k = 0.1) and R.L.S.M methods are best for estimating the scale and shape parameters of the power function distribution.
F̂(x) is the value of the cumulative distribution function of the two-parameter power function distribution using the estimated parameters, F̂(x) = (x/β̂)^γ̂. For each parameter θ ∈ {β, γ}, the standard bias is Bias = E(θ̂) − θ and the mean square error is MSE(θ̂) = E[(θ̂ − θ)²]. Kang and Young (1996) used the method of moments to estimate the parameters of the Pareto distribution. Lalitha and Anand (1996) used modified maximum likelihood to estimate the scale parameter of the Rayleigh distribution. Rafiq et al. (1996) discussed the parameters of the gamma distribution. Rafiq (1999) discussed the method of fractional moments to estimate the parameters of the Weibull distribution. Kang and Young (1997) estimated the parameters of a Pareto distribution by jackknife and bootstrap methods. Marks (2005) estimated the parameters of the Weibull distribution with the help of percentiles, calling it the Common Percentile Method.
Table 3: Estimation of the parameters β and γ of the Power Function Distribution for sample size 100
The results are listed in Tables 1, 2 and 3. From these tables, we see that the L.S.M estimates of the parameters are very close to the true values, and the corresponding values of MSE and TD are very small. The parameter estimates from the R.L.S.M and R.R.M methods are close to the true values but not as close as the L.S.M estimates, because their values of MSE and TD are greater than the corresponding values from L.S.M. | 1,615.6 | 2013-10-21T00:00:00.000 | [
"Mathematics"
] |
Application of a Tabu search-based Bayesian network in identifying factors related to hypertension
Supplemental Digital Content is available in the text
Introduction
Cardiovascular disease (CVD) is a leading cause of death and burden worldwide, and hypertension is ranked as a top modifiable risk factor for CVD. [1,2] Worldwide, more than 60% of stroke cases and 40% of coronary heart disease events are attributable to hypertension. The prevalence of hypertension in the general population is approximately 25%, and is expected to increase markedly (to 60%) by 2025. [3] Therefore, it is important to comprehensively analyze factors related to hypertension to reduce its occurrence. Most previous studies explored factors related to hypertension using logistic regression analyses based on independent variables, with odds ratios (OR) used to reflect the degree of association. However, in reality, these factors are often interdependent, and the relationships may have a complex network structure.
Bayesian networks (BNs) are a method of artificial intelligence [4] that does not have strict requirements regarding statistical assumptions. By constructing a directed acyclic graph (DAG) to reflect potential relationships among multiple factors, a conditional probability distribution table can be used to reflect the strength of associations. In addition, BNs can use the status of a known node (i.e., factors) to infer the probability of the unknown node (i.e., hypertension), which may be a more flexible approach to determine the risk for hypertension. Given the attractive characteristics of BNs, researchers have used this approach in various domains. For example, BNs have been used in mammographic diagnosis for breast cancer, [5] and to analyze the causes of sewage treatment system failure [6] with the predictive performance evaluated by a lab-scale pilot plant. BNs have also been used to predict the increased likelihood of occurrence of safety incidents, with food fraud as an example. [7] In addition, Cai et al [8] used BNs to conduct quantitative risk assessment for operations in the offshore oil and gas industry.
Building a BN from data is called a learning process, and involves two steps: parametric learning and structured learning. [9] Structured learning has been more frequently studied than parametric learning. Common structured learning methods using BNs are the exhaustive method, hill-climbing algorithm, and K2 algorithm. However, each of these three methods has shortcomings. For example, the exhaustive method needs to compare all possible BN structures to choose the best structure, which requires a large amount of calculation. The hill-climbing algorithm is a local optimization method, but there is no guarantee that this algorithm will find the global minimum. [10] The K2 algorithm has 2 preconditions: knowing the order of the nodes and the upper limit of the number of the parent nodes in advance. However, these preconditions are not satisfied in many cases. [11,12] Tabu search is an efficient global optimization technique that incorporates adaptive memory to move beyond a local search to find the global optimum. [13] This method avoids repetition of the same solutions by maintaining a mechanism called a "Tabu list" and activates good solutions using aspiration criteria. [13] In recent years, the Tabu search algorithm has often been applied in a variety of fields because of its advantages, including solving global optimization problems. Therefore, we used BNs optimized with a Tabu search algorithm to model hypertension and related factors and determine how these factors were related to each other. This study aimed to offer comprehensive strategies for effectively reducing the incidence of hypertension.
Study participants
This investigation was a project conducted by social practice college students during their summer vacation in 2008, which was held in Shanxi Province, China. Based on cluster random sampling principles, eight representative investigation points were randomly selected in Shanxi Province. In total, 39 neighborhood committees and villages (Datong, Xinzhou, Taiyuan, Jinzhong, Lüliang, Changzhi, Linfen, and Yuncheng) in Shanxi Province were selected as survey sites. Permanent residents over age 15 years at each survey site were invited to participate in this study. Participants were informed about the study objectives and data confidentiality before data collection, and written informed consent was obtained from all participants. Face-to-face interviews were conducted by uniformly trained investigators. The interviews were based on a questionnaire that collected information on general demographic characteristics (e.g., age, gender, level of education, and occupation), lifestyle factors (e.g., smoking, drinking alcohol), and past medical history (e.g., myocardial infarction, coronary heart disease, nephropathy, stroke, and diabetes mellitus). Anthropometric measurements were also collected, including height, weight, waist circumference, and blood pressure (BP). Factors and their assignments are shown in Table 1.
The eligibility criterion for this study was all residents aged 15 years or older who had lived in the monitoring area for more than 6 of the past 12 months. The exclusion criterion was residents who lived in functional areas, such as sheds, military bases, student dormitories, and nursing homes. The local Ethics Committee of Shanxi Medical University approved this study. All experiments were performed according to the relevant guidelines and regulations.
Quality control
Stringent measures were implemented to ensure the validity and reliability of the research data. All investigators were trained to collect data using standardized protocols and instruments before the participant interviews. The data were recorded in questionnaires. At each site, investigators were asked to check all information after each interview, and key investigators were responsible for re-examining all questionnaires at each site. If missing information or logic errors were detected, repeated interviews or checks were required. All measuring instruments were calibrated before measurement. All data were entered twice into a database, and then compared and checked for errors.
Bayesian networks (BNs)
BNs have been widely used since they were first proposed by Judea Pearl in 1987. A Bayesian network is a directed acyclic graph (DAG) based on probability theory and graph theory, which consists of nodes representing the variables X = {X_1, …, X_n} and directed edges symbolizing the relationships between the variables. [14] If there is an edge from X_i to X_j, then we say that the node X_i is the parent of X_j and X_j is the child of X_i. [15,16] From the perspective of probability theory, BNs represent a joint probability distribution, which describes the probabilistic dependence between variables. For a series of random variables X = {X_1, …, X_n}, the chain rule and conditional independence give the factorization P(X_1, …, X_n) = Π_{i=1}^{n} P(X_i | π(X_i)), where π(X_i) ⊆ {X_1, …, X_{i−1}} is the set of parents of X_i; given π(X_i), X_i is conditionally independent of the other variables in {X_1, …, X_{i−1}}. [17] An example is shown in Figure 1 and Figure 2.
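As a toy illustration of this factorization, consider a hypothetical three-node chain with made-up conditional probabilities (these numbers are not from this study):

```python
# Hypothetical chain age -> obesity -> hypertension with invented
# conditional probability tables, for illustration only.
p_age = {"old": 0.4, "young": 0.6}
p_obese_given_age = {"old": 0.5, "young": 0.3}
p_htn_given_obese = {True: 0.5, False: 0.2}

def joint(age, obese, htn):
    """P(age, obese, htn) = P(age) * P(obese | age) * P(htn | obese)."""
    p_o = p_obese_given_age[age] if obese else 1 - p_obese_given_age[age]
    p_h = p_htn_given_obese[obese] if htn else 1 - p_htn_given_obese[obese]
    return p_age[age] * p_o * p_h

print(joint("old", True, True))  # 0.4 * 0.5 * 0.5 = 0.1
```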
Tabu search algorithm
Tabu search was proposed by Glover in 1986, [18] and is an efficient global optimization method that incorporates adaptive memory to move beyond a local search to find the global optimum. [19] It prevents cycling by maintaining a Tabu list and activates good solutions using aspiration criteria to ensure that the search achieves global exploration and ultimately finds the globally optimal solution. [18] The Tabu search algorithm starts from a feasible initial solution and selects a series of specific moves in different directions for an exploratory search. If movement in a certain direction produces the greatest improvement in the value of the objective function, that solution is taken as the optimum for the local area. The solution is then entered into the Tabu list, the current solution is replaced with the new local optimum, and the search continues to move through its neighborhood toward the global optimum. This process is repeated and the Tabu list is updated until the convergence criterion is met. In this process, if some solutions in the Tabu list have obvious advantages, it is possible to override the taboo criteria so that some of the tabooed objects become selectable again, which avoids the loss of a good solution and achieves global optimization. [17]

[Table 1. Factors and their assignments (partially recoverable): Gender (x1); Cultural level (x3): under high school* = 1, high school and over = 2; Occupation (x4): farmer* = 1, unemployed or retirees = 2; Drinking status (x6): never* = 1, occasionally drinking = 2.]
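A generic skeleton of the procedure just described; `neighbors` and `score` are placeholders (e.g., DAGs differing by one edge addition, removal, or reversal, scored by a network score such as BIC in BN structure learning), so this is an illustrative sketch rather than the exact algorithm used in the study.

```python
def tabu_search(initial, neighbors, score, n_iter=500, tabu_size=50):
    """Generic Tabu search: maximize score(s) over solutions reachable
    through neighbors(s), avoiding recently visited solutions."""
    current = best = initial
    tabu = []  # the "Tabu list" of recently visited solutions
    for _ in range(n_iter):
        # Aspiration: a tabooed move is allowed if it beats the best so far.
        candidates = [s for s in neighbors(current)
                      if s not in tabu or score(s) > score(best)]
        if not candidates:
            break
        current = max(candidates, key=score)
        tabu.append(current)
        if len(tabu) > tabu_size:
            tabu.pop(0)  # forget the oldest taboo entry
        if score(current) > score(best):
            best = current
    return best
```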
Evaluating indicators
The main evaluation indexes of a BN model are true positive rate (TPR), true negative rate (TNR), recall, and precision. Sensitivity (TPR) indicates the proportion of positive classes correctly predicted and the ability of the BN to recognize positive classes. Specificity (TNR) represents the proportion of correctly predicted negative classes and measures the ability of the BN to recognize negative classes. Recall is equivalent to sensitivity; the higher the recall, the fewer positive cases are misclassified as negative. Precision is the proportion of true positives among the samples predicted as positive; the higher the precision, the fewer negative cases are misclassified as positive.
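These indexes follow directly from a 2×2 confusion matrix; a sketch:

```python
def bn_metrics(tp, fp, tn, fn):
    """Evaluation indexes computed from confusion-matrix counts."""
    return {
        "sensitivity_tpr": tp / (tp + fn),   # equals the recall
        "specificity_tnr": tn / (tn + fp),
        "precision": tp / (tp + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```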
Definitions
Three consecutive BP readings were taken using an electronic sphygmomanometer with an accuracy of 1 mmHg. The averages were calculated for a final BP reading. According to the Guidance on Prevention and Control of Hypertension in Chinese Residents, hypertension was defined as an average measured systolic BP ≥140 mmHg or diastolic BP ≥90 mmHg, or a reported previous diagnosis of hypertension or use of BP-lowering treatment. [20] Participants who reported smoking ≥1 cigarette a day for the previous 6 months were defined as smokers. Drinking alcohol referred to drinking alcohol at least once a week, with an alcohol intake of 50 g or more, for 6 consecutive months. Body weight was categorized using body mass index (BMI) as normal weight (BMI 18.5-23.9 kg/m²), overweight (BMI 24-27.9 kg/m²), and obese (BMI ≥28 kg/m²). [21] Central obesity was defined as a waist circumference ≥85 cm for males and ≥80 cm for females. [22]
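The definitions above can be encoded directly; a sketch (thresholds taken from the text; the underweight label below is an assumption, as the paper does not name that category):

```python
def classify(sbp, dbp, bmi, waist_cm, male, diagnosed=False, treated=False):
    """Apply the study definitions; BP in mmHg (average of 3 readings),
    BMI in kg/m^2, waist circumference in cm."""
    hypertensive = sbp >= 140 or dbp >= 90 or diagnosed or treated
    if bmi >= 28:
        weight = "obese"
    elif bmi >= 24:
        weight = "overweight"
    elif bmi >= 18.5:
        weight = "normal"
    else:
        weight = "underweight"   # not explicitly categorized in the paper
    central_obesity = waist_cm >= (85 if male else 80)
    return hypertensive, weight, central_obesity
```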
Statistical analysis
Chi-square tests were used to compare differences between classification variables. Descriptive statistics, chi-square tests, and multivariate logistic regression were performed using SPSS version 22 (IBM Corp., Armonk, NY). We conducted a multivariate logistic regression analysis using a stepwise method (α_in = 0.10, α_out = 0.15) to select variables, with the presence of hypertension considered the dependent variable. The independent variables were those that were significantly associated with hypertension in the univariate analysis. Significance for all statistical tests was set at P < .05 (2-sided).
Characteristics of the study population
Among the 11,200 initial study participants, 408 participants with incomplete data were excluded. This left 10,792 participants for the analyses; 43.7% were men and 56.3% were women. The median age was 48 years (range 15-92 years). The prevalence of hypertension was 30%. Tables 2 to 4 show the comparison of the prevalence of hypertension among participants with different characteristics. Factors such as older age, being male, employment, low education level, high BMI, central obesity, smoking cessation, abstinence, and having a history of diabetes mellitus, myocardial infarction, coronary heart disease, nephropathy, or stroke were associated with a higher prevalence of hypertension (all P < .05).
Multivariate analysis
Hypertension was significantly associated with several factors, including gender (Table 5). Coronary heart disease (OR = 1.830) was most strongly associated with hypertension, followed by age (OR = 1.684).
BNs model
A model of factors related to hypertension with 14 nodes and 20 directed edges was built using BNs, based on variables with significant differences in the univariate analysis (Fig. 3). Because this was a cross-sectional survey, directed edges represented probabilistic dependencies between nodes that were connected, rather than causal relationships between hypertension and related factors. Figure 3 shows that connections between hypertension and related factors were established by a complex network structure. Age, smoking, occupation, cultural level, BMI, central obesity, drinking alcohol, diabetes mellitus, myocardial infarction, coronary heart disease, nephropathy, and stroke were directly connected to hypertension. In addition, gender was indirectly linked to hypertension through drinking alcohol. Figure 3 also shows the interrelationships between the factors related to hypertension. BMI was related to central obesity, gender was associated with drinking alcohol, and age had a relationship to central obesity, coronary heart disease, diabetes mellitus, and cultural level.

[Table 3. Comparison of differences in prevalence among different lifestyles. Table 4. Comparison of differences in prevalence among different physical conditions.]
Reasoning model
We can also use BNs to predict the probability of suffering from hypertension by predicting the probability of unknown nodes. Figure 4 shows that if a person had central obesity, the probability of suffering from hypertension increased from the marginal value of 30.0% (Fig. 3) to 38.1%. If a person was obese (according to BMI), they had a 50.0% probability of having hypertension (Fig. 5); the probability increased to 51.8% when that person drank alcohol (Fig. 6). BNs can also be used to study the interrelationships between related factors. Figures 3 and 4 show that if a person had central obesity, the probability of having diabetes mellitus, stroke, nephropathy, and coronary heart disease increased to 6.25%, 1.35%, 1.14%, and 3.99%, respectively. The probability of having a BMI ≥24.0 kg/m² changed from 42.1% to 63.9%.
Model validation
Finally, we validated the BN model and evaluated it using the evaluation indicators. The Weka 3.8.0 results showed that the accuracy of the model was 72.36%; the TPR was 0.906, the FPR was 0.705, precision was 0.751, recall was 0.906, and the F-measure was 0.821. All of these values were greater than 0.7, which suggested that the model we established was accurate and effective.
Discussion and conclusions
The increasing prevalence of hypertension has become a worldwide public health problem. [23,24] This study showed the prevalence of hypertension in Shanxi Province, China was 30.0%, which was considerably higher than the nationally reported prevalence of hypertension as well as that reported in other provinces of China. [20,25,26] This suggests that Shanxi Province should direct more attention to the prevention and control of hypertension. Research indicates that preventing and controlling hypertension can play a major role in both primary and secondary prevention of CVD. [2,27] We found that the prevalence of hypertension varied by demographic characteristics and lifestyle. It is noteworthy that the prevalence of hypertension was unexpectedly high in participants who had quit smoking and drinking alcohol, which might be related to a conscious control of tobacco and alcohol consumption among these participants after learning that they had hypertension.

Our BN showed that factors directly associated with hypertension were age, smoking, occupation, cultural level, BMI, central obesity, drinking alcohol, diabetes mellitus, myocardial infarction, coronary heart disease, nephropathy, and stroke. Gender was indirectly linked to hypertension through drinking alcohol (Fig. 3), and there was a significant correlation between gender and drinking alcohol (Table 6). The BN also reflected correlations between the various related factors. Age was related to diabetes, coronary heart disease, central obesity, and education level (Fig. 3), and the correlation between age and these factors was significant (Tables 7-10). The relationship between BMI and central obesity (Fig. 3) was confirmed (Table 11). Logistic regression cannot show these relationships, as it is a model built on the condition that these factors are independent of each other.

Our BN model also predicted the probability of unknown nodes (hypertension) using information about known nodes (related factors). For example, if a person had central obesity, the probability of suffering from hypertension increased to 38.1% (Fig. 4). People that were obese (according to BMI cut-off values) had a 50.0% probability of having hypertension (Fig. 5), with this probability increasing to 51.8% if they drank alcohol (Fig. 6). BNs also show interrelationships between related factors; for example, if a person had obesity, the probability of having diabetes mellitus, stroke, nephropathy, coronary heart disease, and a BMI ≥24.0 kg/m² increased (Figs. 3 and 4). This type of model offers an intuitive format to caution people about the hazards of certain high-risk behaviors, and may help control the occurrence of certain high-risk behaviors to reduce the incidence of disease. It can also make up for the shortcomings of logistic regression prediction, which requires all variables to be known. Therefore, in practical applications, we can use BNs to establish models of disease with related factors to intuitively reflect the relationships between the disease and these factors.
Compared with traditional BN structure learning algorithms, the Tabu search algorithm has several advantages. It incorporates adaptive memory to move beyond a local search to find the global optimum, [28] and can avoid the repetition of solutions by maintaining a Tabu list and activate good solutions using aspiration criteria. [29] In addition, the solution of the Tabu search algorithm is not randomly generated, but rather is based on a mobile search, thereby increasing the probability of obtaining a better global optimal solution. [13] This study showed that a Tabu search algorithm-optimized BN can be used to explore factors related to disease.

[Table 9. Examination of the relationship between age and central obesity.]
Strengths and limitations of this study
The advantage of this study was that it used a BN to analyze factors related to hypertension, not only identifying relevant factors, but also exploring the relationships among these factors. However, this study also had some limitations. The study was cross-sectional, so the directed arcs in the constructed BN reflect correlations between nodes and not causality. In addition, participants were selected from specific cities in Shanxi Province, meaning there might be selection bias, which limits the generalizability of the findings to the wider population. There may also be recall and information bias, as participants might have exaggerated their exposure to some factors; the direction of this bias is positive. Confounding bias might also have occurred in the process of this investigation, but a BN can effectively control this type of bias.
Table 11. Examination of the relationship between BMI and central obesity. | 4,167.8 | 2019-06-01T00:00:00.000 | [
"Medicine",
"Computer Science"
] |
Design and In-orbit Demonstration of REGULUS, an Iodine electric propulsion system
REGULUS is an Iodine-based electric propulsion system. It has been designed and manufactured at the Italian company Technology for Propulsion and Innovation SpA (T4i). REGULUS integrates the Magnetically Enhanced Plasma Thruster (MEPT) and its subsystems, namely electronics, fluidic, and thermo-structural in a volume of 1.5 U. The mass envelope is 2.5 kg, including propellant. REGULUS targets CubeSat platforms larger than 6 U and CubeSat carriers. A thrust T = 0.60 mN and a specific impulse Isp = 600 s are achieved with an input power of P = 50 W; the nominal total impulse is Itot = 3000 Ns. REGULUS has been integrated on-board of the UniSat-7 satellite and its In-orbit Demonstration (IoD) is currently ongoing. The principal topics addressed in this work are: (i) design of REGULUS, (ii) comparison of the propulsive performance obtained operating the MEPT with different propellants, namely Xenon and Iodine, (iii) qualification and acceptance tests, (iv) plume analysis, (v) the IoD.
Introduction
CubeSats have become increasingly common in recent years given their dramatically reduced cost with respect to conventional satellites [1]. Combined with their remarkable versatility, this allows small companies, small countries, and research centers to enter the space market, paving the way to a completely new paradigm. Constellations of SmallSats (namely, satellites with mass < 500 kg) in Low Earth Orbit (LEO) are entering the market to address the increasingly demanding requirements imposed by new applications such as global internet coverage and the Internet-of-Things (IoT) [2,3]. Nonetheless, only an on-board propulsion system enables SmallSats to fully exploit their capabilities. In fact, upcoming mission scenarios are increasingly complex since orbit change and maintenance are often required [4,5]. In this frame, many research centers and companies (Enpulsion, Busek, Exotrail, ThrustMe, and AVS, just to name a few) are developing new propulsion systems for CubeSats and SmallSats. The number of missions taking full advantage of a propulsion unit is still moderate, although rapidly increasing [5]. This is associated with the inherent difficulty of integrating a propulsion system into a SmallSat. In fact, a space thruster is an intrinsically complex device, and strict volume, mass, power, and cost budgets overcomplicate its design.
In the STRaND-1 [6,7] mission (launched in 2013), a water-alcohol resistojet for attitude control and a Pulsed Plasma Thruster (PPT) for orbit change were combined in a 3 U CubeSat. BRICSat-P is a space mission in which a propulsion system was integrated on a 1.5 U CubeSat. It was launched in 2015, and four μCAT thrusters [8] were used for attitude control. In the SERPENS [9] mission (launched in 2015), a PPT was integrated on a 3 U CubeSat for drag compensation. In 2018, a Field Emission Electric Propulsion (FEEP) system was successfully tested in orbit for the first time [10]. The In-orbit Demonstration (IoD) of the IFM Nano Thruster developed by the Austrian company Enpulsion [11] consisted in changing the semi-major axis of a LEO orbit by several meters. Finally, in 2019 the I2T5 from ThrustMe [12] was the first Iodine-propelled cold gas thruster ever tested in orbit; it was integrated on a 6 U CubeSat.
In the last decade, particular effort has been put into studying and developing Iodine-based propulsion systems targeted at CubeSats and SmallSats [12][13][14][15][16]. Iodine is a particularly appealing propellant because of the following properties:
• The density of Iodine is three times higher than that of Xenon, as the former can be stored in the solid state [17]; this enables a higher total impulse for the same volume of propellant.
• Iodine can be stored at moderate temperature (e.g., ambient temperature) and pressure (e.g., atmospheric pressure), so no cryogenic or strict thermal control is required [13].
• Iodine has a low procurement cost: 90% less than Xenon [14].
• Iodine suffers no transportation issues, as the tank is not pressurized and no dedicated thermal control is needed when the thruster is off. Therefore the entire subsystem can be shipped to the launch platform ready for use.
Moreover, Iodine and Xenon propellants guarantee comparable propulsive performance; this has been proven at least for gridded-ion and Hall-effect thrusters [14,18]. For these reasons, the use of Iodine might be the breakthrough for the widespread diffusion of propulsion units for CubeSats and SmallSats. Nonetheless, the use of Iodine brings some challenges: (i) to avoid the condensation of the propellant along the fluidic line, the temperature of the system must be maintained at about 100 °C during operation [13]; (ii) Iodine is chemically aggressive, so materials must be selected to avoid corrosion [19]; (iii) the interactions between the Iodine plasma plume and the surfaces of the spacecraft are little known, so the risk associated with the degradation of solar arrays and optics should be carefully managed. Three planned missions that rely on an Iodine-based propulsion system are the Lunar IceCube [20], iSAT [13], and Robusta-3A [12]. The Lunar IceCube is a 6 U spacecraft targeted at the observation of the Lunar surface; its launch is scheduled for late 2021, and its propulsion system consists of an Iodine-based 60-W ion thruster. The iSAT mission aims to propel a SmallSat demonstrator with a Hall-effect thruster fed with Iodine. The mission was initially planned for launch in 2017 but is temporarily suspended because the propulsion unit needs further development. Finally, the I2T5 cold gas thruster will be used to propel the Robusta-3A CubeSat (a 3 U technology demonstrator), whose launch is planned for late 2021.
The REGULUS propulsion unit (see Fig. 1) has been conceived by the Italian company Technology for Propulsion and Innovation (T4i) for the SmallSat market [21,22]. The core of REGULUS is the Magnetically Enhanced Plasma Thruster (MEPT), an electric system propelled by Iodine. The propulsion unit encompasses the MEPT along with electronics, fluidic line, and thermo-structural subsystems. Its volume envelope is 1.5 U, and the total mass is 2.5 kg (including propellant). REGULUS targets CubeSats larger than 6 U and CubeSat carriers. It relies on standard interfaces to ease the integration on the spacecraft, and no space-grade qualified components are used, to reduce the recurring costs. REGULUS provides a thrust T of 0.6 mN and a specific impulse I_sp of 600 s for an input power P = 50 W. The nominal total impulse is I_tot = 3000 Ns and it can be increased to I_tot = 11,000 Ns by enlarging the volume of the propulsive unit up to 2 U. The IoD of REGULUS is currently ongoing. The propulsive unit has been qualified, integrated on UniSat-7 (a CubeSat carrier operated by the Italian GAUSS company) [23], and launched in March 2021 on-board of a Soyuz-2 vehicle. The objectives of the UniSat-7 mission are (i) to inject several CubeSats into a 600 km height Sun Synchronous Orbit (SSO), and (ii) to act as a technology demonstrator for testing specific payloads for future GAUSS missions (e.g., REGULUS). At the same time, REGULUS' primary objective is to demonstrate its capability to enhance UniSat-7 mission scenarios, while enabling maneuvers such as semi-major axis variation.
The rest of this work is organized as follows. The performance of the MEPT operated with both Xenon and Iodine propellants is discussed in Sect. 2. In Sect. 3 the subsystems of REGULUS (namely, thermo-structural, electronics and fluidics) are described and the qualification tests are illustrated. Sect. 4 is dedicated to the numerical simulation of the plasma plume to verify that the flux of charged particles impinging on the spacecraft is limited. The IoD is discussed in Sect. 5 and, finally, conclusions are drawn in Sect. 6.
Magnetically enhanced plasma thruster
The MEPT is a cathode-less plasma thruster [24,25] specifically targeted at CubeSats (see Fig. 2). This technology is extremely appealing for space propulsion, in particular for CubeSats, because of its simple design and geometry and, in turn, reduced cost. The main characteristics of a cathode-less plasma thruster are: (i) a very simple architecture, which helps keep at bay the cost of these systems; (ii) no electrodes in contact with the plasma, either for generation or acceleration; (iii) the possibility of operating the thruster with different propellants without a drastic redesign; (iv) the absence of a neutralizer, provided that the ejected plasma is current-free and quasi-neutral. The main components of a cathode-less plasma thruster are: (i) a dielectric tube inside which the neutral gas propellant is ionized; (ii) a Radio Frequency (RF) antenna working in the MHz range that provides the power to produce and heat up the plasma [26]; (iii) permanent magnets that generate the magneto-static field required to enhance the plasma confinement [27,28] and to improve the thrust via the magnetic nozzle effect [29][30][31].
The propulsive performance of the MEPT has been evaluated at the high vacuum facility of the University of Padova. To this end, a vacuum chamber of cylindrical shape (diameter 0.6 m, length 2 m) has been used [32]. A Spin HFPA-300 linear amplifier (1.8-30 MHz, power up to 300 W) driven by an HP 8648B signal generator provided the RF power to the MEPT. The latter was connected to the amplifier via a 50 Ω coaxial cable. RF probes for vector voltage and current measurement have been used to characterize the electrical power coupled to the antenna [21]. The MEPT has been propelled both with Xenon and Iodine. The mass flow rate of Xenon has been controlled via an MKS 1179B regulator connected to a pressurized reservoir. Instead, the Iodine feeding line consisted of a heated tank and a manifold comprising valves and mass flow control orifices [33]. The thrust generated by the MEPT has been measured with a counterbalanced pendulum specifically designed for RF thrusters of small-to-medium size [34]. The confidence interval for each measurement is: (i) ± 15-20% for thrust, (ii) ± 10% for the power provided to the MEPT, (iii) a few percentage points for the Xenon mass flow rate, and (iv) ± 10-15% for the Iodine mass flow rate. These values depend on the accuracy of each instrument and on the measurement methods. Specifically, the uncertainty on the thrust is mainly associated with calibration errors (this procedure is performed with known masses) and with the correction of the thermal drifts induced on the balance (see [34] for more details). Errors in phasing the voltage and current probes result in ± 10% uncertainty for the electrical power. The uncertainty on the Iodine mass flow rate is due to the pressure-based control strategy adopted for this test (see [33] for more details). Finally, the error bars associated with indirect measures, such as the thruster efficiency and the thrust-to-power ratio (see Fig. 3), are computed with the uncertainty propagation law [34].
In Fig. 3, the thrust efficiency η and the thrust-to-power ratio T/P are depicted as functions of the input power P, namely the power absorbed by the thruster (thanks to the good impedance matching, the reflected power is lower than 5%). In particular, η = T I_sp g_0 / (2P), where T is the thrust, I_sp the specific impulse, and g_0 the standard gravity. The working frequency of the RF antenna was fixed at 2 MHz and P has been varied in the range 15-60 W. The vacuum chamber pressure was maintained at 10^-3 Pa, and the MEPT was operated with a 0.10 mg/s mass flow rate of both Xenon and Iodine propellants. The efficiency η is a linear function of P; the maximum value achieved is about 5% with Iodine and 6% with Xenon. Those values are in line with other cathode-less thrusters not subject to strict power and volume budgets [35]. The thrust-to-power ratio T/P is almost independent of P. Iodine presents a poorer performance compared to Xenon, T/P being 20% lower when operating the MEPT with the former propellant than with the latter. This difference seems to be due to the molecular form of Iodine (I_2), since part of the available power is lost in dissociation, along with the excitation of vibrational and rotational modes, instead of ionization.

[Fig. 2 caption: the thruster being tested with Iodine propellant; input power P = 50 W, mass flow rate 0.10 mg/s.]
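Plugging the nominal operating point quoted in the abstract into this definition gives an efficiency of a few percent, consistent with the values in Fig. 3; a sketch (note that the thrust-to-power ratio at this single point is not necessarily the maximum measured across the test campaign):

```python
G0 = 9.80665  # standard gravity, m/s^2

def thrust_efficiency(T, isp, P):
    """eta = T * Isp * g0 / (2 P); equivalent to T**2 / (2 * mdot * P)
    since Isp * g0 = T / mdot."""
    return T * isp * G0 / (2.0 * P)

# Nominal REGULUS operating point from the text:
T, isp, P = 0.6e-3, 600.0, 50.0       # N, s, W
print(thrust_efficiency(T, isp, P))   # ~0.035, i.e. a few percent
print(T / P * 1e6, "mN/kW")           # thrust-to-power at this point
```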
The maximum T and I_sp measured with each propellant are reported in Table 1. Further results from the testing campaign can be found in [36], in particular as far as the discussions on T and I_sp are concerned.
In conclusion, Iodine is demonstrated to be a valid candidate to propel CubeSat missions, given the advantages offered at system level (e.g., the capability of being stored in the solid state) and a propulsive performance that is only 20% lower, in terms of T/P, than with Xenon.
Thermo-structural
The thermo-structural subsystem has three main functions: (i) to provide a sufficiently stiff structural frame for the MEPT, the fluidic line, and the electronics, (ii) to dissipate the heat produced within the REGULUS unit so that each component is maintained within the correct temperature range, and (iii) to provide a thermo-mechanical interface with the satellite. A more detailed list of requirements is reported in Table 2.
One of the key aspects to satisfying the strict mass and volume budgets (2.5 kg and 1.5 U, respectively) is the extensive adoption of the additive manufacturing technique. Specifically, this production process allows: (i) manufacturing shapes that would be impossible with traditional approaches and (ii) reducing bolted junctions, increasing the reliability of the system. In addition, a mix of materials has been used to develop a sufficiently stiff system capable of withstanding the harsh vibrational environment during launch operations. Materials such as Scalmalloy and Ti-6Al-4V [40] guarantee high strength, stiffness and lightness. The structural design of REGULUS has been driven by semi-analytical calculations and Finite Element Method (FEM) analyses, performed with the Ansys software [41]. At the same time, sinusoidal and random vibration tests have been performed to verify the design and to qualify the propulsion unit. The measurement campaign has been performed at the facilities of the University of Padova (see Fig. 4) assuming the spectra of the Soyuz launcher. The first natural frequency of the REGULUS unit is about 250 Hz and a good margin of safety against damage has been found. Moreover, the accordance between the results of the FEM simulations and the real behavior of the propulsion unit is good (e.g., numerical and experimental estimations of the first natural frequency differ by 10 Hz). After the vibration tests, each component of the REGULUS unit has been inspected and subsequent functional tests have been performed: each electronic board, valve, heater, temperature and pressure sensor has been checked. Finally, the nominal operation of the entire propulsion unit has been verified. Since no structural or functional failure has been detected, the mechanical design was considered successfully qualified.

[Fig. 3 caption: comparison of the propulsive performance when the MEPT is operated with Xenon and Iodine propellants; thruster efficiency η (above) and thrust-to-power ratio T/P (below) as functions of the input power to the thruster P; the mass flow rate is 0.10 mg/s.]
To satisfy the thermal requirements of each component, two thermal paths have been designed, namely the high temperature and the low temperature one (see Fig. 5). Dedicated low-emittance coatings have been used to insulate the two paths. The MEPT and the radiator constitute the high temperature path; the former is the principal source of thermal power and the latter is used to dissipate the heat into space. The electronics and the fluidic line are included in the low temperature path, which is interfaced with the satellite to dissipate the heat produced by the subsystems of REGULUS. The thermal design has been driven by semi-analytical models and FEM analyses. In particular, two scenarios have been considered: (i) the Worst Hot Case (WHC), where the satellite is subject to the solar flux and its temperature is + 70 °C, and (ii) the Worst Cold Case (WCC), namely no solar flux and the satellite at − 25 °C. The assumed amount of power to be dissipated is 60 W, due to the heat produced during operations, plus about 3 W due to the solar flux. The results of the thermal analysis for the WHC are shown in Table 3 in terms of the temperatures of the main critical components. Specifically, data retrieved with a FEM analysis and a Lumped Parameter Model (LPM) created with the Octave software [42] are reported. The latter is intended to cross-validate the FEM outputs and to obtain results in a shorter time. The agreement between the two models is satisfactory, the differences registered being in the order of 5-10 °C. Finally, thermo-vacuum tests have been performed at the facilities of the University of Padova to qualify the propulsion unit and to validate the numerical models. REGULUS has been tested inside a dedicated Thermo-VAcuum Chamber (TVAC) at a pressure of 8 × 10⁻¹ Pa. The temperature has been cycled from − 40 to 80 °C four times (see Fig. 6). After the thermo-vacuum cycles, the subsystems have been inspected and functional tests have been performed. No failure has been detected, and so the thermal design was considered successfully qualified.
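For flavor, here is a minimal two-node lumped-parameter sketch of the kind of model described above; all capacitances, conductances, and radiator properties below are invented illustrative values, not the REGULUS model.

```python
import numpy as np

# Node 0 = thruster/radiator (high-T path); node 1 = electronics/fluidics
# (low-T path coupled to the satellite). All values are assumptions.
C = np.array([150.0, 300.0])   # heat capacities, J/K
G01 = 0.05                     # conductance between the two paths, W/K
G_sat = 0.8                    # low-T path to satellite interface, W/K
Q = np.array([60.0, 3.0])      # dissipated power and solar load, W
eps_A = 0.05                   # radiator emissivity * area, m^2
T_sat = 70.0 + 273.15          # Worst Hot Case boundary temperature, K

T = np.array([T_sat, T_sat])
dt = 1.0                       # s
for _ in range(50_000):        # explicit Euler march to steady state
    q01 = G01 * (T[0] - T[1])                  # inter-path heat leak
    q_rad = 5.67e-8 * eps_A * T[0] ** 4        # radiator to deep space
    dT0 = (Q[0] - q01 - q_rad) / C[0]
    dT1 = (Q[1] + q01 - G_sat * (T[1] - T_sat)) / C[1]
    T = T + dt * np.array([dT0, dT1])
print(T - 273.15)              # steady-state temperatures, deg C
```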
In conclusion, an interface module has been created that includes all mechanical, thermal and electrical connections. The module is compatible with standard CubeSat structures [38,39].
Electronics
The electronics have been designed to meet the budget requirements typical of the CubeSat application while maintaining competitive standards of reliability. To this end, Commercial Off-The-Shelf (COTS) components have been largely used. This was not detrimental in terms of reliability provided that, according to the European Cooperation for Space Standardization (ECSS), the selection of COTS was driven by a dedicated risk analysis. First, an average mission duration of 3-5 years and a LEO altitude of about 600 km have been assumed to evaluate the expected radiation environment. Second, calculations with the SPENVIS tool [43] provided a requirement in terms of Total Ionization Dose (TID) that was not highly demanding for the mission at hand. Third, COTS compliant with the expected radiation environment have been selected to design the electronics subsystem. In particular, the approach of careful COTS design proposed by Sinclair [44], the ECSS-E-ST-10-12C, and the ECSS-Q-ST-60C have been referenced for this analysis. It is worth specifying that the electronics are fully compliant with the mission scenario of the IoD. Moreover, the selection of COTS can be tailored to different customers' needs by means of a dedicated risk analysis. The electronics rely on a flexible architecture and have been designed for operating REGULUS with an input power from 30 up to 60 W, although the nominal value is 50 W. Moreover, a simplified laboratory version of the electronics subsystem has been used in a completely different application field, namely plasma antennas [45][46][47][48]. The schematic of the electronics of REGULUS is depicted in Fig. 7.
Interface
The electronics are interfaced with the satellite by means of four types of connectors that differ from one another in terms of size and number of pins (to avoid mistakes):
• an "insert before flight" type connector is included for safety reasons;
• two separated communication lines (referred to as primary and secondary) that rely on two redundant CANbus and one I2C [49] protocols are used for redundancy;
• one power connector is used for feeding the electronics with the 12 V DC power from the bus.
The connectors are interfaced to the structure by means of two Printed Circuit Boards (PCBs). Finally, a JTAG connector [50] is easily reachable, also after integration, for updating the software.
Software
The control software is based on FreeRTOS [51] and manages the following specific tasks:
• controlling the thruster in terms of thermal regulation, mass flow, and electrical power;
• managing the housekeeping and the thrust operational states;
• communicating with the satellite through CANbus or I2C.
Architecture
The electronics subsystem consists of four boards disposed around the MEPT, thermally and electrically insulated from it.
The Power Control Unit (PCU) controls and monitors the operations of REGULUS and provides the interface with the spacecraft via CANbus and/or I2C. More in detail, the PCU:
• is the interface with the spacecraft for the data link, receiving telecommands and sending telemetries;
• processes the data for the thermal control;
• runs algorithms for thruster ignition and shutdown;
• monitors the status of the thruster and manages the Failure Detection, Isolation, and Recovery (FDIR).
The PCU is reprogrammable in flight thanks to the bootstrap functionality and an external flash memory in which the software sent via data link can be stored.
The Power Processing Unit (PPU)-Power provides DC power to the sensors, valves and heaters involved in the control of the fluidic line, and feeds DC power to the PPU-RF.
The PPU-RF is the DC/AC converter that provides the RF power to supply the MEPT. It is the main power segment of the electronics subsystem.
The Conditioning Unit provides the signal conditioning for the diagnostics of REGULUS.
Testing
Successful functional tests of the electronics have been performed in vacuum both at ambient temperature (20/30 °C) and in a variable temperature environment (− 20/50 °C). Moreover, Electro-Magnetic Compatibility (EMC) tests, compliant with ECSS, have been performed integrating REGULUS on the CubeSat Test Platform (CTP) of the Polytechnic University of Torino [52]. The RF emissions and the generated electro-magnetic fields produced noise values up to 15 dBm without affecting the functionalities of either the CTP or REGULUS. Moreover, the noise generated by the propulsion system on the power line is negligible (< 1 dBm).
Fluidic line
The main requirement of the fluidic subsystem of REGULUS (see Fig. 8) is to deliver a 0.10 mg/s mass flow rate of Iodine during thruster operation. To obtain a stable propulsive performance, the tolerance on the delivered mass flow rate is ± 5% throughout the overall mission duration. The fluidic subsystem is composed of a tank where the propellant is stored in the solid state, a manifold that acts as flow regulator, and an injector that interfaces the subsystem with the MEPT. The additive manufacturing technique has been extensively adopted to achieve compliance with the strict volume and mass budgets. A careful selection of the materials for the components wetted by Iodine has been carried out in order to avoid chemical corrosion: the hardware is made from Nickel superalloys such as Inconel and Hastelloy, whereas sealing [53] is adopted at the interfaces. The dimension of the tank can be varied, according to the mission needs, independently from the rest of the propulsive unit. When the tank is heated, the solid Iodine sublimates and enters the manifold. The latter is the technological core of the fluidic subsystem and maintains the nominal mass flow rate. It comprises a pair of valves along with pressure and temperature transducers controlled in closed loop. The injector has two main functions: (i) it tunes the thermal resistance between the hot thruster and the fluidic line, and (ii) it delivers the gaseous Iodine to the MEPT where it is turned into plasma. Two thin-foil heaters are included in the fluidic line to control the temperature of the subsystem via a closed-loop strategy. Specifically, the thermal control allows to: (i) ensure the sublimation of the solid Iodine in the tank, (ii) perform a coarse control of the mass flow rate, (iii) avoid the obstruction of the fluidic line due to the recondensation of the Iodine, (iv) avoid the liquid state of the Iodine (temperature < 115 °C everywhere), and (v) avoid the over-heating of the entire system with the consequent failure of the electronics.
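The actual control law is not detailed in the text; as an illustration, a simple bang-bang controller with hysteresis of the kind commonly used for such heaters might look as follows (the setpoint and band are assumptions, chosen to stay above the iodine recondensation temperature and below its ~115 °C melting point):

```python
def heater_command(T_meas, setpoint=95.0, hysteresis=2.0, heater_on=False):
    """Bang-bang thermal control with hysteresis (illustrative only).
    T_meas, setpoint, hysteresis in deg C; returns the new heater state."""
    if T_meas < setpoint - hysteresis:
        return True          # line too cold: switch the heater on
    if T_meas > setpoint + hysteresis:
        return False         # line too hot: switch the heater off
    return heater_on         # inside the dead band: keep previous state
```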
Finally, the fluidic line has been tested in order to verify its capability to provide a 0.10 mg/s Iodine mass flow rate with accuracy of ± 5% [33]. Vacuum tests have been performed showing that the line was compliant with nominal specifications.
Plume analysis
The effects of the iodine plasma plume on the surfaces of the spacecraft are not well known, thus it is worth verifying that these interactions are minimal. In the following, a preliminary analysis performed with the numerical solver Spacecraft Plasma Interaction System (SPIS) [54] is presented.
The plume generated by the MEPT has been simulated with a 3-Dimensional (3D) Particle-In-Cell (PIC) approach in order to carefully account for the non-Maxwellian dynamics of the electrons [55]. The computational domain consists of a cylinder 16 cm in diameter and 16 cm in length (see Fig. 9). According to a previous work, the domain is sufficiently large to grasp the principal phenomena governing the plasma expansion (e.g., demagnetization of both ions and electrons) [56]. The spacecraft (a 6 U CubeSat) is an equipotential surface only partially included within the computational domain. This assumption aims at reducing the intense computational cost of PIC simulations, and its rationale is discussed in more detail in the following. The species simulated are electrons (e−) and ions (I+) produced by the thruster. Both neutral particles (e.g., I2 and I) and other ionic species (e.g., I2+ and I−) have been neglected, since they are expected to be present only in minor percentages [57]. The plume is assumed non-collisional and the ambient plasma has been neglected. The latter assumption seems reasonable provided that the plume produced by REGULUS has density and temperature orders of magnitude higher than the ambient plasma [58]. Particles are injected into the computational domain from the surface labeled "Thruster outlet" in Fig. 9. The following properties are assumed [30]:
• ion temperature T_i = 0.3 eV, electron temperature T_e = 3 eV;
• ion and electron injection flux Γ = 5 × 10^20 m^-2 s^-1;
• ion speed V_0 = 1370 m/s.
The value of Γ derives from the assumption of 100% propellant utilization (0.10 mg/s mass flow rate), and V_0 is the Bohm speed (see the plausibility check below). Further details on the boundary conditions and on the methodology to compute the equilibrium potentials self-consistently are discussed in [56]. Finally, to reduce the simulation time, the vacuum permittivity has been reduced by a factor f = 400 with respect to the real value, in accordance with the similarity laws reported in [59].
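As a plausibility check of the injection values, the flux and the Bohm speed can be recomputed from the quoted mass flow rate; the outlet diameter below is an assumption, chosen only to show that it reproduces the quoted Γ.

```python
import numpy as np

E = 1.602e-19            # elementary charge, C
M_I = 126.9 * 1.661e-27  # atomic iodine mass, kg

mdot = 0.10e-6           # propellant mass flow rate, kg/s
ion_rate = mdot / M_I    # ions/s at 100% propellant utilization (~4.7e17)

# Injection flux Gamma = ion_rate / outlet area; d_outlet is an assumed
# value (not from the paper) that reproduces Gamma = 5e20 m^-2 s^-1.
d_outlet = 0.035         # m
A = np.pi * (d_outlet / 2) ** 2
print(ion_rate / A)      # ~5e20 m^-2 s^-1

# Bohm speed of the injected ions, v_B = sqrt(e * Te / m_i).
Te = 3.0                 # eV
print(np.sqrt(E * Te / M_I))  # ~1.5 km/s, same order as V_0 = 1370 m/s
```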
In Fig. 9, maps of plasma density n, ion speed V, and potential ϕ are depicted as functions of the position (x-z coordinates). At the location of the thruster outlet, the plasma density peaks at n ≈ 10^17 m^-3. This value decreases by roughly two orders of magnitude at the external boundary. The decrement is far more pronounced at the surface of the spacecraft (more than four orders of magnitude). The magnitude of V increases in the plume downstream of the thruster outlet, as expected in a magnetic nozzle [60]. The maximum value is V ≈ 8000 m/s (almost 5 times higher than V_0) at the external boundary. Interestingly, the trajectories of the ions seem to be only very mildly affected by the magneto-static field (negligible gyration motion), thus they can be considered demagnetized over the whole domain. The plume is acceptably collimated: 95% of the exhausted particle flux is enclosed in an aperture with a half-angle of 30°. Less than 0.1% of the particles impinge on the satellite. Specifically, 90% of the ions colliding against the spacecraft are collected on the surface of the heater. The plasma potential at the thruster outlet is ϕ ≈ 20 V. Considering that at an infinite distance from the thruster outlet ϕ is assumed to be 0 V, the potential drop across the plume is comparable to the one expected across a plasma sheath [60]. The equilibrium potential of the satellite is − 0.5 V. In Fig. 10, the trajectories of a selection of electrons that collide against the spacecraft are depicted. In contrast to the ions, electrons describe a more disordered motion due to their higher temperature. It is, however, possible to clearly distinguish the Larmor rotations described by the particles that are frozen to the magneto-static field lines. Notably, a few centimeters downstream of the thruster outlet, electrons detach from the field lines and their motion becomes completely chaotic. This happens roughly where the plasma potential is 2-3 V. This result reinforces the assumption of considering only a portion of the spacecraft in the simulation domain; in fact, a very reduced number of charged particles are expected to stay frozen to the field lines, which would result in intense and localized interactions with the surfaces of the spacecraft placed outside the computational domain. Finally, it is interesting to notice that the kinetic energy of the electrons (E_k) rapidly decreases downstream of the thruster outlet, in agreement with theoretical models of a magnetic nozzle [35,60].
The main result of this preliminary analysis is that the interactions between the plasma plume of REGULUS and the satellite are comparable with other electric thrusters (e.g., Hall-effect systems [61]). As a result a major degradation of the surfaces of the spacecraft is not expected provided that the region near the thruster outlet (i.e., the radiator) is realized with materials compatible with iodine. Nonetheless, more detailed investigations are required to account for plasma collisions within the plume, the ambient plasma, and the different materials that constitute a realistic spacecraft. Moreover, an enlarged computational domain including the overall spacecraft will be considered to cross validate the present results.
In-orbit Demonstration
REGULUS has been integrated on board the UniSat-7 satellite developed by the Italian company GAUSS [23]. Before integration, sinusoidal and random vibration tests along with thermo-vacuum tests (temperature range −20/50 °C) have been performed for acceptance. The launch was successfully accomplished in March 2021 on board a Soyuz-2 vehicle. UniSat-7 is a CubeSat carrier and a technology demonstrator for payloads such as the avionic system DeCAS [62] and REGULUS. The target orbit is circular and Sun-synchronous, with an altitude of 600 km (see Table 4 for further details). One of the main objectives of the current mission is to test the capability of REGULUS to improve the mobility of UniSat-7. The injection of CubeSats into different orbits (e.g., at various altitudes) will be simulated, and the decommissioning of UniSat-7 will be verified as the final task of REGULUS. Currently, internal checks to understand the interactions of the subsystems with the space environment and the satellite are ongoing. For example, the capability to heat up the propellant and the tank in different conditions (e.g., starting from different initial temperatures) is under analysis. After this, a series of maneuvers will be performed with incremental thrust duration, from minutes up to hours (potentially, REGULUS can also be operated during eclipse thanks to the battery pack of UniSat-7). Finally, a longer firing will allow verification of the performance of the propulsive unit and, in turn, of the capability of REGULUS to decommission UniSat-7. The performance will be estimated from telemetry data; specifically, the thrust can be derived from the altitude change (expected to be of several kilometers) produced by the continuous firing of REGULUS.
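To illustrate how the thrust can be backed out of the altitude telemetry, the sketch below integrates the standard low-thrust circular-spiral relation da/dt = (2T/m)·sqrt(a³/μ) for a continuously firing tangential thruster. The thrust level and spacecraft mass are hypothetical placeholders, not mission values.

```python
# Estimate altitude gain from continuous tangential low thrust
# (circular-spiral approximation). T and m are assumed placeholders.
import math

MU  = 3.986e14            # Earth's gravitational parameter [m^3/s^2]
R_E = 6.371e6             # Earth radius [m]

def altitude_gain(thrust_N, mass_kg, alt0_m, burn_s, dt=10.0):
    """Integrate da/dt = (2*T/m)*sqrt(a^3/mu) over the burn duration."""
    a = R_E + alt0_m
    t = 0.0
    while t < burn_s:
        a += (2 * thrust_N / mass_kg) * math.sqrt(a**3 / MU) * dt
        t += dt
    return a - (R_E + alt0_m)

# e.g. ~0.6 mN on a ~30 kg small satellite at 600 km, firing for 10 hours
dh = altitude_gain(thrust_N=0.6e-3, mass_kg=30.0, alt0_m=600e3, burn_s=10 * 3600)
print(f"altitude gain ~ {dh/1e3:.1f} km")   # ~1.3 km under these assumptions
```

Inverting the same relation, the altitude-change rate measured from telemetry yields the average delivered thrust.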
Conclusions
This work has been devoted to the description of the design, the performance, and the IoD of REGULUS, an iodine-propelled electric propulsion unit conceived at the Italian company T4i. REGULUS targets CubeSats larger than 6U and CubeSat carriers. It has a mass of 2.5 kg and an envelope of 1.5 U. The MEPT (an RF cathode-less thruster) is the core of REGULUS, which also includes the thermo-structural, electronic, and fluidic subsystems. The propulsive performance has been evaluated while feeding the MEPT with xenon and iodine propellants. Comparable values of thruster efficiency η ≈ 5% and thrust-to-power ratio T/P ≈ 20 mN/kW have been measured for an input power P = 50 W. These values are in line with other cathode-less thrusters not subject to strict volume and power budgets [35]. Successful sinusoidal and random vibration tests, along with thermo-vacuum qualification and acceptance tests (temperature range −40/80 °C), have been performed before integrating REGULUS on board the UniSat-7 satellite for its IoD. Moreover, functional tests have been performed on the complete system both at ambient temperature (20/30 °C) and in a variable-temperature environment (−20/50 °C). The interactions between the iodine plasma plume and the surfaces of the spacecraft have been preliminarily evaluated by means of the SPIS solver and appear in line with those of other electric thrusters (e.g., Hall-effect systems [61]). Finally, REGULUS was launched in March 2021 on board a Soyuz-2 vehicle for its IoD, which is currently ongoing.
Funding Open access funding provided by Università degli Studi di Padova within the CRUI-CARE Agreement.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Design and Development of Aerial Robotic Systems for Sampling Operations in Industrial Environment
This chapter describes the development of an autonomous fluid sampling system for outdoor facilities and the localization solution to be used. The automated sampling system is based on collaborative robotics, with a team formed by a UAV and a UGV platform travelling through a plant to collect water samples. The architecture of the system is described, as well as the hardware present in the UAV and the different software frameworks used. A visual simultaneous localization and mapping (SLAM) technique is proposed to deal with the localization problem, based on the authors' previous works and including several innovations: a new method to initialize the scale using unreliable global positioning system (GPS) measurements, the integration of attitude and heading reference system (AHRS) measurements into the recursive state estimation, and a new technique to track features during the delayed feature initialization process. These procedures greatly enhance the robustness and usability of the SLAM technique, as they remove the requirement of assisted scale initialization and reduce the computational effort to initialize features. To conclude, results from experiments performed with simulated data and with real data captured with a prototype UAV are presented and discussed.
Introduction
The development of aerial robots has become one of the most active fields of research in the last decade. Innovations in multiple fields, such as lithium polymer batteries, microelectromechanical sensors, more powerful propellers, and the availability of new materials and prototyping technologies, have opened the field to researchers and institutions who used to be denied access given the costs, both economic and in specialized personnel. The new-found popularity of this research field has led to the proliferation of advancements in several areas [1] that used to ignore the possibilities of aerial robots given the limited capacities they presented.
One of the environments where aerial robots are gaining a foothold for the first time is industry. Given the level of accountability, certification, and responsibility required in industry, the field has always been reluctant to introduce experimental state-of-the-art technologies. But thanks to wider experimentation with aerial robots, their resilience and performance robustness have been improving, making them an option for solving several industrial problems. Currently, these problems are focused on logistical aspects and related operations, like the distribution of goods and their placement in otherwise hard-to-access points. An example of this kind of application is the surveying and monitoring of fluids in installations with multiple basins and tanks, for example a wastewater processing plant.
Basin sampling operations generally require multiple fluid samples at several points with a given periodicity. This makes the task cumbersome, repetitive, and, depending on the features of the environment and other factors, potentially dangerous. As such, automating the task would provide great benefits, reducing the efforts and risks taken by human personnel and opening options in terms of survey scheduling.
One of the challenges that any autonomous Unmanned Aerial Vehicle (UAV) has to face is that of accurately estimating its pose with respect to the relevant navigational frames. The estimation methodology discussed here was formulated for estimating the state of the aerial vehicle. In this case, the state is composed of the variables defining the location and attitude as well as their first derivatives; the visual features seen by the camera are also included in the system state. The orientation, on the other hand, can be estimated in a robust manner by most flight management units (FMUs), with the output of the attitude and heading reference system (AHRS) frequently used as feedback to the control system for stabilization.
In order to account for the uncertainties associated with the estimation provided by the attitude and heading reference systems (AHRSs), the orientation is included in the state vector and explicitly fused into the system. The problem of position estimation, by contrast, cannot be solved by GPS alone for applications that require performing precise maneuvers, even when the global positioning system (GPS) signal is available. Therefore, additional sensory information, namely monocular vision, is integrated into the system in order to improve its accuracy.
The use of a monocular camera as the unique sensory input of a simultaneous localization and mapping (SLAM) system comes with a difficulty: the robot trajectory, as well as the feature map, can only be estimated up to an unknown metric scale. This problem has been pointed out since early approaches like [2]. If the metric scale is to be recovered, it is necessary to incorporate some source of metric information into the system. In this case, the GPS and the monocular vision can operate in a complementary manner: the noisy GPS data are used to incorporate metric information into the system in periods when they are available, while the monocular vision is used for refining the estimates when the GPS is available or for performing purely visual-based navigation in periods when the GPS is unavailable.
In this work, we present an automated system designed with the goal of automating sampling tasks in an open-air plant and propose a solution for the localization problem in industrial environments when GPS data are unreliable. The final system will use two autonomous vehicles: a robotic ground platform and a UAV, which collaborate to collect batches of samples from several tanks. The specifications and design of the system are described, focusing on the architecture and the UAV. The next section describes the vision-based solution proposed to deal with the localization-for-navigation problem, commenting on several contributions with respect to a classical visual SLAM approach. Results discussing the performance and accuracy of the localization technique are then presented, based both on simulations and on real data captured with the UAV described in Section 2. Finally, the conclusions discuss the next steps in the testing and development of the system and in the refinement of the localization technique.
System architecture
The architecture proposed to deal with the fluid sampling task aims to maximize the capability to reach the desired points of operation and measurement with accuracy, and to minimize the risks associated with the process. The risks for human operators are removed or minimized, as they can perform their tasks without exposing themselves to the outdoor industrial environment. To achieve this, the system presents two different robotic platforms: a quadcopter UAV acts as the sample collector, picking fluid samples from the tanks, and an Unmanned Ground Vehicle (UGV) platform acts as the collector carrier, transporting both the collector and the samples. As the risks associated with the operation of the sample collector UAV are mainly related to flight operations, the UAV generally travels safely landed on the collector carrier, where it can be automatically serviced with replacement sample containers or battery charges.
Sampling system architecture and communications
The designed architecture of the system and its expected operation process can be observed in Figure 1. The architecture has been divided into several blocks so that it can fit into a classical deployment scheme in an industrial production environment. The analytics technicians at the laboratory can use the scheduling and control module interface to order the collection of a batch of samples. This is called a collection order, detailing the number of samples, which basins they must come from, and any required sampling patterns or preferences. The process can be set to start at a scheduled time, and stops may be enforced, e.g., samples in a specific tank cannot be taken before a set hour.
The collection order is formed by a list of GPS coordinates (with optional parameters, like times at which to perform operations), which is processed in the central server of the system. This produces a mission path, which includes a route for the sample carrier with one or more stops. The mission path also includes the sampling flights that the sample collector UAV has to perform at each sample carrier stop. The data of each flight, called a collection mission, are transmitted by the sample carrier to the sample collector and contain a simple trajectory that the sample collector has to approximately follow, with height indications to avoid obstacles, until the sampling point is reached.
The different paths and trajectories are generated by the path planner module, which works on a two-dimensional (2D) grid map model of the outdoor area. When data are available, the grid is textured with the map obtained from the proposed visual SLAM technique. The grid model allows introducing additional data, like occupancy, possible obstacles, or even scheduled accessibility of an area; e.g., a path normally used by workers during certain hours can be set to be avoided at those times. An energy-minimization planning technique, essentially a simplified approach to Ref. [3], is used to obtain the trajectories, considering the following criteria (a minimal sketch of how they might be combined appears after the list):
1. All samples in the same basin/tank should be captured consecutively;
2. Minimize the expected flight effort for the UAV (heuristically assuming that the trajectory is a polyline of vertical and horizontal movements);
3. Minimize penalties for restricted-access areas;
4. Minimize the trajectory of the UGV.
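The planner itself is not specified beyond Ref. [3], so the following is only a minimal sketch of how the four criteria could be combined into a single scalar cost for ranking candidate plans; all weights and data structures are our assumptions.

```python
# Illustrative weighted cost for ranking candidate mission paths.
# Weights and data structures are assumptions; the actual planner is a
# simplified variant of an energy-minimization technique [3].
from dataclasses import dataclass

@dataclass
class CandidatePath:
    uav_polyline_m: float     # total UAV flight length (vert. + horiz. moves)
    ugv_length_m: float       # total UGV trajectory length
    restricted_cells: int     # grid cells crossed inside restricted areas
    basin_switches: int       # times the plan leaves a basin and returns to it

def path_cost(p: CandidatePath,
              w_uav=1.0, w_ugv=0.3, w_restricted=50.0, w_switch=200.0):
    """Lower is better; large penalties push plans toward sampling each
    basin consecutively and away from restricted-access areas."""
    return (w_uav * p.uav_polyline_m
            + w_ugv * p.ugv_length_m
            + w_restricted * p.restricted_cells
            + w_switch * p.basin_switches)

plans = [CandidatePath(420, 800, 0, 0), CandidatePath(310, 650, 2, 1)]
print(min(plans, key=path_cost))   # pick the cheapest candidate plan
```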
The communications with the autonomous platforms of the system are performed through 4G in order to allow video streaming during the prototyping and testing phases. The data produced by the different elements of the system are stored and logged to allow posterior analysis. The rest of the communications can be performed through any usual channel in industrial environments, be it a local network, the Internet, a VPN…, as the segmented architecture allows discrete deployment. The communications between the sample carrier and the sample collector are performed through ZigBee, although both platforms also present Wi-Fi modules to ease development and maintenance tasks.
A subset of the communication protocols is considered priority communications. This includes the supervision and surveying messages in normal planned operation, and those signals and procedures that can affect or override normal operation, e.g., emergency recall or landing. For the UAV, the recall protocol uses a prioritized list of possible fallback points that the UAV tries to reach. If the sample carrier is present, the UAV tries to land on it, searching for its landing area through fiducial markers. If that is not possible, the UAV tries another fallback point in the list, until it lands or the battery falls below a certain threshold; in that case, it simply lands on the first clear patch of ground available, though this will generally require dropping the sampling device.
Sample collector aerial robot architecture
The UAV built as a sample collector is a 0.96 m diameter quadcopter deploying four 16" propellers with T-Motor MN4014 actuators. The custom-built frame supports the propeller blocks at a 5° angle and is made of aluminum and carbon fiber. A PIXHAWK kit is used as the flight management unit (FMU), with custom electronics to support two 6S 8000 mAh batteries. An Odroid U4 single-board computer is used to perform the high-level tasks and deal with all noncritical processes. Beyond the sensors present in the FMU (which include an AHRS and GPS), the UAV carries a front-facing USB camera (640×480 @ 30 fps) for monitoring purposes, a downward-facing optical flow sensor with ultrasound, and a set of four ultrasound sensors deployed in a planar configuration to detect obstacles. The PX4 flight stack [4] is used to manage flying and navigation, while a Robot Operating System (ROS) [5] distribution for ARM architectures runs on the Odroid single-board computer (SBC), supporting MAVLink for communications. The communication modules and sensors are described at the hardware level in Figure 2.
The hardware weighs approximately 4,250 g, while the propellers provide a theoretical maximum lift of 13,900 g. The sample collection device, including the container, is being designed with a weight below 300 g, meaning that the UAV, with the sample capture system and up to 1000 ml of water-like fluid, would keep the weight/lift ratio around 0.4.
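These figures can be checked with a one-line budget computation, reproduced below using only the numbers stated in the text.

```python
# Weight/lift budget check using the figures stated in the text.
frame_and_avionics_g = 4250    # hardware weight
sampler_g            = 300     # sample collection device (design target)
payload_fluid_g      = 1000    # up to 1000 ml of water-like fluid

max_lift_g = 13900             # theoretical maximum lift of the propellers

ratio = (frame_and_avionics_g + sampler_g + payload_fluid_g) / max_lift_g
print(f"weight/lift ratio = {ratio:.2f}")   # ~0.40, matching the text
```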
A simplified diagram of the operation process is shown in Figure 3. The communications are divided into two blocks: those connecting the UAV with the main network for supervision and emergency control, including video streaming with MAVLink [6] over 4G, and those that connect it to the sample carrier during routine operation. This routine operation includes receiving the collection mission detailing the trajectory to the sampling point and the signal to start the process. The action planner module in the UAV supervises the navigation tasks, making sure that the trajectory waypoints are reached, and manages the water sample collector control module.
The localization and positioning of the UAV are solved through a combination of GPS and visual odometry estimation (see Section 4). To approach the water surface, the downward-looking ultrasonic height sensor is used, as vision-based approaches are unreliable on reflective surfaces. To ease the landing problem, the sample carrier presents fiducial markers used to estimate the pose of the drone. This allows landing while inserting the sample container into a socket, similarly to an assisted peg-in-hole operation. After the UAV has landed, the sample collector control module releases the sample container, and the sample carrier replaces it with a clean one. Once the container is replaced, the carrier sends a signal to the UAV so that the new container is properly locked, while the filled one is stored.
As autonomous navigation depends mainly on localization and positioning, it is largely based on GPS and visual odometry. Combined with the ultrasound sensors used to measure height and avoid possible obstacles, this allows navigating the environment even if the GPS signal is unreliable. Though there are more accurate sensor alternatives for solving the SLAM problem, namely RGB-D cameras or Light Detection and Ranging (LiDAR)-based approaches, they present several limitations that make them unsuitable for outdoor industrial environments, in addition to the penalty imposed by their economic cost, especially for models with industry-grade performance and reliability. In the case of RGB-D, most of the sensors found on the market are unreliable outdoors as they use IR or similar light frequencies. The subset of time-of-flight RGB-D sensors presents the same limitations as LiDAR sensors: they are prone to spurious measurements in environments where the air is not clear (presence of dust, pollen and other particles, or vapors from tanks); they present large latencies, making them unfit for real-time operation of UAVs; and they are generally considered not robust enough for industrial operation. In the case of LiDAR, and of RGB-D if the SLAM/localization technique focuses on depth measurements, there is an additional issue: these approaches normally rely on computationally demanding optimization techniques to achieve accurate results, and the computing power available on a UAV is limited.
UAV localization and visual odometry estimation
The drone platform is considered to move freely in any direction in R^3 × SO(3), as shown in Figure 4.
As the proposed system is mainly intended for local autonomous vehicle navigation, i.e., localizing the sample collector during the different collection missions, the local tangent frame is used as the navigation reference frame. The initial position of the sample collector, landed on the sample carrier, defines the origin of the navigation coordinate frame, and the axes are oriented following the NED (North, East, Down) navigation convention. The magnitudes expressed in the navigation, sample collector drone (robot), and camera frames are denoted, respectively, by the superscripts N, R, and C. All coordinate systems are right-handed. The proposed method mainly takes into account the AHRS and the downward monocular camera, though it also uses data from the GPS during an initialization step.
The monocular camera is assumed to follow the central-projection camera model, with the image plane in front of the origin, thus forming a noninverted image. The camera frame C is considered right handed with the z-axis pointing toward the field of view. It is considered that the pixel coordinates are denoted with the [u, v] pair convention and follow the classical direct and inverse observation models [7].
The attitude and heading reference system (AHRS) is a device used for estimating the vehicle orientation while it is maneuvering. The most common sensors integrated in AHRS devices are gyroscopes, accelerometers, and magnetometers. Advances in micro-electro-mechanical systems (MEMS) and microcontrollers have contributed to the development of inexpensive and robust AHRS devices (e.g., [8][9][10]).
In the case of the deployed FMU, the accuracy and reliability provided by its AHRS are enough to directly fuse its data into the estimation system. Thus, AHRS measurements are assumed to be available at high rates (50-200 Hz) and modeled according to

y_a^N = a^N + v_a    (1)

where a^N = [ϕ_v, θ_v, ψ_v]^T, with ϕ_v, θ_v, and ψ_v being the Euler angles denoting, respectively, the roll, pitch, and yaw of the vehicle, and v_a being Gaussian white noise.
The global positioning system (GPS) is a satellite-based navigation system that provides 3D position information for objects on or near the Earth's surface, studied in several works [11,12].
The user-equivalent range error (UERE) is a measurement of the cumulative error in GPS position measurements caused by multiple sources of error. These error sources can be modeled as a combination of random noise and slowly varying biases [11]. According to a study [13], the UERE is around 4.0 m (σ); in this case, 0.4 m (σ) corresponds to random noise.
In this work, it is assumed that position measurements y_r can be obtained from the GPS unit, at least at the beginning of the trajectory, and that they are modeled by

y_r = r^N + v_r    (2)

where v_r is Gaussian white noise and r^N is the position of the vehicle. As GPS measurements are usually given in geodetic coordinates, Eq. (2) assumes that they have been converted to the corresponding local tangent frame for navigation, accounting for the transformation between the robot collector frame and the antenna.
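To make the error model concrete, the following sketch simulates GPS fixes as the true position plus a slowly varying bias and white noise, using the UERE magnitudes quoted above; the first-order Gauss-Markov bias and its correlation time are our assumptions, not the exact model of Ref. [23].

```python
# Sketch of the GPS observation model y_r = r^N + v_r, with the UERE split
# into a slowly varying bias (Gauss-Markov, assumed) and white noise.
import numpy as np

rng = np.random.default_rng(0)

SIGMA_WHITE = 0.4          # random-noise component of the UERE [m]
SIGMA_BIAS  = 4.0          # slowly varying bias magnitude [m]
TAU_BIAS    = 300.0        # bias correlation time [s] -- assumption

def simulate_gps(r_true, dt=1.0):
    """Yield noisy GPS fixes for a sequence of true positions r_true (Nx3)."""
    bias = rng.normal(0.0, SIGMA_BIAS, 3)
    alpha = np.exp(-dt / TAU_BIAS)
    for r in r_true:
        # first-order Gauss-Markov bias update, then additive white noise
        bias = alpha * bias + np.sqrt(1 - alpha**2) * rng.normal(0.0, SIGMA_BIAS, 3)
        yield r + bias + rng.normal(0.0, SIGMA_WHITE, 3)

truth = np.zeros((5, 3))                 # hovering at the origin
for fix in simulate_gps(truth):
    print(np.round(fix, 2))
```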
Problem formulation
The objective of the estimation method is to compute the system state x:

x = [x_v, y_1^N, y_2^N, …, y_n^N]^T    (3)

The system state can be divided into two parts: x_v, corresponding to the sample collector UAV state, and the map features, where y_i^N defines the position of the i-th map feature. The UAV state x_v is composed, in turn, of

x_v = [q^NR, ω^R, r^N, v^N]^T    (4)

where q^NR represents the orientation of the vehicle with respect to the local world (navigation) frame by a unit quaternion, ω^R = [ω_x, ω_y, ω_z] represents the angular velocity of the UAV expressed in the robot frame, r^N = [p_x, p_y, p_z] represents the position of the UAV expressed in the navigation frame, and v^N = [v_x, v_y, v_z] denotes the linear velocities. The location of a feature y_i^N (noted y_i for simplicity) is parameterized in its Euclidean form:

y_i = [p_xi, p_yi, p_zi]^T    (5)

The proposed system follows the classical loop of prediction-update steps of the extended Kalman filter (EKF) in its direct configuration, working at the frequency of the AHRS. Thus, both the vehicle state and the feature estimates are propagated by the filter (see Figure 5).
At the start of a prediction-update loop, the collector UAV state estimate x_v takes a step forward through the following unconstrained constant-acceleration (discrete) model:

q_{k+1}^NR = q_k^NR ⊗ q((ω^R + Ω^R)Δt)
ω_{k+1}^R = ω_k^R + Ω^R
r_{k+1}^N = r_k^N + (v^N + V^N)Δt
v_{k+1}^N = v_k^N + V^N    (6)

In this model, a closed-form solution of q̇ = ½ W(ω) q is used to integrate the current angular velocity over the quaternion q^NR. At every step, it is assumed that there are unknown linear and angular accelerations, modeled as zero-mean Gaussian processes of known covariance σ_v and σ_ω, producing impulses of linear and angular velocity V^N and Ω^R over the interval Δt. It is assumed that the map features y_i remain static (rigid scene assumption). Then, the state covariance matrix P takes a step forward by

P_{k+1} = ∇F_x P_k ∇F_x^T + ∇F_u Q ∇F_u^T    (7)

where Q is the noise covariance matrix of the process (a diagonal matrix with the position and orientation uncertainties), ∇F_x is the Jacobian of the nonlinear prediction model (Eq. (6)), and ∇F_u is the Jacobian of the model with respect to the input process represented by the unknown linear and angular velocity impulses.
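For illustration, a compact sketch of this prediction step is given below. It propagates the vehicle part of the state with a first-order quaternion integration and a numerical Jacobian, which is simpler than the closed-form solution used by the authors; it illustrates the structure of Eqs. (6)-(7), not the actual implementation.

```python
# Minimal EKF prediction step for the vehicle state x_v = [q, w, r, v]
# (unit quaternion, angular rate, position, velocity). Simplified sketch.
import numpy as np

def quat_mult(a, b):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def f(x, dt):
    """Unconstrained constant-velocity motion model."""
    q, w, r, v = x[0:4], x[4:7], x[7:10], x[10:13]
    dq = np.concatenate([[1.0], 0.5 * w * dt])      # small-angle rotation
    q_new = quat_mult(q, dq)
    q_new /= np.linalg.norm(q_new)                  # keep unit norm
    return np.concatenate([q_new, w, r + v * dt, v])

def predict(x, P, Q, dt, eps=1e-6):
    x_new = f(x, dt)
    n = len(x)
    Fx = np.zeros((n, n))                           # numerical Jacobian of f
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        Fx[:, i] = (f(x + dx, dt) - x_new) / eps
    # Q stands in for the mapped input noise term of Eq. (7)
    return x_new, Fx @ P @ Fx.T + Q

x0 = np.concatenate([[1, 0, 0, 0], np.zeros(9)])
x1, P1 = predict(x0, np.eye(13) * 1e-4, np.eye(13) * 1e-6, dt=0.02)
print(np.round(x1, 4))
```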
Visual features: detection, tracking, and initialization
In order to retrieve the depth of a visual feature, the monocular camera equipped on the UAV must observe it at least two times while the UAV moves along the flight trajectory. The parallax angle is defined by the two projections of those measurements, and the convergence of the feature depth depends on the evolution of the parallax angle. In this work, a method based on a stochastic triangulation technique is used for computing an initial depth hypothesis for each new feature prior to its initialization into the map. The initialization method is based on the authors' previous work [7].
Detection stage
The visual-based navigation method requires a minimum number of visual features y_i observed at each frame. If the number of visual features drops below a threshold, new visual features are initialized into the stochastic map. The Shi-Tomasi corner detector [14] is used for detecting new visual points in the image, which are taken as candidates to be initialized into the map as new features.
When a feature is detected for the first time at frame k, a candidate feature c_l is stored, where z_uv = [u, v] is the location of the visual feature in pixel coordinates in image frame k, and y_ci = [t_c0^N, θ_0, φ_0]^T complies with the inverse observation model y_ci = h(x, z_uv). Thus, y_ci models a 3D ray originating at the optical center of the camera when the feature is first observed and pointing to infinity, with azimuth θ_0 and elevation φ_0, according to previous work [7]. At the same time, P_yci stores, as a 5 × 5 covariance matrix, the uncertainties of y_ci. P_yci is computed as P_yci = ∇Y_ci P ∇Y_ci^T, where P is the system covariance matrix and ∇Y_ci is the Jacobian of the observation model h(x, z_uv) with respect to the state and the coordinates u, v. A square patch around [u, v], generally with 11-pixel sides, is also stored to keep the appearance of the landmark.
Tracking of candidate features
The active search technique [15] can be used for tracking visual features that form part of the system state. In this case, the information included in the system state and the system covariance matrix is used for defining the image regions where each visual feature is searched. For candidate features, on the other hand, there is not enough information for applying the active search technique. A possibility for tracking the candidate features is to use a general-purpose visual tracking approach [14]. This kind of method uses only the visual input and does not need information about the system dynamics; however, its computational cost is commonly high. In order to improve the computational performance, a different technique is proposed: the idea is to define very thin elliptical search regions in the image, computed from the epipolar constraints.
After the frame k in which the candidate feature was first detected, it is tracked at subsequent frames k + 1, k + 2, …, k + n. In this case, the candidate feature is predicted to lie inside the elliptical region S_c (see Figure 6). The elliptical region S_c is centered at the point [u, v] and aligned (along its major axis) with the epipolar line defined by the image points e_1 and e_2.
The epipole is computed by projecting the ray t_c0^N stored in c_l onto the current image plane using the camera projective model. As no depth information is available, the semi-line stored in c_l is considered to have origin t_c0^N and length d = 1, producing the points e_1 and e_2.
The orientation of the ellipse S_c is determined by α_c = atan2(e_y, e_x), where e_y and e_x are, respectively, the y and x coordinates of e = e_2 − e_1. The size of the ellipse S_c is determined by its major and minor semi-axes, respectively a and b. The ellipse S_c represents a probability region where the candidate point must lie in the current frame. The proposed tracking method is intended to be used during an initial short period of time, applying cross-correlation operators. During this period, information is gathered in order to compute a depth hypothesis for each candidate point, prior to its initialization as a new map feature.
On the other hand, during this stage the available depth information about the candidate features is not well conditioned. For this reason, it is not easy to determine an adaptive and optimal size for the search region, and the parameter a is therefore chosen empirically according to the kind of application. In the experiments presented in this work, good results were obtained choosing a = 20 pixels.
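A minimal sketch of the resulting search region is given below; the membership test is the standard rotated-ellipse inequality, with the minor semi-axis b chosen arbitrarily since the text leaves it unspecified.

```python
# Sketch: epipolar search ellipse for a candidate feature.
# e1, e2 are the image projections of the stored ray at depths 0 and d = 1.
import numpy as np

def search_ellipse(uv, e1, e2, a=20.0, b=4.0):
    """Ellipse centered at the predicted pixel uv, aligned with the
    epipolar direction e = e2 - e1; b is an assumed minor semi-axis."""
    e = np.asarray(e2, float) - np.asarray(e1, float)
    angle = np.arctan2(e[1], e[0])        # alpha_c = atan2(e_y, e_x)
    return np.asarray(uv, float), angle, a, b

def inside(pt, center, angle, a, b):
    """True if pixel pt lies inside the rotated ellipse."""
    d = np.asarray(pt, float) - center
    c, s = np.cos(angle), np.sin(angle)
    u = c * d[0] + s * d[1]               # coordinate along the major axis
    v = -s * d[0] + c * d[1]              # coordinate along the minor axis
    return (u / a) ** 2 + (v / b) ** 2 <= 1.0

ell = search_ellipse(uv=[115, 124], e1=[100, 120], e2=[130, 128])
print(inside([118, 125], *ell))           # candidate match inside the region?
```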
Depth estimation
Each time a new image location z_uv = [u, v] is obtained for a given candidate c_l, a depth hypothesis is computed through stochastic triangulation, as described in previous work [16].
In the authors' previous work [17], it was found that the estimates of the feature depth can be improved by passing the depth hypotheses through a low-pass filter. Thus, only when the parallax α_i is greater than a specific threshold (α_i > α_min) is a new feature y_new = [p_xi, p_yi, p_zi]^T = h(c_l, d) added to the system state vector x of Eq. (3).
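The stochastic triangulation itself is detailed in [16]; the sketch below only illustrates the underlying deterministic geometry, i.e., how a depth hypothesis follows from the baseline between two camera positions and the measured parallax (the poses and rays are made-up values).

```python
# Geometric core of depth-by-triangulation for a tracked candidate.
import numpy as np

def depth_from_parallax(t1, t2, ray1, ray2):
    """Depth along ray1 (from camera position t1), given unit bearing rays
    ray1, ray2 observed from positions t1 and t2 (law of sines)."""
    base = np.asarray(t2, float) - np.asarray(t1, float)
    b = np.linalg.norm(base)
    r1, r2 = (np.asarray(r, float) / np.linalg.norm(r) for r in (ray1, ray2))
    parallax = np.arccos(np.clip(np.dot(r1, r2), -1.0, 1.0))
    beta = np.arccos(np.clip(np.dot(-base / b, r2), -1.0, 1.0))  # angle at t2
    return b * np.sin(beta) / np.sin(parallax)

# Feature at (0, 0, 10) seen from two poses 1 m apart along x:
d = depth_from_parallax([0, 0, 0], [1, 0, 0],
                        ray1=[0, 0, 1], ray2=[-1, 0, 10])
print(f"depth ~ {d:.2f} m")   # ~10.00 m (true depth along ray1 is 10 m)
```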
For the proposed method to operate correctly, a minimum number of map features must be maintained inside the camera field of view. For example, in Ref. [21] it is stated that a minimum of three features is required for the operation of monocular SLAM; in practice, of course, it is better to operate with a higher minimum number of features. This requires continuously initializing new features into the map.
The time it takes a candidate point to become a map feature depends directly on the evolution of the parallax angle. In turn, the evolution of the parallax depends on the depth of the feature and the movement of the camera; for example, the parallax of a near point can increase very quickly due to a small movement of the camera. In practice, it is assumed that the dynamics of the aerial vehicle allows tracking a sufficient number of visual features for initialization and measurement purposes. Experimentally, at least for typical non-aggressive flight maneuvers, no major problems have been found in meeting this requirement.
Prediction-update loop and map management
Once initialized, the visual features y_i are tracked by means of an active search technique, using zero-mean normalized cross-correlation to match the patch that stores each feature's appearance against the pixels of a search area. This search area is defined using the innovation covariance. The general methodology for the visual update step can be found in previous works [7,16,18], where the details of the mathematical representation and implementation are discussed. In this work, the loop closing problem and the application of SLAM to large-scale environments are not addressed. It is important to note, though, that SLAM methods that perform well locally can be extended to large-scale problems using different approaches, such as global mapping [19] or global optimization [20].
On the other hand, when an attitude measurement y_a^N is available, the system state is updated. Since most low-cost AHRS devices provide their output in Euler angles, the following measurement prediction model h_a = h(x_v) is used:

[ϕ_v, θ_v, ψ_v]^T = [ atan2(2(q_3 q_4 − q_1 q_2), 1 − 2(q_2² − q_3²)),
                      asin(−2(q_1 q_3 − q_2 q_4)),
                      atan2(2(q_3 q_4 − q_1 q_4), 1 − 2(q_3² − q_4²)) ]^T

During the initialization period, position measurements y_r are incorporated into the system using the simple measurement model h_r = h(x_v) = r^N. The regular Kalman update equations [7,16,18] are used to update attitude and position whenever required, using the corresponding Jacobian ∇H and measurement noise covariance matrix R related to each observation model.
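For reference, a sketch of the quaternion-to-Euler conversion in the widely used aerospace (ZYX) convention is given below; the exact sign pattern depends on the quaternion ordering and frame conventions, so it may differ from the expressions above.

```python
# Standard aerospace (ZYX) quaternion-to-Euler conversion, q = [qw, qx, qy, qz].
# Sign conventions vary between papers; treat this as a generic reference.
import math

def quat_to_euler(qw, qx, qy, qz):
    """Return (roll, pitch, yaw) in radians."""
    roll  = math.atan2(2 * (qw * qx + qy * qz), 1 - 2 * (qx**2 + qy**2))
    pitch = math.asin(max(-1.0, min(1.0, 2 * (qw * qy - qz * qx))))
    yaw   = math.atan2(2 * (qw * qz + qx * qy), 1 - 2 * (qy**2 + qz**2))
    return roll, pitch, yaw

# 90 degrees of yaw:
print([round(math.degrees(a), 1) for a in quat_to_euler(0.7071, 0, 0, 0.7071)])
```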
In this work, the GPS signal is used to incorporate metric information into the system in order to retrieve the metric scale of the estimates. For this reason, it is assumed that the GPS signal is available at least during some period at the beginning of the operation of the system. For proper operation, this period of GPS availability should be long enough to allow the depth of some initial features to converge. After this initialization period, the system is capable of operating using only visual information.
In Ref. [21], it is shown that only three landmarks are required for setting the metric scale of the estimates. Nevertheless, in practice there is often a chance that a visual feature is lost during the tracking process. It is therefore convenient to choose a threshold n ≥ 3 of converged features before ending the initialization process. In this work, good experimental results were obtained with n = 5.
One approach for testing feature convergence is the Kullback distance [22]. Nevertheless, the computational cost of this test is quite high. For this reason, the following criterion is proposed instead:

max(eig(P_yi)) < ||y_i − r^N|| / 100    (11)

where P_yi is the covariance matrix of the feature y_i, extracted from the system covariance matrix P. If the largest eigenvalue of P_yi is smaller than one hundredth of the distance between the feature and the camera, it is assumed that the uncertainty of the visual feature is small enough to consider it an initial landmark.
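Eq. (11) translates almost directly into code; the sketch below assumes that the 3 × 3 covariance block of the feature has already been extracted from the full system covariance P.

```python
# Feature-convergence test of Eq. (11): the largest eigenvalue of the
# feature covariance must be below 1/100 of the feature-camera distance.
import numpy as np

def feature_converged(P_yi, y_i, r_N):
    """P_yi: 3x3 covariance block of feature y_i; r_N: vehicle position."""
    largest = np.max(np.linalg.eigvalsh(P_yi))    # symmetric -> eigvalsh
    return largest < np.linalg.norm(np.asarray(y_i) - np.asarray(r_N)) / 100.0

P_yi = np.diag([0.01, 0.02, 0.03])
print(feature_converged(P_yi, y_i=[2.0, 1.0, 8.0], r_N=[0.0, 0.0, 0.0]))  # True
```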
Results
In this section, the results of the proposed visual odometry approach for UAV localization in industrial environments are discussed, covering both synthetic data from simulations and experiments with real data. The experiments were performed in order to validate the performance, accuracy, and viability of the proposed localization method in a real outdoor scenario. Previous works by the authors [7,18] and other researchers [2] have proven that similar solutions can reach real-time performance, so the focus here is on developing robust solutions that can be optimized further down the line. The proposed method was implemented in MATLAB.
Experiments with simulations
The values of the parameters used to simulate the proposed method were taken from the following sources: (i) the parameters of the AHRS were taken from Ref. [8]; (ii) the model used for emulating the GPS error behavior was taken from Ref. [23]; and (iii) the parameters of the monocular camera are the same as those of the real camera used in the real-data experiments.
In order to validate the performance of the proposal, two different scenarios were simulated (see Figure 7): 1. The environment of the aerial robot is composed of landmarks uniformly distributed over the ground. The quadrotor performs a circular flight trajectory at a constant altitude after taking off.
2. The environment of the aerial robot is composed of landmarks randomly distributed over the ground. The quadrotor performs an eight-shaped flight trajectory at a constant altitude after taking off.
In the simulations, the data association problem is not considered; that is, it is assumed that visual features can be detected and tracked without errors. It is also assumed that the aerial robot can be controlled perfectly. In order to obtain a statistical interpretation of the simulation results, the mean absolute error (MAE) was computed over 20 Monte Carlo runs. The MAE was calculated for the following cases: 1. Trajectory estimated using only filtered data from the GPS.
2. Trajectory estimated using visual information in combination with GPS data during the whole simulation.
3. Trajectory estimated using visual information and GPS data, with GPS used only during the initialization period.
The results presented in Figure 8 show the typical SLAM behavior whenever the aerial robot flies near its initial position and the visual-based scheme is used: when the initial visual features are recognized again, the error is minimized. On the other hand, when GPS data are fused into the system during the whole trajectory, the influence of the GPS error can be appreciated when the aerial robot flies near its initial position; in this latter case, the error is reduced less. These results suggest that for some scenarios it is better to navigate relying only on visual information.
It is also important to note that, for trajectories where the aerial robot moves far away from the initial position, the use of GPS data can be very useful because it imposes an upper bound on the error drift inherent to purely vision-based navigation. Errors related to the slowly time-varying bias of the GPS can be modeled in Eq. (2) by considering a larger measurement noise covariance matrix. However, in experiments it was found that if this matrix is increased too much, the convergence of initial visual features can be affected. Future work could include an adaptive approach for fusing GPS data or, for instance, including the GPS bias in the system state so that it can be estimated.
Experiments with real data
The custom-built sample collector UAV was used to perform experiments with real data. In the experiments, the quadrotor was manually radio-controlled to capture data. The data captured from the GPS, the AHRS, and the camera frames were synchronized and stored in a ROS dataset. The frames, with a resolution of 320 × 240 pixels in grayscale, were captured at 26 fps. The flights of the quadrotor were conducted in industry-like facilities.
The surface of the field is mainly flat and composed of asphalt, grass, and dirt, but the experimental environment also included some small structures and other man-made elements. On average, eight to nine GPS satellites were visible at the same time.
In order to evaluate the estimates obtained with the proposed method, the flight trajectory of the quadrotor was determined using an independent approach. In this case, the position of the camera was computed, at each frame, with respect to a known reference composed of four marks placed on the floor forming a square of known dimensions. The perspective-from-four-points (P4P) technique described in [24] was used for this purpose.
As explained earlier, an initialization period was used for incorporating GPS readings in order to set the metric scale of the estimates. After the initialization period, the estimation of the trajectory of the aerial robot was carried out using only visual information.
The same methodology used with simulated data was employed with real data. Therefore, the same experimental variants were tested: (i) GPS, (ii) GPS + camera, (iii) camera (GPS only during the initialization period). All the results were obtained by averaging 10 experimental outcomes. Figure 9 shows the evolution of the estimated flight trajectory over time for each experimental variant, and Table 1 summarizes these experimental results. In order to compute the position error, the trajectory computed with the P4P technique was used as ground truth.
It is worth noting that the analysis of the above results confirms conclusions similar to those obtained with the simulations: the exclusive use of GPS can be unreliable for determining the vehicle position in the case of fine maneuvers, and in flight trajectories near the initial reference the error can be slightly lower when relying only on visual information.
Regarding the application of the proposed method in a real-time context, the number of features maintained in the system state is considerably below an upper bound that should allow real-time performance, for instance by implementing the algorithm in C or C++. In particular, since early works in monocular SLAM such as Davison [25], the feasibility of real-time operation of EKF-based methods has been shown for maps composed of up to 100 features.

Figure 9. Flight trajectory estimated with: (i) P4P visual reference, (ii) camera, (iii) camera + GPS, and (iv) GPS. The position is expressed in north coordinates (upper plot), east coordinates (middle plot), and altitude coordinates (lower plot). In every case, an initialization period of 5 s with GPS availability was considered.
Conclusions
An industrial system to automate the water sampling process in the outdoor basins of a wastewater treatment plant has been proposed, together with novel research to solve the localization-for-navigation problem under unreliable GPS. The architecture of the whole system has been described, hardware-level specifications have been presented for those elements whose design is complete, including the sample collector UAV, and the proposed localization technique has been described and validated with experiments performed on simulated and real data. The localization technique presented can be described as a vision-based navigation and mapping system applied to a UAV.
The proposed estimation scheme is similar to a pure monocular SLAM system, where a single camera is used for concurrently estimating both the position of the camera and a map of visual features. In this case, a monocular camera mounted on the aerial robot and pointing to the ground is used as the main sensory source. In addition, the proposed scheme uses other sensors that are commonly available in this kind of robotic platform to solve the typical technical difficulties of purely monocular systems.
One of the most important challenges regarding the use of monocular vision in SLAM is the difficulty of estimating the depth of visual features. In this case, a method based on a stochastic triangulation technique is proposed for this purpose.
NON-FINANCIAL INDICATORS FOR EVALUATION OF BUSINESS ACTIVITY
It is well known that financial reports are the main source of information about company performance, and based on them the business activities and financial position of a company are evaluated. However, under the conditions of contemporary economic development, company management cannot rely only on a system of financial indicators in order to manage the company successfully. The main indicators of business activity are not found only in financial data. Such indicators as quality, clients' satisfaction, innovations and market share quite often reveal the economic position of a company and its opportunities for growth better than the financial indicators of company performance reflected in reports. The balanced scorecard, which includes the financial perspective, the clients' perspective, the internal processes perspective, and the innovations and learning perspective, can be considered the origin of non-financial indicators. The strategy maps developed from the balanced scorecard were designed to trace and develop cause-effect links between long-term aims and implemented short-term activities. Initially, the growth and learning perspective of the balanced scorecard included employees' skills, the opportunities of the information system, and such behavioral factors as motivation and empowerment; later, however, some changes were introduced, and the growth and learning perspective is now characterized as intangible assets divided into three groups: human capital, information capital and organizational capital. Studies prove that the results of non-financial activity positively influence the results of financial activity, and an increasing number of company managers transform the evaluation systems of their company performance to trace non-financial evaluation indicators and thus use new strategies in competition. Despite the fact that the number of publications on this topic in the scientific literature is increasing, researchers have not come to an agreement about the content and structure of non-financial indicators or about the methods for their measurement and evaluation. The article reveals various theoretical approaches to understanding the nature, classification and measurement of non-financial indicators, which in the scientific literature are considered indicators for the measurement and evaluation of intellectual capital. The aim of the paper is to provide recommendations on the development of a non-financial indicators system and its practical implementation in Latvian companies on the basis of the study, analysis and generalization of scientific economic publications in the field of company performance. The research is based on the analysis and evaluation of special literature and scientific publications on the non-financial indicators of business activity and their role in the evaluation of company performance. General logical analysis and synthesis methods as well as content analysis and monographic analysis are used in the research. The article focuses on the analysis and evaluation of previous research on the non-financial indicators of business activity and on the overview and systematization of non-financial indicators used in the evaluation of business activity, including the performance of small enterprises.
Introduction
In order to ensure long-term activity, increase competitiveness and attract investments, it has become topical for many companies in Latvia to create a system for evaluating non-financial performance indicators.
Traditional methods for evaluating business activity are based on the calculation and evaluation of financial indicators and do not identify all the factors influencing company development. The analysis of company performance based solely on financial indicators provides an incomplete evaluation, as the internal, usually immeasurable factors describing the company's internal potential and future perspectives are not taken into consideration.
Despite the great number of theoretical publications on the problems of evaluating company performance on the basis of systems of financial and non-financial indicators (Hafeez, 2002; Philips, Louvieris, 2005; Craig, Moores, 2005; Lau, Sholihin, 2005; Fernandes et al., 2006; Prieto, Revila, 2006; Wier et al., 2007; Chen et al., 2009; Cardinaels, van Veen-Dirks, 2010), there are problems in the practical application of such indicator systems, as there is no single approach to the identification, classification, measurement and evaluation of non-financial performance indicators.
The topic of the paper has not been sufficiently researched in Latvia; thus, the authors of the article have analyzed and evaluated studies done abroad on the non-financial indicators of business activity and have assessed the classifications of non-financial indicators.
The aim of the paper is to make recommendations on the development of a non-financial indicators system and its practical implementation for Latvian companies on the basis of the study, analysis and generalization of scientific economic publications in the field of company performance.
The present article has the following tasks:
• To investigate the essence of non-financial indicators and show their role in evaluating company performance;
• To study and systematize possible approaches to designing the content and structure of non-financial indicators as an essential part of a company's intellectual capital;
• To analyze the advantages and disadvantages of non-financial indicators;
• To develop recommendations for the practical implementation of non-financial indicators to evaluate a company's business activity.
The research is based on the analysis and evaluation of special literature and scientific publications about non-financial indicators of business activity and their role in the evaluation of company performance. General logical analysis and synthesis methods, content analysis and monographic analysis are used in the research.

Non-financial indicators and balanced scorecard system

Kaplan and Norton (Harvard Business Review, 2008) have created the balanced scorecard (BSC), which includes financial and non-financial indicators providing information about the results of completed activities. Financial indicators are supplemented by three kinds of performance evaluation related to clients' satisfaction, internal processes in a company, and its ability to learn and develop. The BSC lets managers look at their business from four important perspectives and answer four important questions:
• How do clients see us (clients' perspective)?
• Where do we need to develop (internal perspective)?
• Can we improve and create value (innovations and learning perspective)?
• How do our shareholders see us (financial perspective)?
The BSC method is widely used to evaluate company performance around the world. For example, options have been analyzed for using the BSC to evaluate the performance of small and medium-sized enterprises in England (Sousa et al., 2006), to organize the performance of manufacturing companies (Fernandes et al., 2006), to evaluate the performance of hotel industry companies (Philips, Louvieris, 2005), and to support strategic planning in family companies (Craig, Moores, 2005).
Taking into consideration the aim of the research, the authors of the article will analyze the non-financial indicators included in the balanced scorecard system in their further research. Kaplan and Norton (Harvard Business Review, 2008) assert that non-financial performance measures are better indicators of future financial performance than lagged financial measures. In Wiersma's opinion, non-financial indicators have more information content than financial indicators. This claim is based on the observation that financial indicators only partially reflect the impact of current managerial actions: they omit the impact of actions taken today, as it takes some time before these actions accumulate in improved financial performance. Non-financial indicators are expected to record the impact of these actions earlier, because they more directly track the impact of the actions taken (Wiersma, 2006). According to the researcher Hoque, the measurement of non-financial indicators includes three aspects: (1) customer; (2) internal business processes; and (3) learning and growth. The customer perspective includes market share, customer satisfaction surveys, on-time delivery, customer response time, and warranty repair cost. The second aspect, internal business processes, is characterized by material and labour efficiency variance, process improvement and reengineering, new product introduction, and long-term relations with suppliers. The last, the learning and growth perspective, includes staff development and training, workplace relations, employee satisfaction, and employee health and safety (Hoque, 2005).

Table 1. Key non-financial performance indicators (designed by the authors). Columns: Manufacturing (Fernandes et al., 2006); Hotel sector (Philips, Louvieris, 2005); Family companies (Craig, Moores, 2005).
Table 1 summarizes the non-financial performance indicators applied with the BSC in manufacturing, the hotel industry, and family companies. The data in the table prove that companies use different indicators within the BSC. These indicators are grouped into three perspectives following the BSC approach: customer-related, internal business processes, and learning and growth. Kaplan and Norton (Harvard Business Review, 2008) consider that the BSC is not a pattern to be applied to business in general or even within one industry. Different market situations, product strategies and competition conditions require diverse systems of balanced indicators. In order to evaluate company performance, adjusted BSCs are created, complying with the mission, strategy, technology and culture of the specific company. The BSC provides managers with information from four different perspectives and reduces information clutter by limiting it to a certain number of applied indicators.
Craig and Moores (Craig, Moores, 2005) describe perspectives using the BSC method not only for a family company, but also for the family as a primary social group. The main customer-related indicators named are: awareness of the family name, use of the family in marketing initiatives, and quality that reflects the family brand. The following internal processes have been named: investment in technology that will benefit future generations, professional work practices that will attract the best family and non-family employees, and philanthropic activities. The last group of indicators, related to learning and growth, includes: creating career paths for family members, making involvement in the business a privilege, and encouraging and providing seed funding for new ventures presented by family members.
Within the framework of the BSC method, non-financial indicators have specific target measurements or aims to be achieved, which are compared by the authors to the actual measurements of these indicators (for example, Cardinaels, van Veen-Dirks, 2010; Coram et al., 2011). Table 2 provides a comparison of non-financial performance indicators divided into three groups (customer-related, internal business processes, learning and growth). Among the non-financial performance indicators described by the authors there are common or identical indicators, for example customers' satisfaction; slightly different indicators, for example hours of employee training or sales training per employee; as well as entirely different indicators.
Non-financial indicators refer to the following business functions: manufacturing (product quality, relations with suppliers, delivery and services), sales and marketing, human resources, research and development, and environment (workplace environment and surrounding environment) (Shaw, 1999).
Non-financial performance indicators are not always divided into the three BSC perspectives as shown in Tables 1 and 2. In its turn, Table 3 displays non-financial performance indicators without distinguishing customer-related, internal business processes, and learning and growth indicators. Non-financial performance indicators have a long-term aim, emphasizing the role of increasing customer loyalty, attracting new customers, and improving perceived company image and reputation (Chen et al., 2009). Non-financial performance has no significant value for company managers in itself; however, it can be used as a determinant of financial performance and especially as an indicator of future financial performance, which is not reflected in current accounting indicators (Prieto, Revila, 2006). Clients' satisfaction can be described as an increasing wish of clients to purchase new items or to repurchase. Satisfied clients buy more frequently and in larger quantities, and also purchase other goods and use other services offered by the company. Besides, by consistently offering goods and services which satisfy clients, a company's financial performance improves as failure-related costs decrease. The more clients, the higher the profit a company gains (Prieto, Revila, 2006; Abdel-Maksoud et al., 2005) (see Table 3). Clients' loyalty can be described as clients' trust in a certain company, product or service. Under contemporary competitive conditions, companies have to solve an ongoing problem: how to keep and strengthen their position in the market and at the same time maintain efficiency. Clients' loyalty can be described as one of the key factors for success in business.
A company with a good reputation needs lower costs to attract new customers or employees. A good reputation can also help introduce new goods and services by reducing the customer's trial risk. Reputation can also be beneficial in establishing and maintaining relations with key suppliers, distributors, and potential partners (Prieto, Revilla, 2006).
Summarizing this part of the article, it can be concluded that all authors consider non-financial indicator systems to be a component of the BIS; however, each author has his or her own perception of the non-financial indicator system.
Non-financial indicators as an essential part of intellectual capital
The non-financial indicators mentioned above, which measure the results of a certain kind of business activity, are invisible resources contributing to company value. In economic literature such invisible resources are called "intangible assets", whereas in scientific literature they are called "intellectual capital".
The authors of the paper pay attention to the existing problems of measuring non-financial indicators, the reason for which, in most cases, is the impossibility of applying quantitative methods to measure intellectual capital.
The classification of non-financial indicators lies at the basis of the model of their measurement (Kuzmina, 2008). Every measurement model is based on a certain combination of indicators and is therefore unique. The scorecard method developed by Kaplan and Norton is one of them.
According to Kaplan and Norton (Harvard Business Review, 2008), the BSC has four perspectives: the financial perspective, the client perspective, the internal business processes perspective, and the innovation and learning perspective. The BSC is strategy- and vision-oriented, not control-oriented. It puts forward aims, but assumes that staff will adapt their behavior and take actions to reach them. Indicators are introduced to guide staff toward a common vision.
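Purely as an illustration (not from the cited sources), the structure described above could be sketched as follows; the perspective names follow the article, while the indicator names and target values are invented for the example:

# Minimal sketch of a company-specific balanced scorecard: a small set of
# indicators per perspective, with targets to compare against actuals.
scorecard = {
    "financial": {"ROA": 0.08},
    "customer": {"customer_satisfaction_index": 85, "customer_loyalty_rate": 0.70},
    "internal_processes": {"on_time_delivery_rate": 0.95},
    "learning_and_growth": {"training_hours_per_employee": 24},
}

def gaps(targets, actuals):
    """Per-indicator difference between actual measurements and targets."""
    return {
        perspective: {name: actuals[perspective][name] - target
                      for name, target in indicators.items()}
        for perspective, indicators in targets.items()
    }

actuals = {
    "financial": {"ROA": 0.06},
    "customer": {"customer_satisfaction_index": 80, "customer_loyalty_rate": 0.65},
    "internal_processes": {"on_time_delivery_rate": 0.97},
    "learning_and_growth": {"training_hours_per_employee": 18},
}
print(gaps(scorecard, actuals)["customer"])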
Further development of the BSC methodology is characterized by the BIS as a basis for a new system of strategic management. Kaplan and Norton have developed a strategy map, which allows establishing cause-effect links between long-term aims (strategic management) and performance indicators (short-term activities) (Dukhonin et al., 2005).
Analyzing further the opinions of researchers about the nature of intellectual capital, it can be concluded that in each case the definition of intellectual capital includes a definite set of non-financial indicators, which are grouped into three areas or strategic perspectives.
For example, Webster considers intellectual capital to be a component of intangible assets. In his opinion, intangible assets include intellectual property (patents, licenses, etc.) and intellectual capital (human capital - knowledge, loyalty to the organization, motivation, etc.; client capital - clients' loyalty, distribution channels, etc.; organization (infrastructure) capital - information systems, management philosophy, corporate culture, etc.) (Webster et al., 2004).
Kim and Kumar consider that intellectual capital includes three parts: human capital, structural capital, and relations capital. In their opinion, the indicators characterizing human capital are employees' satisfaction, the number of training hours and training costs per employee, etc. Structural capital is characterized by such indicators as the number of patents, corporate culture, staff ethics, etc., while relations capital has such indicators as clients' satisfaction, clients' loyalty, brand value, etc. (Kim, Kumar, 2009).
Ballow et al. (2004) think that intellectual capital is composed of three parts: relations capital, organization capital, and human capital. The authors mark clients' loyalty, the quality of offered contracts, etc. as the indicators characterizing relations capital. In the authors' opinion, organization capital has the following indicators: reputation, structural opportunities, etc., while human capital is described by such indicators as staff loyalty, employees' reputation, etc.
We can see non-financial performance indicators among the indicators characterizing intellectual capital (see Tables 1, 2, 3), for example, clients' loyalty, employees' satisfaction, number of training hours per employee, etc.
Initially, the learning and growth perspective included employees' skills, the opportunities of information systems, and such behavioral factors as motivation and empowerment (Chenhall, Langfield-Smith, 2007), but in the recent BIS approach the learning and growth perspective is described as intangible assets having three parts: human capital, information capital, and organization capital (Kaplan, Norton, 2004).
There is a positive link between learning skills and non-financial performance indicators, as well as between non-financial performance results and financial performance results. Companies taking care of their clients, employees, and society in general can reach better financial performance. However, improvement of non-financial performance indicators does not have an immediate positive impact on financial performance indicators, and managers must not assume that satisfied clients and shareholders will automatically increase company profit. In general, a long-term perspective is needed to measure the training effect and business activity results (Prieto, Revilla, 2006).
The learning process, as one of the non-financial indicators, is considered an element of the knowledge-based management process. As a result of training, employees gain knowledge and master skills, as well as change their attitude, behavior, and motivation. The result of this complex system is the result of business activity, which is divided into two groups: financial indicators (ROI, ROA, ROE, ROS, turnover, productivity) and non-financial indicators (staff turnover, shortage, conflicts, quality of products, services, and innovations). The company's strategic role and its impact on the learning process and the results of business activity are noted (Thang et al., 2010). Qualitative analysis, which includes the analysis of financial indicators (analysis of financial reports, ROI, net present value, etc.) and non-financial indicators (staff training, communication practice, assessment of product and process knowledge, assessment of individual, context, satisfaction, and process knowledge), is one of the knowledge-based management measurement methods (Huang et al., 2007).
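For reference, since the article lists these ratios without defining them, their standard definitions (in the usual accounting conventions) are:

\mathrm{ROI} = \frac{\text{net profit}}{\text{invested capital}}, \qquad
\mathrm{ROA} = \frac{\text{net income}}{\text{total assets}}, \qquad
\mathrm{ROE} = \frac{\text{net income}}{\text{shareholders' equity}}, \qquad
\mathrm{ROS} = \frac{\text{operating profit}}{\text{net sales}}.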
Despite standardized calculations to measure the performance results of intangible assets (for example, the impact of brands in the 1990s), the mechanism for identifying, measuring, and reporting non-financial performance is not sufficiently developed. Much of this lack of standardization has to do with the presently voluntary nature of such reporting. Organizations that put themselves under the onus of non-financial performance metrics do so of their own free will; thus there is much variation from one organization to another as they formalize their respective reporting formats (KLM Inc., 2004).
The authors consider that companies have a great deal of flexibility in the treatment of non-financial indicators when completing reports on intellectual capital as an additional part of a company's financial statement. Reporting rules do not fully determine actual reporting practice.
Further research on non-financial indicators of business activity should address a detailed study of the intangible asset measurement and assessment methods and approaches that are analyzed in the scientific literature in relation to intellectual capital.
Advantages and disadvantages of non-financial performance indicators
In the previous part of the paper it was noted that, in practice, there are certain problems in the measurement and assessment of non-financial indicators. Thus, business performance assessment based on non-financial indicators has both advantages and disadvantages. More profound research was conducted by Ittner and Larcker (2000), who described the advantages and disadvantages of non-financial performance indicators in comparison with financial performance indicators; later, in 2003, they completed their analysis by identifying company errors in the measurement of non-financial performance and options for improving non-financial performance measurement. Within this paper the authors schematically summarize the opinions of Ittner and Larcker, as displayed in Figure 1, and analyze the opinions of the researchers.
The main advantages of non-financial performance indicators seen in Figure 1 can be described as the advantages of the BSC method (a closer link to the company's long-term strategies, a better display of future financial performance results, indirect quantitative indicators of intangible assets), since the aim of the strategy map and the BSC is to identify and develop cause-effect links within the strategy implemented by a company.
Analyzing the disadvantages of non-financial performance indicators, the authors of the article have concluded that the measurement and assessment of non-financial performance indicators is complicated by the time and cost factor: extra time is needed to explain to employees the principles and advantages of using a non-financial performance system, and the calculation and introduction of non-financial indicators require additional information. It is complicated to measure and compare non-financial performance indicators because it is difficult to measure the results of company performance, or to arrive at a compromise between indicators, if some indicators are expressed in time units but others in quantitative units, percentages, or even in free form. As a result, the variety of non-financial indicator measurement methods can lead to imprecise or even erroneous measurement of a company's non-financial performance. The BSC method requires identifying causal links in a company, but in the case of unknown or unrecognized causal links a company may concentrate its attention on the wrong aims, thus complicating decision-making by company managers. Ittner and Larcker (2003) identify companies' errors in the measurement of non-financial performance resulting from the identification and assessment of the advantages and disadvantages of a company's non-financial indicators. The BSC method, for example, does not determine which fields provide the greatest contribution to the company's financial results. Successful companies use causal models identifying causal links between the implemented measures and performance results, but companies actually using causal models, and thus achieving real improvement in non-financial performance measurement and its impact on future financial performance results, are rare. Besides, the research of Ittner and Larcker (2003) shows that 70% of companies use indicators that are not sufficiently statistically grounded and reliable.
Options for improving non-financial indicators (see Figure 1) are based on the application of a causal model, continuous improvement of the model, the use of model data in decision-making, and the assessment of model performance outcomes. To introduce a causal model successfully, it is necessary to summarize information and check the causal links within the model by applying mathematical statistics and other methods (for example, the focus group method or the interview method).
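As a minimal sketch of such a statistical check (the data, column meanings, and one-year lag are illustrative assumptions, not taken from the studies cited above), a hypothesized causal link can be screened by regressing a later financial outcome on an earlier non-financial indicator:

# Minimal sketch: test whether a non-financial indicator (customer
# satisfaction in year t) is associated with a financial outcome
# (ROA in year t+1). Data and variable names are illustrative.
import numpy as np
from scipy import stats

satisfaction_t = np.array([72, 75, 78, 80, 83, 85, 88, 90])              # survey index, year t
roa_next_year = np.array([0.04, 0.05, 0.05, 0.06, 0.07, 0.07, 0.08, 0.09])  # ROA, year t+1

slope, intercept, r, p_value, stderr = stats.linregress(satisfaction_t, roa_next_year)
print(f"slope={slope:.4f}, R^2={r**2:.2f}, p={p_value:.3g}")
# A small p-value and a slope that is stable across periods would support
# the hypothesized link in the causal model; it does not prove causation.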
The authors consider that companies, including Latvian companies, can gain the ability to identify company performance results more accurately and to measure the company's market value by applying the causal model and the BSC, taking into consideration the options for improvement of non-financial indicators offered by Ittner and Larcker (2003).
Conclusions
• Non-financial indicators reflect individual elements of a company's intellectual capital; they are intangible resources comprising the company's value.
• In the scientific literature there is no common approach to the classification, measurement, and assessment methods of non-financial indicators. As there are no common ways to disclose information, difficulties arise in displaying information related to non-financial indicators.
• The most widespread method for identifying, measuring, and displaying non-financial performance indicators is the balanced scorecard system, which includes the financial perspective, the clients perspective, the internal business processes perspective, and the innovations and learning perspective. As the balanced scorecard system developed, strategy mapping was elaborated to trace and develop causal links between long-term aims and implemented short-term activities.
• Despite the fact that standardized calculations have appeared to measure the performance results of intangible assets (for example, the impact of brands in the 1990s), the mechanism for identifying, measuring, and reporting non-financial performance is insufficiently elaborated.
• Assessment of a company's non-financial performance errors, based on the recognition and assessment of the advantages and disadvantages of non-financial indicators, provides an opportunity to consider options for improving the measurement of non-financial indicators.
On the basis of the above conclusions, it is possible to make the following recommendations for the practical implementation of non-financial indicators in the evaluation of a company's business activity:
• Describing the company's strategy;
• Identifying the subject of non-financial indicators for management purposes, because the classification of indicators defines the basis of the model of their measurement;
• Selecting the appropriate valuation method; Kaplan and Norton's strategy-mapping approach could be useful for Latvian companies;
• Developing a report on intellectual capital as an additional part of the company's financial statement.
Performance can be monitored year by year, or compared with other similar organizations. A report on intellectual capital is considered a tool for measuring and managing financial and non-financial indicators and for enhancing the company's attractiveness to investors.
• Despite the fact that scientific interest in the problem of implementing non-financial indicators is constantly growing, the theme presented above remains the subject of scientific discussion and dispute.
Figure 1. Advantages and disadvantages of non-financial performance indicators, companies' errors in their measurement, and options for improving measurement (created by the authors based on the data of Ittner and Larcker).
Ittner and Larcker (2000) consider that non-financial performance indicators lack statistical reliability because many non-financial data, for example satisfaction indicators, are based on surveys with a certain number of respondents and quite few questions.
"Business",
"Economics"
] |
ALICE FIT Data Processing and Performance during LHC Run 3
During the upcoming Run 3 and Run 4 at the LHC, the upgraded ALICE (A Large Ion Collider Experiment) will operate at a significantly higher luminosity and will collect two orders of magnitude more events than in Run 1 and Run 2. A part of the ALICE upgrade is the new Fast Interaction Trigger (FIT). This thoroughly redesigned detector combines, in one system, the functionality of the four forward detectors used by ALICE during LHC Run 2: T0, V0, FMD, and AD. The FIT will monitor luminosity and background, provide feedback to the LHC, and generate minimum bias, vertex, and centrality triggers in real time. During offline analysis, FIT data will be used to extract the precise collision time needed for time-of-flight (TOF) particle identification. In heavy-ion collisions, FIT will also determine multiplicity, centrality, and event plane. The FIT electronics is designed to function in both the continuous and the triggered readout mode. In these proceedings the FIT simulation, software, and raw data processing are briefly described; the main focus, however, is on the detector performance, trigger efficiencies, and the collision time and centrality resolution.
INTRODUCTION

ALICE (A Large Ion Collider Experiment) is the dedicated heavy-ion experiment at the CERN Large Hadron Collider (LHC) [1]. The main goals of ALICE are the study of strongly interacting matter at extreme energy densities and of the formation of the Quark-Gluon Plasma (QGP), a new phase of matter. The ALICE detector is undergoing a major upgrade during the Long Shutdown 2 (2019-2021). The main reason for the upgrade is the increased luminosity and interaction rate: the LHC will deliver Pb-Pb collisions at a luminosity of up to 6 × 10²⁷ cm⁻² s⁻¹, corresponding to an interaction rate of 50 kHz. The goal of ALICE is to integrate a luminosity of 13 nb⁻¹ for Pb-Pb collisions at √sNN = 5.5 TeV, together with dedicated p-Pb and pp reference runs. Data from pp collisions will also be collected at the nominal LHC energy √s = 14 TeV [2]. Run 3 at the CERN LHC is scheduled to start in 2022.
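As a consistency check (this calculation is not spelled out in the text; it combines the luminosity above with the ≈8 b hadronic Pb-Pb cross section quoted later in these proceedings, 1 b = 10⁻²⁴ cm²), the interaction rate is the product of luminosity and cross section:

R = \mathcal{L}\,\sigma_{\mathrm{had}} \approx 6\times10^{27}\,\mathrm{cm^{-2}\,s^{-1}} \times 8\times10^{-24}\,\mathrm{cm^{2}} = 4.8\times10^{4}\,\mathrm{s^{-1}} \approx 50\ \mathrm{kHz}.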
The ALICE upgrade includes:
• A new, high-resolution, low-material Inner Tracking System (ITS).
• New readout and trigger systems allowing for continuous data taking.
General Description
The new Fast Interaction Trigger [3] replaces four Run 2 detectors (T0, VZERO, FMD, and AD [4]) with three new subdetectors: FV0, FT0, and FDD. These three subdetectors, each utilizing a different technology, are placed on both sides of the interaction point, in the forward and backward rapidity regions, as shown in Fig. 1. The online functionality of FIT includes luminosity monitoring and the lowest-latency (<425 ns) minimum bias, vertex, and centrality triggers. Offline, FIT provides the precise collision time for TOF-based particle identification, determines the centrality and event plane, and measures the cross section of diffractive processes. In addition, FIT can reject beam-gas events and provide vetoes for ultra-peripheral collisions.
Construction of FIT Subdetectors
The FT0 subdetector has two modular arrays, FT0A and FT0C, placed on opposite sides of the interaction point. The FT0A array consists of 24 modules, and the FT0C array of 28 modules. Each module has four optically separated, 2-cm-thick quartz Cherenkov radiators coupled to a customized PLANACON MCP-PMT. The anodes of the MCP-PMT are grouped into four outputs providing an independent readout channel for each individual radiator segment. As a result, FT0A delivers 96 readout channels and FT0C 112 channels. The FT0C, being located close to the interaction point, has a concave shape to equalize the flight path of primary particles and assure their perpendicular entry into the radiators. The intrinsic time resolution of each section of a module is ≈13 ps. The FT0 contributes to the minimum bias trigger, luminosity monitoring, and background rejection.
The FV0 is a large scintillator disk, assembled from 40 optically insulated elements arranged in 8 sectors and 5 rings of increasing radii. Clear optical fibers deliver light from each element to a Hamamatsu R5924-70 PMT. Owing to its much larger size, each sector of the outermost ring is read out by two PMTs. In total there are 48 FV0 readout channels. The time resolution is ≈200 ps. The FV0 provides inputs for minimum bias and multiplicity triggers at the LM (Level Minus one) level and, because of its large acceptance, delivers data for centrality and event plane determination.
The FDD consists of two stations, FDDA and FDDC, covering the very forward rapidity regions on opposite sides of the interaction point. The stations are made of two layers of plastic scintillator, divided into four quadrants. Each quadrant has two wavelength-shifting (WLS) bars connected to individual PMTs via a bundle of clear optical fibers. Benefiting from the forward pseudorapidity coverage, the FDD will contribute to cross section measurements of diffractive processes [5] and studies of ultra-peripheral collisions, and will participate in beam monitoring and beam-gas rejection.
FIT Electronics
The FT0, FV0, and FDD utilize the same electronics scheme, based on two custom-designed modules: the Processing Module (PM) and the Trigger and Clock Module (TCM). The PM processes and digitizes input signals, packs the data for readout (in continuous or triggered mode), and performs the first-stage calculations for the trigger decision. The TCM processes data from the PMs, makes the final trigger decisions, provides an accurate clock reference, and serves as the slow control interface to the connected PMs.
FIT DATA PROCESSING
All ALICE detectors are integrated into a common Detector Control System and an Online-Offline computing system called O² [6].
The functional flow of the O² system includes a succession of steps. Data arrive at the First Level Processors (FLPs) from the detectors. The first data compression is performed inside an FPGA-based readout card (the Common Readout Unit). The data are transferred from the detectors either in a triggered or in a continuous mode. Temporary simulated raw data were used for system preparation and performance studies. HeartBeat triggers from the Central Trigger Processor (CTP) are used to chop the data into Sub-Time Frames (STFs). The STFs are assembled into Time Frames (TFs) in the Event Processing Nodes (EPNs). One TF packet includes 128 or 256 orbits. A second step of data aggregation is performed to assemble the data from all detector inputs. Global calibration, a first reconstruction, and data compression using Graphics Processing Units (GPUs) are performed synchronously with the data taking. Compressed Time Frames (CTFs) are stored permanently on tape. The results of each step are monitored within the Quality Control (QC) framework.
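To make the aggregation logic concrete, the following is a schematic sketch only; it is not the O² framework API, and the function and field names are invented for illustration:

# Schematic sketch of the STF-to-TF aggregation step only.
# A sub-time frame (STF) holds one detector's data for one HeartBeat
# interval; time frames (TFs) group a fixed number of LHC orbits.
from collections import defaultdict

ORBITS_PER_TF = 128  # one TF packet spans 128 (or 256) LHC orbits

def assemble_time_frames(sub_time_frames):
    """Group (orbit, detector, payload) records into time frames by orbit."""
    tfs = defaultdict(list)
    for orbit, detector, payload in sub_time_frames:
        tfs[orbit // ORBITS_PER_TF].append((detector, payload))
    return dict(tfs)

stfs = [(0, "FT0", b"..."), (1, "FV0", b"..."), (130, "FDD", b"...")]
print(sorted(assemble_time_frames(stfs)))  # [0, 1]: orbits 0-127 vs 128-255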
In the asynchronous stage, a second (and possibly third) reconstruction with final calibration is run on the O² EPN farm and on the GRID. The final Analysis Object Data (AOD) are produced and stored permanently.

The FT0 can produce trigger signals every 25 ns, that is, for each LHC bunch crossing. However, due to the limited acceptance, the efficiency has to be verified by simulations. Pythia8 [7] was used to simulate particles from pp collisions. Twenty thousand pp collisions were generated and transported through the ALICE setup. Cherenkov photons from relativistic charged particles traversing the quartz radiators were used to produce digitized signals, taking into account the detector response and possible pile-up. The signals were used to evaluate the efficiency of the following FT0 triggers: FT0A, a signal only from the A side; FT0C, a signal only from the C side; and Vertex, signals from both sides with the vertex within a given range. Figure 2 shows the FT0 trigger efficiencies as a function of event multiplicity. The total vertex trigger efficiency is approximately 77%.

In Pb-Pb collisions, a large fraction of triggered events are electromagnetic interactions corresponding to photon-photon and photon-nuclear collisions. The main source of background comes from pair production (e+e−), which has an orders-of-magnitude larger cross section than the hadronic processes. For instance, according to Pythia8, the Pb-Pb hadronic cross section at √sNN = 5.5 TeV is 8 b, while the cross section for electromagnetic collisions, used by the QED generator developed especially for ALICE, is around 180 kb. Fortunately, QED events have a very low charged-particle multiplicity. They can be rejected by setting a threshold on the sum of the FT0A and FT0C amplitudes. Figure 4 shows the efficiency of the minimum bias trigger (a coincidence between FT0A and FT0C) as a function of the impact parameter for hadronic collisions. The squares (dots) show the results with (without) the selection on the amplitude. It is clear that, for events with an impact parameter below 12 fm, the amplitude cut does not affect the efficiency of the minimum bias trigger. The total efficiency of the FT0A and FT0C coincidence trigger is ≈92%. The vertex trigger (a coincidence between FT0A and FT0C together with the requirement that the z-vertex lie within 10 cm of the interaction point) has an efficiency of 83%. For central and semi-central events the efficiency of the vertex trigger is 100%.
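The trigger definitions above can be summarized in a short sketch. This is an illustration of the logic only, not ALICE code: the amplitude threshold, timing values, and field names are assumptions, and the z-vertex is taken as half the A-C time difference times the speed of light:

# Toy FT0 trigger decision for one bunch crossing (illustrative only).
C_CM_PER_NS = 29.98  # speed of light in cm/ns

def ft0_triggers(t_a_ns, t_c_ns, amp_a, amp_c, amp_threshold=10.0):
    fired_a = t_a_ns is not None
    fired_c = t_c_ns is not None
    triggers = {"FT0A": fired_a, "FT0C": fired_c,
                "MinBias": fired_a and fired_c, "Vertex": False}
    if triggers["MinBias"]:
        z_vtx_cm = 0.5 * (t_c_ns - t_a_ns) * C_CM_PER_NS  # from A-C time difference
        triggers["Vertex"] = (abs(z_vtx_cm) < 10.0               # z within 10 cm
                              and amp_a + amp_c > amp_threshold)  # rejects QED events
    return triggers

print(ft0_triggers(t_a_ns=11.0, t_c_ns=11.2, amp_a=40.0, amp_c=55.0))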
Centrality determination with good resolution is an important functionality of the FIT detector. Figure 5 shows the centrality resolution for Pb-Pb collisions at √sNN = 5.5 TeV, calculated for FT0A, FT0C, and FV0 separately, together with the combined resolution (FT0A + FT0C + FV0).
CONCLUSIONS
Our analysis has demonstrated that the simulated performance of the FT0 subdetector of FIT satisfies the design requirements of the ALICE experiment:
• The minimum bias trigger efficiency matches that of the VZERO detector operated during Run 1 and Run 2 of the LHC;
• The collision time resolution is better than that of the T0 during Run 1 and Run 2;
• The vertex trigger has 100% efficiency for semi-central and central events.
"Physics"
] |
Combinatorial multivalent interactions drive cooperative assembly of the COPII coat
Using in vitro reconstitution assays and in vivo cellular phenotypes, Stancheva and colleagues dissect the protein interaction network that drives COPII coat assembly during vesicle formation from the endoplasmic reticulum, revealing the importance of multivalent interactions that mutually reinforce each other.
Introduction
Proteins within the secretory pathway are transported by vesicles generated by cytoplasmic coat proteins, which simultaneously recruit appropriate cargo and sculpt the donor membrane into spherical structures. Vesicle formation requires significant force to overcome the intrinsic rigidity of the lipid bilayer, and coat proteins solve this problem by organizing into oligomeric scaffolds that can impose structure on the underlying membrane (Stachowiak et al., 2013; Derganc et al., 2013). The COPII coat comprises five proteins that self-assemble on the cytosolic face of the ER membrane to traffic nascent secretory proteins toward the Golgi. COPII assembly is initiated upon GTP binding by the small GTPase Sar1, which exposes an amphipathic α-helix that embeds shallowly in the membrane (Hutchings et al., 2018). GTP-bound Sar1 recruits Sec23-Sec24 (Matsuoka et al., 1998), which is the cargo-binding subunit of the coat (Mossessova et al., 2003; Miller et al., 2002). The Sar1-Sec23-Sec24 "inner coat" complex in turn recruits the "outer coat," Sec13-Sec31, to drive vesicle formation (Antonny et al., 2001; Matsuoka et al., 1998). Sec13-Sec31 tetramers form rods (Fath et al., 2007) that can self-assemble in vitro into a cage-like structure that closely matches the size and geometry of vesicles observed by EM (Stagg et al., 2006). Sar1-Sec23-Sec24 can also form higher order assemblies (Zanetti et al., 2013), but Sec13-Sec31 is required to organize these arrays (Hutchings et al., 2018).
The GTP hydrolysis cycle is key to coat assembly and disassembly, thereby creating a dynamically metastable structure. Sar1 alone has low intrinsic GTPase activity, and requires its GTPase-activating protein, Sec23, for GTP hydrolysis (Yoshihisa et al., 1993). Sec31 further accelerates this reaction (Antonny et al., 2001) via a short "active fragment" that contacts both Sec23 and Sar1. Since GTP hydrolysis triggers coat disassembly (Barlowe et al., 1994; Antonny et al., 2001), the maximal GTPase activity induced by recruitment of the outer coat creates a paradox. How is coat assembly stabilized to counter the destabilizing effect of GTP hydrolysis and prolong coat association sufficiently to produce a vesicle? First, the presence of cargo proteins prolongs coat association with the membrane after GTP hydrolysis (Hughes and Stephens, 2008; Sato and Nakano, 2005; Iwasaki et al., 2017). Another mechanism that likely promotes coat stability is the modulation of outer coat recruitment by regulatory factors, like Sec16 and TANGO1, that compete for binding to Sec23-Sec24-Sar1 (Yorimitsu and Sato, 2012; Kung et al., 2012; Saito et al., 2009; Ma and Goldberg, 2016). Finally, the inherent instability of the full coat upon GTP hydrolysis is likely countered by additional stabilizing interactions between the inner and outer coat layers (Ma and Goldberg, 2016; Hutchings et al., 2018). To fully understand the interplay between coat dynamics and stability, deeper insight into the interfaces that drive the coat to its oligomeric state is required.
A long, unstructured proline-rich domain of Sec31 mediates the only known means of interaction between the inner and outer coat (Ma and Goldberg, 2016). The first interface involves the active fragment, which encompasses ∼50 residues of this domain. It occupies an extended surface on the inner coat with two key residues, W922 and N923 in Saccharomyces cerevisiae Sec31, inserted into the Sar1•Sec23 active site (Hutchings et al., 2018). The functional importance of the active fragment interface is demonstrated by the F382L mutation in human Sec23A. This substitution occurs in the region of Sec23 bound to the active fragment and causes musculoskeletal defects (Boyadjiev et al., 2006; Fromme et al., 2007). When paired with Sar1B, the F382L mutation prevents in vitro recruitment of the outer coat and impairs budding. However, patient cells retain frustrated membrane budding profiles at ER exit sites, suggesting some membrane remodeling can still occur (Fromme et al., 2007).
The second known inner-outer coat interface is less well studied, and involves binding of triple-proline (PPP) motifs to the gelsolin domain of Sec23 (Hutchings et al., 2018; Ma and Goldberg, 2016). This interaction was initially identified for PPP motifs on the procollagen export receptors, TANGO1 and cTAGE5, each of which contains multiple PPP motifs within their unstructured proline-rich domains (Ma and Goldberg, 2016). The multivalent nature of these interaction motifs was speculated to promote local recruitment of Sar1-Sec23-Sec24 and template a helical arrangement of the inner coat compatible with formation of a coated tubule rather than a spherical bud (Hutchings et al., 2018; Ma and Goldberg, 2016). Sec31 also has multiple PPP motifs and competes for the TANGO1-Sec23 interaction, perhaps thereby displacing TANGO1 and completing coat assembly (Ma and Goldberg, 2016; Saito et al., 2009). Electron density that likely corresponds to the Sec23-Sec31 gelsolin-PPP interface is clearly observed in liposome tubules coated with yeast COPII proteins, confirming that this interface is a relatively stable part of the assembled coat (Hutchings et al., 2018). TANGO1 appears specific to the metazoan lineage (Klute et al., 2011), but PPP motifs are also found in the conserved regulatory protein Sec16, suggesting a more ancient origin for coat assembly by PPP recognition. The functional importance of PPP-mediated interactions in full coat assembly, and in cells, remains to be tested.
Here, we aimed to obtain a more complete picture of the interactions that drive coat assembly by testing the functional importance of interfaces that contribute to outer coat oligomerization and recruitment to the inner coat layer. We combine genetic perturbation with in vitro reconstitution assays to test the essentiality of individual interactions and determine how specific mutants are defective. We find that coat assembly is driven by a combination of evolutionarily conserved sequence features that create a multivalent interface between the inner and outer coat layers. These interactions counter the instability associated with GTP hydrolysis to promote productive vesicle formation. Moreover, outer coat cage assembly via a known Sec31-Sec31 structural interface reinforces these interactions, suggesting a feed-forward mechanism for coat propagation. Finally, we show that diverse protein sequences that preserve both global and local sequence elements can suffice for coat assembly and viability.
Results
PPP-driven interactions are dispensable for coat assembly but contribute to coat stability

We first tested the importance of PPP motifs in COPII assembly by mutagenesis of the Sec23 gelsolin domain that interacts with PPP motifs. Four aromatic residues that form the PPP-binding cleft are conserved between yeast and humans (Fig. 1 A), and mutation of these residues abrogated PPP binding to human Sec23A (Ma and Goldberg, 2016). We engineered a mutant that replaced key hydrophobic residues in the PPP-binding cleft with a glycine-serine-glycine tripeptide. This gelsolin loop mutant (sec23-Δgel) complemented a sec23Δ null strain, revealing that PPP binding is not essential for coat function (Fig. 1 B). We reasoned that this interaction might become more important under conditions of dynamic coat turnover. We therefore tested whether perturbations to the GTPase cycle of the coat might sensitize yeast to the loss of the PPP-binding interface. Sed4 is a nonessential accessory factor that is thought to assist Sec16 in Sar1 GTP regulation (Kung et al., 2012; Gimeno et al., 1995). Indeed, sec23-Δgel was inviable when SED4 was also deleted, suggesting that compromised PPP binding by Sec23 becomes problematic when the GTP cycle of the coat is altered (Fig. 1 B). To gain further insight into the nature of the defect associated with perturbation of the gelsolin domain, we used an in vitro assay that reconstitutes vesicle formation from purified microsomal membranes (Barlowe et al., 1994). Vesicle formation following incubation with purified COPII proteins is monitored by the presence of cargo proteins (Erv46 and Sec22) in a slowly sedimenting vesicle fraction. This budding assay showed that Sec23-Δgel could drive vesicle formation with a non-hydrolyzable GTP analogue, GMP-PNP, but not with GTP (Fig. 1 C). Thus, when coat assembly is stabilized by inhibition of GTP hydrolysis, perturbation of the gelsolin-PPP interaction has minimal effect, but under the condition of GTP-dependent coat turnover (Antonny et al., 2001), loss of this interface impairs vesicle formation. These findings support the model that Sec23-Sec31 interactions help stabilize the assembling coat to counter instability triggered by GTP hydrolysis.
A GTP-specific vesicle budding defect was also observed with N-terminally His-tagged Sec31 (Hutchings et al., 2018), where the histidine tag is thought to interfere with interactions that drive assembly of the cage "vertex" (Fig. S1 A). Thus, in the context of coat turnover promoted by GTP hydrolysis, destabilization of interfaces that drive either outer coat oligomerization or inner/outer coat assembly is incompatible with coat function in vitro. We therefore tested whether loss of both interfaces would perturb coat function using a membrane-bending assay that measures tubulation of giant unilamellar vesicles (GUVs) in the presence of GMP-PNP. When combined with WT Sec31, Sec23-Δgel yielded straight lattice-coated tubules similar to WT (Figs. 1 D and S1 B). However, when GUV tubulation was induced using Sec23-Δgel and Nhis-Sec31, tubes were irregular, and although a coat layer was visible, no extended coat lattice was seen (Figs. 1 D and S1 B). We note that high concentrations of Sar1 alone can induce membrane tubulation (Lee et al., 2005), but the conditions we use require the full COPII coat to yield tubules (Fig. S1 C). Nonetheless, when both outer coat oligomerization and inner/outer coat interactions are reduced, residual membrane curvature is likely driven by coat features such as the Sar1 α-helix and inner coat oligomerization (Hutchings et al., 2018; Lee et al., 2005).
Consistent with the less robust residual membrane remodeling associated with loss of multiple coat interfaces, in vitro vesicle budding from microsomal membranes was not supported by Sec23-Δgel combined with Nhis-Sec31 (Fig. 1 E). One interpretation of these findings is that two separate interactions, Sec23-Sec31 binding via PPP motifs and cage assembly via Sec31-Sec31 vertex interfaces, mutually reinforce each other to propagate coat stability. This combinatorial interaction is especially important in the context of a cargo-replete membrane, which likely resists the membrane bending force of the coat (Copic et al., 2012). Together, our Sec23 mutagenesis experiments suggest that PPP binding contributes to, but is not essential for, coat assembly. The importance of the PPP interaction is revealed when the GTP cycle of the coat is altered, suggesting that the nucleotide-associated interface and PPP binding are mutually productive in COPII assembly.
We sought to further probe inner/outer coat interactions by dissecting the contribution of the different Sec23-binding elements within Sec31. We divided the Sec31 proline-rich region into three segments of equivalent length and generated constructs that preserved the structural domains (the N-terminal β-propeller that forms the cage vertex, the α-solenoid rods, and a C-terminal domain of predicted α-helical structure) but replaced the disordered region with shortened fragments. Each segment retained at least one PPP motif (Fig. 2 A). The first third (sec31 A), with a single PPP motif, was unable to support viability, whereas each of the two subsequent segments (sec31 B and sec31 C) complemented a sec31Δ null strain (Fig. 2 A). We further dissected the C-terminal portion of the disordered region into two smaller fragments, each of which contained at least two PPP motifs, but neither supported viability (sec31 D and sec31 E; Fig. 2 A). We tested each of the shortened constructs in a coat recruitment assay that measures binding of purified proteins to small (400 nm) unilamellar liposomes in the presence of GMP-PNP. Shortened proteins that conferred viability supported coat recruitment, although Sec31 C bound less robustly than Sec31 B, suggesting reduced affinity (Figs. 2 B and S2 A). In contrast, the proteins that were not viable in cells were not recruited to liposomes, suggesting a critical loss in binding affinity. In the in vitro budding assay, we saw distinct effects for Sec31 B and Sec31 C (Fig. 2 C). Budding reactions with GTP were supported by Sec31 C, but not Sec31 B, consistent with the active fragment within Sec31 B stimulating GTP hydrolysis and thereby destabilizing the coat prematurely in the absence of additional stabilizing PPP motifs. Conversely, in the presence of GMP-PNP, Sec31 B supported vesicle release whereas Sec31 C was less effective. Again, this is consistent with the interface occupied by the active fragment, where the Sec23-Sar1•GMP-PNP complex would stabilize association with Sec31 B, prolonging interaction to drive vesicle formation.
We mutagenized the interaction surfaces in Sec31 B and Sec31 C to quantify the contributions of each binding element. We made point mutations in the active fragment region, changing W922 and N923 to alanine (termed the WN mutation).

Figure 1. Sec23 gelsolin domain is dispensable for coat assembly but contributes to coat stability. (A) Structural model (PDB accession no. 1M2O) of Sar1 (red) and Sec23 (wheat) highlighting the membrane-distal gelsolin domain (inset) that contains conserved aromatic residues (orange) that in human Sec23 contribute to PPP binding. The loop between the asterisks was replaced with a glycine-serine-glycine linker to generate sec23-Δgel. (B) sec23Δ and sec23Δ sed4Δ strains were transformed with the indicated plasmids and grown on media containing 5-FOA, which counterselects for a SEC23::URA plasmid. Growth on 5-FOA is indicative of the ability to serve as the sole copy of SEC23. Deletion of the gelsolin loop was tolerated in the sec23Δ strain but not in the sec23Δ sed4Δ double mutant. (C) In vitro budding experiments using yeast microsomal membranes incubated with Sar1, Sec23-Sec24, Sec13-Sec31, and the indicated nucleotides. Vesicle release from the donor membrane (Total) is measured by detecting incorporation of cargo proteins, Sec22 and Erv46, into a slowly sedimenting vesicle fraction. When WT Sec23 was replaced with Sec23-Δgel, budding was compromised only in the GTP condition. (D) Negative-stain electron microscopy of GUVs incubated with the indicated WT or mutant proteins. Scale bars, 100 nm. (E) In vitro budding as described for C, but where one reaction used Sec31 that was tagged with hexahistidine at the N-terminus (lane 2). Sec23-Δgel in combination with Nhis-Sec31 was compromised for budding even in the presence of GMP-PNP.
Density corresponding to these residues is clearly detected in the assembled coat visualized by electron cryotomography (Hutchings et al., 2018), suggesting this is a particularly stable part of the active fragment interface. The sec31 B -WN mutant did not support viability, whereas glycine/serine substitutions of the PPP motifs, either individually or in combination, were tolerated (Fig. 2 D). In contrast, mutation of the PPP motifs in Sec31 C abrogated viability (Fig. 2 D). These growth phenotypes are consistent with the in vitro budding phenotypes. The active fragment is the dominant interface for the B-fragment, rendering its functionality sensitive to nucleotide. In contrast, the PPP motifs drive interaction for the C-fragment, which is independent of nucleotide state but less efficient in vitro, perhaps because of reduced overall affinity. Finally, the inviability of the shortest fragments, which retain multiple PPP motifs, may reflect a requirement for a minimum length of disordered region to span the inner/outer coat distance and perhaps bridge adjacent inner coat subunits. Together, the phenotypes of these constructs with shortened disordered regions demonstrate the importance of the active fragment and PPP motifs in isolation and confirm that multiple interfaces combine to drive robust assembly.
Having demonstrated the importance of individual inner coat interaction interfaces in the context of a shortened disordered region, we next tested their relevance in the context of full-length Sec31. Neither complete deletion of the active fragment nor glycine/serine substitution of all six PPP motifs, plus an additional PPAP motif that might also bind to the gelsolin domain (Ma and Goldberg, 2016; sec31-ΔPPP), caused any growth defects, either alone or in combination (Fig. 3 A). Moreover, unlike the Sec23 interface mutant, deletion of SED4 did not cause synthetic lethality with Sec31 mutants (Fig. 3 A). Thus, Sec23 mutations seem to render the coat more susceptible to GTP cycle perturbation, perhaps because they also abrogate interaction with PPP motifs on Sec16, which may modulate the GTP cycle (Kung et al., 2012; Yorimitsu and Sato, 2012). We next tested the effect of Sec31 interface mutations in the context of Nhis-Sec31, where cage assembly is perturbed. In this context, loss of the PPP motifs was lethal, whereas the W922A/N923A mutation and complete deletion of the active fragment were tolerated (Fig. 3 B). In the liposome binding assay, Nhis-Sec31-ΔPPP was recruited normally (Figs. 3 C and S2 B) and could support vesicle formation in the presence of GMP-PNP (Fig. 3 D). Thus, when the coat is stabilized by a non-hydrolyzable GTP analogue, impairment of multiple coat interfaces is tolerated, but under conditions of GTP-mediated coat turnover, combinatorial loss of these interactions is lethal. Together, mutagenesis of the known inner/outer coat interfaces supports the model that multiple coat interactions cooperatively stabilize the assembly pathway to counter the destabilizing effects of GTP hydrolysis and reveals that these interactions do not explain the entirety of coat assembly.

Figure 2. The Sec31 disordered region contains multiple independent drivers of assembly. (A) Diagram of Sec31 showing structured elements (β-propeller and α-solenoids) and the proline-rich disordered region that contains the active fragment (gray box) with GTP stimulatory residues W922/N923 (yellow diamond) and PPP motifs (orange circles). The disordered region of Sec31 was dissected into the indicated fragments, and the ability of these shortened regions to function in place of the full-length disordered region was tested by serial dilution on 5-FOA. Only the B and C fragments support viability. (B) Purified Sar1, Sec23-Sec24, and Sec13-Sec31 were incubated with synthetic liposomes in the presence of GMP-PNP and the liposomes purified by flotation through a sucrose gradient. Bound material (floated liposomes) was collected and examined by SDS-PAGE to visualize coat proteins. Recruitment of proteins with shortened disordered regions to liposomes correlates with viability: only the B and C fragments were recruited. (C) In vitro budding from microsomal membranes with the indicated proteins also correlated with viability but showed nucleotide dependence. (D) Serial dilutions of Sec31 B and Sec31 C after mutation of the indicated binding elements reveal the importance of these sequences in the context of a shorter disordered region.
A charge-driven interaction interface contributes to coat assembly

The minimal phenotypic consequences of ablating the known interactions between coat layers (PPP plus active fragment; Fig. 3 A) suggest that additional interfaces contribute to coat assembly. We examined the Sec31 proline-rich disordered region for conserved features that might indicate functional importance (van der Lee et al., 2014), focusing on charge properties. We found clusters of charged residues across different regions of Sec31, including regions of net negatively charged residues in the structured domain and smaller clusters of net positive charge in the disordered region (Fig. 4 A).
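A minimal sketch of this kind of charge-cluster scan (window size, charge assignments, and the toy sequence are illustrative assumptions; the published analysis may differ):

# Net charge (K/R minus D/E) in a sliding window along a protein sequence,
# used here to flag candidate clusters of net positive charge.
CHARGE = {"K": +1, "R": +1, "D": -1, "E": -1}  # histidine treated as neutral

def net_charge_profile(seq, window=25):
    """Net charge in each sliding window along the sequence."""
    charges = [CHARGE.get(aa, 0) for aa in seq]
    return [sum(charges[i:i + window]) for i in range(len(seq) - window + 1)]

toy = "PPPSAKKRPPPGSDEEAPPPKRKSQ" * 4  # stand-in disordered region
profile = net_charge_profile(toy)
positive_windows = [i for i, q in enumerate(profile) if q >= 3]
print(len(positive_windows), max(profile))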
Positive charge clusters were also features of Sec31 orthologues in human and Arabidopsis thaliana (Fig. S3 A). We searched for a corresponding surface on Sec23 that might comprise a charge-driven interface. We identified a negatively charged surface adjacent to the gelsolin domain, separate from that occupied by the active fragment (Fig. 4 B, top panel). Reversing the charges of this region on Sec23 (Fig. 4 B, bottom panel) led to lethality (Fig. 4 C). The mutant protein was stable and readily purified, and it was defective for recruitment of Sec13-Sec31 in the liposome binding assay (Figs. 4 D and S2 C). We propose that charge-driven interactions between Sec23 and Sec31 represent a novel third binding mode that contributes to inner/outer coat interactions. Indeed, recent structural analysis reveals that this region of Sec23 participates in a protein-protein interaction in the context of the fully assembled coat, consistent with our proposal that positively charged regions of Sec31 engage this site (Hutchings et al., 2020 Preprint).
Figure 3. Sec23-Sec31 interaction interfaces are dispensable for coat assembly. (A) Serial dilutions of sec31Δ and sec31Δ sed4Δ strains expressing the deletion and substitution mutants indicated were grown on 5-FOA, revealing no loss of fitness associated with the loss of any of these interfaces. (B) Serial dilutions of a sec31Δ strain expressing Nhis-Sec31 mutants as indicated were grown on 5-FOA, revealing lethality associated with the combination of the N-terminal His tag and loss of PPP motifs. (C) Recruitment of the coat to synthetic liposomes as described for Fig. 2 B revealed that each of the mutants indicated was recruited as well as WT. (D) In vitro budding from yeast microsomes with the proteins indicated revealed normal budding efficiency, except for the Nhis-tagged proteins, which were reduced in their ability to generate vesicles with GTP.

Given the profound effect of loss of the Sec23 charge interface, we revisited the other interaction sites on Sec23 to test whether combinatorial mutations would similarly impact viability. We first tested the F380L mutation, equivalent to human
F382L, which impairs coat assembly (Fromme et al., 2007). As described previously (Fromme et al., 2007), the F380L mutation alone had no impact on viability, even when combined with a sed4Δ mutation (Fig. 4 E). Moreover, when F380L was added to the sec23-Δgel mutant, cells were still viable (Fig. 4 E). Thus, of the known interactions, abrogation of the charge-based interface has the most profound effect on COPII coat assembly. Our findings reveal that three different modes of interaction drive association between the inner and outer coats: a nucleotide-driven interface mediated by the active fragment, multivalent interactions supported by linear PPP motifs, and a charge-driven interface. Whether these different interactions bind the same Sec23 molecule or instead bridge adjacent Sec23-Sec24 complexes to contribute to inner coat array formation remains to be determined.
Cargo contributions to coat assembly and function

Cargo proteins are not inert participants in vesicle formation; rather, they influence coat function in multiple ways. On one hand, cargo proteins are thought to contribute to coat stability by retaining subunits after GTP hydrolysis (Sato and Nakano, 2005; Iwasaki et al., 2017). On the other hand, cargo can alter the local membrane bending energy, thereby imposing a requirement for a robust coat to enforce curvature (Copic et al., 2012; D'Arcangelo et al., 2015; Gómez-Navarro et al., 2020). We therefore tested whether deletion of the major ER export receptors would impact the viability of coat interaction mutants. We deleted EMP24, ERV29, or ERV14 in sec31Δ and sec23Δ backgrounds and first tested for synthetic sick/lethal interactions with coat interface mutants. We reasoned that deleting cargo might further destabilize coat assembly and impair vesicle formation when coat interfaces were weakened. Neither of the viable interface mutants, sec31-WN/ΔPPP and sec23-Δgel, showed reduced fitness when ER export receptors were abrogated (Fig. 5 A). Although these receptors are among the most abundant constituents of COPII vesicles (Otte et al., 2001; Ho et al., 2018), other essential cargo proteins like SNAREs probably still contribute significant recruitment capacity to stabilize the coat. We next tested whether cargo depletion might confer phenotypic rescue to coat mutants that are impaired in their assembly. In an emp24Δ background, reduced cargo burden creates a permissive condition that allows growth when coat scaffolding is compromised (Copic et al., 2012; D'Arcangelo et al., 2015). We reasoned that when the force required to induce membrane curvature is reduced, weakened coat interfaces might suffice for viability. Indeed, the lethal Nhis-sec31-ΔPPP mutant was rescued by deletion of EMP24 (Fig. 5 B). In contrast, the combinatorial mutants (Nhis-sec31-WN/ΔPPP and Nhis-sec31-ΔAF/ΔPPP) were not rescued by loss of EMP24 (Fig. 5 B), suggesting that the simultaneous loss of both interfaces causes an assembly defect too great to be compensated for by reduced cargo burden. Since the Nhis-sec31 mutants are compromised both for inner/outer coat interfaces and for cage vertex formation, the phenotypic rescue in the emp24Δ background could be due to reduced scaffolding driven by cage assembly rather than a weakened coat assembly. We therefore tested other interface mutants that are lethal without destabilizing the Sec13-Sec31 cage. The shortened mutants (sec31 A and sec31 C -ΔPPP) were not rescued by EMP24 deletion, and the sec31 B -WN mutant was only marginally viable in this background (Fig. 5 C). Similarly, the sec23-Δcharge mutant remained inviable in a sec23Δ emp24Δ strain (Fig. 5 D). Together, these growth phenotypes indicate that deficits in membrane scaffolding can be compensated for by loss of cargo-driven membrane rigidity, but coat assembly interfaces are resistant to such rescue.
Figure 5. Cargo receptor mutants influence viability of coat assembly mutants. (A) EMP24, ERV29, and ERV14 were deleted in sec31Δ and sec23Δ strains and growth of sec31-WN/ΔPPP and sec23-Δgel tested on media containing 5-FOA. Deletion of cargo receptors had no impact on viability of these interface mutants. (B) Growth of each of the Nhis-Sec31 mutants indicated was tested in sec31Δ and sec31Δ emp24Δ backgrounds, revealing that loss of Emp24 reversed the lethality associated with Nhis-sec31-ΔPPP, but not other mutants. (C) The Sec31 fragment mutants indicated were tested in sec31Δ and sec31Δ emp24Δ backgrounds, revealing no rescue. (D) The plasmids indicated were introduced into sec23Δ or sec23Δ emp24Δ strains and tested for growth on 5-FOA.

Coat assembly can be driven by diverse sequences with conserved properties

Our bioinformatics analysis of the Sec31 proline-rich disordered region revealed conserved properties across species even though primary sequence similarity was low (Fig. S3, A and B). This is in contrast to sequence conservation across the structured domains, which is relatively high (Fig. S3 B). We reasoned that if coat assembly is driven by combinations of weak interactions rather than specific protein sequence properties, then a disordered region from an orthologous Sec31 might functionally replace the yeast disordered region. We created chimeric proteins where the yeast disordered region was replaced with that of human Sec31A, human Sec31B, or A. thaliana Sec31A. When these constructs were expressed in yeast, only the chimera with the disordered region of HsSec31A (sec31-Hs31A DR) supported viability (Fig. 6 A). We tested the two yeast-human chimeric proteins in budding, where activity mirrored the in vivo phenotypes, with only Sec31-Hs31A DR able to drive budding from microsomal membranes (Fig. 6 B). Deletion of the active fragment or mutation of either the PPP motifs or the W/N GTPase stimulatory residues in the yeast-human hybrid construct reduced viability (Fig. 6 C), confirming that these interfaces are a conserved mechanism of driving coat assembly.
Given that the human Sec31A disordered region can function in yeast despite divergence in protein sequence (∼30% similarity), we sought to test whether a more distantly related domain could also function in assembly. We first asked whether the disordered regions of Sec16 could replace that of Sec31. Sec16 has two intrinsically disordered regions that lie upstream and downstream of a structured helical domain that interacts with Sec13 (Whittle and Schwartz, 2010; Fig. S4 A). The upstream disordered region (DR1) interacts with Sec24 (Gimeno et al., 1996) and contains a single PPP motif and several positive charge clusters. The downstream disordered region (DR2) interacts with Sec23 (Gimeno et al., 1996) and contains multiple PPP motifs and significant areas of both net positive and negative charge (Fig. S4 A). We replaced the Sec31 disordered region with Sec16 DR1 or Sec16 DR2 and tested each chimera for their ability to function in place of Sec31. Sec31-16 DR1 could complement a sec31Δ strain, despite low amino acid similarity (∼35% local pairwise similarity), whereas Sec31-16 DR2 was not viable (Fig. 6 D). Vesicle formation in vitro mirrored the growth phenotype, with Sec31-16 DR1 supporting budding and Sec31-16 DR2 inactive (Fig. 6 E). The in vitro budding experiment used Nhis-Sec31-16 DR1 because cleavage of the affinity tag used for purification caused the protein to degrade. Nonetheless, Nhis-Sec31-16 DR1 supported moderate budding with GTP, in contrast to Nhis-Sec31 (Fig. 6 D). This difference likely stems from the inability of the Sec16 disordered region to stimulate GTP hydrolysis, thereby stabilizing coat assembly even in the presence of GTP.
We next sought to further test the plasticity of the coat assembly interface by co-opting affinity from evolutionarily diverged proteins that share similar properties. We searched the yeast genome for proteins unrelated to the COPII pathway that have predicted disordered regions and multiple PPP motifs. We identified a number of such proteins that participate in endocytosis and actin regulation and in RNA processing. In a recent analysis of yeast disordered segments (Zarin et al., 2019), several of the endocytosis/actin proteins had multiple properties within disordered regions that were similar to Sec31 (Fig. S4 B). We therefore tested whether some of these disordered regions could function in COPII coat assembly when replacing the endogenous Sec31 disordered region. Indeed, the disordered region of Las17 could functionally replace that of Sec31 to support viability (Fig. 6 F). Las17 is an actin assembly factor that uses its proline-rich domain to nucleate actin filaments (Urbanek et al., 2013). This mode of interaction is shared by other components of the actin/endocytic system and has parallels with the COPII coat in that multivalent weak interactions drive assembly (Sun et al., 2017). The Las17 region we tested shares several key features relevant to Sec31 function, including multiple PPP motifs, clusters of net-positive charge in the absence of net negative charge, and significant length. Since Las17 has no active fragment region, clearly these features alone suffice to generate a functional coat, highlighting the plasticity of the inner/outer coat interaction.
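A minimal sketch of the kind of motif scan described above (the regular expression counts overlapping PPP matches; the sequences and the two-motif cutoff are illustrative stand-ins, not the actual screen):

# Count triple-proline (PPP) motifs in candidate disordered regions and
# keep proteins with at least two motifs.
import re

def count_ppp(seq):
    """Count overlapping PPP motifs (a PPPP run contains two)."""
    return len(re.findall(r"(?=PPP)", seq))

candidates = {"toy_Sec31_DR": "ASPPPGSNPPPPQS", "toy_other": "ASGSNQS"}
multi_ppp = {name: n for name, seq in candidates.items()
             if (n := count_ppp(seq)) >= 2}
print(multi_ppp)  # {'toy_Sec31_DR': 3}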
Discussion
Together, our mutagenesis and domain replacement experiments reveal a remarkably plastic mechanism of coat assembly, where multiple independent assembly interfaces mutually stabilize each other. Each individual interaction is dispensable, but in concert, they drive function and stabilize the coat to prevent premature disassembly. Our extensive mutagenesis of Sec31 (Fig. 2) reveals how different assembly elements contribute: the active fragment provides nucleotide-dependent affinity contributed in large part by the W/N motif; linear PPP motifs form weak but specific binding interfaces; a charge-driven interface provides essential affinity; and multivalence of interactions contributes to robust assembly. The ability for loss of one element to be compensated for by another suggests that the different interfaces are partially redundant but reinforce each other to make a robust system. Orthologous Sec31 proteins across species may alter how they employ these interfaces to create functional proteins that might exhibit distinct assembly properties. Our analysis further suggested length effects may also be important, because multivalent PPP-containing stretches of disordered sequence below a certain length threshold were not viable. The precise role of disordered region length remains to be explored; one possibility is that a shortened disordered region (for example the Sec31 E fragment, which still contains multiple interaction sites) fails to assemble because of an inability to bridge adjacent Sec23 complexes.
A coat assembly/disassembly mechanism driven by multiple relatively weak interactions provides a possible mechanistic explanation for how a metastable structure like a vesicle coat can form in a controlled manner. By building up from initially weak, transient interactions, the coat can be recruited locally but is not committed to full assembly until a threshold of both inner and outer coat components is reached. Moreover, upstream regulatory factors, like Sec16 and TANGO1, that use the same interactions can prime an exit site for Sec31 recruitment and organize the inner coat. Once Sec31 is engaged, its disordered region can act as molecular velcro, being strong yet readily reversible. Coat propagation could feed forward as cage vertex interactions (mediated by the β-propeller domain of Sec31) bring in additional outer coat rods to in turn organize more inner coat complexes. As the diversity of Sec31 paralogs expands, as seen in multicellular organisms, disordered regions may fuel evolution of altered interfaces that allow for fine-tuning of vesicle assembly, disassembly, and geometry (Babu et al., 2012). For instance, absence of GTP stimulatory residues may yield a more stable coat structure, whereby the GTPase activity of the coat is not accelerated. Such stability may prolong initial events during coat assembly to favor formation of noncanonical carriers (Hutchings and Zanetti, 2019).
The multiple interfaces between Sec23 and Sec31 contribute to persistence of the polymerized coat even after GTP hydrolysis by Sar1 and its release from the membrane. Evidence for preserved coat elements on Sar1-free vesicles comes from immunogold labeling experiments that revealed persistent Sec23 and Sec13 on COPII vesicles generated with GTP (Barlowe et al., 1994). Similarly, multivalent combinatorial interactions provide an excellent mechanism for how a metastable coat polymer can be disassembled without the need for uncoating chaperones that expend energy. A combinatorial binding mode allows a stable structure to "breathe" via dynamic weak individual interactions that permit an opposing similarly weak interaction to compete and destabilize the structure. In the case of the COPII coat, local uncoating might be triggered by Golgi-localized kinases like Hrr25 (Lord et al., 2011) that would reverse a charge-based interaction and help ensure the directionality of vesicle traffic.
Our findings of COPII coat assembly driven by a network of interactions involving physicochemical properties like disorder and net charge, along with multivalent weak motif-driven interactions, provide a blueprint for how intrinsically disordered regions are used for dynamic assembly and disassembly. Each of the major coat complexes has significant disordered regions within its subunits (Pietrosemoli et al., 2013), and the flexible nature of the clathrin light chain is important in coat assembly (Wilbur et al., 2010). Our approach and the findings that we present here provide a conceptual framework to now discover and investigate how such disordered regions of coat complexes evolve to accommodate specific trafficking needs of an organelle, cell, and organism.
Strains and plasmids
Yeast strains used in this study are listed in Table S1. Strains were constructed using standard genetic knockout and LiAc transformation methods. Cultures were grown at 30°C in standard rich medium (YPD: 1% yeast extract, 2% peptone, and 2% glucose) or synthetic complete medium (SC: 0.67% yeast nitrogen base and 2% glucose supplemented with amino acids) as required. For testing viability, strains were grown overnight at 30°C to saturation in SC medium selecting for the mutant plasmid. 10-fold serial dilutions were made in 96-well trays before spotting onto 5-fluoroorotic acid (5-FOA) plates (1.675% yeast nitrogen base, 0.08% complete supplement mixture, 2% glucose, 2% agar, and 0.1% 5-FOA). Plates were incubated at 30°C and scanned on day 2 or 3 after spotting.
Plasmids used in this study are listed in Table S2. Standard cloning methods were used, including PCR amplification of yeast genes from genomic DNA, cloning into standard yeast expression vectors (Sikorski and Hieter, 1989), site-directed mutagenesis using the QuikChange system (Agilent), and Gibson Assembly (New England Biolabs) as per manufacturers' instructions.
Protein expression and purification
Sar1 was prepared as described (Shimoni and Schekman, 2002). Briefly, GST-Sar1 was expressed in bacterial cells by induction for 2 h with IPTG. Cells were lysed by sonication, and lysates were clarified and incubated with glutathione-agarose beads. The beads were then washed, and GST-free Sar1 was generated by thrombin cleavage.
Sec23-Sec24 (Sec23 and His-Sec24) and Sec13-Sec31 (Sec13 and His-Sec31) complexes were expressed in Sf9 cells using the pFastBac system. 500 ml of protein-expressing cells were collected and washed with PBS before freezing in liquid nitrogen. Cell pellets were lysed with a Dounce homogenizer in cold lysis buffer (20 mM Hepes, pH 8, 250 mM sorbitol, 500 mM KOAc, 1 mM DTT, 10 mM imidazole, and 10% vol/vol glycerol). Lysates were cleared in a JA-25.50 rotor (22,000 rpm, 1 h, 4°C), and the supernatant was filtered through a 0.45-µm membrane before loading onto a HisTrap HP column (GE Healthcare). The following steps were done using the ÄKTA purifier system (GE Healthcare), where elution buffer was lysis buffer supplemented with 500 mM imidazole. The column was washed with 4% and 10% elution buffer followed by elution using a linear gradient to 100% elution buffer. Peak fractions were checked by SDS-PAGE followed by Coomassie staining. Verified fractions were mixed in a 3:1 ratio with QA buffer (20 mM Tris, pH 7.5, 1 mM MgOAc, 1 mM DTT, and 10% vol/vol glycerol) and loaded onto a HiTrap Q HP column (GE Healthcare). The protein was eluted using a linear salt gradient to a final concentration of 1 M NaCl. Peak fractions were verified using SDS-PAGE and Coomassie staining and flash-frozen in liquid nitrogen. For removal of the 6xHis tag on Sec31, an overnight cleavage with His-tagged Tobacco Etch Virus protease was included after the verification of Ni-immobilized metal affinity chromatography fractions. The cleavage was done simultaneously with dialysis into lysis buffer. Uncleaved protein and His-tagged Tobacco Etch Virus protease were removed by flowing through the HisTrap HP column before continuing to the ion-exchange step.
In vitro budding from microsomal membranes
Microsomal membranes were prepared from yeast as described previously (Wuestehube, 1992). Briefly, yeast cells were grown to mid-log phase in YPD (1% yeast extract, 2% peptone, and 2% glucose), harvested and resuspended in 100 mM Tris, pH 9.4/10 mM DTT to 40 OD600/ml, and then incubated at room temperature for 10 min. Cells were collected by centrifugation, resuspended to 40 OD600/ml in lyticase buffer (0.7 M sorbitol, 0.75× YPD, 10 mM Tris, pH 7.4, and 1 mM DTT + lyticase 2 µl/OD600), and then incubated at 30°C for 30 min with gentle agitation. Cells were collected by centrifugation, washed once with 2X JR buffer (0.4 M sorbitol, 100 mM KOAc, 4 mM EDTA, and 40 mM Hepes, pH 7.4) at 100 OD600/ml, and then resuspended in 2X JR buffer at 400 OD600/ml before freezing at −80°C. Spheroplasts were thawed on ice, and an equal volume of ice-cold dH2O was added before disruption with a motor-driven Potter-Elvehjem homogenizer at 4°C. The homogenate was cleared by low-speed centrifugation, and crude membranes were collected by centrifugation of the low-speed supernatant at 27,000 ×g. The membrane pellet was resuspended in ∼6 ml buffer B88 (20 mM Hepes, pH 6.8, 250 mM sorbitol, 150 mM KOAc, and 5 mM Mg(OAc)2) and loaded onto a step sucrose gradient composed of 1 ml 1.5 M sucrose in B88 and 1 ml 1.2 M sucrose in B88. Gradients were subjected to ultracentrifugation at 190,000 ×g for 1 h at 4°C. Microsomal membranes were collected from the 1.2 M/1.5 M sucrose interface, diluted 10-fold in B88, and collected by centrifugation at 27,000 ×g. The microsomal pellet was resuspended in a small volume of B88 and divided into aliquots of 1 mg total protein until use.
Budding reactions were performed as described previously (Miller et al., 2002). Briefly, 1 mg of microsomal membranes per six to eight reactions was washed three times with 2.5 M urea in B88 and three times with B88. Budding reactions were set up in B88 to a final volume of 250 µl at the following concentrations: 10 µg/ml Sar1, 10 µg/ml Sec23-Sec24, 20 µg/ml Sec13-Sec31, and 0.1 mM nucleotide. Where appropriate, an ATP regeneration mix was included (final concentration: 1 mM ATP, 50 µM GDP-mannose, 40 mM creatine phosphate, and 200 µg/ml creatine phosphokinase). Reactions were incubated for 30 min at 25°C, and a 12-µl aliquot was collected as the total fraction. The vesicle-containing supernatant was collected after pelleting the donor membrane (15,000 rpm, 2 min, 4°C). Vesicle fractions were then collected by centrifugation in a Beckman TLA-55 rotor (50,000 rpm, 25 min, 4°C). The supernatant was aspirated, and the pelleted vesicles were resuspended in SDS sample buffer and heated for 10 min at 55°C with mixing. The samples were then analyzed by SDS-PAGE and immunoblotting for Sec22 (Miller laboratory antibody) and Erv46 (a gift from Charles Barlowe, Dartmouth College, Hanover, NH). Budding reactions were performed at least three times for each mutant, and results were consistent across replicates. One representative example is shown.
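As a quick arithmetic aid, the sketch below converts the stated final concentrations into protein mass per 250-µl reaction and the stock volume to pipette; the stock concentrations are hypothetical placeholders, not values from this protocol.

```python
# Mass and pipetting volumes for one 250-µl budding reaction.
final_conc = {"Sar1": 10, "Sec23-Sec24": 10, "Sec13-Sec31": 20}     # µg/ml, from the protocol
stock_conc = {"Sar1": 500, "Sec23-Sec24": 400, "Sec13-Sec31": 800}  # µg/ml, hypothetical stocks

rxn_vol_ul = 250
for protein, c_final in final_conc.items():
    mass_ug = c_final * rxn_vol_ul / 1000          # µg of protein needed per reaction
    vol_ul = mass_ug / stock_conc[protein] * 1000  # µl of stock to add
    print(f"{protein}: {mass_ug:.2f} µg per reaction -> {vol_ul:.1f} µl of stock")
```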
Bioinformatic analyses
For analyses of sequence features of disordered regions, sequences of S. cerevisiae Sec31, Sec16, Abp1, Aim3, Bbc1, and Las17, human Sec31A and Sec31B, and A. thaliana Sec31A were collected from the UniProt database. The residue-specific disorder propensity was calculated using IUPred2A in long disorder mode (Mészáros et al., 2018). Charge properties, including fraction of charged residues and net charge per residue, were calculated using the localCIDER package (Holehouse et al., 2017), with a sliding window size of 20. PPP motifs were identified as runs of three or more consecutive prolines, PPAP, or PAPP. The analyses were performed in Python. Data were assembled and plotted in R using custom-written scripts. For sequence similarities between disordered regions of Sec31 orthologues, yeast Sec31 (749-1,174), human Sec31A (780-1,112), human Sec31B (811-1,078), and A. thaliana Sec31A (722-860) were chosen for analysis. Each sequence was further divided into three parts: an IDR, an N-terminal structured region, and a C-terminal structured region, with the IDR in each sequence separating the N-terminal and C-terminal structured regions. The residues of the IDR in each orthologue are defined as yeast Sec31 (764-1,174), human Sec31A (800-1,091), human Sec31B (793-1,053), and A. thaliana Sec31A (719-870). Disordered regions including Abp1 (361-528), Aim3 (401-788), Bbc1 (670-843), and Las17 (306-529) were also chosen to compare similarity. These residues were predicted to have high disorder propensity (IUPred >0.5) and chosen to match the experimental constructs. Due to the striking difference among the lengths of the IDRs, pairwise local alignment was used to retrieve pairwise sequence similarity between the yeast Sec31 regions and corresponding structured or disordered regions. Sequence alignments were performed using EMBOSS Water with default parameters (Madeira et al., 2019).
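To make the feature definitions concrete, here is a minimal sketch of two of these calculations — sliding-window net charge per residue (NCPR) and PPP-motif detection — in plain Python. It approximates what localCIDER reports rather than calling it, and the toy sequence is ours, not Sec31.

```python
import re

# Toy sequence for illustration only (not Sec31).
seq = "PPPAPKRKSPPPNDEEKKRPAPPSGNKRPPP"

CHARGE = {"D": -1, "E": -1, "K": 1, "R": 1}  # His treated as neutral here

def linear_ncpr(seq, window=20):
    """Net charge per residue in a sliding window, as localCIDER computes it."""
    return [
        sum(CHARGE.get(aa, 0) for aa in seq[i:i + window]) / window
        for i in range(len(seq) - window + 1)
    ]

def ppp_motifs(seq):
    """PPP motifs: runs of >=3 prolines, or the PPAP/PAPP variants."""
    return [(m.start(), m.group()) for m in re.finditer(r"P{3,}|PPAP|PAPP", seq)]

print(ppp_motifs(seq))       # [(0, 'PPP'), (9, 'PPP'), (19, 'PAPP'), (28, 'PPP')]
print(max(linear_ncpr(seq)))  # most positively charged 20-residue window
```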
Data and materials availability
All data are available in the article or supplemental material; plasmids and strains described can be obtained from E.A. Miller.

Online supplemental material. Fig. S1 shows the structure of Sec13-Sec31, electron microscopy of GUV experiments with WT and mutant Sec23 and Sec31, and control experiments that demonstrate membrane tubulation requires the full COPII coat. Fig. S2 shows quantification of protein recruitment in liposome binding assays. Fig. S3 shows charge/disorder plots for Sec31 orthologues from different species and a protein similarity comparison for those proteins. Fig. S4 shows charge/disorder plots for Sec16 and other unrelated PPP-containing proteins used in complementation experiments shown in Fig. 6. Table S1 describes the yeast strains used in this study. Table S2 describes the plasmids used in this study.

Figure S3. Sec31 disordered domains share several features across species. (A) Charge-disorder plots for Sec31 orthologues from S. cerevisiae (ScSec31), human (HsSec31A and HsSec31B), and A. thaliana (AtSec31A). The black curve indicates predicted disorder propensity as calculated by IUPred. A value of IUPred >0.5 (dashed line) suggests a strong propensity for being intrinsically disordered. Each gray bar corresponds to the fraction of charged residues (FCR) in a sliding window of 20 amino acids, centered on the residue indicated. Red/blue bars at each position correspond to net charge per residue (NCPR) in a sliding window of 20 amino acids. PPP motifs are indicated by orange circles, and WN motifs are indicated by yellow diamonds. (B) Local pairwise sequence similarity between the ScSec31 structured domains (β-propeller/ACE) and disordered region (Pro-rich) and the corresponding domains in orthologues. None of the orthologues share significant similarity to ScSec31 within their disordered regions, whereas the structured regions have higher homology.

Figure S4. Charge-disorder analysis for unrelated PPP-containing proteins. (A) Diagram and charge-disorder plot of Sec16, highlighting the conserved central domain (CCD) and two disordered regions, DR1 (light blue) and DR2 (purple), that interact with Sec24 and Sec23, respectively. The plot is annotated as described in Fig. S3 A. (B) Charge/disorder plots for a subset of proteins identified in the yeast genome that contain multiple PPP motifs that were selected for testing as replacements for the Sec31 disordered region. Fragments selected for splicing into Sec31 are indicated by blue bars.
"Biology",
"Chemistry"
] |
Pharmacological Inhibition of FAK-Pyk2 Pathway Protects Against Organ Damage and Prolongs the Survival of Septic Mice
Sepsis and septic shock are associated with high mortality and are considered one of the major public health concerns. The onset of sepsis is known as a hyper-inflammatory state that contributes to organ failure and mortality. Recent findings suggest a potential role of two non-receptor protein tyrosine kinases, namely Focal adhesion kinase (FAK) and Proline-rich tyrosine kinase 2 (Pyk2), in the inflammation associated with endometriosis, cancer, atherosclerosis and asthma. Here we investigate the role of FAK-Pyk2 in the pathogenesis of sepsis and the potential beneficial effects of the pharmacological modulation of this pathway by administering the potent reversible dual inhibitor of FAK and Pyk2, PF562271 (PF271), in a murine model of cecal ligation and puncture (CLP)-induced sepsis. Five-month-old male C57BL/6 mice underwent CLP or Sham surgery and, one hour after the surgical procedure, mice were randomly assigned to receive PF271 (25 mg/kg, s.c.) or vehicle. Twenty-four hours after surgery, organs and plasma were collected for analyses. In another group of mice, survival rate was assessed every 12 h over the subsequent 5 days. Experimental sepsis led to a systemic cytokine storm resulting in the formation of excessive amounts of both pro-inflammatory cytokines (TNF-α, IL-1β, IL-17 and IL-6) and the anti-inflammatory cytokine IL-10. The systemic inflammatory response was accompanied by high plasma levels of ALT and AST (liver injury), creatinine (renal dysfunction) and lactate, as well as a high clinical severity score. All parameters were attenuated following PF271 administration. Experimental sepsis induced an overactivation of FAK and Pyk2 in liver and kidney, which was associated with p38 MAPK activation, leading to increased expression/activation of several pro-inflammatory markers, including the NLRP3 inflammasome complex, the adhesion molecules ICAM-1, VCAM-1 and E-selectin, and the enzymes NOS-2 and myeloperoxidase. Treatment with PF271 inhibited FAK-Pyk2 activation, thus blunting the inflammatory abnormalities orchestrated by sepsis. Finally, PF271 significantly prolonged the survival of mice subjected to CLP-sepsis. Taken together, our data show for the first time that the FAK-Pyk2 pathway contributes to sepsis-induced inflammation and organ injury/dysfunction and that the pharmacological modulation of this pathway may represent a new strategy for the treatment of sepsis.
INTRODUCTION
Sepsis is a major public health concern, responsible for high mortality and morbidity rates, followed by a reduced quality of life for survivors (1)(2)(3). The latest data on the global sepsis burden reported 48.9 million incident cases and 11.0 million sepsis-related deaths, representing 19.7% of all deaths worldwide (4). Despite the progress in clinical and basic research, the prognosis of septic patients remains remarkably poor, prompting the World Health Organization and World Health Assembly to recognize sepsis as a global health priority and, thus, to adopt a resolution to improve the prevention, diagnosis and management of sepsis (5). The morbidity and mortality in sepsis are driven by organ dysfunction (6) and, among the multitude of triggering mechanisms, acute hyperinflammation has a substantial impact on host responses, which in turn contributes to sepsis-related multiple organ failure (MOF) (7).

Very recently, two non-receptor protein tyrosine kinases belonging to the Focal adhesion kinase (FAK) family, namely FAK and Proline-rich tyrosine kinase 2 (Pyk2), have been identified as new key players in mediating the inflammatory response involved in the pathogenesis of endometriosis (8), atherosclerosis (9) and asthma (10), as well as in tumorigenesis and metastasis formation (11,12). FAK and Pyk2 are ubiquitously expressed and share the same three-domain organization, with two non-catalytic domains, the band 4.1, ezrin, radixin and moesin (FERM) domain and the focal adhesion targeting (FAT) domain, linked by a central kinase domain (13). In addition to their kinase function, both FAK and Pyk2 act as scaffold proteins and play a crucial role in downstream integrin signaling (14,15). Despite their homology, the signaling events that lead to activation of these two kinases differ. FAK is mainly activated by integrins, growth factor receptors and cytokine receptors, leading to overproduction of several pro-inflammatory mediators and cytokines (16). In contrast, Pyk2 has the exclusive ability to sense calcium ions, and it can be overactivated under lipopolysaccharide (LPS) stimulation, resulting in overproduction of chemokines that regulate migration and infiltration of monocytes/macrophages, including monocyte chemotactic protein-1 (MCP-1), through a p38-MAPK-dependent mechanism (17,18). Besides, a compensatory upregulation in Pyk2 levels has been documented following pharmacological or genetic inhibition of FAK (19,20). Thus, the selective and simultaneous inhibition of both proteins may represent an innovative pharmacological approach to counteract the FAK-Pyk2-mediated inflammatory response, and recently published papers have demonstrated the efficacy of a FAK-Pyk2 dual inhibitor, PF562271 (PF271), in counteracting leukocyte infiltration and vascular inflammation in both in vitro and in vivo conditions (21,22).

Although a few studies have shown that LPS-induced endotoxemia evokes FAK activation, which contributes to exacerbating the inflammatory response, leading to organ damage and increased mortality (23,24), so far no experimental data have been reported on the potential effects of pharmacological modulation of the FAK-Pyk2 signaling pathway in sepsis. Thus, the present study was designed to address this issue by investigating the potential beneficial effects of a potent reversible dual inhibitor of both FAK and Pyk2 (PF271) in a murine model of polymicrobial sepsis.
PF271 is an ATP-competitive, reversible inhibitor of both FAK and Pyk2, with IC50 values of 1.5 nmol/L and 13 nmol/L, respectively (25), which has completed a phase I clinical trial (NCT00666926) in patients with advanced solid tumors (26,27).
Animals and Ethical Statement
Male, five-month-old C57BL/6 mice (from Envigo laboratories, IT), weighing 30-35 g, were used and kept under standard laboratory conditions. The animals were housed in an environment with automatically controlled temperature (25 ± 2°C) and light/dark cycle (12/12 h), with ad libitum access to food and water. The animals were kept under standard laboratory conditions for four weeks before the start of the experimental procedures. The animal protocols are reported in compliance with the ARRIVE guidelines (28) and the MQTiPSS recommendations for preclinical sepsis studies (29), and they were approved by the local Animal Use and Care Committees as well as the National Authorities (ethical approval numbers: 420/2016-PR, Italy, and 7936220321, Brazil).
Polymicrobial Sepsis
Sepsis was induced through the cecal ligation and puncture (CLP) procedure introduced by Wichterman and co-workers (1980) (30), with slight modifications. Mice were anesthetized with 3% isoflurane (IsoFlo, Abbott Laboratories) together with 0.4 L/min oxygen in an anaesthesia chamber. Once sedated, mice were kept anesthetized by administration of 2% isoflurane (via nosecone) together with 0.4 L/min oxygen throughout the surgical procedure. Body temperature was maintained at 37°C with a thermal blanket and constantly monitored using a rectal thermometer. Mice were subjected to a mid-line laparotomy of approximately 1.0 cm and, after location and exposure, the cecum was ligated with a cotton thread right below the ileocecal valve. A single through-and-through puncture of the cecum was carried out with a sterile 21 G needle, and the cecum was lightly compressed to leak a small amount (droplet) of feces. Then, the cecum was carefully relocated into the peritoneal cavity, which was sutured with silk thread. Sham-operated mice underwent the same procedure, but without CLP. Immediately after the surgical procedures, each mouse received an analgesic agent (Carprofen, 5 mg/kg, s.c.) and resuscitation fluid (37°C, 0.9% NaCl, 50 mL/kg, s.c.) in order to support the hemodynamic situation of the animals and to induce a hyperdynamic shock phase (31,32). Mice were left on a homeothermic blanket and constantly monitored until they recovered from anesthesia, and then placed back into fresh, clean cages.
Twenty-four hours post-CLP, the body temperature was recorded, and a blinded assessment of a clinical score was performed to evaluate the symptoms consistent with murine sepsis. The following 6 signs were used to score the health of experimental mice: lethargy, piloerection, tremors, periorbital exudates, respiratory distress, and diarrhea. An observed clinical score >3 was defined as severe sepsis, while a score ≤3 indicated the development of moderate sepsis (33). Details on the clinical score are reported in the supplementary material.
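For illustration only, a minimal scoring helper is sketched below. The one-point-per-sign weighting is an assumption made here for demonstration; the study's exact per-sign scoring scheme is defined in its supplementary material.

```python
SIGNS = {"lethargy", "piloerection", "tremors",
         "periorbital exudates", "respiratory distress", "diarrhea"}

def classify(observed_signs):
    """Toy severity classifier: assumes each observed sign adds one point
    (a simplification; the real scoring scheme is in the study's supplement)."""
    score = sum(1 for sign in observed_signs if sign in SIGNS)
    return score, ("severe" if score > 3 else "moderate")

print(classify(["lethargy", "tremors", "piloerection", "diarrhea"]))  # (4, 'severe')
```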
Survival Study
To evaluate the potential therapeutic effect of PF271 on sepsis, 28 mice were subjected to CLP-induced sepsis and then randomly allocated into two groups, receiving either Vehicle or PF271 (25 mg/kg) once subcutaneously, one hour after CLP; n = 14 for each group. Survival rates were assessed every 12 h over the subsequent 5 days.
Plasma and Organ Collection
Twenty-four hours after surgery, mice were anesthetized using isoflurane and euthanized by cardiac exsanguination. Blood samples were collected in microtubes containing EDTA (17.1 µM/mL of blood) and then centrifuged at 13,000 × g at room temperature to obtain plasma. Liver and kidney samples were harvested and stored in cryotubes, with or without optimal cutting temperature (OCT) compound, and frozen in liquid nitrogen.
Samples were stored at -80°C until analysis; all analyses were performed blindly.
Myeloperoxidase (MPO) Activity Assay
The MPO activity assay has been described previously (35). About 100 mg of liver and kidney samples were homogenized, centrifuged and assayed for MPO activity by measuring the H2O2-dependent oxidation of 3,3′,5,5′-tetramethylbenzidine (TMB). MPO activity was expressed as optical density (O.D.) at 650 nm per mg of protein.
Statistical Analysis
Power analysis for the study design was performed using G*Power 3.1 software (37). Data are expressed as dot plots for each animal and as mean ± SD of 5-10 mice. Normality of the data distribution was verified by the Shapiro-Wilk test and homogeneity of variances by the Bartlett test. Statistical analysis was performed by one-way ANOVA, followed by Bonferroni's post hoc test. For data that were not normally distributed, non-parametric analysis was applied using the Kruskal-Wallis test followed by Dunn's post hoc test. Differences in the survival study were determined with a log-rank (Mantel-Cox) test. A P value <0.05 was considered significant. Statistical analysis was performed using GraphPad Prism software version 7.05 (San Diego, California, USA).
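This decision flow can be sketched in a few lines of Python. The snippet below is an illustrative reconstruction, not the study's analysis script: the three groups hold made-up values, SciPy supplies the assumption checks and omnibus tests, and the Bonferroni/Dunn post hoc steps are left out for brevity.

```python
from scipy import stats

# Hypothetical per-group measurements of a plasma marker (illustrative only).
sham      = [1.0, 1.2, 0.9, 1.1, 1.0]
clp       = [2.9, 3.4, 3.1, 2.7, 3.3]
clp_pf271 = [1.3, 1.1, 1.4, 1.2, 1.5]
groups = [sham, clp, clp_pf271]

# Assumption checks: Shapiro-Wilk normality, Bartlett homogeneity of variances.
normal = all(stats.shapiro(g)[1] > 0.05 for g in groups)
homogeneous = stats.bartlett(*groups)[1] > 0.05

if normal and homogeneous:
    # Parametric route (Bonferroni post hoc comparisons not shown).
    print("one-way ANOVA p =", stats.f_oneway(*groups)[1])
else:
    # Non-parametric route (Dunn post hoc comparisons not shown).
    print("Kruskal-Wallis p =", stats.kruskal(*groups)[1])
```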
Materials
Unless otherwise stated, all reagents were purchased from the Sigma-Aldrich Company Ltd. (St. Louis, Missouri, USA). The BCA Protein Assay kit was from Pierce Biotechnology Inc. (Rockford, IL, USA). Antibodies were from Cell-Signaling Technology (Beverly, MA, USA).
PF271 Improves Severity Score and Prolongs Survival of Septic Mice
Mice that underwent CLP surgery developed clinical signs of severe sepsis (score >3) (Figure 1A) associated with hypothermia (24.51 ± 0.24°C) at 24 h after the onset of experimental sepsis (Figure 1B). Interestingly, treatment with PF271 demonstrated a protective effect in septic mice, as the severity score (score = 0) and body temperature (36.0 ± 0.29°C) did not differ (P>0.05) from those observed in the Sham group (Figures 1A, B). To further determine the overall long-term effect of PF271 treatment, we performed a survival study in septic mice. As shown in Figure 1C, the median survival was 24 h in the CLP mice treated with vehicle and 48 h in the CLP mice treated with PF271. PF271 significantly prolonged the survival time of septic mice: 93% of the CLP+Vehicle mice died within 120 h, whereas the CLP mice treated with PF271 had a significantly reduced mortality of 64% (hazard ratio: 0.33; 95% confidence interval: 0.12-0.89; P < 0.05).
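The survival comparison reported here corresponds to a standard Mantel-Cox (log-rank) test. The sketch below shows its shape using the lifelines package on fabricated per-animal times chosen to roughly match the reported medians and mortality; the actual per-animal data are not given in the text.

```python
from lifelines.statistics import logrank_test

# Fabricated survival times in hours (censored at 120 h), illustrative only.
veh_t = [12, 24, 24, 24, 24, 24, 24, 24, 36, 48, 60, 72, 96, 120]
veh_e = [1] * 13 + [0]     # 13/14 deaths (~93% mortality), median ~24 h
pf_t  = [24, 36, 48, 48, 48, 48, 48, 48, 72, 120, 120, 120, 120, 120]
pf_e  = [1] * 9 + [0] * 5  # 9/14 deaths (~64% mortality), median ~48 h

result = logrank_test(veh_t, pf_t, event_observed_A=veh_e, event_observed_B=pf_e)
print(result.p_value)
```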
PF271 Administration Counteracts the Cytokine Storm Evoked by Experimental Sepsis
Twenty-four h after surgery, septic control mice developed a systemic cytokine storm, with a massive increase in the levels of pro-inflammatory cytokines, specifically TNF-α, IL-1β, IL-17 and IL-6, and the proteins PAI-1 and Resistin, when compared to Sham animals. Most notably, treatment with PF271 completely abolished the CLP-induced increase in these markers (Figures 3A-D, G, H). Interestingly, sepsis also evoked a robust increase in the plasma level of the anti-inflammatory cytokine IL-10, whose systemic concentration was not significantly affected by PF271 treatment (Figure 3E). On the contrary, neither the intervention (CLP) nor the PF271 treatment significantly affected the systemic levels of IFN-γ (Figure 3F).

FIGURE 1 | Mice were randomly selected to undergo Sham or CLP surgery. One hour later, CLP mice were treated once with either Vehicle or PF271 (25 mg/kg s.c.). Twenty-four hours after the Sham or CLP procedure, severity score (A) and body temperature (B) were recorded. Data are expressed as dot plots for each animal and as mean ± SD of 5-10 mice per group. The Kruskal-Wallis test followed by Dunn's post hoc test was applied to assess the severity score, whereas for body temperature, one-way ANOVA followed by Bonferroni's post hoc test was used. *p < 0.05 CLP vs Sham/CLP+PF271. The mortality rate was recorded over a 5-day period (C). A log-rank test was used for the comparison of the survival curves (n = 14 mice per group). *p < 0.05 CLP vs CLP+PF271.
PF271 Reverses Local Neutrophil Infiltration and Inflammation Evoked by Sepsis
As PF271 displayed protective effects against CLP-induced liver and kidney dysfunction, we next investigated the possible underlying mechanisms. CLP injury doubled MPO activity in liver and kidney when compared to Sham mice, suggesting an increase in local polymorphonuclear cell (mainly neutrophil) infiltration. In contrast, the degree of MPO activity in CLP mice treated with PF271 was similar to that recorded in Sham mice (Figures 4A, B). The dramatic increase in MPO activity in liver (0.445 ± 0.02 vs 1.015 ± 0.10) and kidney (0.487 ± 0.02 vs 0.855 ± 0.05) following CLP injury was confirmed by immunohistochemistry analysis. While MPO immunopositivity was widely detected in the liver (Figure 4C), the increase in staining for MPO in the kidney was restricted to the renal corpuscles (Figure 4D). Interestingly, in both organs, treatment with PF271 abolished the increase in MPO staining induced by experimental sepsis (Figures 4C, D). The increase in MPO activity/staining seen in animals with CLP-sepsis and the prevention of this effect by PF271 treatment were mirrored by similar effects of CLP and PF271 on iNOS expression. As shown in Figure 5, iNOS immunopositivity was found to be increased in both liver (panel A) and kidney (panel B) of septic mice. Specifically, a high and wide expression/distribution of iNOS was observed in the liver, being predominant in the periphery of blood vessels, whereas renal iNOS immunopositivity was increased mainly in the renal corpuscles and in the cortical tubules. Interestingly, PF271 treatment resulted in a substantial reduction of iNOS expression in both liver and kidney of septic mice (Figures 5A, B). The impact of the septic insult and the drug treatment on local inflammation was also confirmed by gene-expression analysis of the adhesion molecules ICAM-1, VCAM-1 and E-Selectin, whose levels were drastically increased following the septic injury and significantly reduced by PF271 administration (Figures 6A-F).
PF271 Reduces Local FAK-PyK2 Activity in Septic Mice
To demonstrate that the above-mentioned effects were correlated with modulation of the pharmacological targets, we evaluated the local activation of the FAK-Pyk2 pathway. As shown in Figure 7, in liver and kidney neither the septic insult nor the drug treatment significantly modified the total expression of the FAK and Pyk2 proteins. However, tissue homogenates of mice that underwent CLP exhibited a significant increase in the phosphorylation of both Tyr397 on FAK (panels A-B) and Tyr402 on Pyk2 (panels C-D), suggestive of increased enzyme activation. Furthermore, an overactivation of the downstream p38 MAPK protein triggered by FAK-Pyk2 was also documented, as shown by a robust increase in the phosphorylation of Thr180/Tyr182 on p38 MAPK in liver (Figure 7E) and kidney (Figure 7F) of septic mice. As expected, mice treated with PF271 showed reduced activation of both FAK-Pyk2 and its downstream effector p38 MAPK, confirming the ability of PF271 to interfere with its pharmacological targets.
NLRP3 Inflammasome Expression and Activity Are Attenuated by FAK-PyK2 Inhibition During Polymicrobial Sepsis
We and others (33,38,39) have previously documented a pivotal role of the NLRP3 inflammasome pathway in mediating deleterious effects in sepsis. We, therefore, explored a potential cross-talk mechanism linking FAK-PyK2 inhibition to impaired NLRP3 activity. As shown in Figure 8, experimental sepsis evoked a robust increase in the assembly of the NLRP3 complex in liver and kidney, which was associated with a subsequent increase in the cleavage of caspase-1, when compared to animals in the Sham group. In contrast, mice subjected to CLP and treated with PF271 showed a significant reduction in the renal and hepatic expression and activation of the NLRP3 inflammasome complex.
DISCUSSION
Preliminary in vitro and in vivo studies have previously suggested a role of the FAK-Pyk2 pathways in inflammation and, ultimately, organ damage caused by LPS (23,24,40,41). Here we show, for the first time, that administration of PF271, a selective dual inhibitor of both FAK and Pyk2, ameliorates the severity score and prolongs survival in a murine model of CLP-sepsis. Mice subjected to sepsis (and treated with vehicle) developed local and systemic inflammation and severe organ injury/dysfunction, and this was associated with high mortality. In contrast, administration of the reversible inhibitor of both FAK and Pyk2, PF271, counteracted all these abnormalities caused by sepsis and resulted in long-term protection and improved survival. In this study, we confirmed that sepsis does, indeed, lead to phosphorylation and, hence, activation of both FAK and Pyk2, and we show that PF271 attenuates FAK-Pyk2 phosphorylation, and thus activation, in septic mice. In addition, we explored the molecular mechanisms involved in the deleterious effects attributed to FAK-Pyk2 overactivation during sepsis. In the liver and kidney of septic mice, we documented that PF271 attenuated the upregulation of the cellular adhesion molecules (CAMs) VCAM-1, ICAM-1 and E-Selectin, which exert a pivotal role in driving leukocyte infiltration and the subsequent excessive inflammatory response, which ultimately leads to the development of MOF (42)(43)(44). We also documented that PF271 administration was associated with a significant reduction in local (liver and kidney) expression and activity of both MPO, a well-known biomarker of neutrophil infiltration (45), and iNOS, which is detectable in neutrophils, leading to overproduction of nitric oxide (NO) and the subsequent generation of other reactive species (46). Our study does not allow the identification of the specific cell types involved in PF271-mediated responses. Nevertheless, it is well described that cells of the innate immune system and endothelial cells are the most prominent types involved in the exorbitant release of inflammatory cytokines, which in turn drives septic inflammation (47), and the same cells express the FAK-Pyk2 cascade, whose role in the cellular production of inflammatory mediators has been clearly demonstrated (17,22,40,(48)(49)(50). We may, thus, speculate that the beneficial effects of PF271, including those related to the preservation of organ function in sepsis, are due, at least in part, to a direct effect of PF271 on both leukocytes and endothelial cells.

FIGURE 5 | Effect of PF271 on tissue expression of iNOS during experimental sepsis. Mice were randomly selected to undergo Sham or CLP surgery. One hour later, CLP mice were treated once with either Vehicle or PF271 (25 mg/kg s.c.). Twenty-four hours after the Sham or CLP procedure, liver and kidney were harvested. Tissue sections were prepared to identify iNOS expression by immunohistochemistry in liver (A) and kidney (B). Representative photomicrographs at 10x and 20x magnification were recorded from 5 animals per group.

FIGURE 6 | Mice were randomly selected to undergo Sham or CLP surgery. One hour later, CLP mice were treated once with either Vehicle or PF271 (25 mg/kg s.c.). Twenty-four hours after the Sham or CLP procedure, liver and kidney were harvested, and total mRNA was extracted. Real-time qPCR was performed for the following genes: ICAM-1 in liver (A) and kidney (B); VCAM-1 in liver (C) and kidney (D); E-Selectin in liver (E) and kidney (F). Relative gene expression was obtained after normalization to housekeeping genes (β-actin and GAPDH) using the 2^-ΔΔCT formula, and fold change was determined by comparison to the Sham group. Data are expressed as dot plots for each animal and as mean ± SD of 4-5 mice per group. Statistical analysis was performed by one-way ANOVA followed by Bonferroni's post hoc test. *p < 0.05 CLP vs Sham/CLP+PF271.
One of the most relevant downstream signaling events directly activated by FAK-Pyk2 is the p38 MAPK-dependent signaling pathway, which in turn activates the NF-κB transcription factor, leading to cytokine overproduction and, thus, the cytokine storm typical of the systemic inflammation and organ dysfunction associated with sepsis (17,40,50,51). Here we confirmed a local overactivation of p38 MAPK signaling during experimental sepsis, which was significantly reduced by PF271 treatment. The sepsis-induced activation of p38 MAPK was paralleled by a massive increase in the systemic levels of pro-inflammatory cytokines, such as TNF-α, IL-1β, IL-17 and IL-6, as well as the anti-inflammatory cytokine IL-10. Most notably, we report that all pro-inflammatory cytokines showed an impressive reduction when mice were treated with PF271, whereas the high levels of the anti-inflammatory cytokine IL-10 were unaffected by treatment. In keeping with clinical studies (52,53), we also documented a significant sepsis-induced upregulation of the recently discovered inflammatory cytokine resistin and, most notably, we demonstrated here, for the first time, that its systemic concentrations may be affected by the pharmacological modulation of the FAK-Pyk2 pathway. The reduction in resistin levels caused by PF271 in sepsis could contribute, at least in part, to the prolonged survival of septic mice following drug treatment, as recent findings have demonstrated that resistin significantly impairs neutrophil killing of Gram-positive and Gram-negative organisms, thus contributing to the development of immunosuppression, which is central to sepsis-related morbidity and mortality (54). However, further studies are needed to support this hypothesis. One of the cytokines showing a relevant modulation by both sepsis and PF271 was IL-1β. IL-1β is primarily released from the activated NLRP3 inflammasome, which plays a pivotal role in the pathogenesis of sepsis (55). We thus evaluated the potential impact of FAK-Pyk2 pharmacological modulation on NLRP3 complex formation and activation, showing that treatment with PF271 abolished the NLRP3 overexpression and caspase-1 overactivation secondary to sepsis, which was paralleled by a systemic reduction in IL-1β. Our data strengthen previously published experimental data demonstrating that Pyk2 directly phosphorylates monomeric apoptosis-associated speck-like protein containing CARD (ASC) subunits of the NLRP3 inflammasome, bringing ASC into an oligomerization-competent state to then activate caspase-1 and trigger IL-1β secretion (21). During experimental sepsis, we also documented a significant overproduction of plasminogen activator inhibitor-1 (PAI-1), the serum levels of which have been reported to rapidly increase in the early stages of sepsis and to be positively correlated with sepsis severity in humans (56). PAI-1 overproduction may also contribute to the FAK activation observed in our experimental model, as PAI-1 has been demonstrated to induce FAK phosphorylation leading to processes of macrophage infiltration (57). The slight reduction in PAI-1 levels observed in septic mice following PF271 administration could be due, at least in part, to the reduced systemic concentrations of cytokines, mainly IL-6, which has been reported to exert a key role in regulating PAI-1 expression in vascular endothelial cells (58). We also found high plasma levels of lactate, a marker of tissue perfusion, during experimental sepsis, and we report that treatment with PF271 brought its levels to values not different from the control group. Plasma lactate is not only a marker of tissue perfusion, but is currently used as a diagnostic criterion to determine whether a patient is in septic shock (1). Thus, our findings also demonstrate the ability of PF271 to improve hemodynamic factors during sepsis. However, we must acknowledge that the lack of direct measurements of hemodynamic parameters does not allow us to reach a solid conclusion on this specific issue.

FIGURE 8 | Effect of PF271 on tissue activation of the NLRP3 inflammasome during experimental sepsis. Mice were randomly selected to undergo Sham or CLP surgery. One hour later, CLP mice were treated once with either Vehicle or PF271 (25 mg/kg s.c.). Twenty-four hours after the Sham or CLP procedure, liver and kidney were harvested, and total protein was extracted. Western blotting analyses for NLRP3 in the liver (A) and kidney (B) and for cleaved caspase-1 in the liver (C) and kidney (D) were corrected against β-actin and normalized using the Sham-related bands. Densitometric analysis of the bands is expressed as relative optical density (O.D.). Data are expressed as dot plots for each animal and as mean ± SD of 4-5 mice per group. Statistical analysis was performed by one-way ANOVA followed by Bonferroni's post hoc test. *p < 0.05 CLP vs Sham/CLP+PF271.
Our study has further limitations, including the lack of evidence of any effects of PF271 on other important organs involved in sepsis-associated MOF, such as the lungs and the heart, as well as the use of only male mice, which does not allow a better understanding of the role of gender dimorphism in the effects of PF271. Although the experimental procedures reported here are in keeping with most of the recommendations of the MQTiPSS consensus guidelines (29), we did not consider the use of repeated analgesic and antimicrobial treatments or continuous fluid resuscitation. Furthermore, the efficacy endpoints that we evaluated and the overall survival were assessed only at day 1 and day 5 post-CLP, respectively, and multiple and longer kinetics are needed for a better understanding of the overall efficacy of PF271 on the dynamic changes taking place in sepsis. Thus, future studies are required to improve the clinical relevance of our findings as well as to gain better insight into the safety profile of the proposed drug treatment.
CONCLUSIONS
This study shows for the first time that the development of organ dysfunction induced by a clinically relevant polymicrobial sepsis model is associated with activation of the FAK-Pyk2 pathway, while the pharmacological inhibition of this pathway results in protective effects and prolonged survival. This beneficial effect is likely due to the prevention of leukocyte infiltration and the related excessive local inflammation through the modulation of FAK-Pyk2 downstream inflammatory cascades, including p38-MAPK and the NLRP3 inflammasome. As PF271 has already been tested in a phase I clinical trial in oncology, our findings may provide an opportunity for the repurposing of this compound for use in patients with sepsis.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
The animal study was reviewed and approved by Ministero della Salute, UFFICIO VI, Tutela del benessere animale, igiene zootecnica e igiene urbana veterinaria.
"Biology",
"Medicine"
] |
Phase change materials in energy sector-applications and material requirements
Phase change materials (PCMs) have been applied in many areas, one of which is the energy field. PCMs are interesting for the energy sector because their use enables thermal stabilization and the storage of large amounts of heat. This is a major issue for the safety of electronic devices, thermal control of buildings and vehicles, solar power and many other energy domains. This paper contains preliminary results of research on solid-solid phase change materials designed for thermal stabilisation of electronic devices.
Introduction
The energy sector is an important issue for present-day society. It includes a wide range of topics, some of which are linked to phase change materials. Part of those applications is presented below [1].
Building applications
One of the most important applications of phase change materials in the energy sector is the use of PCMs in housing and public facilities. They can be applied in various ways, among others:
- building materials and components containing PCMs,
- latent heat storage containers (with PCMs) integrated in building structures.
Through the use of PCMs in building applications it is possible to reduce heating requirements and indoor overheating.
Solar energy
Phase change materials can be applied in solar energy applications for:
- solar heating,
- solar cooling,
- solar hot water,
- photovoltaic systems.
Thermal energy storage
Phase change materials can be applied for short-term, medium-term and long-term thermal energy storage. Selection of the appropriate material depends on demand: an excess lasting several hours, a daily overload of thermal energy, or even a seasonal heat surplus.
Thermal energy storage can have a positive impact in the form of: increasing the reliability of the system, improving the functioning of the power plants and energy systems, reducing energy purchase costs by shifting energy surplus from periods of lower to periods of higher demand.
At a time when the cost of energy is getting higher and higher, it is important to improve old ways of energy saving and find new ones. One of them is thermal energy storage and the use of stored energy in the future.
Electronic devices
There is also the possibility of using PCMs in electronic devices:
- PCM battery jackets, which cover the battery and keep it at its optimum temperature [2],
- PCM pads, which protect portable computers from overheating [3],
- thermal management for mobile phones [4],
- heat sinks modified by the use of PCMs [5].
The authors of this paper have decided to investigate one of the applications of PCMs listed above: computer heat sink modification.
Phase change materials classification
There are many methods of classifying phase change materials [6]. The most typical is grading with respect to the type of transition:
- solid - liquid,
- solid - solid,
- gas - liquid,
- solid - gas.
Another important type of classification concerns chemical composition. It matters because PCM properties strongly depend on the type of composition. Classification by chemical composition is as follows:
- organic,
- inorganic,
- eutectics.
Properties of phase change materials relevant for use in the energy sector
PCMs are selected taking into account, among others, thermophysical, structural and economical properties:
a) thermophysical:
- phase change transition temperature,
- latent heat of fusion,
- thermal conductivity,
- stability of properties over many work cycles;
b) structural:
- small volume change,
- chemical stability,
- compatibility with different materials,
- non-toxic,
- non-flammable;
c) economical:
- low price,
- recycling possibility.
The most important property of PCMs is the temperature range of the phase transition. Phase change materials can be used in many applications where heat storage or temperature stabilization is needed, for example: thermal energy storage, thermal stabilization of buildings, off-peak power utilization, waste heat recovery, applications at solar power plants, food transport, blood and medicine transport, and temperature stabilization of electronic devices. It is important to find a PCM suitable for the application; a rough estimate of how much heat a PCM element can absorb is given below.
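The heat absorbed by a PCM element heated across its transition can be estimated with a standard sensible-plus-latent balance; all numbers below are illustrative assumptions, not values measured in this work:

Q = m [ c_{p,1} (T_m - T_1) + \Delta H_f + c_{p,2} (T_2 - T_m) ]

For example, assuming m = 50 g of PCM, c_{p,1} ≈ c_{p,2} ≈ 2 J/(g·K), ΔH_f = 150 J/g, an initial temperature T_1 = 30 °C, a transition at T_m = 45 °C and a final temperature T_2 = 50 °C:

Q = 50 × [2 × 15 + 150 + 2 × 5] = 50 × 190 = 9500 J ≈ 9.5 kJ,

of which about 79% (150/190) is stored in the latent term alone, which is why a high latent heat of fusion dominates the selection criteria above.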
Solid-solid phase change materials
The best known and most commonly used are solid-liquid phase change materials (S-L PCMs). However, during the last few years interest in solid-solid phase change materials (S-S PCMs) has grown. S-S PCMs are materials with a solid-solid transition, i.e., a change of crystallographic structure. The fact that a typical phase transition does not occur in the case of S-S PCMs favours their use in applications where the following requirements must be met:
- minimal or no volume change,
- no liquid or gas generation.
The predominance of the solid-solid over the solid-liquid transition involves minimization of the volume change and of the possibility of liquid/gas generation. This decreases the probability of corrosion or damage and enables the use of S-S PCMs in a wide range of applications. This work contains an analysis of the possibility of using PCMs for thermal stabilisation of electronic devices.

3 Experimental procedures
Synthesis of solid-solid phase change materials and their thermophysical properties
A solid-solid phase change material has been synthesized in order to assess its suitability for thermal stabilisation of electronic devices. The main components of the synthesized material are polyethylene glycol (PEG) and ethyl cellulose (EC). To select a material characterized by suitable properties, a group of materials has been synthesized and examined, and their properties have been compared. Details concerning the type and percentage of polyethylene glycol used for the synthesis are presented in Table 1. It has been observed that an increase in the content of polyethylene glycol in the synthesized material increases the latent heat of fusion of this material. The type of polyethylene glycol used is also important: the higher the molecular weight of the PEG, the higher the value of the latent heat of fusion. Simultaneously, it has been observed that if too little cellulose is used, it is not possible to obtain a solid-solid phase change material. This is because the cellulose is responsible for the creation of a rigid structure which is filled with polyethylene glycol; the PEG is intended to give the characteristics typical of phase change materials. A simple estimate of this composition effect is sketched below.
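The observed trend can be rationalized with a rule-of-mixtures estimate in which only the PEG fraction contributes the transition enthalpy while the cellulose frame is treated as inert; the enthalpy value used here is an assumed, typical literature figure for pure PEG 4000, not one measured in this work:

\Delta H_{PCM} ≈ w_{PEG} · \Delta H_{PEG}

Assuming ΔH_PEG ≈ 190 J/g, a composite with w_PEG = 0.75 would give ΔH_PCM ≈ 0.75 × 190 ≈ 143 J/g, against ≈ 95 J/g at w_PEG = 0.50, consistent with the measured increase of latent heat with PEG content.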
Figure 2 presents the results of Fourier transform infrared spectroscopy for the synthesized material (PEG 4000-75%) and for the pure PEG 4000 and cellulose used in its synthesis. It can be inferred that the polyethylene glycol has been grafted onto the cellulose frame. The dependencies mentioned above were observed using the differential scanning calorimetry (DSC) technique. Figures 3 and 4 present the relations for material synthesized using different PEG types (Figure 3) and various mass ratios of PEG 4000 (Figure 4).
The material has been synthesized taking into account several important requirements:
- Properly selected transition temperature. A temperature in the range of 40-50 °C has been chosen considering the maximum temperature of safe work for the selected computer processors.
- As large as possible value of latent heat of fusion. Owing to its quite high latent heat of fusion and suitable transition temperature, the material containing 75% of PEG 4000 has been chosen; its heat of fusion and transition temperature are presented in Table 2.
- As large as possible value of thermal conductivity. To improve thermal conductivity, the selected material has been modified with graphite and carbon nanotubes. The percentages of additives used for the synthesis are presented in Table 3. The best solution will be selected taking into consideration the degree of thermal conductivity and economic reasons.
- Structural properties: small volume change, no leakage problem, chemical stability, compatibility with different materials, non-toxicity. All those requirements have been included at the project stage. A solid-solid PCM has been chosen to avoid changes in volume and the problem of leakage.
- Stability of properties over many work cycles. To assess the ability to work over multiple cycles, the material will be tested on the testing rig.
Testing rig
The synthesized phase change material will be tested in order to:
- examine the stability of material properties after working over multiple cycles,
- compare work characteristics for heating and cooling cycles,
- determine the characteristics of enthalpy versus temperature,
- determine the characteristics of specific heat versus temperature.
A sketch of how such characteristics can be reduced from rig data is given after the rig description below.
Tests will be carried out on the testing rig consisting of: main module, heating system with power measurement, computer, temperature measurement system, image acquisition system, data acquisition system.
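As an illustration of how enthalpy- and specific-heat-versus-temperature characteristics can be derived from such a rig, the sketch below integrates logged heater power over a mock temperature record. It assumes an idealized rig (constant power, no heat losses, all power absorbed by the sample); the numbers and the reduction itself are our illustration, not a description of the actual rig software.

```python
# Idealized reduction of rig logs to cp(T) and enthalpy h(T).
P = 5.0    # W, heater power (hypothetical)
m = 0.050  # kg, sample mass (hypothetical)
dt = 1.0   # s, logging interval

temps = [30.0, 30.5, 31.0, 31.2, 31.3, 31.4, 31.6, 32.1, 32.6]  # °C, mock record

cp, h, H = [], [], 0.0
for t0, t1 in zip(temps, temps[1:]):
    H += P * dt / m                            # J/kg absorbed so far -> enthalpy curve
    h.append((t1, H))
    cp.append((t1, P * dt / (m * (t1 - t0))))  # J/(kg·K); peaks near the transition

print(cp)  # small temperature steps around the transition show up as large apparent cp
```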
Conclusions
The energy sector is growing, and it is important to find new technologies and improve existing ones. One of the possibilities which can be used to achieve this goal is the application of phase change materials. Due to their great advantages, solid-solid PCMs are becoming more and more popular.
A group of novel solid-solid phase change materials based on polyethylene glycol and cellulose has been synthesized. The materials were examined, and one of them has been chosen for further work. The chosen material will be tested for the possibility of thermal stabilisation of electronic devices. Preliminary results of tests on the material confirmed the possibility of its use for the mentioned application.
2. 1
Phase change materials Phase change materials are substances characterised by ability to phase transition at certain temperature range.Scheme of the phase change transition is presented on figure 1.During phase transition there is always heat DOI: 10.1051/ C Owned by the authors, published by EDP Sciences, 2015
Figure 1 .
Figure 1.Solid -liquid phase change transition as an example of phase change transition cycle.
Figure 5 .Table 2 .
Figure 5. DSC results-curves for materials with different PEG4000 ratio: 50%(grey), 65% (blue), 75% (red), 80%(green).Material has been synthesized taking into account several important requirements: -Properly selected transition temperature.Temperature in the range of 40-50 o C has been chosen considering the maximum temperature of safe work for selected computer processors.-As large as possible value of latent heat of fusion.Due to quite high value of latent heat of fusion and suitable transition temperature material containing 75 % of PEG 4000 has been chosen.Heat of fusion and transition temperature of synthesized material are presented in table 2.Table 2. Heat of fusion and transition temperature for synthesized material (PCM containing 75% of PEG4000)
Figure 6 presents
Scanning Electron Microscope (SEM) image of PCM PEG4000-75% and its composite with 10% of multiwall carbon nanotubes.It can be observed that cellulose and PEG in synthesized material are well consistent and carbon nanotubes are dispersed uniformly.It suggest that components of PCM are compatible and carbon nanotubes distribution is quasi uniform.It can Wavenumber, cm-1 Transmittance, %
Figure 7. Scheme of the testing rig.
Table 1. PEG type and percentage used for the synthesis.
Table 3. Type and percentage of additive used for the synthesis.
"Engineering"
] |
Novel Mechanisms for Heme-dependent Degradation of ALAS1 Protein as a Component of Negative Feedback Regulation of Heme Biosynthesis*
In eukaryotic cells, heme production is tightly controlled by heme itself through negative feedback-mediated regulation of nonspecific 5-aminolevulinate synthase (ALAS1), which is a rate-limiting enzyme for heme biosynthesis. However, the mechanism driving the heme-dependent degradation of the ALAS1 protein in mitochondria is largely unknown. In the current study, we provide evidence that the mitochondrial ATP-dependent protease ClpXP, which is a heteromultimer of CLPX and CLPP, is involved in the heme-dependent degradation of ALAS1 in mitochondria. We found that ALAS1 forms a complex with ClpXP in a heme-dependent manner and that siRNA-mediated suppression of either CLPX or CLPP expression induced ALAS1 accumulation in the HepG2 human hepatic cell line. We also found that a specific heme-binding motif on ALAS1, located at the N-terminal end of the mature protein, is required for the heme-dependent formation of this protein complex. Moreover, hemin-mediated oxidative modification of ALAS1 resulted in the recruitment of LONP1, another ATP-dependent protease in the mitochondrial matrix, into the ALAS1 protein complex. Notably, the heme-binding site in the N-terminal region of the mature ALAS1 protein is also necessary for the heme-dependent oxidation of ALAS1. These results suggest that ALAS1 undergoes a conformational change following the association of heme to the heme-binding motif on this protein. This change in the structure of ALAS1 may enhance the formation of complexes between ALAS1 and ATP-dependent proteases in the mitochondria, thereby accelerating the degradation of ALAS1 protein to maintain appropriate intracellular heme levels.
Heme is an essential molecule to almost all organisms. Heme functions as a prosthetic group on several types of proteins, including cytochromes, catalases, hemoglobin, and myoglobin. Moreover, it has been reported that heme is also involved in numerous regulatory systems in mammals (1), including those that govern transcription (2), translation (3), microRNA processing (4), and the circadian rhythm (5). Excess quantities of heme or heme precursors result in the generation of reactive oxygen species, causing oxidative stress in cells (6). Thus, the production of heme and heme precursors must be tightly regulated. This regulation occurs through the precise control of the intracellular expression of nonspecific 5-aminolevulinate synthase (ALAS-N or ALAS1). ALAS1 is the first and the rate-limiting enzyme of the heme biosynthetic pathway in mammalian cells, except for in erythroid cells, in which erythroid-specific 5-aminolevulinate synthase (ALAS-E or ALAS2) regulates the first step of heme biosynthesis (7). Although ALAS2 expression increases during erythroid differentiation, ALAS1 expression is suppressed by heme at the transcriptional, translational, and post-translational levels (8). ALAS1 and ALAS2 are encoded by independent genes (9); however, they both contain a conserved amino acid sequence called the heme regulatory motif (HRM), which is involved in heme-dependent inhibition of ALAS1 and ALAS2 translocation into the mitochondrial matrix (10). The HRM has also been referred to as the "CP motif" because it includes a core dipeptide motif composed of cysteine and proline residues (2). In humans, three CP motifs have been identified within the ALAS1 precursor protein. Two of these motifs (CP1 and CP2) are located within the presequence for mitochondrial translocation, and the remaining motif (CP3) is located within a region close to the N-terminal end of the mature ALAS1 protein (10). Interestingly, in addition to CP1 and CP2, CP3 has been reported to be involved in heme-dependent inhibition of the mitochondrial import of the ALAS1 protein, although CP3 is not located within the mitochondrial translocation presequence on the protein. Following mitochondrial import, this presequence is proteolytically removed, after which the mature ALAS1 protein catalyzes the condensation of glycine and succinyl-CoA to produce 5-aminolevulinic acid in the mitochondrial matrix (11). Although the mature ALAS1 protein has been suggested to retain CP3 within its N-terminal region (10), how CP3 affects the mature ALAS1 within the mitochondrial matrix remains unknown.
Several proteases and peptidases have been reported to play important roles in protein quality control within the mitochondrial matrix (12, 13). To date, two proteases, LONP1 (14) and ClpXP (15), have been reported to function as ATP-dependent proteases in the mitochondrial matrix of human cells. The human LONP1 protein acts as a homo-oligomeric hexamer (16), whereas ClpXP is a heteromultimer of two different proteins, CLPX and CLPP. CLPP consists of double heptameric rings, whereas CLPX is composed of two hexameric rings bound on each side of the CLPP ring (15). It has been suggested that CLPX recognizes and unfolds the tertiary structures of target proteins, after which CLPP degrades the unfolded target proteins (17). In fact, mammalian ClpXP has been reported to exhibit protease activity against model substrates of ClpXP, such as casein (15, 18); however, a specific substrate for mammalian ClpXP has not been identified. Moreover, Kardon et al. (18) recently reported that mammalian CLPX can associate with and activate ALAS by facilitating the insertion of pyridoxal 5′-phosphate, a cofactor for ALAS, into the catalytic site of ALAS, whereas human CLPX does not act as a component of the ClpXP protease for ALAS. Conversely, it has been reported that LONP1 recognizes several different proteins, including mitochondrial aconitase (19) and COX4-1 (20). Moreover, Tian et al. (21) presented evidence that LONP1 was involved in heme-mediated proteolysis of ALAS1 in mitochondria, although the precise mechanism underlying the heme-dependent degradation of ALAS1 remains unclear.
In the present study, we aimed to identify proteins that associate with and regulate intracellular ALAS1 protein. Using immunoprecipitation followed by mass spectrometry analysis, we successfully identified several proteins. Interestingly, we found that ClpXP can form a protein complex with ALAS1. Additional experiments revealed that the formation of a complex between ALAS1 and ClpXP was inhibited by the suppression of endogenous heme biosynthesis and enhanced by the addition of hemin. Thus, we further examined the role of ClpXP in the regulation of heme biosynthesis in human cells.
Results
Several Different Proteins Co-immunoprecipitated with Human ALAS1-To identify proteins involved in the post-translational regulation of ALAS1, human ALAS1 was expressed as a FLAG-tagged protein in FT293 cells, designated FT293ALAS1F. The tagged protein, which was designated ALAS1F, was immunoprecipitated using anti-DDDDK-agarose (Medical and Biological Laboratories, Nagoya, Japan). Then components of the immunoprecipitated proteins were identified using nanoflow LC-MS as described under "Experimental Procedures." FLAG-tagged firefly luciferase protein (designated LucF) was used as a control to determine what nonspecific proteins were pulled down during the immunoprecipitation. The proteins identified from the LucF immunoprecipitates were excluded as background from the list of proteins identified from the ALAS1F immunoprecipitates. As a result, we identified approximately 60 different proteins capable of forming complexes with the ALAS1F protein specifically (Table 1). In addition to seven mitochondrial proteins (in boldface type in Table 1), several non-mitochondrial proteins, including cytosolic proteins, cytoskeletal proteins, the translation initiation protein complex, and RNA-binding proteins, were identified. To the best of our knowledge, the involvement of these proteins in the regulation of ALAS1 has not been reported previously. As such, the roles of these proteins in regulating ALAS1 remain unknown. Although we aim to elucidate the roles of all of the identified proteins in the regulation of ALAS1 in the near future, in the current work, we focused on the isolated mitochondrial matrix proteins because the increased number of total peptide spectrum matches identified by nanoflow LC-MS suggested a high correlation between mitochondrial matrix proteins and ALAS1 protein. Of the seven identified mitochondrial proteins, we chose to further analyze CLPX and CLPP, both of which function as ATP-dependent ClpXP proteases within the mitochondrial matrix, for their potential involvement in the degradation of the ALAS1 protein in the mitochondrial matrix.
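The background-exclusion step described above is essentially a set difference between the two pulldowns; a minimal sketch follows (the accessions and PSM counts are hypothetical, and the min_psms cutoff is our assumption, not the paper's criterion).

```python
def specific_interactors(alas1f_hits, lucf_hits, min_psms=2):
    """Keep proteins found in the ALAS1F pulldown but absent from the LucF
    control; dicts map protein accession -> peptide-spectrum match count."""
    background = set(lucf_hits)
    return {acc: n for acc, n in alas1f_hits.items()
            if acc not in background and n >= min_psms}

# Hypothetical counts for illustration only:
alas1f = {"CLPX_HUMAN": 14, "CLPP_HUMAN": 9, "GLRX5_HUMAN": 3, "TUBB_HUMAN": 5}
lucf = {"TUBB_HUMAN": 6}
print(specific_interactors(alas1f, lucf))  # tubulin is removed as background
```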
The ALAS1 Protein Forms a Complex with ClpXP in a Heme-dependent Manner in the Mitochondria-We first confirmed that a complex forms between ALAS1F and ClpXP using immunoprecipitation followed by Western blotting analysis. As shown in Fig. 1A, the co-immunoprecipitation of ALAS1F and ClpXP was reproducible (lane 2), whereas ClpXP did not co-immunoprecipitate with LucF (lane 1). Interestingly, ClpXP was not detected in the immunoprecipitates of the presequence-deleted ALAS1F protein (lane 3), which lacks a signal sequence for mitochondrial translocation. This result suggests that the formation of the ALAS1F-ClpXP complex occurs within mitochondria. The mitochondrial localization of ALAS1F, but not of LucF or ALAS1F(Δpreseq), was also verified by immunofluorescence (Fig. 1B).
To determine the role of intracellular heme in the formation of a complex between ALAS1F and ClpXP, LucF- or ALAS1F-expressing cells were treated with succinylacetone (SA), which specifically inhibits heme biosynthesis, for 24 h. The cells were then further incubated with or without hemin for 30 min before being harvested. As shown in Fig. 1C, ALAS1F formed protein complexes with ClpXP (lane 4), whereas the suppression of endogenous heme synthesis by SA treatment prevented the ClpXP-ALAS1F complex formation (lane 5). Importantly, additional treatment with hemin restored the formation of ClpXP-ALAS1F complexes within 30 min (lane 6). Co-immunoprecipitation of LucF and ClpXP was not observed after SA or hemin treatment (lanes 1-3), although the expression levels of CLPX and CLPP in cell lysates were similar to those in cells expressing LucF (lanes 7-9) or ALAS1F (lanes 10-12). These results suggest that ALAS1F specifically forms complexes with ClpXP within the mitochondrial matrix in a heme-dependent manner. The immunofluorescence study revealed that the FLAG-tagged ALAS1 still localized in the mitochondria after treatment with SA and/or hemin, although SA treatment of these cells resulted in the enlargement of some mitochondria due to extensive accumulation of FLAG-tagged ALAS1 (Fig. 1D).
A Specific Heme-binding Motif on the ALAS1 Protein Is Involved in the Heme-dependent Association between ALAS1 and ClpXP-It has been reported that heme binds to ALAS1 via a conserved amino acid sequence, termed the HRM (10). This motif contains a cysteine-proline dipeptide motif (CP motif) as a core sequence (2). Human ALAS1 and ALAS2 proteins each contain three independent CP motifs, which are involved in regulating the mitochondrial translocation of ALAS1 and ALAS2 (10, 22). Two of these CP motifs (CP1 and CP2) are located within the presequences found on ALAS proteins, whereas the final CP motif (CP3) is located within the N-terminal regions of mature ALAS proteins (Fig. 2A). Interestingly, it has been suggested that CP3, along with CP1 and CP2, is also involved in regulating the mitochondrial translocation of ALAS1 (22). However, the role of CP3 with respect to mature ALAS1 proteins present in the mitochondria remains unclear.
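Since the HRM core is just a Cys-Pro dipeptide, locating candidate motifs in a sequence is a one-line scan. The sketch below uses a toy sequence engineered to place CP motifs at Cys-8, Cys-33, and Cys-108 (the positions mutated under "Experimental Procedures"); the presequence length used here is an illustrative assumption, not the real ALAS1 value.

```python
import re

def find_cp_motifs(protein_seq, preseq_len):
    """Report 1-based positions of Cys-Pro dipeptides and whether each lies
    in the mitochondrial presequence or in the mature protein."""
    for m in re.finditer("CP", protein_seq):
        pos = m.start() + 1
        region = "presequence" if pos <= preseq_len else "mature protein"
        print(f"CP motif at Cys-{pos} ({region})")

# Toy sequence with CP motifs at Cys-8, Cys-33, and Cys-108:
toy = "M" + "A" * 6 + "CP" + "A" * 23 + "CP" + "A" * 73 + "CP" + "A" * 10
find_cp_motifs(toy, preseq_len=56)  # CP1 and CP2 in presequence, CP3 in mature
```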
To determine the independent roles of CP1, CP2, and CP3 in the formation of the ALAS1-ClpXP complex, site-directed mutagenesis was utilized to substitute each cysteine residue within each CP motif with alanine, which inhibits the binding of heme to the motif (2). The resultant constructs expressed ALAS1F proteins in which both CP1 and CP2, only CP3, or all CP motifs were mutated; these constructs were designated ALAS1FΔCP1·2, ALAS1FΔCP3, and ALAS1FΔCP1-3, respectively. As shown in the left panel of Fig. 2B, CLPX and CLPP were detected when ALAS1FΔCP1·2 was immunoprecipitated (lanes 4-6). However, CLPX and CLPP proteins were barely detectable in the immunoprecipitates of the mutant ALAS1 proteins in which only CP3 (ALAS1FΔCP3; lanes 7-9) or all CP motifs (ALAS1FΔCP1-3; lanes 10-12) were mutated, whereas CLPX and CLPP were expressed similarly in the cells (lanes 1-12 in the right panel of Fig. 2B). These results strongly suggest that the CP3 motif on ALAS1 is involved in the formation of complexes between ALAS1 and ClpXP proteins.
Endogenous ALAS1 and ClpXP Form a Complex in Mitochondria in HepG2
Cells-To confirm that endogenous ALAS1 and ClpXP form a complex in mitochondria in the HepG2 human hepatic cell line, we first prepared the mitochondria-rich fraction and confirmed its purity. As shown in Fig. 3A, mitochondrial proteins (ALAS1, CLPX, and COXIV), a cytosolic protein (GAPDH), and a cytoskeletal protein (β-actin) were abundantly expressed in the total lysate, whereas GAPDH and β-actin proteins were only marginally detected in the mitochondria-rich fraction, suggesting that the mitochondria were concentrated in this fraction. Thus, to identify proteins that associate with endogenous ALAS1 in the mitochondria, the mitochondria-rich fraction prepared from HepG2 cells was subjected to immunoprecipitation of endogenous ALAS1 protein. Using an anti-human ALAS1 mouse monoclonal antibody for immunoprecipitation led to detectable levels of CLPX and CLPP proteins in ALAS1 immunoprecipitates (Fig. 3B, lane 1). However, treating the cells with SA decreased the quantity of CLPX and CLPP proteins in the immunoprecipitates (lane 2). Conversely, incubating the cells with hemin for 30 min resulted in increased quantities of CLPX and CLPP proteins in the immunoprecipitates (lane 3). Treatment of the cells with hemin increased the quantities of immunoprecipitated CLPX and CLPP proteins even after treatment with SA (lane 4). These results strongly suggest that ClpXP forms complexes with ALAS1 in mitochondria in HepG2 cells in a heme-dependent manner. Another ATP-dependent mitochondrial protease, LONP1, has been reported to be involved in the heme-dependent degradation of ALAS1 (21). We therefore also examined whether the immunoprecipitates of ALAS1 protein included LONP1. As shown in Fig. 3B, LONP1 was detected in the immunoprecipitates of ALAS1 without (lane 1) or with hemin treatment (lane 3) but was hardly detected after treatment with SA (lane 2).
Figure 1. A, FLAG-tagged luciferase (LucF), wild-type ALAS1 (ALAS1), or presequence-deleted ALAS1 (Δpreseq) was expressed in FT293 cells as a FLAG-tagged protein and immunoprecipitated using anti-DDDDK-agarose. FLAG-tagged proteins, CLPX, and CLPP in the immunoprecipitates (left lanes, FLAG-IP) and in total cell lysates (right lanes) were detected by Western blotting analysis using specific antibodies. GAPDH was detected as an internal control. M, molecular size marker. B, immunofluorescence analysis of FT293 cells expressing FLAG-tagged proteins. Nucleus, mitochondria, and FLAG-tagged proteins were stained with DAPI (blue), anti-Tom20 (red), and anti-FLAG (green), respectively. C, intracellular heme levels influence the formation of ALAS1 protein complexes. Shown is expression of FLAG-tagged luciferase (LucF) or ALAS1 (ALAS1F) proteins in FT293 cells following incubation with or without 1 mM SA for 24 h. The SA-treated cells were subsequently incubated with or without hemin for 30 min before harvest. In A and C, the loading volumes of the immunoprecipitates were adjusted according to the intensities of the FLAG-tagged proteins; therefore, each sample contains a similar quantity of FLAG-tagged protein. For the total cell lysate, 10 μg of protein was loaded into each lane. p or m, precursor or mature FLAG-tagged ALAS1 protein, respectively. D, immunofluorescence analysis of FT293 cells expressing FLAG-tagged ALAS1 protein after hemin and/or SA treatment. Nucleus, mitochondria, and FLAG-tagged ALAS1 protein were stained with DAPI (blue), anti-Tom20 (red), and anti-FLAG (green), respectively.
Interestingly, LONP1 was not clearly recovered in the immunoprecipitates after treatment with hemin for 30 min (lane 4), suggesting that the mode of complex formation between ALAS1 and LONP1 is different from that between ALAS1 and ClpXP.
Next, we used siRNA to suppress CLPX or CLPP expression to examine the roles of these proteins in regulating ALAS1 expression. As shown in Fig. 3C, the introduction of siCLPX-1 (lanes 7-9) and siCLPX-2 (lanes 10-12), both of which effectively suppressed CLPX expression, increased ALAS1 expression in HepG2 cells. Conversely, the introduction of the siNegative control (lanes 1-3) or siGAPDH (lanes 4-6), a positive control for siRNA transfection, did not influence ALAS1 expression. Surprisingly, transient suppression of CLPP using siCLPP1 (lanes 13-15) or siCLPP2 (lanes 16-18) had only a marginal effect on ALAS1 expression at 48 h after transfection. Therefore, we prolonged the suppression of CLPP expression by performing sequential transfection of siRNA 4 days after the initial transfection. Ten days after the initial transfection of siCLPP to suppress CLPP expression, endogenous ALAS1 expression was found to increase in HepG2 cells (Fig. 3D, lanes 5 and 6) compared with that found in cells transfected with the siNegative control (Fig. 3D, lane 2). The prolonged suppression of CLPX more effectively induced the accumulation of ALAS1 protein in HepG2 cells (lanes 3 and 4). These results suggest that a complex forms between ClpXP and ALAS1 proteins in HepG2 cells and that ClpXP is involved in the degradation of ALAS1 protein in these cells.
CLPX Is Essential for Heme-dependent Degradation of Endogenous ALAS1 Protein in FT293 Cells-To determine which protein, CLPX or CLPP, is closely associated with ALAS1, we immunoprecipitated FLAG-tagged ALAS1 (ALAS1F) after the introduction of siRNA against CLPX or CLPP in FT293ALAS1F cells, as shown in Fig. 4A (lanes 3, 4, 7, and 8). We then established CLPX-knock-out FT293 cells (FT293ΔCLPX) and a derivative line (CLPXind/FT293ΔCLPX) in which CLPX is inducible in a doxycycline-dependent manner. Fig. 4B presents the target sequence for Cas9 nuclease and the genotype of the FT293ΔCLPX cells we established. As shown in Fig. 4C, CLPX was not detected in FT293ΔCLPX in the absence or presence of doxycycline (lane 3 or 4, respectively). Although CLPX was highly expressed after incubation with doxycycline (lane 6), it was marginally detected in CLPXind/FT293ΔCLPX cells even in the absence of doxycycline (lane 5). Although the reason for the basal expression of CLPX in CLPXind/FT293ΔCLPX cells is unclear, it is possible that the FBS used to culture CLPXind/FT293ΔCLPX cells contained tetracycline, as indicated in the manual for the Flp-In T-REx system (Invitrogen). Using these cells, we examined the expression level of endogenous ALAS1 protein in the presence or absence of CLPX. As shown in Fig. 4C, deletion of CLPX resulted in an increase in the expression of the ALAS1 protein (lanes 3 and 4) compared with that in FT293 cells (lanes 1 and 2). ALAS1 expression in CLPXind/FT293ΔCLPX cells was also increased without doxycycline (lane 5), whereas it was decreased after the induction of CLPX protein expression (lane 6). Using these cells, we further examined the effect of hemin treatment on the expression level of ALAS1 (Fig. 4D, lanes 3 and 4). These results strongly suggested that the expression of CLPX is essential for heme-dependent degradation of ALAS1 protein.
Figure 4 legend (excerpt): endogenous ALAS1 protein was immunoprecipitated using an anti-ALAS1 monoclonal antibody, and the loading volume of each eluate was adjusted according to the intensity of each immunoprecipitated ALAS1 protein; therefore, each sample contained a similar amount of ALAS1 protein (lanes 1-4).
Mutation of the CP3 Motif on the ALAS1 Protein Extends the ALAS1 Protein Half-life in Mitochondria-Because specific knockdown of CLPX or CLPP expression resulted in the accumulation of ALAS1 protein in HepG2 cells, we hypothesized that ClpXP is involved in the degradation of ALAS1 proteins within mitochondria. To test this hypothesis, we investigated the turnover rates of ALAS1F proteins in which the presequence (Δpreseq), the CP1 and CP2 motifs (CP1·2), or all three CP motifs (CP1-3) were mutated, and we compared these rates with that of the WT ALAS1F protein. Each expression vector was introduced into FT293 cells, which allowed transcription to be induced by the addition of doxycycline. At 24 h after inducing the expression of each protein, translation was inhibited using cycloheximide (CHX). ALAS1F protein expression was examined 1.5, 3, and 6 h after the addition of CHX. A representative result is shown in Fig. 5A. The intensity produced by the mature protein in each lane was measured for statistical analysis. The results are summarized in Fig. 5, B (without SA) and C (with SA). As shown in Fig. 5B, the relative expression levels of ALAS1FΔCP1·2 at 3 and 6 h after the addition of CHX are significantly lower than those of the other proteins (**, p < 0.05). The relative expression levels of WT ALAS1 and ALAS1FΔCP1-3 at 6 h after the initiation of CHX treatment were lower than those of luciferase or ALAS1(Δpreseq) (*, p < 0.05). As shown in Fig. 5B, the expected half-lives of WT ALAS1 and ALAS1FΔCP1-3 are each approximately 6 h, whereas the half-life of ALAS1FΔCP1·2 is approximately 2 h. These results suggest that the presence of CP3 in the mature ALAS1 protein is related to the shorter half-life of mature ALAS1. Although we expected WT ALAS1 to have a half-life similar to that of ALAS1FΔCP1·2, its half-life was actually longer than that of ALAS1FΔCP1·2. This discrepancy might be caused by the accumulation of precursor WT proteins in the cells, which might partially compensate for degraded mature proteins by translocating into mitochondria. Indeed, the accumulation of precursor wild-type ALAS1 protein was detected in cells not treated with SA (Fig. 5A, top, SA (−)), whereas there was virtually no accumulation of ALAS1FΔCP1·2 or ALAS1FΔCP1-3, both of which contain mutated CP motifs within their presequences. Thus, to determine the functional role of the CP3 motif on ALAS1F with respect to the half-life of the mature protein in mitochondria, it might be suitable to compare the half-life of ALAS1FΔCP1·2 with that of ALAS1FΔCP1-3. Conversely, following treatment with SA, WT ALAS1, ALAS1FΔCP1·2, and ALAS1FΔCP1-3 all became very stable, and no differences were found in the relative expression levels within each sample at any time point (Fig. 5C). These results suggest that the presence of the CP3 motif on ALAS1 correlates with the short half-life of this protein in mitochondria, although this effect is masked when endogenous heme biosynthesis is suppressed by treatment with SA.
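The half-life estimates quoted above can be obtained from a cycloheximide chase by fitting first-order decay to the band intensities; a hedged sketch follows, with intensity values that are illustrative (chosen to mimic roughly 6-h and 2-h half-lives), not the paper's raw densitometry.

```python
import numpy as np

def half_life_hours(times_h, intensities):
    """Fit ln(I) = ln(I0) - k*t by least squares and return t1/2 = ln(2)/k."""
    k = -np.polyfit(times_h, np.log(intensities), 1)[0]
    return np.log(2) / k

t = np.array([0.0, 1.5, 3.0, 6.0])
wt     = np.array([1.00, 0.86, 0.71, 0.50])  # roughly a 6-h half-life
d_cp12 = np.array([1.00, 0.59, 0.35, 0.13])  # roughly a 2-h half-life
print(round(half_life_hours(t, wt), 1), round(half_life_hours(t, d_cp12), 1))
```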
The CP3 Motif Is Involved in the Oxidation of ALAS1 Protein and the Recruitment of LONP1 into the ALAS1 Protein Complex-Because it has been reported that the binding of free heme to the CP motif on iron regulatory protein 2 (IRP2) leads to the oxidation of the IRP2 protein (23), we attempted to determine whether the CP3 motif on ALAS1 is involved in heme-mediated oxidation of ALAS1. To accomplish this, we incubated ALAS1F-expressing cells, in which endogenous heme production was suppressed by SA, with hemin for several hours before immunoprecipitation. Following this, we examined the carbonylation of immunoprecipitated ALAS1, because carbonylation is the most common oxidative modification of a protein. As shown in Fig. 6A (top left), the immunoprecipitated ALAS1F proteins contained carbonylation, and their molecular sizes correspond to precursor and mature ALAS1F proteins (lane 1). Furthermore, treating the cells with SA slightly inhibited the oxidation of the protein (lane 2), whereas incubating the cells with hemin after the SA treatment enhanced the accumulation of carbonylated proteins, the molecular size of which corresponded to mature ALAS1 protein. This effect lasted for up to 6 h (lanes 3-6). We also attempted to determine whether immunoprecipitated ALAS1FΔCP3 proteins showed an accumulation of carbonylated proteins, because the CP3 motif is present in the mature ALAS1F protein. We found that carbonylated proteins were only faintly detectable in ALAS1FΔCP3 immunoprecipitates, even after a 6-h incubation with hemin (Fig. 6A, top right, lanes 7-12). These results suggest that the presence of the CP3 motif on ALAS1 and the longer exposure of cells to hemin are both related to the oxidation of the ALAS1 protein. Interestingly, CLPX and CLPP proteins were barely detectable in ALAS1FΔCP3 immunoprecipitates, even after a 6-h-long hemin treatment (Fig. 6A, middle panels, lane 12). These results suggest that the CP3 motif on ALAS1 is involved in the heme-dependent oxidative modification of the ALAS1 protein and in the formation of a complex between ClpXP and ALAS1. However, it is unclear whether ClpXP is involved in the degradation of oxidized ALAS1, because the quantities of CLPX and CLPP proteins in the ALAS1F immunoprecipitates did not change during the incubation with hemin (Fig. 6A, middle panels, lanes 3-6). Previously, it was reported that another ATP-dependent mitochondrial matrix protease, LONP1, is involved in the heme-dependent degradation of ALAS1 (21) and in the removal of oxidized mitochondrial proteins (19, 24). We thus examined whether ALAS1F immunoprecipitates contained LONP1, although LC-MS analysis of immunoprecipitated ALAS1F proteins did not identify LONP1 protein. As shown in the second panels from the bottom in Fig. 6A, LONP1 protein was detected in WT ALAS1 immunoprecipitates (lane 1). Furthermore, the quantity of LONP1 protein in ALAS1F immunoprecipitates increased after the cells were incubated with hemin (lanes 3-6). It should be noted that the presence of LONP1 in the ALAS1F immunoprecipitates is also related to the presence of the CP3 motif on ALAS1, because LONP1 was barely detectable in ALAS1FΔCP3 immunoprecipitates, even after incubation with hemin (lanes 7-12). We also assessed whether ALAS1F immunoprecipitates contained GLRX5, which was identified in these immunoprecipitates via LC-MS analysis. As shown in the bottom panels of Fig. 6A, GLRX5 was detected in ALAS1F and ALAS1FΔCP3 immunoprecipitates, and treatment of the cells with SA and/or hemin did not influence the formation of complexes between these proteins. These results suggest that intracellular heme levels and the presence of the CP3 motif on ALAS1 do not affect the formation of complexes between GLRX5 and ALAS1 (lanes 1-12). Several of the above samples, as well as immunoprecipitates of LucF, were selected and loaded onto the same acrylamide gel for comparison against each other (Fig. 6B). The results clearly demonstrated differences between ALAS1F (lanes 4-6) and ALAS1FΔCP3 (lanes 7-9) with respect to heme-dependent oxidative modifications and the formation of complexes with CLPX, CLPP, and LONP1. In contrast, no heme-dependent carbonylation or complex formation was detected in the LucF immunoprecipitates (lanes 1-3). Again, only GLRX5 was similarly detected in the immunoprecipitates of ALAS1F and ALAS1FΔCP3 (bottom). This finding was true regardless of whether the cells were treated with SA or hemin before immunoprecipitation (Fig. 6B, bottom, lanes 4-9).
Figure 6. A, cells expressing FLAG-tagged ALAS1 (ALAS1F) or ALAS1ΔCP3 (ALAS1FΔCP3) were treated with succinylacetone for 24 h. Following this, the cells were treated with hemin for the indicated times before immunoprecipitation. The loading volume for each immunoprecipitated sample was adjusted according to the intensity of the FLAG-tagged protein so that similar quantities of FLAG-tagged protein were loaded into each lane. One membrane was used to detect carbonylated protein (top), and another membrane was used to detect co-immunoprecipitated proteins (ALAS1F, CLPX, CLPP, LONP1, and GLRX5; bottom five panels), because treating the membrane with dinitrophenylhydrazine (DNPH) to detect carbonylated proteins modified the reactivity of some proteins toward their specific antibodies (data not shown). B, untreated (cont.), SA-treated, and hemin-treated (6 h) samples of immunoprecipitated FLAG-tagged luciferase (LucF), wild-type ALAS1 (ALAS1F), and CP3-mutated ALAS1 (ΔCP3) were selected for comparison. p or m, precursor or mature FLAG-tagged ALAS1 protein, respectively.
Discussion
In the present study, using immunoprecipitation followed by MS analysis, we identified several proteins capable of forming complexes with ALAS1. The presence of CLPX, CLPP, or GLRX5 proteins in the immunoprecipitates of FLAG-tagged ALAS1 proteins was also confirmed using Western blotting analysis. We further demonstrated that hemin treatment stimulates the formation of complexes between ALAS1 and ClpXP in human hepatic cells. Moreover, siRNA-mediated suppression of CLPX or CLPP resulted in the accumulation of ALAS1 proteins in mitochondria, suggesting that ClpXP is involved in the degradation of the ALAS1 protein within mitochondria. Furthermore, using CLPXind/FT293ΔCLPX cells, we successfully demonstrated that the presence of CLPX is essential for heme-mediated degradation of endogenous ALAS1 protein. Interestingly, the HRM on the mature ALAS1 protein (CP3), which has previously been reported to be involved in heme-dependent suppression of the mitochondrial translocation of the ALAS1 precursor protein, was found to play a crucial role in the formation of complexes between ALAS1 and ClpXP and in the heme-dependent degradation of ALAS1. Taken together, our results demonstrate that ClpXP is involved in negative feedback-mediated regulation of heme biosynthesis via the heme-dependent degradation of ALAS1.
Recently, Kardon et al. (18) reported that Mcx1, a yeast homolog of CLPX, stimulates heme biosynthesis in yeast by enhancing the insertion of co-enzymes into the Hem1 protein.
These authors also demonstrated that human CLPX can activate ALAS2 both in vitro and in vivo, whereas recombinant ClpXP, which degrades the typical ClpXP substrate, is unable to digest recombinant human apoALAS2 proteins in vitro. The referenced work indicates a novel function for mammalian CLPX as an activator of ALAS2 enzymatic activity in vertebrates. However, it is also important to note that recombinant ClpXP, which consists of mouse CLPX and human CLPP, is able to digest the general ClpXP substrate "casein," confirming that mammalian ClpXP is an active protease. Indeed, other groups have previously reported on the proteolytic ability of human ClpXP using casein as a substrate (15,25), although the specific substrate used by human ClpXP remains unclear. Thus, based on our results, we hypothesize that ClpXP can recognize and modify ALAS1 for degradation under special conditions, such as in the presence of heme, as in our assay conditions. It is still unclear whether ClpXP directly digests ALAS1, which can be determined in vitro using a combination of recombinant CLPX, CLPP, and ALAS1 proteins. Such experiments are ongoing projects in our laboratory, and we are actively attempting to express and purify these recombinant proteins to create an in vitro assay system. Although a procedure for purifying recombinant active human ClpXP protein has already been reported (25), a method for purifying recombinant human ALAS1 protein has not yet been established. Indeed, human ALAS1 could not be purified using the same methods that we used to purify recombinant ALAS2 (26, 27) because the recombinant ALAS1 protein degraded during purification (data not shown). Thus, we are currently attempting to modify our method for the purification of recombinant ALAS1 for the purpose of creating an in vitro assay system.
In our initial experiment using LC/MS, we did not identify LONP1 in ALAS1F immunoprecipitates (Table 1), although Tian et al. (21) reported that heme-mediated breakdown of ALAS1 protein is dependent on the presence of LONP1. However, our Western blotting analysis revealed the presence of LONP1 in ALAS1 precipitates from HepG2 cells (Fig. 3B) and in ALAS1F immunoprecipitates from FT293 cells (Fig. 6A), and LONP1 levels were enhanced following hemin treatment (Fig. 6A). Thus, it is possible that LONP1 and ClpXP cooperatively regulate ALAS1 degradation within the mitochondria of mammalian cells. Interestingly, the sequential exposure of cells to SA and hemin restored the formation of complexes between ClpXP and ALAS1F within 30 min after the initiation of the hemin treatment. LONP1 became evident in these immunoprecipitates several hours after the initiation of the hemin treatment (Fig. 6A). Furthermore, increases in LONP1 quantities in the ALAS1F immunoprecipitates appeared to be related to the accumulation of oxidized ALAS1 in the immunoprecipitates, suggesting that heme-dependent oxidation of ALAS1 might be required for LONP1-dependent degradation. Conversely, ClpXP recognizes heme-bound ALAS1 as an earlier response to intracellular increases in free heme, mediating the modification of ALAS1 for degradation by LONP1. It should be noted that the CP3 motif on mature ALAS1 plays an important role in the formation of complexes between ALAS1 and ClpXP and in the oxidation of the ALAS1 protein. These results suggest that the binding of heme to the CP3 motif triggers the recognition of ALAS1 by ClpXP and the oxidative modification of ALAS1.
Our study revealed that both of the examined ATP-dependent proteases might be involved in the heme-dependent degradation of the ALAS1 protein within the mitochondrial matrix. This degradation occurs as a part of a negative feedback mechanism for heme biosynthesis and might be initiated by the direct binding of heme to the CP3 motif on ALAS1. In addition to the above-discussed proteins, several other proteins were identified as possible candidates for the formation of complexes with ALAS1. However, it is still unclear whether these proteins form complexes independently or form one large complex together. The answer to this question should be better elucidated in future work performed by our group, in which we intend to separate immunoprecipitates using isoelectric focusing or glycerol gradient centrifugation before LC/MS analysis.
Experimental Procedures
Reagents-Unless otherwise noted, all chemicals were purchased from Sigma-Aldrich, Wako Pure Chemical Industries (Osaka, Japan), or Nacalai Tesque (Kyoto, Japan). Complete EDTA-free protease inhibitor tablets were purchased from Roche Diagnostics GmbH (Mannheim, Germany). Anti-DDDDK-agarose and DDDDK peptides for the purification of FLAG-tagged proteins were purchased from Medical and Biological Laboratories Co., Ltd. (Nagoya, Japan).
cDNA Cloning and Site-directed Mutagenesis-Human ALAS1 cDNA (GenBank number NM_000688) encoding an ALAS1 precursor protein (GenBank number CAA39794) was amplified by PCR with the following primers: 5′-CTCAGCGCAGTCTTTCCACAGG-3′ and 5′-GTCGACGCTAGCCTGAGCAGATACCAACTTG-3′. The amplified products were cloned into a pGEM-T Easy Vector (Promega Corp., Madison, WI). The resultant plasmid was digested with the SalI restriction enzyme to isolate ALAS1 cDNA, and the isolated fragment was then used to replace ALAS2 cDNA in a pGEM-AET vector (28). The resultant pGEM-ALAS1F vector contained ALAS1 cDNA that encoded an ALAS1 precursor protein with a FLAG tag at its C-terminal end. Using a PrimerStar Max site-directed mutagenesis kit (Takara Bio, Shiga, Japan), the mutations c.22_23TG>GC, c.97_98TG>GC, and c.322_323TG>GC were introduced into the pGEM-ALAS1F plasmid, resulting in p.C8A, p.C33A, and p.C108A amino acid substitutions, respectively, within the ALAS1 protein (Takara Bio). Because p.Cys-8, p.Cys-33, and p.Cys-108 correspond to conserved cysteine residues within CP1, CP2, and CP3, respectively, each mutation was expected to inhibit heme binding to these CP motifs within the HRM (10). To prepare cDNA encoding the mature ALAS1 protein, which lacks a mitochondrial targeting signal, the pGEM-ALAS1F plasmid was subjected to amplification by PCR using the following primers: 5′-GCGGCCGCGATGGAACAGATCAAAGAAACCCCTC-3′ and 5′-GTCGACGCTAGCCTGAGCAGATACCAACTTG-3′. Amplified products were cloned into a pGEM-T Easy Vector, the plasmid was digested with NotI and SalI, and the isolated fragments were used to replace ALAS2 cDNA within a pGEM-AET plasmid. The resultant plasmid, pGEM-ΔpreseqALAS1F, contained cDNA encoding the mature ALAS1 protein with a FLAG tag at its C-terminal end. The above-described plasmids were digested with NotI, and each cDNA construct was cloned into the NotI site of a pcDNA5/FRT/TO vector (Invitrogen). The resultant plasmids pFRT-ALAS1F, pFRT-ALAS1Δpreseq, pFRT-ALAS1ΔCP1·2, pFRT-ALAS1ΔCP3, and pFRT-ALAS1ΔCP1-3 were used to express FLAG-tagged wild-type and presequence-deleted ALAS1 proteins and ALAS1 proteins containing mutations in both HRM1 and HRM2, in HRM3 alone, and in all three HRMs, respectively. Each plasmid was then co-transfected with a pOG44 vector in Flp-In T-REx 293 cells (FT293 cells, Invitrogen) to establish stable transformants. These transformants individually expressed each FLAG-tagged ALAS1 protein construct (ALAS1F) in a tetracycline/doxycycline-inducible manner. The conditions used to select the transformants and the establishment of cells expressing FLAG-tagged luciferase (LucF) were performed as described previously (27).
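As a sanity check on the mutagenesis design, note that each c.xx_yyTG>GC substitution flips the first two bases of a TGC (Cys) codon to give GCC (Ala). The sketch below verifies this on a toy coding sequence; it assumes the substitution falls within a single codon and uses a deliberately minimal codon table.

```python
CODON = {"TGC": "C", "TGT": "C", "GCC": "A", "GCT": "A"}  # minimal table

def apply_substitution(cds, start_1based, ref, alt):
    """Apply a cDNA substitution such as c.22_23TG>GC (assumed to lie within
    one codon) and report the resulting amino acid change."""
    i = start_1based - 1
    assert cds[i:i + len(ref)] == ref, "reference bases do not match"
    mutated = cds[:i] + alt + cds[i + len(ref):]
    c = (i // 3) * 3                          # start of the affected codon
    old, new = cds[c:c + 3], mutated[c:c + 3]
    print(f"p.{CODON[old]}{i // 3 + 1}{CODON[new]}: {old} -> {new}")
    return mutated

# Toy CDS in which codon 8 (nucleotides 22-24) is TGC (Cys):
cds = "ATG" + "GCC" * 6 + "TGC" + "GCC"
apply_substitution(cds, 22, "TG", "GC")       # prints "p.C8A: TGC -> GCC"
```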
Cell Culture-The culture conditions used to grow the FT293 cells were as described previously (27). HepG2 cells were purchased from the European Collection of Cell Cultures (ECACC), a public cell culture collection maintained in the United Kingdom. The cells were maintained in Eagle's minimum essential medium supplemented with 10% FBS, 2 mM glutamine, 1% non-essential amino acids, 50 units/ml penicillin, and 50 μg/ml streptomycin.
Immunoprecipitation and Western Blotting Analysis-Unless otherwise noted, the preparation, incubation, and centrifugation of the collected samples were performed at 4°C. The cells were lysed in lysis buffer (20 mM HEPES, pH 7.5, 150 mM NaCl, 1% Triton X-100, 10% glycerol, complete EDTA-free protease inhibitor mix, 1 mM EDTA, 1 mM NaF, and 0.4 mM Na3VO4) by pipetting vigorously. The samples were then incubated on ice for 10 min and centrifuged at 21,000 × g for 15 min. Supernatant protein concentrations were determined using Pierce 660 nm protein assay reagent (Thermo Fisher Scientific) with bovine serum albumin as a standard. An aliquot of total cell lysate was subjected to Western blotting analysis. The majority of the samples were incubated with anti-DDDDK-agarose beads in appropriately sized tubes for immunoprecipitation, which was performed by rotating the tubes for 90 min at 4°C. The beads were then washed once with lysis buffer, followed by three sequential washes with wash buffer (20 mM HEPES, pH 7.5, 150 mM NaCl, 0.1% Triton X-100, and 10% glycerol). FLAG-tagged proteins were eluted with elution buffer (0.1 mg/ml DDDDK peptides in wash buffer). For immunoprecipitation of endogenous ALAS1 protein, cell lysates were incubated with a mouse monoclonal anti-ALAS1 antibody (ab54758, Abcam, Cambridge, UK) overnight at 4°C, and the proteins were then purified using MACS Protein G MicroBeads and a MACS column (Miltenyi Biotec GmbH, Bergisch Gladbach, Germany) according to the manufacturer's instructions. For immunoprecipitation of endogenous ALAS1 in HepG2 cells, the mitochondria-rich fraction was prepared using the Qproteome mitochondria purification kit (Qiagen GmbH, Germany) before immunoprecipitation. For Western blotting analysis, samples were mixed with 6× SDS-PAGE sample buffer with reducing reagent (Nacalai Tesque) and were boiled for 10 min before being loaded onto a TGX acrylamide gel (Bio-Rad). The samples were electrophoresed according to the manufacturer's instructions and then electrically transferred onto PVDF membranes. For the detection of specific proteins, these membranes were blocked with 5% skim milk in Tris-buffered saline containing 0.05% Tween 20 (TBS-T) for 1 h. The blocked membranes were incubated with diluted primary antibody for 1 h at room temperature and then washed with TBS-T three times for 5 min each. The membranes were then incubated with an HRP-conjugated secondary antibody for 1 h at room temperature and washed with TBS-T three times for 10 min each at room temperature. Signals were detected using Clarity ECL Western substrate (Bio-Rad) or Immobilon Western chemiluminescent HRP substrate (Millipore Corp., Billerica, MA) and visualized using an ImageQuant LAS500 image analyzer (GE Healthcare, Uppsala, Sweden). For the detection of FLAG-tagged proteins and GAPDH protein, an HRP-conjugated primary antibody was used. Signal intensity was measured using ImageQuant TL software (GE Healthcare). A protein carbonyl Western blot detection kit (Shima Laboratories, Tokyo, Japan) was used for the detection of oxidized proteins according to the manufacturer's instructions. An anti-FLAG HRP-conjugated M2 monoclonal antibody (A8592), anti-GAPDH HRP-conjugated monoclonal antibody (G9295), anti-LONP1 antibody (HPA002192), and anti-GLRX5 antibody (HPA042465) were purchased from Sigma-Aldrich. Anti-CLPX (ab168338), anti-CLPP (ab124822), and anti-ALAS1 (ab54758, ab154860) antibodies were purchased from Abcam.
Nano-HPLC/MS/MS Analysis and Protein Sequence Database Searching-Immunoprecipitated FLAG-tagged ALAS1 protein was digested with trypsin and dried as described previously (29). The dried peptide extract (20 μg) was dissolved in 80 μl of sample solution (5% acetonitrile and 0.1% TFA). Each sample (1.25 μg/5 μl) was injected into an EASY-nLC 1000 system (Thermo Fisher Scientific), which was connected to an EASY-Spray column (25-cm length × 75 μm, C18 ODS, Thermo Fisher Scientific). Peptides were eluted with a 180-min gradient of 4-25% solvent B (0.1% formic acid in acetonitrile, v/v) in solvent A (0.1% formic acid in water, v/v) at a flow rate of 300-400 nl/min. Peptides were then ionized and analyzed using a Fusion mass spectrometer (Thermo Fisher Scientific) coupled to a nanospray source. High-resolution full-scan MS spectra (from m/z 380 to 1800) were acquired in the Orbitrap with a resolution of 140,000 at m/z 400 and lock mass enabled (m/z at 445.12003 and 391.28429), followed by MS/MS fragmentation of the 10 most intense ions in the linear ion trap with a collisionally activated dissociation energy of 35%. The exclusion duration for the data-dependent scan was 0 s, and the isolation window was set at 10.0 m/z.
The MS/MS data were analyzed by sequence alignment with variable and static modifications using the Mascot and Sequest algorithms. We utilized the UniProt protein database to search each tryptic peptide sequence. The specific parameters used to search the protein sequence database included oxidation of methionine, deamidation of asparagine or glutamine, acetylation of the peptide N terminus, and pyroglutamation as variable modifications, and carbamidomethylation as a static modification. Other parameters used in our data analysis included two allowed missed cleavages, a mass error of 10 ppm for precursor ions, and 0.02 Da for fragment ions. Charge states of +2 to +4 were considered for parent ions. If more than one spectrum was assigned to a peptide, only the spectrum with the highest Mascot score was selected for manual analysis. All peptides with Mascot scores >20 were manually examined using rules described previously (30).
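The acceptance criteria just described translate directly into a per-PSM filter; the sketch below is ours, and the dictionary field names are illustrative rather than any search engine's real output schema.

```python
def accept_psm(psm):
    """Apply the stated criteria: Mascot score > 20, at most two missed
    cleavages, 10 ppm precursor tolerance, and charge states +2 to +4."""
    return (psm["mascot_score"] > 20
            and psm["missed_cleavages"] <= 2
            and abs(psm["precursor_error_ppm"]) <= 10
            and 2 <= psm["charge"] <= 4)

hits = [
    {"mascot_score": 35, "missed_cleavages": 1, "precursor_error_ppm": 3.2, "charge": 2},
    {"mascot_score": 18, "missed_cleavages": 0, "precursor_error_ppm": 1.1, "charge": 2},
]
print([accept_psm(h) for h in hits])  # [True, False]
```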
Knockdown of Endogenous Proteins-Gene-specific siRNAs (Silencer Select validated siRNA) were purchased from Thermo Fisher Scientific. Silencer Select GAPDH siRNA was used as a positive control for knockdown, and Silencer Select negative control no. 1 siRNA was used as a negative control. Each siRNA was introduced into HepG2 cells using Lipofectamine RNAiMax reagent (Thermo Fisher Scientific) according to the manufacturer's instructions. Forty-eight hours after transfection, the cells were harvested and lysed, and the total cell lysates were subjected to Western blotting analysis. For prolonged suppression of the CLPX or CLPP protein, a second siRNA transfection was performed 4 days after the initial transfection, and the cells were cultured for an additional 7 days before harvest. The culture medium for the siRNA-transfected cells was replaced with fresh medium at least every 4 days. Total cell lysate was prepared using lysis buffer as described above.
Establishment of CLPX-knock-out FT293 Cells and Doxycycline-inducible Expression of CLPX-The pSpCas9(BB)-2A-Puro(PX459) expression vector was a gift from Feng Zhang (Addgene plasmid 48139), and the knock-out of the CLPX gene was constructed as described, with slight modifications (31). The sequences of the oligonucleotides, which included a guide RNA sequence, were 5′-CACCGCGGTGCTTGTACTTGCGGCG-3′ and 5′-AAACCGCCGCAAGTACAAGCACCGC-3′. After annealing, these oligonucleotides were inserted into the BbsI site of pSpCas9(BB)-2A-Puro(PX459). The resultant plasmid, PX459-CLPX, was used to transfect FT293 cells using Lipofectamine 2000 (Invitrogen). After cloning the transfected cells, several clones were analyzed for the CLPX gene mutation using the Guide-it mutation detection kit (Clontech, Mountain View, CA). Positive clones were further subjected to Western blotting analysis to detect the CLPX protein. The selected CLPX-negative clone was designated FT293ΔCLPX. The CLPX locus targeted by Cas9 in FT293ΔCLPX cells was amplified by PCR and subcloned into T-Vector pMD20 (Takara Bio) for sequencing. To construct the human wild-type CLPX protein expression vector, the CLPX cDNA carrying a silent mutation at Ala11 was inserted into the BamHI and NotI restriction sites of the pcDNA5/FRT/TO vector. The resulting expression vector, pcDNA5/FRT/TO-CLPX, was co-transfected with the pOG44 vector into FT293ΔCLPX cells to obtain stable transformants. After cloning the transformants as described (27), each clone was examined for the doxycycline-inducible expression of CLPX and zeocin sensitivity. The resultant clone was designated CLPXind/FT293ΔCLPX. For the induction of CLPX, cells were treated with 1 μg/ml doxycycline for 8 days and then further incubated with hemin for 6 h before harvesting the cells.
Author Contributions-Y. Kubota, Y. Katoh, and K. F. designed the study, performed the experiments, and wrote the manuscript. K. N., R. Y., and K. K. performed the experiments and analyzed the data.
"Biology",
"Chemistry"
] |
An Artificial Intelligence Approach towards Investigating Corporate Bankruptcy
Corporate bankruptcy analysis is very important for investors, creditors, borrowing companies, and governments. The assessment of business failure provides valuable information to governments, investors, shareholders, and management, on the basis of which financial decisions are taken to prevent potential losses. Likewise, research into corporate failure can yield an early warning signal and reveal the areas encountering problems. Moreover, corporations nowadays face the retirement of senior staff and are thus confronted with a loss of knowledge. Artificial intelligence (AI) seeks to develop systems that exhibit aspects of human intelligence, comprising reasoning, learning, and problem solving. The most powerful applied field of AI is the area of expert systems (ES): applications that can reproduce the knowledge and experience of a human expert. This paper aims at designing and implementing an ES prototype for corporate bankruptcy analysis. To this end, we have defined a set of production rules based on indebtedness ratios (e.g. General Indebtedness Ratio, Global Financial Autonomy Ratio, Financial Leverage Ratio), as well as solvency ratios (e.g. General Solvency Ratio, Patrimonial Solvency Ratio). For this purpose, Exsys Corvid® was used, since it transforms expert knowledge into a structure that can deliver guidance and recommendations to improve performance, capability, and efficiency, while lowering training costs and costly errors.
Introduction
The greatest economic recession since the 1930s was widely attributed to poor management in lending, investment, and company debt management. Thus, beyond the downfall of renowned organizations such as WorldCom and Enron, the world economies have become wary of the risks implicated in corporate liability (Aziz & Dar, 2006). Generally, business failure is viewed as a situation in which a corporation cannot pay lenders, preferred stock shareholders, and suppliers, a bill is overdrawn, or the law declares the corporation bankrupt (Dimitras et al., 1996). Withal, a bankruptcy problem arises when a group of individuals have rights over a property, but the property is not large enough to cover their joint claims (Albizuri et al., 2010). Unfortunately, corporate bankruptcy engenders massive economic losses to investors and others, together with a considerable social and economic cost to the state (Shuai & Li, 2005). Ooghe and De Prijcker (2008) identified four different types of failure processes: the failure process of a fruitless start-up, the malfunctioning process of a company striving for growth, the failure process of a dazzled growth company, and the failure process of a listless established company. Therefore, the investigation of bankruptcy provides an early warning signal and reveals the fields showing weakness. Likewise, there are several benefits, including cost reduction in credit investigation and better oversight, alongside an augmented debt collection rate (Lee & Choi, 2013).
The most widely known univariate study is that of Beaver (1966). Subsequently, Altman (1968) developed the first multivariate study. Altman (1968) and Deakin (1974) employed discriminant analysis to predict corporate bankruptcies, whereas Ohlson (1980) used logit and probit models. Later, Tam and Kiang (1992) used artificial neural networks to predict business failure. In fact, multifarious statistical techniques (such as linear discriminant analysis, LDA; multivariate discriminant analysis, MDA; quadratic discriminant analysis, QDA; multiple regression; logistic regression, logit; probit; factor analysis, FA), neural network topologies (such as the multi-layer perceptron, MLP; radial basis function network, RBFN; probabilistic neural network, PNN; auto-associative neural network, AANN; self-organizing map, SOM; learning vector quantization, LVQ; cascade-correlation neural network, Cascor), as well as other intelligent techniques (such as support vector machines, fuzzy logic, and isotonic separation) have been applied to the bankruptcy prediction problem (Kumar & Ravi, 2007).
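Among the techniques above, Altman's (1968) multivariate model is concrete enough to state in full: the published Z-score combines five financial ratios with fixed discriminant coefficients, and the classic cutoffs of 1.81 and 2.99 separate the distress, grey, and safe zones. The sketch below uses those published values with invented inputs.

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Altman's (1968) original Z-score: X1 = working capital / total assets,
    X2 = retained earnings / TA, X3 = EBIT / TA, X4 = market value of equity /
    book value of total liabilities, X5 = sales / TA."""
    return 1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta + 0.6 * mve_tl + 1.0 * sales_ta

z = altman_z(0.12, 0.20, 0.08, 0.90, 1.10)    # hypothetical firm
zone = "distress" if z < 1.81 else ("safe" if z > 2.99 else "grey")
print(f"Z = {z:.2f} ({zone} zone)")           # Z = 2.33 (grey zone)
```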
Financial decision making is a very complex process, since managers are confronted on a daily basis with a huge amount of information that must be analyzed in order to reach a final decision regarding the performance or viability of a corporation, the granting or denying of a credit request, the construction and management of a portfolio, the choice of an investment, or the construction of a financial marketing plan (Xidonas et al., 2009). The decision process covers several problem-solving activities, experience, and heuristics. When a corporation has to make a decision, expert consultancy is employed. Moreover, when decisions on significant investments, integration, or advertising strategy are to be taken, an expert will be hired to provide advice (Grahovac & Devedzic, 2010). In fact, financial experts possess knowledge gathered in practice, which cannot be found in the literature or acquired in any other way, but which is invaluable to the business success of a corporation or a financial institution (Nedović & Devedžić, 2002).
Artificial intelligence (hereinafter "AI") is both a science and a technology whose goal consists in developing systems that exhibit aspects of intelligent behavior, simulating the human capabilities of thinking and sensing. The most important applied field of AI is expert systems. An expert system (hereinafter "ES") incorporates human expertise into a computer program to enable the software to execute tasks normally requiring a human expert (O'Keefe & O'Leary, 1993). Likewise, Klein & Methlie (1995) stated that an ES should be viewed as a computer program that represents the knowledge and inference procedures of an expert in order to solve complex problems, giving possible solutions or recommendations. Further, Rada (2008) emphasized that ES can be related to knowledge-based systems or to technologies such as neural networks or genetic algorithms; these technologies make up the "evolutionary computation" discipline. Moreover, an inaccurate system will produce costly errors or will not perform up to expectations.
ES technology is based on domain knowledge of the problem being analyzed. A problem within a particular field covers the objects, properties, tasks, and events within which a human expert operates, as well as the heuristics that skilled professionals have learned to use in order to perform better (Klein & Methlie, 1995). Unfortunately, acquiring domain knowledge from experts and representing this knowledge in the most suitable form is the greatest hindrance in the ES development process. Because experts are regularly unavailable due to time constraints, gathering knowledge from them is a very difficult and time-consuming task. Besides, there is often a lack of communication between the knowledge engineer and the expert. Therefore, this paper aims at developing an ES prototype to assist risk managers in assessing business failure risk. Moreover, the current manuscript considers ES technology exclusively within the knowledge-based, or rule-based, framework. Considering that financial ratios are a key indicator of the financial soundness of a business, we will assess several ratios regarding indebtedness and solvency. To implement the ES, Exsys Corvid® will be used, a tool widely employed for designing and fielding interactive knowledge-automation ES for the Web (server- or client-side), as well as stand-alone systems.
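To illustrate the flavor of such a prototype, the sketch below encodes three of the ratios named above as simple rules. The ratio definitions follow common usage, and the numeric thresholds are illustrative assumptions on our part, not the rules of the actual Exsys Corvid® prototype.

```python
def assess_failure_risk(total_debt, total_assets, equity):
    """Toy rule base; thresholds (0.6, 1.5, 1.0) are illustrative only."""
    general_indebtedness = total_debt / total_assets  # General Indebtedness Ratio
    general_solvency = total_assets / total_debt      # General Solvency Ratio
    financial_autonomy = equity / total_debt          # Global Financial Autonomy Ratio
    findings = []
    if general_indebtedness > 0.6:
        findings.append("high indebtedness")
    if general_solvency < 1.5:
        findings.append("weak solvency")
    if financial_autonomy < 1.0:
        findings.append("low financial autonomy")
    return ("elevated failure risk" if findings else "acceptable"), findings

print(assess_failure_risk(total_debt=700, total_assets=1000, equity=300))
# ('elevated failure risk', ['high indebtedness', 'weak solvency', 'low financial autonomy'])
```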
The paper is structured as follows: the fundamentals of ES are provided in Section 2; a review of ES in the economics field is given in Section 3; the Exsys Corvid® development software is discussed in Section 4; the ES prototype for assessing business failure risk is presented in Section 5; concluding remarks and recommendations for further research are proposed in Section 6.
Fundamentals of Expert Systems
Nowadays, knowledge management plays a key role in the search for success. The fundamentals, alongside the primordial purpose, of an ES consist in its capacity to replicate human logic and reasoning, to draw conclusions, and to supply matching explanations for these conclusions (Metaxiotis et al., 2006). Moreover, these issues are of critical importance for financial decision-making practice, because it implies several judgmental procedures that decision makers (covering managers of companies, managers of credit institutions, and individual investors) have to pursue in order to reach suitable decisions (Metaxiotis, 2005). According to Mannan (2005), the development of an ES involves the following typical stages: (1) system concept, (2) feasibility study, (3) outline specification, (4) preliminary knowledge acquisition, (5) knowledge representation, (6) tool selection, (7) prototype development, (8) main knowledge acquisition, (9) revised specification, (10) system development, (11) testing and evaluation, and (12) handover. However, the process is an iterative one, with looping back between some of these stages. Feigenbaum (1982) defined an ES as "an intelligent computer program that uses knowledge and inference procedures to solve problems that are difficult enough to require significant human expertise for their solutions". Goodall (1985) stated that "an expert system is a computer system that uses a representation of human expertise in a specialist domain in order to perform functions similar to those normally performed by a human expert in that domain. The system operates by applying an inference mechanism to a body of specialist expertise represented in the form of knowledge". Likewise, Turban and Aronson (1998) noted that an ES is "a system that uses human knowledge captured in a computer to solve problems that ordinarily require human expertise". Rule-based ES are ES in which the knowledge is represented by production rules. Production rules are IF-THEN condition-action pairs, and a set of production rules together with a computational engine that interprets the rules is called a production system (Sears & Jacko, 2008). In a system based on production rules, each unit of knowledge is represented by a single IF-THEN logical statement, whilst an inference engine, assessing the existing data and statements, chooses which statement to execute next (Jenders, 2006). Production systems are one of the major means of implementing ES. A production system has three key components: the rule base, a working memory, and the inference engine. The rule base comprises the set of rules that embody the expertise of the system. The working memory is provided with the input data, or facts, on the problem to which the rules are to be applied. The inference engine controls the operation of the rules to infer conclusions from these data (Mannan, 2005).
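A production system of the kind just described fits in a few lines: rules are IF-THEN pairs whose conditions are matched against facts in a working memory, and the inference engine fires rules until no new conclusions appear. The sketch below is a generic forward-chaining illustration with made-up bankruptcy-flavored facts, not Exsys Corvid® syntax.

```python
# Each rule: (set of required facts, fact concluded when they all hold).
RULES = [
    ({"general_indebtedness_high", "solvency_low"}, "financial_distress"),
    ({"financial_distress", "negative_cash_flow"}, "bankruptcy_risk_high"),
]

def infer(initial_facts):
    """Forward-chain over RULES until the working memory stops changing."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"general_indebtedness_high", "solvency_low", "negative_cash_flow"}))
# derives 'financial_distress' and then 'bankruptcy_risk_high'
```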
The first ES, entitled Dendral (Dendritic Algorithm), was developed in the mid-1960s by the artificial intelligence researcher Edward Feigenbaum and the geneticist Joshua Lederberg of Stanford University in California, U. S., to analyze organic compounds and determine their structure. Subsequently, in the early 1970s, MYCIN was developed to help physicians diagnose infectious diseases. Developed in the mid-1970s, another famous ES is PROSPECTOR, designed for decision-making problems in mineral exploration. No ES were available in accounting until 1977, when McCarthy (1977) developed the earliest tax application of an ES, entitled TAXMAN. Further, MACSYMA was a large interactive mathematics ES which could manipulate mathematical expressions symbolically. ONCOCIN and INTERNIST were two other early medical ES, for planning treatment of cancer sufferers and for diagnosing multiple medical conditions, respectively. XCON was developed to customize a network system to meet the customer's needs. Pathfinder is an ES supporting pathologists with accurate diagnoses in the domain of lymph-node pathology. Figure 1 exhibits the common organization of an ES. An ES therefore comprises four main components: • A natural language interface required in order to interact with the user; • A knowledge base containing the rules from which decisions can be made; • A database of facts specific to the domain of analysis; • An inference engine required to solve problems; it links the knowledge base rules with the database by means of heuristics or "rules of thumb" logic.
According to Romiszowski (1987), the user initiates a consultation through the interface system. The system then questions the user through this same interface in order to gather the essential information upon which a decision is to be made. There are two other sub-systems: • The knowledge base, which covers all the domain-specific knowledge that human experts use when solving that type of problem; • The inference engine, i.e., the system that performs the necessary reasoning and uses knowledge from the knowledge base in order to come to a decision regarding the problem posed.
An ES differs from conventional computer programs in that there is a clear separation between the rules forming the knowledge base, the input data, and the inference rules to be applied to the knowledge and data bases.
The advantages of ES are discussed below (Gonciarz, 2014): • Increased availability - Knowledge is accessible on any appropriate computer hardware; an ES can be considered a mass production of expertise; • Lowered cost - The cost of providing expertise per user is greatly reduced; • Reduced risk - ES can be used in circumstances that would be unsafe for a human; • Permanence - The expertise is durable, in contrast to human experts who might retire, quit, or die; • Multiple expertise - The knowledge of several experts can be made available to work concurrently and continuously on a problem, day or night; • Increased reliability - ES boost confidence that the correct decision was made by providing a second opinion to a human expert, or by breaking a tie in case of disagreement among several human experts; • Explanation - ES can clearly explain in detail the logic that led to a conclusion, whereas a human may be too exhausted, reluctant, or unable to do this all the time; • Fast response - Depending on the software and hardware used, an ES may respond more rapidly and be more readily available than a human expert; • Steady, unemotional, and complete response at all times - This may be vital in real-time and emergency situations when a human may not operate at top efficiency because of pressure or fatigue; • Intelligent database - ES can be used to access a database in an intelligent way; • Intelligent tutor - ES may act as a smart trainer by letting the student run sample programs and explaining the system's reasoning.
Besides, based on Klein & Methlie (1995), Turban et al. (2006), and Zopounidis et al. (1996), ES technology offers several further benefits: ES operate and draw conclusions by means of the knowledge and experience of human experts; ES reach conclusions more rapidly than humans, particularly in complex problem areas where a large volume of information and data must be processed and investigated; ES can handle partial information and vagueness; the estimations of ES are consistent; and a novice can study the procedure, the heuristics, and the problem-solving methodology that an expert would use to solve a particular problem.
The disadvantages of ES are discussed below (Gonciarz, 2014): • Answers may not always be truthful - Experts regularly make mistakes, so it can be anticipated that ES will also make mistakes; unfortunately, such errors can be quite expensive at times; • Knowledge restricted to the domain of expertise - ES always try to infer a solution, regardless of whether or not the problem at hand is within the system's area of knowledge; a human expert, in contrast, knows the limits of their abilities and knowledge, and as a result will not attempt to solve problems outside their expertise; • Lack of common sense - Common sense knowledge can be difficult to represent in ES; • ES can provide an excellent approach for solving a large class of problems, but each application must be selected with care so that the technology is properly applied.
The differences between conventional computer programs and ES (Durkin, 1990) are provided in Table 1. The basic difference is that conventional programs process data, whereas ES process knowledge. A comparison between a human expert and an ES is given in Table 2. Durkin (1990) stated that there are several general reasons for employing an ES: replacement of a human expert, assistance to a human expert, or transfer of expertise to a novice.
A Review of Expert Systems in the Economics Field
According to Nedović & Devedžić (2002), there are several groups of ES for finance according to the problem they treat: FINEVA (financial analysis), PORT-MAN (banking management), INVEX (investment advisory), FAME (financial marketing), and DEVEX (an ES for currency exchange advising in international business transactions). Koster and Raafat (1990) described a prototype ES for auditing workers' compensation insurance premiums. Srinivasan and Ruparel (1990) described an expert support system for credit granting (CGX) in nonfinancial firms. Black and Grudnitski (1991) presented a tax ES (TaXpert) to establish constructive ownership of corporate stock under the rules of 60 sections of the Internal Revenue Code. Bohanec et al. (1995) showed a computer-based ES for the assessment of research and development projects. Kailay and Jarratt (1995) developed a qualitative prototype ES for small to medium-sized commercial organizations (RAMeX) aiming to help management with security decisions and planning. Grahovac and Devedzic (2010) developed a cost management ES (COMEX). Lee and Jo (1999) designed an ES covering patterns and rules which could predict future stock price movements. Zargham and Mogharreban (2005) built an ES entitled PORSEL (PORtfolio SELection system), which used a small set of rules to select stocks and consisted of three components: the Information Center, the Fuzzy Stock Selector, and the Portfolio Constructor. It was noted that the portfolios constructed by PORSEL consistently outperform the S&P 500 Index. Xidonas et al. (2009) discussed an ES methodology for supporting decisions related to the selection of equities, on the basis of financial analysis. Using the Dempster-Shafer theory, Dymova et al. (2010) illustrated another way to develop stock trading ES. Fasanghari and Montazer (2010) suggested a fuzzy ES to evaluate the stocks of the Tehran Stock Exchange, subsequently assembling a portfolio and recommending it to the target customers based on their preferences and the stocks' payoffs. Lee and Lee (2012) discussed a causal knowledge-based ES for planning an Internet-based stock trading system (CAKES-IST). Yunusoglu and Selim (2013) developed a fuzzy rule-based ES to assist portfolio managers in their medium-term investment decisions. Rao et al. (2005) proposed a knowledge-based prototype system for productivity analysis (PET, productivity evaluation technology). Using artificial neural networks, Kengpol and Wangananon (2006) developed an ES to appraise customer satisfaction with fragrance notes. Lee and Kwon (2008) proposed an intelligent negotiation support system (CAKES-NEGO, CAusal Knowledge-driven Expert System), employing causal knowledge and an inference mechanism supported by a fuzzy cognitive map. Bobillo et al. (2009) suggested a semantic fuzzy ES which applies a generic framework for the balanced scorecard. Arias-Aranda et al. (2010) created a fuzzy ES tool (ESROM) to help managers simulate strategic environments and gather useful information regarding the levels of strategy, flexibility, and performance required in the operations management area. Oh et al. (2012) suggested an ES for portfolio analysis aiming to support decision-making for new product development project portfolio management. Chung (2014) developed and evaluated an intelligent system (BizPro) for extracting and categorizing business intelligence factors from news articles.
Exsys Corvid® Development Software
An expert system tool, also known as a shell, is a software development environment covering the fundamental components of ES. Exsys Corvid® was released in 2001 by Exsys Inc. and is a powerful environment for developing knowledge automation systems, which allows the logical rules and procedural steps used to make a decision to be transformed into a "rule" representation that can be delivered online. An Exsys Corvid® knowledge automation system comprises the logic of the decision-making process as well as the end-user interface. ES development with Exsys Corvid® has the following main parts: fully capturing the decision-making logic and process of the domain expert; wrapping the system in a user interface with the desired look-and-feel for online deployment; and integrating with other IT resources. The main advantage is that Exsys Corvid® gives non-programmers a path towards developing interactive Web applications that capture the logic and processes used to solve problems, delivering them online, in stand-alone applications, and embedded in other technologies. Exsys Corvid® provides the following main options for system delivery: running as a Java Applet in a web page; running as a Java Servlet using HTML; running as a Java Servlet using Adobe Flash; running standalone (off-line) as a Java executable; or embedded under another program that provides the end-user interface.
The logic in Exsys Corvid® is expressed using variables. In fact, the variables are the building blocks that Exsys Corvid® employs to create the rules and describe the logic. When the system is run, each variable used in the IF part of a rule must be assigned a value, which may come from directly asking the system user to provide one, be derived from other rules, or come from other sources such as a database.
Expert System Prototype for Valuation Business Failure Risk
Hereinafter, an ES prototype for valuation of business failure risk is discussed, for which the Exsys Corvid® shell is used. For this purpose, a set of production rules is designed based on indebtedness ratios (e.g., General Indebtedness Ratio, Global Financial Autonomy Ratio, Financial Leverage Ratio), as well as solvency ratios (e.g., General Solvency Ratio, Patrimonial Solvency Ratio). The General Indebtedness Ratio emphasizes the percentage of total assets that were financed by creditors (liabilities, debt). The Global Financial Autonomy Ratio shows the proportion of company financing that comes from creditors relative to investors. The Financial Leverage Ratio depicts the proportion of equity and debt the company is using to finance its assets. The General Solvency Ratio shows the relationship of the total assets of the corporation to the portion owned by shareholders. The Patrimonial Solvency Ratio reveals how much shareholders would receive in the event of a company-wide liquidation.
The formula for each selected financial ratio is provided below: • General Indebtedness Ratio = Total Debt/Total Assets; • Global Financial Autonomy Ratio = Total Debt/Shareholders' Equity; • Financial Leverage Ratio = Bank Loans/Shareholders' Equity; • General Solvency Ratio = Total Assets/Shareholders' Equity; • Patrimonial Solvency Ratio = Shareholders' Equity/Total Assets. When using the ES, the financial risk manager does not need to compute the ratios listed above, since the ES performs the entire task that would otherwise be fulfilled by a human expert. Hence, the financial risk manager needs to know only the values of Total Assets, Shareholders' Equity, Total Debt, and Bank Loans, the source of the data being the Balance Sheet. Accordingly, the Exsys Corvid® Expert System Development Tool is employed to implement the ES. The default delivery option was chosen, i.e., running the system with the Corvid Applet Runtime. The acquired knowledge is represented through production rules. Rule-based representation is one of the most widely known and implemented forms of knowledge representation in the development of ES. Production rules have a very simple syntax and are easily understandable, while their implementation provides a great degree of flexibility to the ES, as they are easy to modify and update. With a rule base, knowledge can be exploited by either data-driven or goal-driven search. Data-driven search, or forward chaining, assumes that one has a supply of facts and persistently applies legal moves or rules to produce new facts, hopefully arriving at the goal. Goal-driven search, or backward chaining, implies that one repeatedly considers the possible final rules that produce the goal and from these creates successive sub-goals. Exsys Corvid® decision-making logic is described and constructed using "nodes". Exsys Corvid® uses IF-THEN rules of thumb ("heuristics"), individual steps or factors which contribute to the global decision, based on variables. Hereupon, a node can generally be thought of as a statement in the IF or THEN part of a rule. The rules have a Left-Hand Side (LHS), called the antecedent, premise, condition, or situation, as well as a Right-Hand Side (RHS), called the consequent, conclusion, action, or response. The proposition on the LHS may be a compound one, with a number of propositions ANDed together. A proper set of rules, or productions, should be used to form the basis of a production system (Mannan, 2005). A minimal re-implementation of this logic is sketched below.
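As a sketch of this decision logic (in Python rather than in Exsys Corvid®), the fragment below computes the five ratios from the four Balance Sheet inputs and applies IF-THEN rules. The 0.5 and 1.5 thresholds appear in the rules excerpted from the prototype later in the paper; the remaining threshold is an assumption made purely for illustration.

```python
# Illustrative Python rendering of the prototype's rule logic (the actual
# system is implemented in Exsys Corvid®). Thresholds marked "assumed" are
# not taken from the paper.

def assess_business_failure_risk(total_assets, shareholders_equity,
                                 total_debt, bank_loans):
    report = []
    general_indebtedness = total_debt / total_assets
    global_financial_autonomy = total_debt / shareholders_equity
    financial_leverage = bank_loans / shareholders_equity
    general_solvency = total_assets / shareholders_equity
    patrimonial_solvency = shareholders_equity / total_assets
    report.append(f"Financial Leverage Ratio = {financial_leverage:.2f}; "
                  f"Patrimonial Solvency Ratio = {patrimonial_solvency:.2f}")

    if global_financial_autonomy < 0.5:      # threshold from the paper's rules
        report.append("The company records global financial autonomy.")
    else:
        report.append("The company relies heavily on lenders; risk is higher.")
    if general_solvency > 1.5:               # threshold from the paper's rules
        report.append("The company shows the ability to return the loans.")
    else:
        report.append("The company records solvency risk.")
    if general_indebtedness > 0.66:          # assumed threshold, illustrative
        report.append("Very high level of indebtedness.")
    return report

for line in assess_business_failure_risk(total_assets=1_000_000,
                                         shareholders_equity=600_000,
                                         total_debt=400_000,
                                         bank_loans=150_000):
    print(line)
```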
The employed variables are shown in Figure 2. Besides, Exsys Corvid® has a unique way to define, organize, and structure rules into logically related modules. Thus, a Logic Block (hereinafter "LB") comprises one or more structured logic diagrams.
The logic may be a simple structure corresponding to a single rule, or a complex branching tree covering all possible input cases. The rules in the LB integrate a group of related heuristics and explain how to resolve each potential decision point in a system. The rules are added to the knowledge base by experts using text or graphical editors that are integral to the system shell. The LB and the related rules are disclosed in Appendix A.
In addition, Command Blocks (hereinafter "CB") control the procedural flow of the system. The CB of the current ES is provided in Figure 3. The LB provides the rules of how to make a decision, whilst the CB tells the system what to do and how the rules should be used. The user is asked step by step whether he agrees to analyze the corporate indebtedness level based on the General Indebtedness Ratio, Global Financial Autonomy Ratio, and/or Financial Leverage Ratio, and then whether he admits the investigation of solvency based on the General Solvency Ratio and/or Patrimonial Solvency Ratio (see Appendix B). Subsequently, the user is requested to enter the values of Total Assets, Shareholders' Equity, Total Debt, and Bank Loans (see Appendix C). Finally, the ES provides a brief but vital report for assessing the business failure risk (see Appendix D). In fact, based on a handful of financial ratios, a financial manager can establish whether there are corporate shortcomings. Even though only a few corporate measures were employed, the output obtained is significant, since the selected ratios are fundamental within financial management.
Concluding Remarks and Further Research
Nowadays, business decisions cannot wait for an expert advisor. ES allow people to solve complex decision-making problems without learning the underlying logic or requiring specialized training. Moreover, by means of the Web, any individual can access an ES, and employees who are offline can run the same systems stand-alone. Using the Exsys Corvid® Expert System Development Tool, ES can be developed quickly, even if the developer is not a programmer. Therefore, using Exsys Corvid®, an ES prototype to assist risk managers in the valuation of business failure risk was proposed. By selecting a handful of data from the Balance Sheet, namely Total Assets, Shareholders' Equity, Total Debt, and Bank Loans, the ES suggested in the current paper provides a brief but vital report for assessing the business failure risk. Besides, when using the ES, the financial risk manager does not need to compute the ratios regarding indebtedness (e.g., General Indebtedness Ratio, Global Financial Autonomy Ratio, Financial Leverage Ratio) or solvency (e.g., General Solvency Ratio, Patrimonial Solvency Ratio), since the ES performs the entire task that would otherwise be fulfilled by a human expert. Hence, ES are a valuable tool for corporations. Companies should, however, keep in mind that humans, rather than computers, should make the final decision: humans still possess comprehension and perception, features that computers so far lack. The limitations of the current manuscript lie in the reduced number of financial ratios that were selected. As future research avenues, additional ratios for the valuation of business failure risk should be employed.
The LB displayed above is equivalent to a set of production rules with associated confidence values, e.g., "The company records solvency risk: Confidence = 70".
Figure 1. The common organization of an expert system. Source: Romiszowski, A. 1987. Artificial intelligence and expert systems in education: Progress, promise and problems. Australian Journal of Educational Technology, 3(1), 6-24.
Figure 3. Exsys Corvid® Command Block window.
(Excerpt of the associated rules, each with Confidence = 70: IF the user declines a given analysis, THEN the system reports that the analysis was not selected; IF [ind_total_debt]/[ind_shareholders_equity] < 0.5, THEN the company records global financial autonomy, otherwise the company relies heavily on lenders and the related risk is higher; IF [ind_total_assets]/[ind_shareholders_equity] > 1.5, THEN the company shows the ability to return the loans; additional rules flag a very high level of indebtedness and an alarming, loan-dependent financial state.)
Table 1. Conventional programs versus expert systems. Source: Durkin, J. 1990. Research review: Application of expert systems in the sciences. Ohio Journal of Science, 90(5), 171-179.
Table 2. Comparison between a human expert and an expert system.
Knowledge automation ES built with Exsys Corvid® software and services have been developed worldwide in multifarious fields, such as diagnostics, predictive maintenance, and repair.
Properties of Voids and Void Galaxies in the TNG300 Simulation
We investigate the properties of voids and void galaxies in the TNG300 simulation. Using a luminous galaxy catalog and a spherical void-finding algorithm, we identify 5078 voids at redshift z = 0. The voids cover 83% of the simulation volume and have a median radius of 4.4 h⁻¹ Mpc. We identify two populations of field galaxies based on whether the galaxies reside within a void ("void galaxies"; 75,220 objects) or outside a void ("nonvoid galaxies"; 527,454 objects). Within the voids, mass does not directly trace light. Instead, the mean radial underdensity profile as defined by the locations of void galaxies is systematically lower than the mean radial underdensity profile as defined by the dark matter (i.e., the voids are more "devoid" of galaxies than they are of mass). Within the voids, the integrated underdensity profiles of the dark matter and the galaxies are independent of the local background density (i.e., voids-in-voids versus voids-in-clouds). Beyond the void radii, however, the integrated underdensity profiles of both the dark matter and the galaxies exhibit strong dependencies on the local background density. Compared to nonvoid galaxies, void galaxies are on average younger, less massive, bluer in color, less metal enriched, and have smaller radii. In addition, the specific star formation rates of void galaxies are ∼20% higher than those of nonvoid galaxies and, in the case of galaxies with central supermassive black holes with M_BH ≳ 3 × 10⁶ h⁻¹ M⊙, the fraction of active void galaxies is ∼25% higher than that of active nonvoid galaxies.
INTRODUCTION
The large-scale structure of the Universe is an interconnected network of walls, sheets, filaments, and galaxy clusters, between which lie vast regions of near nothingness known as cosmic voids (see, e.g., Giovanelli & Haynes 1991). The origin of the "cosmic web" dates back to the early Universe during the epoch of inflation, when quantum fluctuations near the end of the inflationary period gave rise to slight anisotropies in the post-inflation matter density field. These anisotropies are observed in the cosmic microwave background (CMB), with regions that deviate by a few micro-Kelvin from the average CMB temperature (see, e.g., Planck Collaboration et al. 2020), and they are responsible for the structure that is seen in the cosmic web today.
Perturbations in the density field are unstable. Regions that are overdense experience an inward gravitational force, causing them to increase in density as they collapse and accrete surrounding matter (Bertschinger 1998). The opposite is true for regions that are initially underdense. Matter in these regions experiences an outward gravitational force that attracts it towards nearby, higher density regions. As matter streams out, these regions become more underdense, increasing the relative outward gravitational pull and causing them to expand even further. Since these expanding regions never become virialized, their growth can always be modelled by linear perturbation theory (see, e.g., Goldberg & Vogeley 2004). Primordial overdensities in the post-inflation matter density field evolved in a bottom-up, hierarchical fashion consistent with Cold Dark Matter (CDM) theory (see, e.g., Liddle & Lyth 1993), forming halos, clusters, walls, and filaments.
Voids have properties that make them interesting regions for performing various cosmological tests (Sheth & van de Weygaert 2004), and the interiors of mature voids can be described as low density Friedmann-Lemaître-Robertson-Walker universes (see, e.g., Icke 1984; van de Weygaert & van Kampen 1993). This makes voids excellent laboratories for tests of the expansion rate and geometry of the Universe, the dark energy equation of state, and modified theories of gravity (see, e.g., Li et al. 2012; Clampitt et al. 2013; Gibbons et al. 2014; Cai et al. 2015; Pollina et al. 2016; Cai et al. 2017; Falck et al. 2018; Paillas et al. 2019).
Furthermore, since voids are smaller than the mean-free path of neutrinos (Lesgourgues & Pastor 2006), voids can be used to constrain the sum of neutrino masses via tests that examine the ways in which neutrino properties affect the sizes and distributions of voids (see, e.g., Villaescusa-Navarro et al. 2013; Massara et al. 2015; Banerjee & Dalal 2016; Kreisch et al. 2019; Schuster et al. 2019; Contarini et al. 2021).
Voids are not completely devoid of internal structure, and even the most mature voids in the local Universe show substructure in the form of galaxies and diffuse filaments (Szomoru et al. 1996; El-Ad & Piran 1997; Hoyle & Vogeley 2004; Sheth & van de Weygaert 2004; Kreckel et al. 2012; Alpaslan et al. 2014). The dynamical structure within voids can have significant impact on cosmic flow patterns in the local Universe, with matter around the ridges of voids affecting the peculiar motions of galaxies around nearby walls and filaments (see, e.g., Bothun et al. 1992; van de Weygaert 2016; Vallés-Pérez et al. 2021; Bermejo et al. 2022).
Additionally, the fact that voids resemble low-density universes with large Hubble parameters (see, e.g., Icke 1984; Goldberg & Vogeley 2004) makes void galaxies interesting objects with which to test models of galaxy formation and evolution. From linear theory, it is known that dark matter halos in underdense regions of the Universe form later in the history of the Universe than do their counterparts in overdense regions (Liddle & Lyth 1993). Therefore, studying the physical properties of void galaxies in the local Universe has the potential to provide insight into an early epoch of galaxy evolution. Furthermore, the degree to which the location of a galaxy within the cosmic web affects the evolution of the galaxy is an open question. Some studies have concluded that, due to their differing dynamical histories, void galaxies differ systematically from non-void field galaxies (see, e.g., Peebles 2001; Croton et al. 2005; Kreckel et al. 2012; Rodríguez Medrano et al. 2022; Rosas-Guevara et al. 2022). For example, when compared to observed galaxies in walls and filaments, some studies have found that void galaxies are, on average, bluer in color, have lower stellar masses, are richer in HI, have higher specific star formation rates, and have later morphological types (see, e.g., Rojas et al. 2004; Croton et al. 2005; Hoyle et al. 2012; Kreckel et al. 2012; Beygu et al. 2016; Douglass et al. 2018; Florez et al. 2021; Pandey et al. 2021; Rodríguez Medrano et al. 2022). However, other studies have found little to no difference between the stellar masses, gas content, star formation rates, chemical abundances, dark matter profiles, or metallicities of void vs. non-void field galaxies (see, e.g., Szomoru et al. 1996; Patiri et al. 2006; Moorman et al. 2014; Liu et al. 2015; Douglass & Vogeley 2017; Douglass et al. 2019; Wegner et al. 2019; Domínguez-Gómez et al. 2022).
Properties of void and non-void galaxies have also been studied in recent magnetohydrodynamical simulations of ΛCDM universes. In agreement with some observational studies, Rosas-Guevara et al. (2022) found that, compared to galaxies in denser environments, void galaxies in the EAGLE simulation (Schaye et al. 2015) have lower stellar mass fractions. In addition, Rosas-Guevara et al. (2022) found clear trends of galaxy properties as a function of the distances of galaxies from the nearest void center. In particular, star formation activity and HI gas density decreased with increasing void-centric distance, and stellar mass fraction increased with increasing void-centric distance. Similarly, in a study of galaxies in the Horizon-AGN simulation (Dubois et al. 2016), Habouzit et al. (2020) found that low stellar mass galaxies with high star formation rates occur more frequently in the inner regions of voids than in denser regions of the cosmic web.
Another open question involves the degree to which the location of a galaxy within the cosmic web influences the formation of active galactic nuclei (AGN). Some studies have concluded that the AGN fraction of galaxies is independent of the local matter density (e.g., Karhunen et al. 2014; Sabater et al. 2015; Amiri et al. 2019; Habouzit et al. 2020), while others have found a positive correlation with local matter density (e.g., Manzer & De Robertis 2014; Argudo-Fernández et al. 2018). Still other studies have concluded that there is a negative correlation between AGN fraction and the local matter density (e.g., Kauffmann et al. 2004; Constantin et al. 2008; Platen 2009; Lopes et al. 2017; Ceccarelli et al. 2021; Mishra et al. 2021).
The relatively isolated nature of void galaxies suggests that their growth and evolution do not depend strongly on merger-driven nuclear activity. For example, Ceccarelli et al. (2021) found that the growth channels for void galaxies and their central, supermassive black holes differed from those of their non-void counterparts. Ceccarelli et al. (2021) attribute this to the fact that, compared to the supermassive black holes in their non-void galaxies, the supermassive black holes in their void galaxies had larger surrounding OIII reservoirs that fed into the central regions of the galaxies. In contrast, however, Habouzit et al. (2020) found that void galaxies and their supermassive black holes in the Horizon-AGN simulation grow in a manner that is similar to that of galaxies in denser environments, where merger-driven nuclear activity is common.
Understanding the discrepancies between these results could further our understanding of the various ways in which nuclear activity is triggered, as well as the effects that cosmic flow patterns and mergers have on AGN.
Here we investigate the properties of voids and void galaxies in the z = 0 snapshot of the TNG300 simulation (hereafter TNG300; Nelson et al. 2018; Springel et al. 2018; Marinacci et al. 2018; Naiman et al. 2018; Pillepich et al. 2018). TNG300 is a cosmological magnetohydrodynamical (MHD) simulation of a ΛCDM universe with sufficient spatial and mass resolution within a large volume to conduct statistical analyses of voids and void galaxies. Using TNG300, we construct the largest catalog of luminous void galaxies within a cosmological MHD simulation to date. We compare various physical properties (sizes, colors, star formation rates, luminosity functions, mass functions, metallicities, and nuclear activity) of the void galaxies to those of galaxies found in walls and filaments. In addition, we examine the degree to which location within the cosmic web (i.e., within voids and outside voids) affects AGN activity.
The paper is organized as follows. TNG300 and our void finding algorithm are discussed in §2. In §3 we present the properties of the voids and the void galaxies, and we compare the properties of void galaxies to non-void field galaxies. A summary and discussion of our results is presented in §4. Throughout, we compute error bars using 10,000 bootstrap resamplings of the data. Error bars are omitted from figures when the error bars are comparable to or smaller than the sizes of the data points.
To identify luminous galaxies, we use the publicly-available TNG300 subhalo and group catalogs, which list friends-of-friends groups and their substructures. Excluding subhalos that are flagged as being noncosmological in origin, there are a total of 664,322 luminous galaxies. To assign luminosities to the galaxies, we adopt the subhalo magnitudes from the supplementary catalog created by Nelson et al. (2018).
In comparison to the main IllustrisTNG subhalo catalog, the Nelson et al. (2018) catalog better resembles Sloan Digital Sky Survey (SDSS; York et al. 2000) photometry because it includes the effects of dust obscuration on the simulated galaxies.
In addition to the group and subhalo catalogs, we also make use of particle data from the z = 0 snapshot, including stellar, dark matter, gas, and black hole particles. Following Pillepich et al. (2018), when quoting stellar masses for our TNG300 galaxies, we apply a 40% correction to the stellar masses from the TNG300 catalog (i.e., to account for the fact that the stellar masses in TNG300 are not as well converged as in the benchmark TNG100 simulation; see Appendix A of Pillepich et al. 2018).
Void Finder
To identify voids, we use the 3D spherical void finder (SVF) from Padilla et al. (2005) as implemented by Paillas et al. (2017) and Paillas et al. (2019). This is a computationally inexpensive algorithm that identifies regions with significant central underdensities that converge to the average density beyond the ridges of the voids at ≳ 3 void radii. The SVF algorithm can be summarized as follows (a minimal sketch appears after the list): 1. A rectangular grid is constructed over the galaxy distribution. The number of galaxies within each cell is then counted and any cell that is completely devoid of galaxies (i.e., the cell contains no galaxies) is considered to be a void center.
2. Spheres are expanded outwards from each void center. The largest sphere around a void center with an integrated underdensity contrast Δ_void ≤ −0.8 has its radius defined to be the radius of the void.
The choice of setting the underdensity contrast threshold to −0.8 comes from linear theory arguments in Blumenthal et al. (1992), who showed that voids at the present epoch should have interior densities that are 20% of the mean density of the Universe at the time of shell-crossing.
3. Any void that neighbors a larger void by more than 20% of the sum of the radii of both voids is rejected.
That is, if the distance, d, between voids A and B (where radius R_A ≤ R_B) satisfies d ≤ 0.2 × (R_A + R_B), void A is rejected from the sample.
4. Remaining voids have their centers perturbed in random directions to determine whether their radii can be increased. If a shifted void center results in a larger sphere that satisfies the underdensity contrast criterion from Step 2, the center and radius of the void are then updated to the values of the larger sphere.
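The sketch below (Python with NumPy) illustrates Steps 1 and 2 only: seeding candidate centers at empty grid cells and growing spheres until the integrated contrast rises above −0.8. Overlap rejection (Step 3), center perturbation (Step 4), and periodic-boundary handling are omitted, and all parameter names are illustrative.

```python
import numpy as np

def spherical_void_finder(galaxy_xyz, box_size, cell=5.0,
                          delta_void=-0.8, r_max=30.0, dr=0.1):
    """Steps 1-2 of the SVF: empty-cell seeding plus sphere growth."""
    n_mean = len(galaxy_xyz) / box_size**3            # mean number density
    n_cells = int(box_size // cell)
    # Step 1: count galaxies per grid cell; empty cells seed void centers.
    counts, _ = np.histogramdd(galaxy_xyz, bins=(n_cells,) * 3,
                               range=[(0.0, box_size)] * 3)
    centers = (np.argwhere(counts == 0) + 0.5) * cell
    voids = []
    for c in centers:
        # Step 2: find the largest sphere with integrated contrast <= -0.8.
        d = np.linalg.norm(galaxy_xyz - c, axis=1)    # no periodic wrapping
        radius = 0.0
        for r in np.arange(cell, r_max, dr):
            delta = (d < r).sum() / (4.0 / 3.0 * np.pi * r**3 * n_mean) - 1.0
            if delta <= delta_void:
                radius = r
            else:
                break
        if radius > 0.0:
            voids.append((c, radius))
    return voids
```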
The rectangular grid from Step 1 contains cells that are 5 h⁻¹ Mpc on a side. The grid size was chosen as a balance between: [1] computation time, [2] reducing the error associated with the locations of the centers of the smallest voids, and [3] reducing the number of spurious centers (which can arise due to shot noise).
That is, if a void center is identified with low resolution, the error associated with the radius of that void will be adversely affected (i.e., since the radius depends on the integrated underdensity contrast threshold in the region surrounding that center). Therefore, a cell size is chosen such that the error it contributes to the radius of the smallest voids in our sample is less than 10% of the total error for that result (see Paillas et al. 2017 for a detailed discussion).
The 3D SVF does not provide the detailed description of void geometry that can be obtained with more sophisticated void finding algorithms (see, e.g., Neyrinck 2008; Platen et al. 2007); however, voids in the local Universe tend to exhibit spherical symmetry (see, e.g., Icke 1984; Sheth & van de Weygaert 2004; Sutter et al. 2014; Hamaus et al. 2016). Moreover, Paillas et al. (2019) compared the results of six void finding techniques in the context of differentiating f(R) gravity models and found that similar results were obtained regardless of the choice of void finding algorithm. For our purposes, then, we consider the 3D SVF to be a reasonable choice of algorithm.
Field Galaxy Sample
Our investigation of galaxy properties focuses on field galaxies (i.e., galaxies that reside outside cluster environments), subdivided according to whether the galaxies reside within a void ("void galaxies") or outside a void ("non-void" galaxies). We define field galaxies to be all galaxies that are not contained within parent halos with masses ≥ 10¹⁴ h⁻¹ M⊙, which eliminates cluster galaxies from the sample. We then use the void catalog from §2.2 to separate void galaxies from non-void galaxies. From this, our field galaxy sample consists of 75,220 void galaxies and 527,454 non-void galaxies. A sketch of this selection appears below.
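The following is a minimal sketch of the selection, assuming NumPy arrays for the catalog quantities (the array names are ours, not the TNG field names):

```python
import numpy as np

def split_field_galaxies(galaxy_xyz, parent_halo_mass,
                         void_centers, void_radii, m_cluster=1e14):
    """Drop cluster galaxies, then split the field sample by void membership."""
    field = parent_halo_mass < m_cluster        # remove cluster members
    xyz = galaxy_xyz[field]
    in_void = np.zeros(len(xyz), dtype=bool)
    for center, radius in zip(void_centers, void_radii):
        # Point-in-sphere test against every void (periodicity ignored).
        in_void |= np.sum((xyz - center) ** 2, axis=1) < radius**2
    return xyz[in_void], xyz[~in_void]          # void vs. non-void galaxies
```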
As part of our analysis, we investigate the relative ages of void and non-void galaxies. Even in simulation space, determining the formation time of a galaxy is not simple. This is due to the hierarchical nature of structure formation, which results in many smaller overdensities merging to form a single galaxy. Here we adopt two indicators of galaxy age: [1] the age of the oldest bound stellar particle and [2] the luminosity-weighted age. For both of these age indicators, we use the subhalo cutout provided by the TNG, which contains all particles that are identified by the SUBFIND algorithm as being bound to the halo. Formation times of stellar particles are given as scale factors in the simulation, and we convert these to lookback times using the Planck Collaboration et al. (2016) cosmological parameters and the astropy.cosmology package (Astropy Collaboration et al. 2013, 2018).
The age of the oldest bound stellar particle is a useful indicator of the age of a simulated galaxy, but it is not straightforward to use as a comparison to observed galaxies. In contrast, the luminosity-weighted age is a metric that can be compared to observations (see, e.g., Li et al. 2018). To obtain luminosity-weighted ages of the simulated galaxies we use the following equation from Lu et al. (2020):

log₁₀(Age_L) = (Σᵢ wᵢ aᵢ) / (Σᵢ wᵢ),  (1)

where aᵢ is the logarithm (base 10) of the formation age of the i-th particle and wᵢ is the SDSS r-band luminosity of the i-th particle. Above, the summation is computed over all stellar particles that are bound to the subhalo.
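A direct transcription of Equation (1), assuming per-particle arrays of formation ages (in Gyr) and SDSS r-band luminosities for the bound stellar particles:

```python
import numpy as np

def luminosity_weighted_age(ages_gyr, r_band_luminosity):
    """Luminosity-weighted age per Equation (1); returns an age in Gyr."""
    log_age = np.log10(ages_gyr)                     # a_i
    weights = r_band_luminosity                      # w_i
    return 10.0 ** (np.sum(weights * log_age) / np.sum(weights))
```

RESULTS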
Void Properties
Using the 3D SVF algorithm, a total of 5,078 voids were identified. The void radii span an order of magnitude, with the smallest voids having radii of R_void = 2.5 h⁻¹ Mpc and the largest void having a radius of R_void = 24.7 h⁻¹ Mpc. The median void radius is 4.4 h⁻¹ Mpc. In total, voids cover a volume of 168.3³ h⁻³ Mpc³, or 82% of the simulation box. The number of voids as a function of radius is shown in Figure 1, from which it is clear that the majority of voids have radii ≲ 10 h⁻¹ Mpc.
The mean radial number density contrast of the voids is shown in Figure 2. Squares show the result obtained using the luminous galaxies ("luminous density contrast") and crosses show the result obtained using the dark matter particles ("dark matter density contrast"). For each void, the density contrast in concentric spherical shells of thickness dr, centered on the void center, was calculated as

δ(r) = [N(r)/V_shell(r)] / n̄ − 1,

where N(r) is the number of galaxies or dark matter particles within the radial bin that ranges from r − dr/2 to r + dr/2, V_shell(r) is the volume of that shell, and n̄ is the average number density of galaxies or dark matter particles within the simulation box. In both cases (i.e., luminous density contrast and dark matter density contrast), the mean radial density profiles resemble reverse spherical top-hat distributions, which are expected for underdense regions in the local Universe (see, e.g., Icke 1984; Sheth & van de Weygaert 2004). These profiles are characterized by an underdense, flat interior that rises steeply above the mean density at the ridge of a void and slowly decreases to the mean density at large distances from the void center.
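A sketch of this measurement for a single void, assuming tracer positions in the same units as the void center and a precomputed mean density n̄ (periodic boundaries are ignored):

```python
import numpy as np

def radial_density_contrast(tracer_xyz, center, n_bar, r_max, dr):
    """delta(r) in concentric shells of thickness dr around one void center."""
    d = np.linalg.norm(tracer_xyz - center, axis=1)
    edges = np.arange(0.0, r_max + dr, dr)
    counts, _ = np.histogram(d, bins=edges)
    shell_volumes = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    r_mid = edges[:-1] + dr / 2.0
    return r_mid, counts / shell_volumes / n_bar - 1.0
```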
From Figure 2, it is clear that mass does not directly trace light within the voids. Rather, the mean central density contrast obtained from the galaxies (Δ = −0.77 ± 0.04) is somewhat lower than the mean central density contrast of the dark matter (Δ = −0.64 ± 0.02); i.e., the centers of the voids are more "devoid" of galaxies than they are of dark matter. The opposite is true at the ridges of the voids where, on average, there is a higher concentration of galaxies than dark matter, with the mean dark matter density contrast reaching a maximum of 0.33 ± 0.02 at r = 1.25 R_void and the mean luminous density contrast reaching a maximum contrast of 0.42 ± 0.02 at r = 1.4 R_void.
Voids form in regions of space with differing background densities (i.e., as compared to the average density of the Universe). That is, some voids form in regions of space that have relatively high local background densities ("voids in clouds") while others form in regions of space that have relatively low local background densities ("voids in voids"); see Sheth & van de Weygaert (2004) for a discussion of void hierarchy. In Figure 3 we use mean integrated number density profiles to explore the effects of the local background density on the density contrast. For legibility of the figure, the number density profiles for the dark matter particles are shown as spline fits. Following Sheth & van de Weygaert (2004), we define voids-in-voids to be those with an integrated galaxy density contrast < 0 at r = 3 R_void and voids-in-clouds to be those with an integrated galaxy density contrast > 0 at r = 3 R_void. To construct the mean integrated density profiles, the number density contrast of the galaxies (points) and the dark matter particles (lines) was computed in concentric spheres of radius r, centered on each void center, from which means were then computed.
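A sketch of the classification, following the definition above (integrated galaxy density contrast evaluated at r = 3 R_void):

```python
import numpy as np

def classify_void(galaxy_xyz, center, r_void, n_bar):
    """Label a void as void-in-void or void-in-cloud at r = 3 * R_void."""
    d = np.linalg.norm(galaxy_xyz - center, axis=1)
    r = 3.0 * r_void
    delta = (d < r).sum() / (4.0 / 3.0 * np.pi * r**3 * n_bar) - 1.0
    return "void-in-void" if delta < 0.0 else "void-in-cloud"
```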
The mean integrated number density profiles obtained using all voids (blue line and blue crosses in Figure 3) show the same trends as the mean differential number density profiles in Figure 2. On average, the central regions of the voids have a dark matter density contrast that is 65% ± 2% lower than the average dark matter density contrast in the simulation and a galaxy density contrast that is 77% ± 4% lower than the average galaxy density contrast in the simulation. At the ridges of the voids, the mean interior density contrast for the galaxies reaches a maximum that is 22% ± 1% higher than the average galaxy density contrast at r = 2.08 R_void, and the mean interior density contrast for the dark matter reaches a maximum that is 18% ± 2% higher than the average dark matter density contrast at r = 2.25 R_void.
(Figure 3 caption fragment: Black line and green diamonds: results for voids in regions of the simulation that have a low local background density ("voids-in-voids"). See text for definitions of voids-in-clouds and voids-in-voids.)
For radii r ≲ R_void, the local background density in which the voids are located has relatively little effect on the integrated number density profiles of the galaxies and the dark matter. For radii r > R_void, however, significant differences between the integrated number density profiles of voids-in-voids (black line and green diamonds) and voids-in-clouds (yellow line and red crosses) occur. In the case of voids-in-voids, galaxies largely trace the dark matter for r ≳ R_void. At the ridges of voids-in-voids, the mean interior density contrast remains less than the average in the simulation (i.e., the mean interior density contrast is −0.20 ± 0.01 at the ridges) and for radii 1.5 R_void ≲ r ≲ 4 R_void the mean interior density contrast of voids-in-voids remains approximately constant at a value of ∼ −0.2. In the case of voids-in-clouds, the galaxies trace the dark matter for a small range of radii (R_void ≲ r ≲ 1.2 R_void), but are more overdense than the dark matter for r ≳ 1.2 R_void. At the ridges of voids-in-clouds, the integrated number densities of the galaxies and dark matter exceed the integrated number densities of the galaxies and dark matter in the full void sample by a factor of ∼ 3 and they remain significantly higher than the average density out to distances as large as ∼ 4 R_void.
Properties of Void and Non-void Galaxies
Below we investigate the following properties of void and non-void galaxies:
Physical Sizes
The TNG300 subhalo catalog defines the photometric radii of the galaxies to be the radii at which the surface brightness profiles drop below 20.7 mag arcsec⁻² in the K-band. We use this definition to compute normalized probability distributions for the radii of void and non-void galaxies, results of which are shown in Figure 4. From this figure, the majority of the galaxies in both populations have radii ≲ 5 h⁻¹ kpc, and the probability of a given galaxy having a radius ≲ 5 h⁻¹ kpc is the same for both void and non-void galaxies.
For galaxy radii ≳ 5 h⁻¹ kpc, the distribution of void galaxy radii declines much more steeply than the distribution of non-void galaxy radii. As a result, the largest void galaxies have radii ∼ 11 h⁻¹ kpc, while the largest non-void galaxies have radii that are ∼ 2 times larger than the largest void galaxies.
Optical Color-Magnitude Relationships
Figure 5 shows the relationship between the (g − r) optical color and the absolute SDSS r-band magnitude, M_r, for non-void galaxies (panel b) and void galaxies (panel d). From these, it is clear that both void and non-void galaxies have distributions that peak in two locations: [1] intrinsically bright, red galaxies and [2] intrinsically faint, blue galaxies. Red crosses in panels (b) and (d) of Figure 5 indicate the locations of the peaks, and show that the peaks occur in similar locations of the (g − r) vs. M_r space for both void and non-void galaxies. Formally, the blue peak occurs at (M_r, g − r) = (−14.77, 0.59) for the void galaxies and at (M_r, g − r) = (−14.76, 0.64) for the non-void galaxies; i.e., the blue peaks occur at nearly identical absolute magnitudes but slightly different colors, with the void galaxies being somewhat bluer than the non-void galaxies near the blue peaks. The red peak occurs at (M_r, g − r) = (−20.47, 0.77) for the void galaxies and at (M_r, g − r) = (−20.55, 0.77) for the non-void galaxies; i.e., the red peaks occur at essentially identical absolute magnitudes and colors for both populations of galaxies.
Panels (a) and (c) in Figure 5 show normalized probability distributions for the absolute r-band magnitudes of the non-void and void galaxies, respectively. From this, it is clear that the distributions differ significantly, and this will be reflected in the luminosity functions of void and non-void galaxies below. Panel (e) in Figure 5 shows normalized probability distributions for the (g − r) colors of void galaxies (diamonds) and non-void galaxies (circles). From this, it is clear that, while both types of galaxies have a broad distribution of optical colors, the distributions are substantially different and, in particular, there is a much higher concentration of red non-void galaxies than red void galaxies.
We further explore the differences between the optical color distributions for void and non-void galaxies in Figure 6, which shows the cumulative probability distributions of (g − r) values for void galaxies (diamonds) and non-void galaxies (circles). A two-sample Kolmogorov-Smirnov (KS) test performed on the two distributions in Figure 6 rejects the null hypothesis that both distributions are drawn from the same underlying distribution at a high confidence level (> 99.9999%), indicating that the distribution of void galaxy colors is significantly different from that of non-void galaxies. In particular, void galaxies are typically bluer than non-void galaxies, with the median (g − r) color of void galaxies being 0.4205 ± 0.0007 and the median (g − r) color of non-void galaxies being 0.4881 ± 0.0003.
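The comparison can be reproduced with scipy.stats.ks_2samp; the arrays below are random placeholders standing in for the catalog colors:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
void_gr = rng.normal(0.42, 0.15, 75_220)        # placeholder void colors
nonvoid_gr = rng.normal(0.49, 0.15, 527_454)    # placeholder non-void colors

statistic, p_value = ks_2samp(void_gr, nonvoid_gr)
print(f"KS statistic = {statistic:.4f}, p-value = {p_value:.3g}")
# A p-value below 1e-6 rejects a common parent distribution at the
# > 99.9999% confidence level quoted in the text.
```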
Luminosity and Stellar Mass Functions
Next we compute luminosity functions and stellar mass functions for void and non-void galaxies. Figure 7 shows the luminosity functions, computed in terms of SDSS r-band absolute magnitude (i.e., the number of galaxies per magnitude per unit volume). Lines in Figure 7 show the best-fitting Schechter luminosity functions (Schechter 1976) of the form

Φ(M) = 0.4 ln(10) φ* [10^(0.4(M* − M))]^(α+1) exp[−10^(0.4(M* − M))],

where M* is the absolute magnitude of an L* galaxy, φ* is the normalization, and α is the faint-end slope.
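A sketch of such a fit with scipy.optimize.curve_fit, using the magnitude form of the Schechter function given above; the binned data here are synthetic placeholders, and the parameter values are arbitrary:

```python
import numpy as np
from scipy.optimize import curve_fit

def schechter_mag(M, phi_star, M_star, alpha):
    """Schechter function expressed in absolute magnitudes."""
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

M_bins = np.linspace(-23.0, -14.0, 19)                 # placeholder bin centers
phi_obs = schechter_mag(M_bins, 8e-3, -20.5, -1.1)     # synthetic "data"

popt, pcov = curve_fit(schechter_mag, M_bins, phi_obs, p0=(1e-2, -20.0, -1.0))
phi_star, M_star, alpha = popt
print(f"phi* = {phi_star:.2e}, M* = {M_star:.2f}, alpha = {alpha:.2f}")
```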
Overall, the luminosity functions of both void and non-void galaxies are fitted reasonably well by Schechter functions, though some deviation is apparent (particularly at the extreme ends of the luminosity functions).
The parameters of the best-fitting Schechter functions yield characteristic absolute magnitudes, M*, that differ significantly; i.e., L* non-void galaxies are intrinsically ∼ 70% brighter than L* void galaxies. However, the faint-end slopes of the luminosity functions, α, are consistent within 2σ. Results for the differential stellar mass functions (i.e., the number of galaxies per log stellar mass bin per unit volume) of void and non-void galaxies are shown in Figure 8. For stellar masses between 10^7.25 h⁻¹ M⊙ and 10^10.25 h⁻¹ M⊙, the mass functions exhibit roughly power-law behaviors, with the slope of the power law being substantially shallower for non-void galaxies than it is for void galaxies. Below stellar masses of 10^7.25 h⁻¹ M⊙, both mass functions fall below the power law due to the finite resolution of the simulation, which leads to undercounting of the smallest galaxies. Above stellar masses of 10^10.25 h⁻¹ M⊙, both mass functions decrease sharply due to the fact that high-mass galaxies are relatively rare objects. From the high-mass ends of the mass functions, however, it is clear that the sample of non-void galaxies has a larger fraction of high-mass galaxies than does the void galaxy sample. The combined differences in the mass functions result in the median stellar mass for the void galaxies being a factor of ∼ 2 smaller than the median stellar mass for the non-void galaxies: (1.58 ± 0.01) × 10⁸ h⁻¹ M⊙ vs. (3.08 ± 0.01) × 10⁸ h⁻¹ M⊙.
Ages
Next, we quantify the ages of the void and non-void galaxies using two metrics: [1] the oldest stellar particles within the subhalos and [2] the luminosity-weighted ages of the galaxies (see Equation 1). From this, the median age of the oldest stellar particles bound to the subhalos in the void galaxies is 10.1033 ± 0.0001 Gyr and the median age of the oldest stellar particles bound to the subhalos in the non-void galaxies is 10.10685 ± 0.00003 Gyr; i.e., the oldest stellar particles bound to the subhalos in the void galaxies are ∼ 3.6 Myr younger than the oldest stellar particles in the non-void galaxies. In comparison, the median luminosity-weighted age of the void galaxies is 9.217 ± 0.002 Gyr and the median luminosity-weighted age of the non-void galaxies is 9.3642 ± 0.0009 Gyr; i.e., when using luminosity-weighted ages, we find that the void galaxies are ∼ 147 Myr younger than the non-void galaxies. Both age indicators reveal that void galaxies are systematically younger than non-void galaxies, but the age difference between void and non-void galaxies is reflected less by the time at which the first stars formed (i.e., the ages of the oldest stellar particles) than it is by the overall star formation histories of the galaxies (i.e., the luminosity-weighted ages).
That being said, the difference between the median luminosity-weighted ages of void and non-void galaxies is too small to be detected in modern surveys, which report typical dispersions for luminosity-weighted ages of galaxy populations between 0.15 − 0.30 dex (see, e.g., González Delgado et al. 2014; Scott et al. 2017; Li et al. 2018; Lu et al. 2020).
Stellar and Gas Chemical Abundances
Results for the stellar and gas chemical abundance ratios are shown in Figures 9 and 10, respectively.
Here we subdivide the galaxy samples using four distinct stellar mass bins, the boundaries of which are M* = 10^7.40 h⁻¹ M⊙, 10^8.15 h⁻¹ M⊙, 10^8.90 h⁻¹ M⊙, 10^9.65 h⁻¹ M⊙, and 10^10.40 h⁻¹ M⊙. These are indicated by vertical, dashed blue lines in Figure 8. The panels in Figures 9 and 10 are arranged vertically in order of increasing stellar mass, such that panels (a) and (e) include only galaxies in the first bin, panels (b) and (f) include only galaxies in the second bin, panels (c) and (g) include only galaxies in the third bin, and panels (d) and (h) include only galaxies in the fourth bin. The left panels of Figures 9 and 10 show the average stellar and gas chemical abundance ratios, respectively. The right panels of Figures 9 and 10 show the ratios of the corresponding data points for void and non-void galaxies from the left panels.
From Figures 9 and 10, void galaxies have stellar and gas metal fractions that are systematically lower than those of non-void galaxies.This result holds true across all stellar mass bins, but the differences are more pronounced for galaxies with lower stellar masses than they are for galaxies with higher stellar masses.
Specific Star Formation Rates
Results for the relationships between stellar mass and specific star formation rate (sSFR) are shown in Figure 11(b) (non-void galaxies) and Figure 11(d) (void galaxies). Here, the specific star formation rates are instantaneous rates, derived from the sum of star formation rates in individual gas cells at z = 0. From Figures 11(b) and 11(d), it is clear that both void and non-void galaxies show a main sequence of star formation.
Normalized 1D probability distributions for M* and sSFR are also shown in Figure 11 (top panels and side panel, respectively). Red crosses at (8.38, −9.45) in Figure 11(b) and (8.25, −9.40) in Figure 11(d) indicate the peak densities, which were determined from the relative maxima in the top and side panels. A two-sample KS test performed on the sSFR distributions rejects the null hypothesis that both are drawn from the same underlying distribution (confidence level > 99.9999%). The median sSFR for void galaxies is 21.8 ± 0.5 percent higher than it is for non-void galaxies (5.015 (+0.021, −0.018) × 10⁻¹⁰ yr⁻¹ vs. 4.116 (±0.008) × 10⁻¹⁰ yr⁻¹); hence, per unit stellar mass, the void galaxies have star formation rates that are higher than those of the non-void galaxies.
(Figure 11 caption fragment: Black contours: density contours, evenly spaced in the logarithm, computed using all galaxies in each sample. Gray points: 5% of the non-void galaxies, randomly selected from the complete sample (panel b), and 100% of the void galaxies (panel d). Red crosses indicate the central peaks of the distributions. Top: normalized 1D probability distributions for the stellar masses of non-void galaxies (panel a) and void galaxies (panel c). Side: normalized 1D probability distributions for the sSFR of void galaxies (diamonds) and non-void galaxies (circles).)
Supermassive Black Holes and AGN Fraction
Finally, we investigate the relationships between stellar mass and supermassive black hole (SMBH) mass for TNG300 void and non-void galaxies, results of which are shown in Figure 13 (black points). SMBHs are "seeded" into TNG subhalos once the subhalos pass a given mass threshold, and this seeding of SMBHs gives rise to unphysical artifacts in the stellar mass-SMBH mass relationship for SMBHs with masses below ∼ 3 × 10⁶ h⁻¹ M⊙ (i.e., due to recently-seeded SMBHs having nearly identical masses, independent of the stellar mass of their host galaxy). Therefore, we focus our analysis below on galaxies for which the masses of the SMBHs are ≳ 3 × 10⁶ h⁻¹ M⊙, which results in a sample of 107,527 non-void galaxies and 10,231 void galaxies. We further subdivide these particular galaxies according to whether or not their SMBHs are in an active state. To do this, we use the ratio of Bondi accretion rate to Eddington accretion rate as a measure of nuclear activity. Following Weinberger et al. (2017), we classify galaxies with ratios greater than 0.05 as "active". From this classification, we find that 1,330 void galaxies with SMBH masses ≳ 3 × 10⁶ h⁻¹ M⊙ are in an active state (i.e., an AGN fraction of 13.0 ± 0.4%) and 11,269 non-void galaxies with SMBH masses ≳ 3 × 10⁶ h⁻¹ M⊙ are in an active state (i.e., an AGN fraction of 10.5 ± 0.1%). That is, for TNG300 galaxies with SMBHs in the mass range that we consider, void galaxies have a somewhat higher (24 ± 4%) AGN fraction than do non-void galaxies. We also note that it is primarily intermediate-mass SMBHs that are in an active state, since there are few TNG300 AGNs with SMBH masses ≳ 10⁸ h⁻¹ M⊙ or ≲ 10^6.3 h⁻¹ M⊙.
Figure 12. Normalized cumulative probability distribution functions for the specific star formation rates of void galaxies (black diamonds) and non-void galaxies (black circles).
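A sketch of the classification and the resulting AGN fraction, assuming per-galaxy arrays of Bondi and Eddington accretion rates and SMBH masses (placeholder data below):

```python
import numpy as np

def agn_fraction(mdot_bondi, mdot_eddington, m_bh, m_bh_min=3e6):
    """Fraction of galaxies with Bondi/Eddington > 0.05 among massive SMBHs."""
    well_seeded = m_bh >= m_bh_min                # avoid seeding artifacts
    active = (mdot_bondi[well_seeded] / mdot_eddington[well_seeded]) > 0.05
    return active.mean()

rng = np.random.default_rng(0)
m_bh = 10.0 ** rng.uniform(6.0, 9.0, 10_000)      # placeholder masses [h^-1 Msun]
mdot_edd = 2.2e-8 * m_bh                          # Eddington rate, arbitrary units
mdot_bondi = mdot_edd * 10.0 ** rng.normal(-2.0, 1.0, 10_000)
print(f"AGN fraction: {agn_fraction(mdot_bondi, mdot_edd, m_bh):.1%}")
```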
The top panels in Figure 13 show the relationships for galaxies with inactive SMBHs, while the bottom panels show the relationships for galaxies with active SMBHs. Results for non-void galaxies are shown in the left panels of Figure 13 and results for void galaxies are shown in the right panels. The observed relationships between stellar mass and SMBH mass from Reines & Volonteri (2015) are also shown in Figure 13 for comparison (red stars: inactive SMBHs; green crosses: active SMBHs). Observational results for galaxies with inactive SMBHs come from a sample of galaxies with dynamical BH masses (Table 3 of Reines & Volonteri 2015), and observational results for galaxies with active SMBHs come from a sample of broad-line AGN (Table 1 of Reines & Volonteri 2015).
From Figure 13, the relationship between SMBH mass and stellar mass for TNG galaxies with inactive BHs is in rough agreement with the observational results from Reines & Volonteri (2015). For galaxies with inactive BHs that have masses $\gtrsim 10^{7}\,h^{-1}\,M_\odot$, there is a tighter relationship between SMBH mass and galaxy stellar mass in the TNG300 galaxies than there is for observed galaxies. In contrast to observed galaxies, there are no TNG300 non-void galaxies with inactive black holes with masses $> 2 \times 10^{9}\,h^{-1}\,M_\odot$ and no TNG300 void galaxies with inactive black holes with masses $> 7 \times 10^{8}\,h^{-1}\,M_\odot$. In the case of non-void galaxies, inactive SMBHs with masses $> 2 \times 10^{9}\,h^{-1}\,M_\odot$ do exist in the TNG300, but all of these objects are located within galaxies that reside in large clusters of galaxies (all of which were omitted from our sample since the focus of our investigation is the field galaxy population).
Similar to the galaxies with inactive SMBHs, TNG300 galaxies with active BHs show a much tighter relationship between stellar mass and SMBH mass than do the observed galaxies from Reines & Volonteri (2015). However, while the slope of the relationship is similar to that of observed galaxies with AGN, the amplitude of the relationship for TNG300 galaxies with AGN is significantly higher than it is for observed galaxies. As a result, at fixed stellar mass, the SMBHs in TNG300 active galaxies are a factor of ∼10 more massive than would be expected based on the best-fitting relationship from Reines & Volonteri (2015) (i.e., the red dashed lines in Figure 13).
Figure 13. Stellar mass vs. SMBH mass for TNG300 galaxies. Top: inactive SMBHs. Bottom: active SMBHs. Left: non-void galaxies. Right: void galaxies. Red stars and green crosses: observational results obtained by Reines & Volonteri (2015). Dashed red lines: best-fitting relationship between SMBH mass and galaxy stellar mass for observed galaxies with AGN in Reines & Volonteri (2015). For clarity of the figure, randomly-selected fractions of the TNG300 data points are plotted (1% in panel a, 10% in panel b, 10% in panel c, and 30% in panel d).
SUMMARY & DISCUSSION
Here we have investigated the properties of voids and void galaxies in the z = 0 snapshot of the cosmological MHD simulation TNG300. The large volume and high spatial resolution of the TNG300 make it possible to study a substantial number of voids and void galaxies.
Voids were identified using a spherical void finding algorithm that was applied to the TNG300 galaxy catalog (i.e., in analogy to observational studies of voids and void galaxies, the voids were identified via underdensities in the distribution of luminous galaxies, not in the distribution of dark matter mass or dark matter halos). From this, a total of 5,078 voids with radii ranging from $2.5\,h^{-1}$ Mpc to $24.7\,h^{-1}$ Mpc were identified. The median radius of the TNG300 voids is $4.4\,h^{-1}$ Mpc, in good agreement with the typical sizes of voids that have been found in TNG300 (Dávila-Kurbán et al. 2023) and in previous simulations that have computational volumes similar to that of the TNG300 (see, e.g., Paillas et al. 2017; Habouzit et al. 2020).
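To make the procedure concrete, here is a much-simplified sketch of growing one spherical void around a candidate center. The underdensity threshold, the search grid, and the neglect of box periodicity are illustrative assumptions, not the exact choices used to build the catalog.

```python
import numpy as np
from scipy.spatial import cKDTree

def grow_void(center, tree, nbar, delta_max=-0.8,
              radii=np.arange(1.0, 30.0, 0.25)):
    """Grow a sphere until the integrated galaxy density contrast
    delta(r) = N(<r) / (nbar * V(r)) - 1 rises above delta_max.

    center : candidate void center (3-vector)
    tree   : cKDTree built on the galaxy positions
    nbar   : mean galaxy number density of the box
    Returns the last radius that still satisfies the threshold,
    or None if even the smallest sphere is too dense.
    """
    r_void = None
    for r in radii:
        n_enclosed = len(tree.query_ball_point(center, r))
        volume = 4.0 / 3.0 * np.pi * r**3
        if n_enclosed / (nbar * volume) - 1.0 > delta_max:
            break
        r_void = r
    return r_void

# Placeholder galaxy catalog in a toy 100^3 box (periodicity ignored).
rng = np.random.default_rng(1)
galaxies = rng.uniform(0, 100, size=(20_000, 3))
tree = cKDTree(galaxies)
print(grow_void(np.array([50.0, 50.0, 50.0]), tree, nbar=20_000 / 100**3))
```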
As expected (see, e.g., Sheth & van de Weygaert 2004), the radial underdensity profiles of the TNG300 voids follow a reverse spherical top-hat profile. This is the case whether luminous tracers of the underdensity (i.e., luminous galaxies) or dark matter particles are used to compute the profiles.
Recently, Schuster et al. (2023) performed an in-depth study of void profiles in the Magneticum suite of cosmological MHD simulations (see, e.g., Dolag et al. 2016 and Hirschmann et al. 2014 for other studies that have used Magneticum). The authors find the density profiles of isolated voids to be similar for various void sizes, spatial resolutions, and mass scales. Their stacked, isolated, DM void profiles are similar to ours, although our DM profile (see Figure 2) has a flatter interior out to the void radius. However, since we do not investigate void profiles as functions of void shape or size, it is difficult to make direct comparisons. Regardless, the similarities to our galaxy number density profile corroborate their conclusion that the physical properties of voids are universal characteristics that are independent of tracer type and resolution.

Dávila-Kurbán et al. (2023) obtained integrated galaxy number density profiles for "voids-in-clouds" and "voids-in-voids" ("S-type" and "R-type" voids in their notation) in TNG300. To obtain their void catalog, Dávila-Kurbán et al. (2023) used the void finding algorithm of Ruiz et al. (2015), a modified version of the void finding algorithm that we employed, but with a few differences. Dávila-Kurbán et al. (2023) considered candidate void centers to be Voronoi cells constructed in the subhalo field that had density contrasts < −0.8. They also adopted a stricter integrated underdensity contrast threshold of $\Delta_{\rm void} \leq -0.9$ for their candidate voids. In addition, they did not allow for any overlap between voids; instead, they rejected all spheres that overlapped already established, larger voids. Because of this, Dávila-Kurbán et al. (2023) identify far fewer voids in TNG300 than we do here (82 voids vs. 5,078 voids), and the radii of their voids are restricted to a much narrower range than the radii of the voids in this work ($7-11\,h^{-1}$ Mpc vs. $2.5-24.7\,h^{-1}$ Mpc). Because of the differences in the void finding algorithms and the size ranges of the resulting voids, we would expect some differences between our results and those of Dávila-Kurbán et al. (2023) and, indeed, this is the case when we compare the two catalogs of "voids-in-clouds".
In particular, the ridges of the profiles for "voids-in-clouds" in Dávila-Kurbán et al. (2023) only reach maxima of ∼0.3, which is considerably lower than the maximum for our median profile (i.e., 0.6). However, the profiles of "voids-in-voids" in Dávila-Kurbán et al. (2023) are similar to the luminous profiles that we find for our "voids-in-voids".
In the observed Universe, Sánchez et al. (2017) provide 2D photometric galaxy number density profiles for voids in the Dark Energy Survey (DES; Flaugher 2005; Dark Energy Survey Collaboration et al. 2016). These voids were found with a circular void finder that utilized projected density maps of various thicknesses. Much like our voids, the interiors of the DES voids are nearly empty of galaxies. However, the DES voids do not have flat interiors and instead gradually increase in density out to the ridges of the voids. It is unclear whether this is due to projection effects caused by the use of projected 2D slices.
Furthermore, several authors have provided void-galaxy cross-correlation functions for large voids in various SDSS, SDSS Baryon Oscillation Spectroscopic Survey (BOSS; Dawson et al. 2013), and SDSS extended BOSS (eBOSS; Alam et al. 2021) Data Releases (see, e.g., Nadathur et al. 2019; Hamaus et al. 2020; Woodfinden et al. 2022). For instance, Nadathur et al. (2019) and Woodfinden et al. (2022) report voids with very empty and flat interiors ($\xi_0 \sim -1.0$) out to around $20\,h^{-1}$ Mpc that reach maxima of $\xi_0 \sim 0.05$ at $\sim 60\,h^{-1}$ Mpc. In addition, Hamaus et al. (2020) report values of $\xi_0$ that rise from ∼ −0.90 in the innermost regions to −0.55 by half the effective void radius and 0.2 at one effective radius. Thus, the innermost regions of our TNG300 voids appear to have a higher galaxy density than those in the observed Universe, but their profiles rise more gradually out to the effective radii of the voids. Despite this, the ridges of the TNG300 voids have a galaxy overdensity that is not seen in the observed Universe.
In terms of the detectability of our voids, the small size of most of our voids means that their average galaxy number density is orders of magnitude larger than what is available in most current observational surveys.
For instance, we report an average galaxy number density within TNG300 voids of $9.4 \times 10^{-2}\,h^{3}$ Mpc$^{-3}$, whereas Mao et al. (2017) found an average galaxy number density of $3.6 \times 10^{-4}\,h^{3}$ Mpc$^{-3}$ in their SDSS BOSS DR12 void member catalogs. Because of this, it is unlikely that the smaller voids we report would be detected in current-generation surveys.
We find that the TNG300 voids are more devoid of galaxies than they are of dark matter, and this result is independent of the local background density within which the voids are embedded (i.e., voids-in-voids vs. voids-in-clouds). That is, within the voids, mass does not trace light. This result agrees with previous studies that have compared luminous tracers within voids to the underlying dark matter field (see, e.g., Ricciardelli et al. 2014; Pollina et al. 2017). In particular, Pollina et al. (2017) used the Magneticum suite to test the linearity of the bias of luminous tracers compared to the underlying dark matter field. There, the authors found a linear relationship between the density contrast of tracers and the density contrast of dark matter within voids when galaxies, clusters, or AGN are used as tracers.
In addition, we note that previous work on the locations of satellite galaxies in the Illustris-1 (Vogelsberger et al. 2014a,b; Genel et al. 2014; Sijacki et al. 2015) and TNG100 simulations has demonstrated that mass does not trace light in host-satellite systems (see, e.g., Brainerd 2018; McDonough & Brainerd 2022).
Hence, the distribution of luminous galaxies in a ΛCDM universe is an unreliable tracer of the dark matter distribution in overdense regions of space. However, the linear bias between the galaxy and dark matter distributions in underdense regions still makes galaxies valuable probes of the density field within the linear regime of voids.
In TNG300 we identified a total of 75,220 void galaxies and 527,454 non-void field galaxies, and systematic differences between the void and non-void galaxies are clear. Compared to the non-void galaxies, the void galaxies are, on average, younger, bluer in color, less metal enriched, have lower stellar masses, and have smaller physical extents. The luminosity functions of both void and non-void galaxies exhibit similar faint-end slopes, but the luminosity of an $L^*$ non-void galaxy is ∼70% greater than the luminosity of an $L^*$ void galaxy, consistent with void galaxies being smaller and less massive than non-void galaxies on average. In addition, void galaxies have a somewhat higher specific star formation rate than non-void galaxies and, in the case of galaxies with central SMBHs with masses $\gtrsim 3 \times 10^{6}\,h^{-1}\,M_\odot$, void galaxies have a somewhat higher AGN fraction than non-void galaxies. These results are in agreement with previous studies that find void galaxies to be bluer, lower in stellar mass, and less metal enriched than non-void galaxies (see, e.g., Rojas et al. 2004; Florez et al. 2021; Rosas-Guevara et al. 2022) and that have concluded that the AGN fraction is not strongly dependent upon the local matter density of a galaxy (see, e.g., Carter et al. 2001; Karhunen et al. 2014; Habouzit et al. 2020).
The relationship between central SMBH mass and host galaxy stellar mass was also investigated. In the case of TNG300 galaxies with inactive SMBHs, the SMBH mass-stellar mass relationships of void and non-void galaxies are in rough agreement with the results obtained by Reines & Volonteri (2015) for observed galaxies. However, for galaxies with SMBH masses $\gtrsim 10^{7}\,h^{-1}\,M_\odot$, the SMBH mass-stellar mass relationship is considerably tighter for TNG300 galaxies than it is for observed galaxies. A relationship between SMBH mass and stellar mass that is tighter for TNG300 galaxies than for observed galaxies is also shown by void and non-void TNG300 galaxies with active central black holes. In both cases, the tightness of the relationship for simulated galaxies may simply reflect the relative ease with which both parameters can be obtained in simulation space. Lastly, while the slopes of the SMBH mass-stellar mass relationships for active TNG300 void and non-void galaxies agree well with that of observed galaxies, at fixed stellar mass the SMBHs in active TNG300 galaxies are ∼10 times more massive than would be expected based on the best-fitting relationship from Reines & Volonteri (2015).
The lower host galaxy stellar masses and higher SMBH masses of TNG300 AGN compared to those of Reines & Volonteri (2015) could be caused by several factors. For example, in TNG300, a SMBH of mass $6.2 \times 10^{6}\,M_\odot$ is seeded into any friends-of-friends halo whose mass exceeds $7.3 \times 10^{10}\,M_\odot$, which is based on the host-halo relationship of Di Matteo et al. (2008) and Sijacki et al. (2009). This initial mass could be too large, resulting in more massive SMBHs in TNG300 by z = 0. This would, however, also affect the relationships in the top panels of Figure 13. Another explanation could involve the AGN feedback models in TNG300, which describe how energy is injected into the galaxies and the circumgalactic medium (CGM). Zinger et al. (2020) find the AGN feedback channel of TNG300 AGN to be both highly "ejective" and "preventative". This means the AGN state is highly efficient at expelling star-forming gas from the galaxy and increasing the entropy of the CGM, which strongly quenches future star formation within the galaxy.
Therefore, if the feedback injection is over-tuned, or if TNG300 galaxies spend too long in their "active" state, this could cause the host galaxy stellar masses of TNG300 AGN to be lower than those of Reines & Volonteri (2015).
Our results for the systematic differences between void and non-void galaxies in the TNG300 simulation are consistent with the expectation that the two populations of galaxies underwent somewhat different evolutionary paths, with non-void galaxies forming in regions of space that had both higher gas densities and higher galaxy densities than void galaxies. In a universe in which structure forms hierarchically, this naturally leads to non-void galaxies forming earlier than void galaxies (i.e., due to biased galaxy formation in which the highest peaks collapse first) and becoming larger on average than void galaxies (i.e., due to both a larger local reservoir of gas and a higher frequency of galaxy collisions outside the voids). Here we have investigated void and non-void galaxy properties in a single simulation snapshot (corresponding to the present epoch) and, therefore, we cannot make definitive statements about the degree to which the evolutionary paths of the TNG300 void and non-void galaxies differed, nor about the ways in which those differences affected their physical properties at z = 0. Further work, concentrated on the details of the evolution of void and non-void galaxies over cosmic time, will be necessary to establish and quantify differences in the evolutionary paths and the resulting effects on the natures of the two populations of galaxies.
Figure 2. Average radial number density contrast profiles for the voids, computed using concentric spherical shells centered on the void centers. Radii of the shells are given in units of the void radius. Squares: number density contrast of the luminous galaxies. Crosses: number density contrast of the dark matter particles.
Figure 3. Mean integrated number density contrast computed using luminous galaxies (points) and dark matter particles (lines). Blue line and blue circles: results obtained using all voids in the simulation. Yellow line and red crosses: results for voids in regions of the simulation that have a high local background density ("voids-in-clouds").
Figure 4. Probability distributions for the radii of void galaxies (diamonds) and non-void galaxies (circles).
Figure 5. Optical color-magnitude relations for non-void galaxies (panel b) and void galaxies (panel d). Black contours: linearly spaced density contours, computed using all galaxies in each sample. Gray points: 10% of the non-void galaxies, randomly selected from the complete sample (panel b), and 100% of the void galaxies (panel d). Red crosses indicate the red and blue peaks for each of the distributions. Top panels: normalized probability distributions.
Figure 7. Luminosity functions in the SDSS -band for void galaxies (diamonds) and non-void galaxies (circles). Dotted line: best-fitting Schechter luminosity function for void galaxies. Dashed line: best-fitting Schechter luminosity function for non-void galaxies.
Figure 8. Differential stellar mass functions for void galaxies (diamonds) and non-void galaxies (circles). Dashed lines indicate the boundaries of the stellar mass bins used in the analysis of stellar and gas metallicities (see text).
Figure 9. Left: Average stellar chemical abundance ratios for void galaxies (diamonds) and non-void galaxies (circles).
Figure 10. Same as Figure 9, but for the chemical abundance ratios of the gas.
Figure 12. Normalized cumulative probability distribution functions for the specific star formation rates of void galaxies (black diamonds) and non-void galaxies (black circles).
Figure 11. Specific star formation rate vs. stellar mass for non-void galaxies (panel b) and void galaxies (panel c). Black contours: density contours, evenly spaced in the logarithm, computed using all galaxies in each sample. Gray points: 5% of the non-void galaxies, randomly selected from the complete sample, and 100% of the void galaxies. Red crosses indicate the central peaks of the distributions. Top: normalized 1D probability distributions for the stellar masses of non-void and void galaxies. Side: normalized 1D probability distributions for the sSFR of void galaxies (diamonds) and non-void galaxies (circles).
A Multi-Class Deep Learning Approach for Early Detection of Depressive and Anxiety Disorders Using Twitter Data
Social media occupies an important place in people's daily lives, where users share various contents and topics such as thoughts, experiences, events and feelings. The massive use of social media has led to the generation of huge volumes of data. These data constitute a treasure trove, allowing the extraction of large amounts of relevant information, particularly by involving deep learning techniques. In this context, various research studies have been carried out with the aim of detecting mental disorders, notably depression and anxiety, through the analysis of data extracted from the Twitter platform. However, although these studies were able to achieve very satisfactory results, they relied mainly on binary classification models treating each mental disorder separately. Indeed, it would be better if we managed to develop systems capable of dealing with several mental disorders at the same time. To address this point, we propose a well-defined methodology involving the use of deep learning to develop effective multi-class models for detecting both depression and anxiety disorders through the analysis of tweets. The idea consists in testing a large number of deep learning models, ranging from simple to hybrid variants, to examine their strengths and weaknesses. Moreover, we involve the grid search technique to help find suitable values for the learning rate hyper-parameter, due to its importance in training the models. Our work is validated through several experiments and comparisons, considering various datasets and other binary classification models. The aim is to show the effectiveness of both the assumptions used to collect the data and the use of multi-class models rather than binary class models. Overall, the results obtained are satisfactory and very competitive compared to related works.
Introduction
In this research, we are interested in analyzing social data from Twitter (tweets) to help detect psychological disorders, more specifically depression and anxiety disorders. Millions of people are now living with mental disorders, which are one of the leading causes of ill health worldwide. Therefore, early detection is crucial for rapid intervention in order to reduce the escalation of these disorders. In what follows, we first provide an overview of depression and anxiety disorders, then highlight the use of the Twitter platform to help deal with them and finally summarize the paper structure.
Table 1. Differences and commonalities between depressive and anxiety disorders [2].
Psychological diagnoses:
In common, with the same degree: difficulty concentrating, fear, excessive worry, nightmares, disturbed sleep, fluctuations in appetite or weight, agitation, anxiety, isolation (absenteeism) and sexual inhibition.
In common, but of different degree: sad/melancholy (depression ***, anxiety *); intense fatigue (loss of energy) (depression ***, anxiety *); suicidal thoughts (depression ***, anxiety *).
Not in common: for depression, loss of interest (loss of pleasure = anhedonia, despair about the future), feelings of guilt or failure and low self-esteem; for anxiety, dizziness and heart palpitations.
Detection of Depression and Anxiety Disorders on the Twitter Platform
In general, social media allows users to post and share their feelings and moods. This has significantly helped in analyzing such content in order to understand several mental disorders and make predictions accordingly. More specifically, the growing popularity of Twitter (currently known as the X platform) has made it an excellent data source for performing such content analyses, in particular for depression and anxiety detection. Indeed, people with severe symptoms of mental disorders are affected in their professional, family and social lives. This is why the automatic detection of these symptoms through social media would have important implications for those affected.
In this paper, we focus on the analysis of data extracted from the Twitter platform (i.e., tweets) with the aim of developing models capable of detecting mental disorders in users, more specifically depression and anxiety. In this regard, much research has been conducted in order to understand the statements expressed through tweets and to classify them into positive and negative sentiments while taking into account certain parameters (e.g., population, language, etc.). Traditional approaches used classic machine learning algorithms such as decision trees and SVMs (support vector machines) (see, for instance, [3][4][5][6][7][8][9]). However, as the data volumes have become very large, recent research has shifted towards deep learning techniques such as recurrent neural networks (RNN) and convolutional neural networks (CNN) (see, for example, [10,11]).
Even if the detection of depressive and anxiety disorders using deep learning can give satisfactory results, these approaches mainly rely on binary classification models treating each mental disorder separately (i.e., depressive or non-depressive / anxious or non-anxious). This is because dealing with one single mental disorder is easier. Table 1 shows the difficulty of distinguishing between these mental disorders due to the existence of several symptoms in common (e.g., disturbed sleep, fluctuations, etc.). On the other hand, some symptoms that are not in common between depression and anxiety disorders (e.g., dizziness, heart palpitations, etc.) can overlap with other disorders such as heart disease and cancer. Thus, it would be better if we managed to develop effective models capable of treating more than one mental disorder at the same time.
To fill this gap, we propose a well-defined methodology involving the use of deep learning so as to develop efficient multi-class models for detecting depression and anxiety via tweet analysis. The objective is to classify tweets into three distinct classes: normal, potentially depressive and potentially anxious. This multi-classification approach should allow a better understanding and a more precise assessment of the different nuances linked to these two mental disorders when they are expressed in tweets, and thus improve the sensitivity and specificity of their detection.
The basic idea of our proposal is to build several multi-class deep learning models, considering both simple and hybrid variants through an efficient combination of different models, in order to test them all. To validate our proposal, we first evaluate the performance of the tested models using different metrics. Then, the well-performing models are used to classify tweets from other datasets. Finally, we compare their performances with binary deep learning models that classify depressive and anxious disorders separately. As a result, the accuracy of our models reaches up to 93%, which is very competitive with other related works, on the one hand, and shows more accuracy than binary models that separately predict depressive and anxious disorders, on the other hand.
Paper Structure
The rest of this paper is organized as follows: Section 2 reviews and summarizes some related works on depression and anxiety detection, with a special focus on those involving the Twitter platform. Section 3 provides the details of the proposed methodology for the detection of depressive and anxious disorders using multi-class deep learning models. Section 4 summarizes the experimental stage, gives a set of numerical results and discusses and analyzes the obtained results. Finally, Section 5 provides some concluding remarks.
Related Works
Many people around the world suffer from mental disorders due to several factors such as quality of life and stress. Consequently, intensive research efforts have been made in terms of diagnosis and management. In this regard, the evolution of computing technologies has further supported these efforts in different ways, notably by involving artificial intelligence [12]. Indeed, as reported in [13], artificial intelligence methods could improve psychotherapy by providing therapists and patients with real-time or near-real-time recommendations based on the patient's response to treatment, especially since 40% of patients do not respond to psychotherapy as planned. In particular, machine learning and data mining techniques can be used to analyze a patient's history to diagnose a problem, thereby helping to mimic human reasoning or make logical decisions [12].
Much research has been conducted on the detection of depressive and anxiety mental disorders through social media platforms [3][4][5][6][7][8][9][10][11], in particular using Twitter, while considering different factors such as population, period, language, etc. Most of these studies rely on supervised machine learning models for text classification using either traditional learning techniques such as SVM, RF, NB and LR, or deep learning approaches such as RNN, LSTM, GRU, Bi_RNN, Bi_LSTM and Bi_GRU. In addition, some approaches are designed around the hybridization of different models, such as combining different variants of CNN with RNN (see, for instance, [33,37]). The general scheme of this kind of analysis mainly consists in collecting data according to some assumptions and hypotheses (i.e., keywords, location, etc.), preprocessing these data, labeling the data according to the target classes, extracting the features, training the adopted models and finally evaluating their performances so that they can be deployed (i.e., they become ready for use). Tables 2-4 summarize and compare some typical research studies according to the classification techniques used.
Research Methodology
The proposed process uses multi-class classification models to categorize tweets as "normal", "potentially depressed" or "potentially anxious". In order to achieve these objectives, we rely on a rigorous methodology which allows us to obtain efficient classifiers by exploiting Twitter data. This process carries out a clear sequence of well-defined phases, as illustrated in Figure 1. In the following, we detail each phase by providing explanations of its role within the system.
Dataset Preparation
The goal of this phase is to obtain a large number of relevant tweets. To do so, four steps are required. First, raw data are collected using dedicated tools. Then, these data are preprocessed to make them ready for use. Next, the preprocessed data are labeled in order to bind them to one among the three classes, namely "normal", "potentially depressed" and "potentially anxious". Finally, the labeled data are balanced so that their numbers are approximately equal.
Data Collection
The aim of this step is to collect a large dataset of tweets written in English. The period of the tweets related to depression and anxiety is from 1 December 2019 to 31 December 2021. This period corresponds to the circumstances of the COVID-19 pandemic, where many people were affected by the requirements of confinement, isolation, the risk of illness, the loss of loved ones, etc. These poor living conditions encouraged people to use social media to express their feelings. In contrast, the period of the tweets related to normal behaviors is from 25 January 2022 to 31 January 2022.
The keywords used to collect the data were carefully inspired by the symptoms of depression and anxiety summarized in Table 1. This procedure for collecting data from Twitter is widely adopted by several deep learning approaches for many purposes. In what follows, we give some typical cases. For instance, Shen et al. collected data for depression detection using keywords close to "(I'm/I was/I am/I've been) diagnosed depression" [36]. These data were reused in other works [5,28,[36][37][38] for different purposes. Chang et al. used the disease names 'Borderline, bpd, bipolar' as keywords to predict borderline personality disorder (BPD) and bipolar disorder (BD) [39]. In [40], Wang collected data based on the names of five dietary supplements 'Melatonin, Kava, Ginkgo, Biloba, Ginseng' to predict depression, anxiety and mood disorders. Note that the use of a single word as a keyword (e.g., the name of a disease or a food supplement) does not confirm that the user is sick, so the ambiguity rate is systematically high. In contrast, using these words while indicating one or more symptoms within an explanatory sentence may reduce the rate of ambiguity. This is because such sentences correspond to user statements, and thus their content is more likely to contain negative sentiments and expressions that help train the models.
To generate depressive and anxiety tweets, we first used patterns close to: "I am/was/have been diagnosed/identified with depression/anxiety". The aim is to target users who self-report their issues. Then, we intensified the search around these data using other keywords related to both common and non-common symptoms between depression and anxiety disorders. For common symptoms, we used several verbs like "feel", "suffer", "want", "can", "be", "have" in several forms (conjugated in the past and the present, in negative and affirmative forms, depending on the targeted meaning) combined with words related to "sleep", "appetite", "fatigue", "suicide", "death", "sadness", "melancholy", "fear", "worry", in several forms (nouns, adjectives, gerunds, in addition to some of their synonyms). The degree of a given symptom was expressed using adverbs such as "so", "very", "little" (e.g., so sad, a little sad).
In the same way, we generated depressive and anxiety tweets based on the symptoms which are not in common. For depression disorder, we used keywords close to "loss of pleasure", "despair about the future", "feelings of failure". Regarding anxiety disorder, we used keywords close to "dizziness", "heart palpitations", "panic attack". All these keywords were used in several forms, such as nouns, adjectives and gerunds, in addition to some of their synonyms. Finally, normal tweets were generated based on keywords related to positive sentiments and feelings such as "happiness", "love" and "beauty". Table 5 gives typical examples of such keywords used within some parts of sentences that can appear in tweets. Our choice to create our own dataset can be summarized in two main points. First, in the context of deep learning, it is better to rely on large volumes of data in the hope that they lead to good performances. Second, as one of the goals of our paper is to show the effects of the nuances between depression and anxiety disorders on the training process, it is better to rely on our own datasets, provided that they follow a robust method leading to reliable data. On another side, one might ask whether the training of our models could be done using data extracted from other sources such as statements, reports and questionnaires of those affected in hospitals and clinics. Unfortunately, social media have their own specificities (post forms, language used, emoticons, multimedia contents, etc.). So, even if a given user is affected by a mental disorder, she/he will most likely have adapted to the way social media are used. Therefore, ideally, the models should be trained using data extracted from social media platforms.
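As a rough illustration of this collection step, the sketch below uses the Twarc2 Python library (mentioned in the experimental setup) to run one keyword query over the stated period. The bearer token and the exact query string are placeholders, not the authors' actual queries, and search_all requires Academic Research access to the Twitter API.

```python
import datetime
from twarc import Twarc2

# Placeholder credentials; the authors' exact queries are not published.
client = Twarc2(bearer_token="YOUR_BEARER_TOKEN")

# One example pattern among the many described above.
query = '("diagnosed with depression" OR "diagnosed with anxiety") lang:en -is:retweet'

start = datetime.datetime(2019, 12, 1, tzinfo=datetime.timezone.utc)
end = datetime.datetime(2021, 12, 31, tzinfo=datetime.timezone.utc)

tweets = []
# search_all pages through the full archive between start and end.
for page in client.search_all(query=query, start_time=start, end_time=end):
    tweets.extend(tweet["text"] for tweet in page.get("data", []))

print(f"collected {len(tweets)} tweets")
```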
Preprocessing of Data
The data collection phase results in building three datasets, denoted as D0, D1 and D2, with a total size of over seven million tweets, as shown in Table 6. Unfortunately, these data are unclear, incomplete, unstructured and contain errors and redundancy; therefore, it is not recommended to analyze them directly. This is why data preprocessing is a much-needed step to obtain relevant data. In our methodology, we adopted 14 preprocessing techniques by removing: (1) emojis, (2) emoticons, (3) URLs, (4) hashtags (#), (5) mentions (@name), (6) special characters, (7) punctuation from text, (8) symbols, (9) digits, (10) repetitive letters from words, (11) extra whitespace, (12) uppercase letters, (13) contractions (e.g., "It's" becomes "It is") and (14) NaN values and duplicates in the text column. Table 6 gives the numbers of tweets before and after preprocessing the collected data. The word clouds are given in Figure 2, which shows a visual representation of the most used keywords (tags) in the preprocessed data of datasets D0, D1 and D2.
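A minimal sketch of how a few of these removal steps could be chained with regular expressions is shown below (illustrative only; the numbering in the comments points back to the list above, and step (13), expanding contractions, would typically need a dedicated lookup table or library).

```python
import re
import pandas as pd

def preprocess_tweet(text: str) -> str:
    """Apply a subset of the 14 cleaning steps described above."""
    text = re.sub(r"http\S+|www\.\S+", " ", text)   # (3) URLs
    text = re.sub(r"#\w+", " ", text)               # (4) hashtags
    text = re.sub(r"@\w+", " ", text)               # (5) mentions
    text = re.sub(r"[^\x00-\x7f]", " ", text)       # (1)-(2) emojis/emoticons (rough)
    text = re.sub(r"\d+", " ", text)                # (9) digits
    text = re.sub(r"[^\w\s]", " ", text)            # (6)-(8) special chars/punctuation/symbols
    text = re.sub(r"(.)\1{2,}", r"\1\1", text)      # (10) repeated letters ("soooo" -> "soo")
    text = text.lower()                             # (12) uppercase letters
    return re.sub(r"\s+", " ", text).strip()        # (11) extra whitespace

df = pd.DataFrame({"text": ["I'm soooo saaad today!!! #depressed http://t.co/x @friend"]})
df["clean"] = df["text"].map(preprocess_tweet)
df = df.dropna(subset=["clean"]).drop_duplicates(subset=["clean"])  # (14) NaN and duplicates
print(df["clean"].iloc[0])
```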
Data Labeling
The next step is data labeling; it implies assigning a label to each tweet in the datasets based on its class. The tweets from datasets D0, D1 and D2 are bound to the three classes "normal", "potentially depressed" and "potentially anxious", respectively. Therefore, we labeled tweets from dataset D0 with the value '0', tweets from dataset D1 with the value '1' and, finally, tweets from dataset D2 with the value '2'. This data labeling aims to build classification models that only classify tweets as potentially positive towards depressive and anxiety mental disorders or not; thus, the analysis is done at the tweet level. The behaviors of the concerned users on social media platforms can then be analyzed through other systems, which further process user data in order to make decisions (user-level analysis).
In general, data collected from social media should always be taken with a certain degree of confidence. This is why we collected a large volume of data relating to users self-reporting their cases, in order to increase the degree of confidence in the statements contained in the tweets. Moreover, according to the above-stated objectives, our models may allow a certain tolerance regarding the confidence of tweets toward mental disorders, because they do not make decisions about users but only classify tweets for further processing. In addition, large volumes of data are generally more suitable for deep learning approaches in order to obtain good results.
Balancing Data
After the data labeling of datasets D0, D1 and D2, they are merged into a single dataset denoted as Main_dataset. Imbalanced datasets refer to those for which the target classes have an uneven distribution of observations, leading to the appearance of minority and majority classes [41]. This risks producing models with poor predictive performance, particularly for the minority classes. Regarding our dataset, Table 6 shows that, after preprocessing, the contents of datasets D0, D1 and D2 represent approximately 32.00%, 32.63% and 35.37%, respectively. Consequently, our main dataset is quite balanced. Next, the Main_dataset is randomly divided into three balanced datasets that we refer to as Train_dataset, Test_dataset and Eval_dataset, as shown in Figure 3. The Train_dataset contains 70% of the tweets from each of the datasets D0, D1 and D2, which represents 70% of the total tweets from Main_dataset; this is used to train the models. The Test_dataset contains 15% of the tweets from each of the datasets D0, D1 and D2, which represents 15% of the total tweets from Main_dataset; this is used as a test dataset throughout the models' training. Finally, the Eval_dataset contains the remaining tweets (about 15% of the total tweets); this is used in the evaluation phase.
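For instance, the stratified 70/15/15 split could be sketched with scikit-learn as follows. X and y below are placeholders for the preprocessed tweets and their labels; the authors' actual splitting code is not given.

```python
from sklearn.model_selection import train_test_split

# Placeholder data: X holds preprocessed tweets, y holds labels
# 0 (normal), 1 (potentially depressed), 2 (potentially anxious).
X = ["happy sunny day", "i feel so sad and empty", "my heart is racing again"] * 100
y = [0, 1, 2] * 100

# 70% train, then the remaining 30% split evenly into test and eval;
# stratify keeps the three classes balanced in every split.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)
X_test, X_eval, y_test, y_eval = train_test_split(
    X_rest, y_rest, test_size=0.50, stratify=y_rest, random_state=42)
```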
Tokenization
Tokenization is a crucial procedure in our process. It breaks up each tweet in the dataset into words called tokens. These tokens help understand the context and thus develop the model for natural language processing tasks. In our dataset, the maximum length of tweets is 131 words.
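With the Keras tokenizer, for instance, this step could look like the sketch below; the texts variable is a placeholder for the training tweets, and padding to 131 words follows the maximum length stated above.

```python
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

MAX_LEN = 131  # maximum tweet length in our dataset, in words

texts = ["i feel so sad and empty", "what a happy sunny day"]  # placeholder tweets

tokenizer = Tokenizer(oov_token="<unk>")
tokenizer.fit_on_texts(texts)                       # build the word index
sequences = tokenizer.texts_to_sequences(texts)     # words -> integer ids
padded = pad_sequences(sequences, maxlen=MAX_LEN, padding="post")
```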
Feature Extraction
This phase aims to extract the most important features from tweets. In our case, we use word embedding, which is one of the most popular representations of document vocabulary. It helps extract many useful features of a given word in a document (e.g., context, semantics, etc.). For this task, we rely on the GloVe (Global Vectors) model, which allows obtaining vector representations for words while integrating global statistics of word co-occurrence [42]. GloVe was developed as an open-source project at Stanford University and launched in 2014. Regarding our work, the pre-trained word vectors that are used are the GloVe Twitter word embeddings (200d), which were trained using 2 billion tweets (containing 27 billion tokens and a 1.2 million-word vocabulary). These data are made available under the Public Domain Dedication and License v1.0 [43].
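Loading the pre-trained vectors into an embedding matrix aligned with the tokenizer vocabulary could look like the following sketch. The file name is the standard glove.twitter.27B.200d.txt distribution, and tokenizer is the one fitted in the previous sketch.

```python
import numpy as np

EMBED_DIM = 200

# word -> vector index built from the pre-trained GloVe Twitter file.
embeddings = {}
with open("glove.twitter.27B.200d.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.rstrip().split(" ")
        embeddings[parts[0]] = np.asarray(parts[1:], dtype="float32")

# Rows follow the tokenizer's word index; out-of-vocabulary rows stay zero.
vocab_size = len(tokenizer.word_index) + 1
embedding_matrix = np.zeros((vocab_size, EMBED_DIM))
for word, idx in tokenizer.word_index.items():
    vector = embeddings.get(word)
    if vector is not None:
        embedding_matrix[idx] = vector
```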
Training the Models
In order to build well-performing models for classifying normal, depression and anxiety cases, our proposal is based on:
• An efficient hybridization that combines the CNN model with other types of neural networks, namely (1) Simple RNN, (2) LSTM, (3) GRU, (4) Bidirectional RNN (BiRNN), (5) BiLSTM and (6) BiGRU, to take advantage of the strengths that characterize them. Subsequently, we build hybrid multi-class classifier models according to our multi-labeled dataset of tweets;
• Dealing with the optimization of the learning rate parameter, which is considered one of the most important parameters in deep learning-based tasks. To do so, we first adopt the Adam optimizer while initializing the learning rate parameter to 0.0001 (the smallest value). Then, we call up the grid search optimization technique to find the best learning rate value for each model in the interval [0.0001, 0.001].
The result of each deep learning classifier is saved as knowledge (model.h5) in order to be used to predict normal cases and depressive and anxious disorders.
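As an illustration, one plausible layout of the CNN_BiGRU hybrid in Keras is sketched below. The paper does not specify the layer sizes, so the filter count, kernel size and GRU units are assumptions; the fixed elements (frozen GloVe 200d embeddings, 131-word inputs, three output classes, Adam optimizer) follow the text.

```python
import numpy as np
from tensorflow.keras import layers, models, optimizers, initializers

NUM_CLASSES = 3  # normal, potentially depressed, potentially anxious

def build_cnn_bigru(vocab_size, embedding_matrix, max_len=131, lr=1e-4):
    """One plausible CNN + BiGRU layout; layer sizes are assumptions."""
    model = models.Sequential([
        layers.Input(shape=(max_len,)),
        layers.Embedding(
            vocab_size, 200,
            embeddings_initializer=initializers.Constant(embedding_matrix),
            trainable=False),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Bidirectional(layers.GRU(64)),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer=optimizers.Adam(learning_rate=lr),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Placeholder vocabulary and embedding matrix; in practice these come
# from the tokenization and GloVe sketches above.
vocab_size = 20_000
embedding_matrix = np.zeros((vocab_size, 200))
model = build_cnn_bigru(vocab_size, embedding_matrix)
model.save("model.h5")  # the "knowledge" file mentioned in the text
```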
Evaluation of Models
In this phase, we evaluate the performance of all the models built. For this purpose, we use the four metrics given by Formulas (1)-(4), namely accuracy, precision, recall and F1-score, due to their wide use in the literature. These measures are calculated according to the confusion matrix, which summarizes the number of correct and incorrect predictions made by a given classifier: (1) True Positives (TP): when the current and predicted values are positive with respect to a given class (i.e., both the current label and the label output by the model match the class label); (2) True Negatives (TN): when the current and predicted values are negative with respect to a given class (i.e., neither the current label nor the label output by the model matches the class label); (3) False Positives (FP): when the current value is negative while the predicted value is positive with respect to a given class; (4) False Negatives (FN): when the current value is positive while the predicted value is negative with respect to a given class.

$$\mathrm{Accuracy} = \frac{TN + TP}{TN + FP + TP + FN} \qquad (1)$$
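The remaining formulas referenced in the text, (2)-(4), are the standard per-class definitions, stated here for completeness:

$$\mathrm{Precision} = \frac{TP}{TP + FP} \qquad (2)$$

$$\mathrm{Recall} = \frac{TP}{TP + FN} \qquad (3)$$

$$\mathrm{F1\text{-}score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} \qquad (4)$$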
Software and Hardware Configuration
The training of our models was performed on a laptop with an AMD Ryzen 5 4600H with Radeon Graphics processor running at 3.00 GHz and 16 GB of RAM. The tweets composing the datasets were collected by using the Twitter API and the Twarc2 Python library. Regarding the parameters of the training process, we set them empirically as follows: the number of epochs is 20, the batch size is 256, the maximum tweet length is 131 words, the GloVe 200d embeddings are used and the Adam optimizer is adopted as the default optimization algorithm.
Performance of the Developed Models
To build multi-class models for predicting normal, depressive and anxiety tweets, we tested around 100 models ranging from simple to hybrid models combining different types of neural network layers: convolutional, recurrent, attention and bidirectional. Consequently, we found that the following hybrid multi-class classifiers are the most representative typical cases of both success and failure: CNN_RNN, CNN_LSTM, CNN_GRU, CNN_BiRNN, CNN_BiLSTM and CNN_BiGRU. The CNN_BiRNN, CNN_BiLSTM and CNN_BiGRU models are the best in terms of performance for all experiment instances, while the CNN_RNN and CNN_GRU models are the best in terms of the performance improvements obtained by involving the grid search technique. Finally, the CNN_LSTM model represents a failure case where the grid search technique was unable to provide performance improvements. Figure 4 shows the performance of these models in terms of training accuracy and training loss. In particular, the best-performing model is CNN_BiGRU with a learning rate of 0.001.
By setting the learning rate value to 0.001, CNN_RNN was the worst model as it recorded poor accuracy. Moreover, CNN_LSTM and CNN_GRU also showed significant overfitting (the red and blue curves are far from each other). However, this unwanted overfitting effect gradually disappeared by setting the learning rate value to 0.0001. In contrast, the value 0.001 for the learning rate led to better performance for CNN_BiRNN, CNN_BiLSTM and CNN_BiGRU compared to 0.0001, in addition to good behavior regarding overfitting. Figure 4 shows the associated curves (the curves on the left concern the learning rate value 0.0001, while the curves on the right concern the learning rate value 0.001).
The above results suggest that changing the learning rate value of the Adam optimizer has a positive or negative influence on the performance of each model. Thus, we need efficient methods to determine such a value in order to obtain efficient models. In this respect, we adopt grid search, which is a well-known technique serving as a hyperparameter optimizer for each model. The results are given in Tables 7 and 8.
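A minimal version of this grid search, reusing build_cnn_bigru and the padded training data from the earlier sketches, could be written as follows; the candidate values are illustrative points inside the stated interval [0.0001, 0.001], not the authors' exact grid.

```python
import numpy as np

best_lr, best_acc = None, -np.inf
for lr in [1e-4, 3e-4, 5e-4, 1e-3]:  # illustrative grid in [0.0001, 0.001]
    model = build_cnn_bigru(vocab_size, embedding_matrix, lr=lr)
    history = model.fit(padded, np.asarray(y_train), epochs=20, batch_size=256,
                        validation_split=0.15, verbose=0)
    acc = max(history.history["val_accuracy"])
    if acc > best_acc:
        best_lr, best_acc = lr, acc

print(f"best learning rate: {best_lr} (validation accuracy {best_acc:.4f})")
```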
According to Tables 7 and 8, the best accuracy achieved is 93.38%; it corresponds to the CNN_BiGRU model, such that the F1-score of the Normal class is 96%, the F1-score of the Depression class is 91% and the F1-score of the Anxiety class is 93%. Figure 5 illustrates the confusion matrices for both the grid search and fixed learning rate cases. Thus, it can be seen that the grid search could make some improvements in some cases, for which the diagonal concentrates the maximum of correct predictions.
Evaluation and Analysis of the Well-Performing Models
In this section, we evaluate our approach regarding the quality of the data collected and the models built. The objective is twofold: (1) verify the effectiveness of the assumptions used to collect the data and (2) show the effectiveness of using multi-class models rather than binary class models. To this end, we leverage the dataset used in [36] to perform an evaluation against binary class models for depression and anxiety detection. Thus, we randomly selected 12,982 tweets from the Depression Dataset D1 and 2658 tweets from the Non-Depression Dataset D2. After preprocessing these data, we obtained 5955 tweets labeled '1' and 2325 tweets labeled '0'; the resulting dataset is denoted as Shen_dataset. These data were then tested by considering the well-performing models discussed in Tables 7 and 8. The results are given in Table 9.

According to Table 9, one observes that the prediction accuracy on Shen_dataset is average and thus does not show very good results. This is because many depressive tweets were classified as anxious tweets by our models. Indeed, as mentioned in Table 1, there are some common symptoms between depressive and anxiety disorders which may consequently lead to classification errors. Knowing that the tweets of Shen_dataset were collected by using some keywords that overlap with anxiety disorders (e.g., "I am depressed and anxious", "I am too tired", "I am so sad" and "I have depression anxiety suicidal thoughts"), our models most likely classify them as anxiety tweets instead of depressive ones. To check this issue, we reused our dataset to build two binary class models for predicting depression and anxiety separately while keeping the same parameter values. These models are based on the hybridization of CNN and BiGRU. Hence, Main_dataset was divided into two datasets denoted as Dataset1 and Dataset2. Dataset1 contains only normal and depressive tweets labeled, respectively, with '0' and '1', while Dataset2 contains only normal and anxiety tweets labeled, respectively, with '0' and '1'. Once these models were built, we tested the datasets Eval_dataset, Shen_dataset, Dataset1 and Dataset2 to make comparisons and thus draw conclusions. The results are given in Table 10.

According to Table 10, both binary class models classify depressive tweets from Shen_dataset as depressive and anxiety tweets with very high accuracy. Regarding our datasets, the obtained results are much better. For instance, Model_2 was trained to classify depressive tweets. By evaluating Dataset2 (the anxiety dataset), the accuracy is about 86.35%, which means that many anxious tweets were classified as non-depressive. Likewise, by evaluating Dataset1 (the depressive dataset) using Model_3, the accuracy is about 62.96%; this means that most of the depressive tweets were classified as non-anxious. The conclusions we draw from these results can be summarized as follows:
1. The source of the improved accuracy of the studied models comes from the way the data were collected, relying on both common and non-common symptoms instead of only using keywords related to common symptoms between depressive and anxiety disorders.
2. Our multi-class models seem to be more effective than the corresponding binary class models, as they can resolve ambiguities. Indeed, as depressive and anxiety disorders present certain intersections, binary models most likely classify them as positive tweets (i.e., either depressive or anxious tweets) regardless of the model used (see, for instance, the results of using Model_2).
It should be noted that the conclusions drawn concern only the context of our work and can in no way be generalized.
Assessment of Our Proposal
Finally, we objectively assess our proposal against related works. Table 11 provides a comparison between our proposal and some other related works within the same context (i.e., those dealing with depression and/or anxiety disorders based on Twitter data), according to the following criteria:
C1. Mental disorder: this refers to the mental disorder studied, which can be either depression (denoted as Dep) or anxiety (denoted as Anx).
C2. Data collection: this refers to whether the training data were collected using keywords (e.g., symptoms, usernames, etc.) or reused from other datasets.
C3. Dataset size: this refers to the total number of tweets used to train the models.
C4. Type of learning model: this refers to whether the well-performing classifier adopts simple variants (denoted as S) or hybridization (denoted as H) of models.
C5. Type of classification: this refers to whether the well-performing classifier is a binary (denoted as B) or a multi-class (denoted as M) model.
C6. Accuracy achieved: this refers to the accuracy achieved by the well-performing classifier (measured as a percentage).
In view of the foregoing, the main potential advantage of our study is that it can be viewed as a complementary work to existing research focused on the detection of depression and anxiety disorders, as:
1. In contrast to many related works that rely on binary classification, our approach is based on multi-class models;
2. Our study showed that multi-classification may be more efficient than binary class models, as it can better resolve ambiguity issues, although this cannot be generalized;
3. The data were collected based on assumptions involving both common and non-common symptoms between depression and anxiety disorders.
Our approach also shows some drawbacks, which are discussed in the following while trying to propose solutions. It should be noted that these limitations do not only concern our approach but also much of the research within the same context.
1. Although the data were generated according to a well-defined process, we still lack more efficient methods for collecting and labeling the data (tweets). This remains a big challenge for large volumes of data, in contrast to small volumes of data that can be processed and annotated within a reasonable time. As an ongoing work, we are currently studying the use of semantics to help collect and label the data through ontology computing, while considering emojis, emoticons and related contents.
2. In fact, many researchers have embarked on a frantic race to design and improve classification models for the detection of mental disorders through the Twitter platform. Undoubtedly, this is very important, but it should not be an end in itself, because what is more important is to leverage these models in order to perform useful tasks. In this line of thinking, we are currently working to deploy our models within a syndromic surveillance system, in order to improve public health systems.
Figure 1. The proposed methodology for building effective classifiers for mental disorder detection.
Figure 2. Word clouds of the datasets after preprocessing; (a) word cloud of dataset D0; (b) word cloud of dataset D1; (c) word cloud of dataset D2.
Figure 4. Comparison between training and test for accuracy and loss of hybrid models; (a) learning rate 0.001; (b) learning rate 0.0001.
C1. Mental disorder: this refers to the mental disorder studied, which can be either depression (denoted as Dep) or anxiety (denoted as Anx) disorders.
C2. Data collection: this refers to whether the training data were collected using keywords (e.g., symptoms, usernames, etc.) or reused from other datasets.
C3. Dataset size: this refers to the total number of tweets used to train the models.
C4. Type of learning model: this refers to whether the well-performing classifier adopts simple variants (denoted as S) or hybridization (denoted as H) of models.
C5. Type of classification: this refers to whether the well-performing classifier is a binary (denoted as B) or a multi-class (denoted as M) model.
C6. Accuracy achieved: this refers to the accuracy achieved by the well-performing classifier (measured as a percentage).
Table 2. Comparison of recent studies using traditional machine learning approaches to detect mental disorders from different data sources.
Table 3. Comparison of recent studies using simple deep learning approaches to detect mental disorders from different data sources.
Table 5. Typical keywords used as parameters to collect our dataset. Example phrases: "I have had dizziness for more than six months."; "I have had heart palpitations for more than six months."
Table 6. Number of tweets before and after preprocessing sub-steps.
Table 7. The evaluation of our models on the evaluation dataset (Eval_dataset), based on fixed learning rate values for the Adam optimizer.
Table 8. The evaluation of our models on the evaluation dataset (Eval_dataset), by using the grid search optimizer to determine the learning rate value for the Adam optimizer.
Table 9. Prediction of tweets from Shen_dataset using our well-performing models.
Table 10. The CNN-BiGRU classifiers to predict normal cases, depression, and anxiety disorders using different datasets.
| 8,915.2 | 2023-11-27T00:00:00.000 | [
"Computer Science",
"Psychology"
] |
Measurement of event-by-event transverse momentum and multiplicity fluctuations using strongly intensive measures $\Delta[P_T, N]$ and $\Sigma[P_T, N]$ in nucleus-nucleus collisions at the CERN Super Proton Synchrotron
Results from the NA49 experiment at the CERN SPS are presented on event-by-event transverse momentum and multiplicity fluctuations of charged particles, produced at forward rapidities in central Pb+Pb interactions at beam momenta 20$A$, 30$A$, 40$A$, 80$A$, and 158$A$ GeV/c, as well as in systems of different size ($p+p$, C+C, Si+Si, and Pb+Pb) at 158$A$ GeV/c. This publication extends the previous NA49 measurements of the strongly intensive measure $\Phi_{p_T}$ by a study of the recently proposed strongly intensive measures of fluctuations $\Delta[P_T, N]$ and $\Sigma[P_T, N]$. In the explored kinematic region transverse momentum and multiplicity fluctuations show no significant energy dependence in the SPS energy range. However, a remarkable system size dependence is observed for both $\Delta[P_T, N]$ and $\Sigma[P_T, N]$, with the largest values measured in peripheral Pb+Pb interactions. The results are compared with NA61/SHINE measurements in $p+p$ collisions, as well as with predictions of the UrQMD and EPOS models.
I. INTRODUCTION AND MOTIVATION
Ultra-relativistic heavy ion collisions are studied mainly to understand the properties of strongly interacting matter under extreme conditions of high energy densities when the creation of the quark-gluon plasma (QGP) is expected. The results obtained in a broad collision energy range by experiments at the Super Proton Synchrotron (SPS) at CERN, the Relativistic Heavy Ion Collider (RHIC) at BNL, and at the Large Hadron Collider (LHC) at CERN indeed suggest that in collisions of heavy nuclei such a state with sub-hadronic degrees of freedom appears when the system is sufficiently hot and dense.
The phase diagram of strongly interacting matter is most often presented in terms of temperature (T ) and baryochemical potential (µ B ), which reflects net-baryon density. It is commonly believed that for large values of µ B the phase transition is of the first order and turns into a rapid but continuous transition (cross-over) for low µ B values. A critical point of second order (CP) separates these two regions. The phase diagram can be scanned by varying the energy and the size of the colliding nuclei and the CP is believed to cause a maximum of fluctuations in the measured final state particles. More specifically, the CP is expected to lead not only to non-Poissonian distributions of event quantities like multiplicities or average transverse momentum [1,2], but also to intermittent behavior of low-mass π + π − pair and proton production with power-law exponents calculable in QCD [3,4].
The NA49 experiment at the CERN SPS [5] pioneered the exploration of the phase diagram by an energy scan for central Pb+Pb collisions in the range 20A to 158A GeV ($\sqrt{s_{NN}} = 6.3$-$17.3$ GeV), as well as a system size scan at the top SPS energy of 158A GeV. Evidence was found [6,7] that quark/gluon deconfinement sets in at a beam energy of about 30A GeV. Thus the SPS energy range is a region where the CP could be located. At present the search for the critical point is vigorously pursued by the NA61/SHINE collaboration at the SPS [8] and by the beam energy scan program BES at RHIC [9]. The NA49 experiment already measured multiplicity fluctuations in terms of the scaled variance ω of the distribution of event multiplicity N [10,11] and event-by-event fluctuations of the transverse momentum of the particles employing the strongly intensive measure $\Phi_{p_T}$ [12,13]. The present paper reports a continuation of this NA49 study by analyzing two new strongly intensive measures of event-by-event transverse momentum and multiplicity fluctuations, $\Delta[P_T, N]$ and $\Sigma[P_T, N]$ [14,15]. These measures are dimensionless and have scales given by two reference values, namely they are equal to zero in case of no fluctuations and one in case of independent particle production. Unlike $\Phi_{p_T}$ they allow one to classify the strength of fluctuations on a common scale. This paper is organized as follows. In Sec. II the new strongly intensive measures of fluctuations $\Delta[P_T, N]$ and $\Sigma[P_T, N]$ are introduced and briefly discussed. Data sets, acceptance used for this analysis, detector effects, and systematic uncertainty estimates are discussed in Sec. III. The NA49 results on the energy and system size dependences of transverse momentum and multiplicity fluctuations quantified by the new measures are presented and discussed in Sec. IV. A summary closes the paper.
II. STRONGLY INTENSIVE MEASURES OF TRANSVERSE MOMENTUM AND MULTIPLICITY FLUCTUATIONS
In thermodynamics extensive quantities are those which are proportional to the system volume. Examples of extensive quantities in this case are the mean multiplicity or the variance of the multiplicity distribution. In contrast, intensive quantities are defined such that they do not depend on the volume of the system. It was shown [14] that the ratio of two extensive quantities is an intensive quantity, and therefore, the ratio of mean multiplicities, as well as the commonly used scaled variance of the distribution of the multiplicity N, $\omega[N] = (\langle N^2 \rangle - \langle N \rangle^2)/\langle N \rangle$, are intensive measures. Finally, one can define a class of strongly intensive quantities which depend neither on the volume of the system nor on the volume fluctuations within the event ensemble. Such quantities can be truly attractive when studying heavy ion collisions, where the volume of the produced matter cannot be fixed and may change significantly from one event to another. Examples of strongly intensive quantities are mean multiplicity ratios, the Φ measure of fluctuations [16], and the recently introduced ∆ and Σ measures of fluctuations [14,15]. In fact, it was shown [14] that there are at least two families of strongly intensive measures: ∆ and Σ. The previously introduced measure Φ is a member of the Σ-family.
In nucleus-nucleus collisions the volume is expected to vary from event to event and these changes are impossible to eliminate fully. Thus, the strongly intensive quantities allow, at least partly, to overcome the problem of volume fluctuations. Generally, the ∆ and Σ measures can be calculated for any two extensive quantities A and B. In this paper B is taken to be the accepted particle multiplicity, N (B ≡ N), and A the sum of their transverse momenta, $P_T$ (A ≡ $P_T = \sum_{i=1}^{N} p_{Ti}$, where the summation runs over the transverse momenta $p_{Ti}$ of all accepted particles in a given event). Following Refs. [14,15] the quantities $\Delta[P_T, N]$ and $\Sigma[P_T, N]$ are defined as:

$$\Delta[P_T, N] = \frac{1}{\langle N \rangle\,\omega(p_T)}\Big[\langle N \rangle\,\omega[P_T] - \langle P_T \rangle\,\omega[N]\Big] \qquad (1)$$

and

$$\Sigma[P_T, N] = \frac{1}{\langle N \rangle\,\omega(p_T)}\Big[\langle N \rangle\,\omega[P_T] + \langle P_T \rangle\,\omega[N] - 2\big(\langle P_T N \rangle - \langle P_T \rangle\langle N \rangle\big)\Big], \qquad (2)$$

where

$$\omega[P_T] = \frac{\langle P_T^2 \rangle - \langle P_T \rangle^2}{\langle P_T \rangle} \quad \text{and} \quad \omega[N] = \frac{\langle N^2 \rangle - \langle N \rangle^2}{\langle N \rangle}$$

are the scaled variances of the two fluctuating extensive quantities $P_T$ and N, respectively. The brackets $\langle \ldots \rangle$ represent averaging over events. The quantity $\omega(p_T)$ is the scaled variance of the inclusive $p_T$ distribution (all accepted particles and events are used):

$$\omega(p_T) = \frac{\overline{p_T^2} - \overline{p_T}^2}{\overline{p_T}},$$

with the bar denoting inclusive averaging. Equations (1) and (2) can be used only when assuming that $\omega(p_T)$ is not equal to zero. There is an important difference between the $\Delta[P_T, N]$ and $\Sigma[P_T, N]$ measures. Only the first two moments $\langle P_T \rangle$, $\langle N \rangle$ and $\langle P_T^2 \rangle$, $\langle N^2 \rangle$ are required to calculate $\Delta[P_T, N]$, whereas $\Sigma[P_T, N]$ also includes the correlation term $\langle P_T N \rangle - \langle P_T \rangle\langle N \rangle$. Therefore $\Delta[P_T, N]$ and $\Sigma[P_T, N]$ can be sensitive to various physics effects in different ways. In Ref. [14] all strongly intensive quantities containing the correlation term are named the Σ family, whereas those based only on mean values and variances form the ∆ family. As already mentioned, the previously studied [12,13] measure $\Phi_{p_T}$ belongs to the Σ family and obeys the relation:

$$\Phi_{p_T} = \sqrt{\overline{p_T}\,\omega(p_T)}\,\Big[\sqrt{\Sigma[P_T, N]} - 1\Big].$$

With the normalization of ∆ and Σ proposed in Ref. [15] both measures are dimensionless and equal zero in the case of no fluctuations and one in the case of independent particle production. Like ω[N] they have two reference values: ω[N] equals zero when the multiplicity is constant from event to event and one for a Poisson multiplicity distribution. Therefore one can judge whether fluctuations are large (> 1) or small (< 1) compared to independent particle production. However, ω[N] is not a strongly intensive quantity, and in the MIS one finds $\omega[N](N_S\ \mathrm{sources}) = \omega[N](1\ \mathrm{source}) + \langle n \rangle\,\omega[N_S]$, where $\langle n \rangle$ is the mean multiplicity of particles from a single source and $\omega[N_S]$ represents fluctuations of $N_S$.
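To make the definitions concrete, here is a minimal numerical sketch of these estimators (an illustration of Eqs. (1) and (2) only, not NA49 analysis code; it ignores the corrections, acceptance and uncertainty treatment discussed later):

```python
import numpy as np

def delta_sigma(events):
    """Estimate Delta[P_T, N] and Sigma[P_T, N] from a sample of events,
    each given as an array of the accepted particles' p_T values."""
    N  = np.array([len(ev) for ev in events], dtype=float)  # event multiplicities
    PT = np.array([np.sum(ev) for ev in events])            # per-event sum of p_T
    pt = np.concatenate(events)                             # inclusive p_T sample

    w_N  = N.var() / N.mean()      # scaled variance omega[N]
    w_PT = PT.var() / PT.mean()    # scaled variance omega[P_T]
    w_pt = pt.var() / pt.mean()    # inclusive scaled variance omega(p_T)

    corr = np.mean(PT * N) - PT.mean() * N.mean()  # correlation term
    norm = N.mean() * w_pt
    delta = (N.mean() * w_PT - PT.mean() * w_N) / norm
    sigma = (N.mean() * w_PT + PT.mean() * w_N - 2.0 * corr) / norm
    return delta, sigma
```

For independent particle production (e.g. Poissonian N with each particle's p_T drawn independently) both estimators converge to one, reproducing the reference value quoted above.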
A comparison of the properties of $\Delta[P_T, N]$, $\Sigma[P_T, N]$, and $\Phi_{p_T}$ is presented in Table I. The quantities $\Delta[P_T, N]$ and $\Sigma[P_T, N]$ were studied in several models. The results of simulations of the IPM, the MIS, source-by-source temperature fluctuations (example of MIS), event-by-event (global) temperature fluctuations, and anti-correlation between $P_T/N$ and N were studied in Ref. [18]. Predictions from the UrQMD model on the system size and on the energy dependence of $\Delta[P_T, N]$ and $\Sigma[P_T, N]$ are shown in Ref. [15]. Finally, the effects of quantum statistics were discussed in Ref. [19]. The general conclusion is that $\Delta[P_T, N]$ and $\Sigma[P_T, N]$ measure deviations from the superposition model in different ways. Therefore, the interpretation of the experimental results may benefit from a simultaneous measurement of both quantities.

[Table I residue: columns were unit; value for no fluctuations; value in the Independent Particle Model; behavior in the Model of Independent Sources.]
III. DATA SELECTION AND ANALYSIS
The data used for the analysis, event and particle selection criteria, uncertainty estimates and corrections are described in the previous publications of NA49 [12,13] on the measure Φ p T . Here we only recall the key points.
The analysis of the energy dependence of transverse momentum and multiplicity fluctuations uses samples of Pb+Pb collisions at 20A, 30A, 40A, 80A and 158A GeV/c beam momenta (center of mass energies from 6.3 to 17.3 GeV per N + N pair) for which the 7.2% most central reactions were selected. The analysis of the system size dependence is based on samples of p + p, semi-central C+C, semi-central Si+Si, and minimum bias and central Pb+Pb collisions at 158A GeV/c beam momentum. Minimum bias Pb+Pb events were divided into six centrality bins (see Ref. [12] for details) but due to a trigger bias the most peripheral bin (6) is not used in the current analysis. For each bin of centrality the mean number of wounded nucleons N W was determined by use of the Glauber model and the VENUS event generator [20] (see Ref. [12]).
Tracks were restricted to the transverse momentum region 0.005 < p T < 1.5 GeV/c. For the study of energy dependence the forward rapidity range 1.1 < y * π < 2.6 was selected, where y * π is the particle rapidity calculated in the center-of-mass reference system. For the study of system size dependence at 158A GeV/c the rapidity was calculated in the laboratory reference system and restricted to the region 4.0 < y π < 5.5 [12] (it approximately corresponds to 1.1 < y * π < 2.6). As track-by-track identification was not applied, the rapidities were calculated assuming the pion mass for all particles. For the energy scan an additional cut on the rapidity y * p calculated with the proton mass was applied (y * p < y * beam − 0.5) [13,21]. This excludes the projectile rapidity domain where particles may be contaminated by e.g. elastically scattered or diffractively produced protons.
The acceptance in azimuthal angle φ was chosen differently for the study of energy and system size dependence (Fig. 1). For the energy scan a common region of azimuthal angle was selected for all five energies (only particles within the solid curves in Fig. 1 (left) were retained), whereas a wider range was used at 158A GeV/c for the system size study (see Fig. 1 (right)). Together with the track quality criteria and rapidity cuts this results in using only about 5% and 20%, respectively, of all charged particles produced in the reactions.
An additive correction for the limited two-track resolution of the detector was applied to the values of $\Delta[P_T, N]$ and $\Sigma[P_T, N]$. The procedure to determine this correction was analogous to the one used to estimate the corrections for $\Phi_{p_T}$ in Refs. [12,13]. Mixed events were prepared for each of the analyzed data sets and then processed by the detector simulation and reconstruction chain; the correction was determined from the difference of the results for mixed events after detector simulation and reconstruction and before this procedure. The resulting corrections for the data of the energy scan are plotted in Fig. 2 and those for the data of the system size study in Fig. 3. The statistical uncertainties on $\Delta[P_T, N]$ and $\Sigma[P_T, N]$ were obtained via the sub-sample method [12,13]. The systematic uncertainties were estimated by varying event and track cut parameters (the procedures were identical to those applied for $\Phi_{p_T}$ in Refs. [12,13]).

[Figure 1 caption residue: see Refs. [12,13] for details. The solid lines represent the analytical parametrization of acceptance used for further analysis. Left: acceptance used for the energy scan of p_T and N fluctuations, example for 2.0 < y*_π < 2.2. Right: acceptance used for the system size dependence of p_T and N fluctuations, examples for 1.2 < y*_π < 1.4 and 2.4 < y*_π < 2.6. Additional cut on y*_p (see the text) not included. Figure reproduced from Refs. [12,13] where parametrizations of the curves can be found.]
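For illustration, the sub-sample method mentioned above can be sketched as follows; the number of sub-samples is an arbitrary choice, and delta_sigma is the estimator sketched in Sec. II:

```python
import numpy as np

def subsample_uncertainty(events, estimator, k=30, seed=0):
    """Split the event sample into k disjoint sub-samples, evaluate the
    estimator on each, and quote the scatter of the sub-sample results."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(events))
    parts = np.array_split(order, k)
    vals = np.array([estimator([events[i] for i in part]) for part in parts])
    # mean of the sub-sample estimates and its standard error
    return vals.mean(axis=0), vals.std(axis=0, ddof=1) / np.sqrt(k)
```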
IV. RESULTS AND DISCUSSION
The results shown in this section refer to accepted particles, i.e., particles that are accepted by the detector and pass all kinematic cuts and track selection criteria as discussed in Sec. III. The data cover a broad range in p_T (0.005 < p_T < 1.5 GeV/c). The rapidity was restricted to the interval 1.1 < y*_π < 2.6 (forward rapidity) where contamination from beam-produced δ-rays is small. The selected azimuthal angle region is large and represents essentially the whole detector acceptance for the study of the system size dependence at 158A GeV/c (see lines in Fig. 1 (right)). It is more limited for the analysis of the energy dependence since the same region was chosen at all energies (see lines in Fig. 1 (left)). Results are not corrected for limited kinematic acceptance. Such a correction is not possible since it depends on the, in general, unknown correlation mechanism. Instead the limited acceptance should be taken into account in the model calculations when comparing to experimental results. However, corrections for limited two-track resolution of the NA49 detector were applied (see Sec. III and Refs. [12,13]). A possible bias due to particle reconstruction losses and contamination in the accepted kinematic region was estimated to be small and is included in the systematic uncertainty of the results. The measured values of $\Delta[P_T, N]$ are smaller than one, the expectation for independent particle production. For $\Sigma[P_T, N]$, fluctuations for all and positively charged particles are close to the hypothesis of independent particle production (similar to the results on $\Phi_{p_T}$ [13], which belongs to the same family of strongly intensive measures), whereas for negatively charged particles $\Sigma[P_T, N]$ values are higher than one. It was suggested in Refs. [15,19] that values of $\Delta[P_T, N] < 1$ and $\Sigma[P_T, N] > 1$ can be explained as due to effects of Bose-Einstein statistics. Similarly, $\Phi_{p_T} > 0$ was predicted in Refs. [22,23] as a consequence of Bose-Einstein correlations.
A. Energy scan for central Pb+Pb interactions
The measured values of $\Delta[P_T, N]$ and $\Sigma[P_T, N]$ are compared to predictions of the UrQMD [24,25] and EPOS [26,27] models in Fig. 4 (solid and dashed lines, respectively). The models do not simulate a phase transition or the critical point. However, resonance decays and effects of correlated particle production due to energy-momentum, charge and strangeness conservation laws are taken into account. The most central 7.2% interactions were selected for comparison of the energy scan results, in accordance with the real NA49 events. The procedure of selecting the 7.2% most central events was the following: a sample of minimum bias Pb+Pb events was produced. Then the distribution of the impact parameter b was drawn and the value of b was determined below which 7.2% of the events remained. The resulting impact parameter range was 0 < b < 4.35 fm in UrQMD and 0 < b < 4.00 fm in EPOS. Finally, high statistics samples of UrQMD and EPOS events were produced in these impact parameter ranges separately for each energy. The measures $\Delta[P_T, N]$ and $\Sigma[P_T, N]$ were calculated from charged particles consistent with originating from the main vertex. This means that mostly pions, protons, kaons and their anti-particles from the primary interaction were used because particles coming from the decays of K⁰_S, Λ, Σ, Ξ, Ω, etc. are suppressed by the track selection cuts. Therefore, the analyses of UrQMD and EPOS events were also carried out by using primary charged pions, protons, and kaons and their anti-particles. The tracking time parameter in the UrQMD model was set to 100 fm/c and therefore the list of generated kaons, pions and (anti-)protons did not contain the products of weak decays. In the parameter settings of the EPOS model the decays of K⁰_{S/L}, Λ, Σ, Ξ, Ω, etc. particles were explicitly forbidden. Finally, in the analysis of the UrQMD and EPOS events the same kinematic restrictions were applied as for the NA49 data. Figure 4 (top) shows that the energy dependence of $\Delta[P_T, N]$ in the UrQMD model exhibits behavior similar to that observed in the measurements. In both cases one finds $\Delta[P_T, N] < 1$, i.e. values below those for independent particle production. As Bose-Einstein correlations are not implemented in the UrQMD model we conclude that in this model there must be another source(s) of correlation(s) leading to $\Delta[P_T, N] < 1$. The EPOS model shows $\Delta[P_T, N]$ values which are significantly higher than those obtained from the NA49 data and UrQMD. The comparisons for $\Sigma[P_T, N]$ can be seen in Fig. 4 (bottom). Here the predictions of UrQMD lie above the measurements for all charged and positively charged particles, whereas they are significantly below the results for negatively charged particles. On the other hand EPOS calculations for negatively charged particles are close to the data, but exceed the measurements even more than the UrQMD predictions for all charged and positively charged particles.
The measured energy dependences of $\Delta[P_T, N]$ and $\Sigma[P_T, N]$ do not show any anomalies which might be attributed to approaching the phase boundary or the critical point. However, it should be noted that due to the limited acceptance of NA49 and the additional restrictions used for this analysis the sensitivity for such fluctuations may be small if the underlying range of correlations in momentum space is large.

B. System size dependence at 158A GeV/c

The values for negatively and all charged particles are significantly above unity (the prediction of the independent particle production model) and also reach a maximum in the most peripheral Pb+Pb interactions. For positively charged particles the values are close to zero or below. The same behavior was observed for the measure $\Phi_{p_T}$ [12] ($\Phi_{p_T}$ and $\Sigma[P_T, N]$ belong to the same family of strongly intensive measures). Finally, it is worth recalling that also for multiplicity fluctuations a maximum was observed in peripheral Pb+Pb collisions by NA49 [10]. Figure 5 shows that $\Delta[P_T, N]$ and $\Sigma[P_T, N]$ for all charged particles are usually higher than for either negatively or positively charged particles. Moreover, in the case of $\Sigma[P_T, N]$, the values for positively charged particles are always lower than those for the negatively charged particles. In Ref. [12] it was shown that also values of $\Phi_{p_T}$ for positively charged particles were lower than those for negatively charged and for all charged particles. However, the same effect was observed in simulations using the HIJING model, and the fact that $\Phi_{p_T}$ values for positively charged particles were always lower than those for negatively charged ones was found to be related to the limited acceptance and the treatment of protons as pions in the calculation of rapidity.
To further investigate the nature of the correlations leading to the observed values of the measures ∆[P T , N ] and Σ[P T , N ] a toy model was constructed, in which an anti-correlation between mean transverse momentum per event (P T /N ) versus multiplicity (N ) was assumed [18]. The parametrization of P T /N versus N was taken from the NA49 p + p data [12] (in the current paper the same p + p data are used), resulting in ∆[P T , N ] = 0.816(0.005) and Σ[P T , N ] = 1.008(0.002). This shows that Bose-Einstein correlations are not the only candidate for the explanation of ∆[P T , N ] < 1 and Σ[P T , N ] > 1 [15,19], but this observation, especially for smaller systems, may also be explained as due to the known P T /N versus N anti-correlation [18].
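A rough numerical illustration of such a toy model is sketched below, reusing the delta_sigma helper from Sec. II; the multiplicity distribution, the p_T shape and the strength of the anti-correlation are arbitrary assumptions for illustration, not the NA49 p+p parametrization:

```python
import numpy as np

rng = np.random.default_rng(1)

def toy_events(n_events=200_000, mean_mult=6.0):
    """Events in which the mean p_T per particle decreases with multiplicity,
    mimicking the P_T/N versus N anti-correlation."""
    events = []
    for _ in range(n_events):
        n = rng.poisson(mean_mult)
        mean_pt = max(0.40 - 0.005 * n, 0.05)  # GeV/c, assumed anti-correlation
        events.append(rng.gamma(shape=2.0, scale=mean_pt / 2.0, size=n))
    return events

print(delta_sigma(toy_events()))  # expect Delta below one for this anti-correlation
```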
The system size dependences of ∆[P T , N ] and Σ[P T , N ] were also compared to predictions of the UrQMD and EPOS models (the procedure of selecting the proper impact parameter range was analogous to that used in the case of the energy scan). When searching for possible indications of a critical point it is most appropriate to plot the strength of fluctuations using the standard phase diagram coordinates temperature T and baryochemical potential µ B . Moreover, central collisions of nuclei provide the cleanest interaction geometry. For such reactions fits of the hadron gas model (see e.g. Ref. [28]) were performed to determine the temperature T chem and baryochemical potential µ B of the produced particle composition. These values are believed to be close to those of the hadronization along the transition line in the phase diagram. The value of T chem was found to decrease somewhat for collisions of larger nuclei, whereas µ B decreases rapidly with collision energy.
Results for $\Delta[P_T, N]$ and $\Sigma[P_T, N]$ for inelastic p+p as well as central Pb+Pb collisions are shown in Fig. 6 versus µ_B. The p+p results from NA61 [29,30], plotted for comparison, were obtained using the NA49 acceptance cuts. One observes little dependence on µ_B for both Pb+Pb and p+p collisions. In particular, there is no indication of a maximum that might be attributed to the critical point. A similar conclusion was reached from the µ_B dependence of $\Phi_{p_T}$ [13]. The measurements of $\Delta[P_T, N]$ are consistent for the two reactions. The values of $\Sigma[P_T, N]$ are close to unity with the exception of the higher result in Pb+Pb for negatively charged particles.
The dependence of ∆[P T , N ], and Σ[P T , N ] on T chem is shown in Fig. 7 at the beam momentum of 158A GeV/c for p + p, semi-central C+C, Si+Si and central Pb+Pb reactions. The results for p + p from NA49 (solid triangles) and NA61 (open triangles) are consistent. A maximum is observed for Si+Si interactions similar to the one found previously for Φ p T in Ref. [12]. There it was interpreted as a possible effect of the critical point [31] consistent with QCD-based predictions of Ref. [1,32]. Interestingly, for the same system, studies of intermittency in the production of low mass π + π − pairs [33] and of protons [34] found indications of power-law behavior with exponents that were consistent with QCD predictions for a CP.
Unfortunately, theoretical predictions for fluctuations at the CP are not yet published for the new fluctuation measures $\Delta[P_T, N]$ and $\Sigma[P_T, N]$. However, calculations for Si+Si collisions at 158A GeV/c using the Critical Monte Carlo (CMC) model [3,35] are currently under study.
V. SUMMARY
This paper reports on the continuing search at the CERN SPS by the NA49 experiment for evidence of the critical point of strongly interacting matter expected as a maximum of fluctuations. Results are presented on transverse momentum and multiplicity fluctuations of charged particles, produced at forward rapidities (1.1 < y * π < 2.6) in central Pb+Pb interactions at beam momenta 20A, 30A, 40A, 80A, and 158A GeV/c, as well as in different systems (p + p, C+C, Si+Si, and Pb+Pb) at 158A GeV/c. New strongly intensive measures of fluctuations, ∆[P T , N ] and Σ[P T , N ], were measured. This paper is an extension of previous NA49 studies [12,13] where the strongly intensive measure Φ p T was used to determine transverse momentum fluctuations. The quantities ∆[P T , N ] and Σ[P T , N ] are dimensionless and have two reference values, namely they are equal to zero in case of no fluctuations (P T = const., N = const.) and one in case of independent particle production. Therefore, ∆[P T , N ] and Σ[P T , N ] are preferable to Φ p T for which only one reference value is defined, i.e. Φ p T = 0 MeV/c for the model of independent particle production (IPM).
The NA49 results show no indications of a maximum in the energy dependence of transverse momentum (see also Ref. [31]) and previously measured multiplicity [31] fluctuations.

[Figure caption residue: "… [28]. NA49 data indicate [28] that at the top SPS energy µ_B does not depend on the system size (C+C, Si+Si, Pb+Pb). Therefore, the µ_B values for p+p are also displayed and assumed to be the same as for Pb+Pb. NA61 data were taken from Refs. [29,30]. For NA61 only statistical uncertainties are shown."]

However, the measured transverse momentum fluctuation signal for all charged particles in C+C and Si+Si interactions is about 5% higher than the base line defined by the IPM. Also previously studied multiplicity fluctuations for the most central A+A collisions were found to show a maximum for Si+Si reactions at 158A GeV/c [31]. The excess of transverse momentum and multiplicity fluctuations is two times higher for all charged than for negatively charged particles, as expected for the CP [1]. The NA49 collaboration also searched for evidence of the critical point in an intermittency analysis of low-mass π⁺π⁻ pair [33] and proton [34] production. Indications of power-law behavior consistent with that predicted for a CP were found in the same Si+Si interactions at 158A GeV/c. The intriguing results strongly motivate the ongoing critical point search by the successor experiment NA61/SHINE [8] which performs a systematic two-dimensional scan (SPS energies and system size (p, Be, Ar, Xe, Pb)) of the phase diagram of strongly interacting matter. A maximum of several CP signatures, the so-called hill of fluctuations, would signal the existence of the CP. The RHIC Beam Energy Scan [9] pursues a complementary program measuring higher order moments and cumulants of net-charge and net-proton distributions in Au+Au collisions. So far no clear evidence for the CP was found [36,37]. Thus the possible existence of the CP remains an interesting and challenging question. | 6,452.6 | 2015-09-15T00:00:00.000 | [
"Physics"
] |
The effectiveness of vaccination to prevent the papillomavirus infection: a systematic review and meta-analysis
Our purpose was to determine the effectiveness and harms of vaccination in patients with any sexual history to prevent the prevalence of papillomavirus infection. A search strategy was conducted in the MEDLINE, CENTRAL, EMBASE and LILACS databases. Searches were also conducted in other databases and unpublished literature. The risk of bias was evaluated with the Cochrane Collaboration's tool. A fixed effects analysis was conducted. The primary outcomes were infection by any and each human papillomavirus (HPV) genotype, serious adverse effects and short-term adverse effects. The measure of the effect was the risk difference (RD) with a 95% confidence interval (CI). The planned interventions were bivalent/tetravalent/nonavalent vaccine vs. placebo/no intervention/other vaccines. We included 29 studies described in 35 publications. The bivalent HPV vaccine offers protection against HPV16 (RD −0.05, 95% CI −0.098 to −0.0032), HPV18 (RD −0.03, 95% CI −0.062 to −0.0004) and HPV16/18 genotypes (RD −0.1, 95% CI −0.16 to −0.04). On the other hand, the tetravalent HPV vaccine offered protection against HPV6 (RD −0.0500, 95% CI −0.0963 to −0.0230) and HPV11 (RD −0.0198, 95% CI −0.0310 to −0.0085), as well as against HPV16 (RD −0.0608, 95% CI −0.1126 to −0.0091) and HPV18 (RD −0.0200, 95% CI −0.0408 to −0.0123). There was a reduction in the prevalence of the HPV16, 18 and 16/18 genotypes when applying the bivalent vaccine, with no increase in adverse effects. Regarding the tetravalent vaccine, we found a reduction in the prevalence of the HPV6, 11, 16 and 18 genotypes, with no increase in adverse effects.
Introduction
The development of the three FDA-approved multivalent prophylactic human papillomavirus (HPV) vaccines followed the discovery of HPV infection as the cause of all cervical cancer [1]. The three prophylactic HPV vaccines have shown high efficacy for the prevention of HPV infection [2].

Cervarix® and Gardasil® were the first vaccines for the prevention of cervical cancer. Cervarix® targets HPV types 16 and 18, which are responsible for 70% of all cervical cancer [3], while Gardasil®, on the other hand, adds activity against HPV types 6 and 11, which cause 90% of anogenital warts [4]. In addition to those included in the qHPV vaccine, the 9vHPV vaccine contains type 31, 33, 45, 52 and 58 antigens [5].
These vaccines are well-tolerated, safe and only with minor adverse effects. They also protect against pre-cancerous lesions caused by subtypes 16 and 18 in a naive population, adding a systemic immune response at 5 years post-vaccination [6][7][8].
Primary care services in the United States have recommended the HPV vaccine for females aged 9-26 through the Vaccines for Children program since 2006 [9]. Additionally, immunisation for adolescents aged 11-12 years and for adolescent women (aged 13-26 years) before becoming sexually active was recommended by the US Centers for Disease Control, and it is accepted in many developed countries [7,10].
According to the health care system's organisation, the coverage rate and the programmes differ. Some of the vaccination programmes are offered through schools (e.g. Australia, UK) whereas others are provided in private clinics, or public primary care (e.g. the United States) [11,12].
Different studies (phases II and III) have been conducted, and a few reviews have tried to pool the effects [13]; however, none of them emphasises the changes in the prevalence/incidence of HPV serotypes in the populations. Besides, previous reports have not assessed the impact of vaccination at different ages and times since sexual debut, nor multisite and cross-protection against HPV types. Also, there is conflict about the protection against re-infection among individuals known to be infected and who subsequently clear their infections [14].
The objective was to determine the effectiveness and harms of vaccination in patients with any sexual history to prevent the prevalence of papillomavirus infection.
Methods
We performed this review according to the recommendations of the Cochrane Collaboration [15] and following the PRISMA Statement [16]. The PROSPERO registration number is CRD42017074007.
Eligibility criteria
We included only clinical trials; patients of both genders, from any population, at any age and with any sexual history. The interventions were vaccines against HPV: HPV16 and HPV18 (bivalent vaccine); HPV6, HPV11, HPV16 and HPV18 (tetravalent vaccine); or HPV6, HPV11, HPV16, HPV18, HPV31, HPV33, HPV45, HPV52 and HPV58 (nonavalent vaccine) genotypes. The vaccination had to be an intramuscular injection over a period of 6 months according to the standard scheme. We did not include pregnant patients. The comparisons were: bivalent/tetravalent/nonavalent vaccine vs. placebo/no intervention/other vaccines. Outcomes:

• Infection by any HPV genotype: assessment must be by any validated technique identified by the blood sample and oral cell sample. If patients are older than 18 years old, the sample must come from the specific organ.
• Infection by each HPV genotype: assessment must be by any validated technique identified by the blood sample and oral cell sample. If patients are older than 18 years old, the sample must come from the specific organ.
• Serious adverse effects.
• Short- and long-term adverse effects.

For the infection and long-term adverse effect outcomes, studies had to have at least a 1-year follow-up; for the short-term outcomes, studies with less than a 1-year follow-up were allowed.
Information sources
A literature search was conducted as recommended by Cochrane. We used medical subject headings (MeSH), Emtree language, DeCS and related text words. We searched MEDLINE (OVID), EMBASE, LILACS and the Cochrane Central Register of Controlled Trials (CENTRAL). The search covered the period from 2000 to date.

To ensure literature saturation, we scanned references from relevant articles identified through the search, conferences related to HPV and vaccines (HPV2015 and HPV2017) and thesis databases (e.g. Theseo). Grey literature was searched in OpenGrey and Google Scholar. We looked for ongoing clinical trials in clinicaltrials.gov, the registry created by the World Health Organization and the New Zealand registry for clinical trials, among others. We contacted authors by e-mail in case of missing information. There were no setting or language restrictions.
Data collection
Two researchers reviewed each reference by title and abstract and then in full text, applying pre-specified inclusion and exclusion criteria. Disagreements were resolved by consensus, and where a dispute could not be solved, a third reviewer resolved the conflict.

Two trained reviewers independently extracted the following information from each article using a standardised form: study design, geographic location, authors' names, title, objectives, and inclusion and exclusion criteria. Also, the number of patients, type of laboratory technique, losses to follow-up, timing, definitions of outcomes, outcomes and association measures, and funding source.
Risk of bias
We assessed the risk of bias for each study with the Cochrane Collaboration tool, which covers: sequence generation, allocation concealment, blinding, incomplete outcome data, selective reporting and other biases. Two independent researchers judged the possible risk of bias from the extracted information, rating each domain as 'high risk', 'low risk' or 'unclear risk'. We produced a graphic representation of potential bias using RevMan 5.3 [17].
Data analysis/synthesis of results
We performed the statistical analysis in R [18]. For categorical outcomes, we reported risk differences (RD) with 95% confidence intervals (CIs) according to the type of variables, and we pooled the data with a random effects meta-analysis according to the expected heterogeneity. The results were reported in forest plots. Heterogeneity was evaluated using the I² test [15]. For interpretation, values of less than 50% were taken to indicate low heterogeneity and values of more than 50% a high level of heterogeneity.
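As an illustration of this pooling step, here is a minimal sketch (in Python rather than R; the DerSimonian-Laird estimate of the between-study variance is one standard choice, and the event/sample counts would come from the extracted study tables):

```python
import numpy as np

def pool_risk_differences(e1, n1, e2, n2):
    """Inverse-variance pooling of per-study risk differences with a
    DerSimonian-Laird random-effects step and the I^2 heterogeneity statistic."""
    e1, n1, e2, n2 = map(np.asarray, (e1, n1, e2, n2))
    p1, p2 = e1 / n1, e2 / n2
    rd = p1 - p2                                          # per-study risk difference
    var = p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2         # its sampling variance

    w = 1.0 / var                                         # fixed-effect weights
    rd_fixed = np.sum(w * rd) / np.sum(w)
    q = np.sum(w * (rd - rd_fixed) ** 2)                  # Cochran's Q
    df = len(rd) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0   # I^2 in percent

    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (var + tau2)                             # random-effects weights
    rd_re = np.sum(w_re * rd) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return rd_re, (rd_re - 1.96 * se, rd_re + 1.96 * se), i2
```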
Publication bias
We did not perform publication bias due to the number of included studies in each meta-analysis.
Sensitivity analysis
We performed sensitivity analysis extracting weighted studies, analysing the length of follow-up and running the estimated effect to find differences.
Subgroup analysis
We tried to perform subgroup analysis by co-infection, sample site, gender, age and continent however due to the scarcity of data, we were not able to do it.
Results
We found 2834 records through the electronic search strategy and 20 through other searches (Fig. 1). After excluding duplicates, we ultimately included 29 unique studies described in 35 publications. The rest of the information regarding the included studies is described in Table 1.
Excluded studies
Sixty-eight full-text articles were excluded (28 studies did not have the outcome of interest; 17 were not RCT; 11 studies had no intervention of interest; four studies did not have enough follow-up; four had no population of interest, and four were not related).
Risk of bias assessment
We found that 50% and 75% of the included studies were graded as unclear on describing the random sequence generation and allocation concealment, respectively. No selective reporting was identified during grading in these studies. Nonetheless, all studies were industry-funded, and this leads to an unclear classification (Fig. 2).
Regarding the HPV18 genotype, we found five studies (Konno, 2014, …). Other results for any genotype, minor adverse effects and autoimmune disease are shown in Table 2.
Other results for minor adverse effects and autoimmune disease are shown in Table 2.
Other results for HPV16/18 and minor adverse effects are shown in Table 2.
Other results regarding short-term adverse effects and serious adverse effects are shown in Table 2.
Nonavalent vs. tetravalent: Only one study (Vesikari 2015) compared the 9vHPV against the tetravalent vaccine. The study was conducted in different countries and included people aged from 9 to 15 years. The outcome was evaluated at 7 months and the results were: anti-HPV16 and anti-HPV18 geometric mean titers (GMT) were similar between vaccines (anti-HPV16 GMTs: 6739.5 vs. 6887.4 mMU/ml for …
Summary of the main results
In summary, evidence suggested that the bivalent HPV vaccine offers protection against HPV16, HPV18 and HPV16/18 genotypes without increasing adverse effects. It is consistent with 4 years for HPV16 and HPV18 genotypes. On the other side, tetravalent HPV vaccine, offered protection against HPV6, HPV11, HPV16 and HPV18 without increasing adverse effects and this was consistent at 4 years.
Comparison with other reviews or other studies
Detecting a specific capsid antibody in serum suggests an HPV infection; more than 50% of infected subjects will have a serological conversion, and thus detectable HPV antibodies. This has been used to analyse the natural history and the cumulative infection in different groups [55]. Assays measuring functional neutralising ability might reflect protective immunity, based on animal experiments. Nonetheless, epitopes for HPV are not completely characterised in human responses; therefore, they will be heterogeneous [55].

Neutralisation assays are independent of vaccine material; therefore they provide an unbiased measure of the HPV serological response induced by the vaccine. The direct ELISA assesses the immunoglobulin G (IgG) response (both antibodies), and might be used as a technique to manage the large volume of samples [56].

On the other hand, the immune response to HPV infection (mainly IgG) is generally weak and heterogeneous among women. Around 50% of the individuals serologically convert to the L1 protein of HPV6, 16 and 18 within 18 months [57]. Conversely, this signifies that more than 40% of women do not seroconvert over time; therefore, the HPV L1 capsid-specific antibody is not an important diagnostic test for this infection. Other HPV antigens (E1, E2, E6 and L2) do not elicit any comparable responses in patients with HPV infection [58]. HPV vaccines, however, are based on recombinant virus-like particles (VLPs). These vaccines are highly immunogenic, induce very high titres of neutralising antibodies and produce durable responses. This might be due to the ordered structure of the VLPs, which permits the presentation of epitopes to B-cells for potent activation [59]. The challenge was to demonstrate that the L1 gene, expressed via a recombinant virus, yields an L1 protein produced in quantity that self-assembles into a VLP, or empty capsid, almost identical to the native virion. These VLPs generate neutralising antibodies in animal models, and the animals were protected against a virus challenge [60].
Strengths and limitations of the review
The advantages of this systematic review include: following standardised methods, according to Cochrane collaboration and PRISMA guidelines and the wide-ranging search to identify data about both clinical and immunological outcomes.
Although we had a lot of studies, most of our limitations concerned the data available for the vaccines across the studies. Additionally, we found differences in study design and in the methods used to assess specific efficacy endpoints and immune responses [61].
On the other hand, we found high statistical heterogeneity and different issues regarding the risk of bias, mainly for not having enough information to evaluate the effect. However, some studies had a high risk of bias regarding blinding and attrition bias. All studies were industry-funded; therefore, this might lead to an unclear risk of bias.
Implications for policy and research
Hence, this is the first step to evaluate the efficacy of the vaccines against HPV. According to these findings, the vaccines reduce the risk of acquiring the genotypes they were designed for. However, we need to go further, assessing the efficacy to reduce the risk of cancer. Although the effects were statistically significant, the absolute risk reduction was not large (1-10%) compared with the baseline risk of those not receiving the intervention. Therefore, it is essential to consider this when making public health decisions.

Another interesting point for public health is that adverse effects were similar in both groups (vaccine vs. no intervention); therefore, even if the risk reduction was not large, there was no increase in adverse effects in vaccinated people.
Conclusions
There was a reduction in the prevalence of HPV16, HPV18 and HPV16/18 genotypes when applying the bivalent vaccine, with no increase in adverse effects. Regarding the tetravalent vaccine, we found a reduction in the prevalence of HPV6, HPV11, HPV16 and HPV18 genotypes against placebo/no intervention, with no increase in adverse effects.
We suggest better reporting and the establishment of measures to prevent attrition bias in RCTs, given this critical issue.

Financial support: None.
Ethical standards. This systematic review and meta-analysis complies with all ethics requirements according to the Helsinki Declaration and all international statements. | 3,214.4 | 2019-03-20T00:00:00.000 | [
"Biology"
] |
Microstructure Transformation on Pre-Quenched and Ultrafast-Tempered High-Strength Multiphase Steels
High-strength, multiphase steels consisting of pearlite surrounded by tempered martensite were prepared by pre-quenching and ultrafast tempering heat treatment of high-carbon pearlitic steels (0.81% C). The microstructures were analyzed by scanning electron microscopy, electron backscatter diffraction, and transmission electron microscopy. With an increasing quenching temperature from 120 °C to 190 °C, the quenched martensite variants nucleated via autocatalytic nucleation along the interface. Furthermore, the tempered nodules exhibited a distinct symmetrical structure, and the tempered martensite and pearlitic colonies in the group also showed a symmetrical morphology. In addition, a reasonable model was formulated to explain the transformation process from quenched martensite to the multiphase microstructure. When the quenching temperature was set to 120 °C, followed by ultrafast heating at 200 °C/s to 600 °C and subsequent isothermal treatment for 60 s, the multiphase structure showed the highest strength, and the pearlite volume fraction after tempering was the lowest. The microhardness softening mechanism for the tempered structures consisted of two stages. The first stage is related to martensitic sheets undergoing reverse transformation and the nucleation of cementite on dislocations. The second stage involves the transformation of austenite into pearlite and continued carbide coarsening in the martensitic matrix.
Introduction
The design and development of low-alloy steels with excellent mechanical properties at low cost has been a challenge for structural applications. In view of this challenge, many alloy steels, such as transformation-induced plasticity (TRIP) steel and maraging steel, have been developed [1][2][3][4]. Although these steels possess improved mechanical properties in terms of their strength and plasticity compared with low-alloy steels, they can be used only in certain conditions due to their dependence on costly alloying additives [5]. Hence, the design of effective structural steels with improved strength and ductility has become particularly important. Thus, the research and development of various high-strength multiphase steels by proper heat treatment of low-alloy steels is attractive. In recent years, multiphase structural steels have been extensively investigated [6][7][8][9]. The theory of a "multiphase structure" has been a topic of focus as a technology for improving the strength and ductility of steel. Multiphase steels show remarkably enhanced strength without a significant reduction in plasticity, or show improved plasticity without reduced strength. In general, the design for multiphase steel requires an effective combination of hard and soft phases, such as martensite and bainite, pearlite and ferrite, ferrite and martensite, or austenite and martensite [10][11][12]. The soft phase favors plastic deformation, while the hard phase can improve strength. Strain partitioning between the hard and soft phases can remarkably improve the mechanical properties [6,13,14].
In recent years, many studies on the mechanical properties of multiphase steel have been reported in the literature [15][16][17]. Many investigations have indicated that carbon content, whether low or high, generally increases the strength of steel by yielding a quenched martensite phase [18]. Zare et al. [12] investigated the effects of the martensite volume fraction on the tensile properties of a ferrite-pearlite-martensite triple-phase microstructure and reported that the strength increased with an increase in the martensite volume fraction. Elliot et al. [19] showed that martensite is three times more effective as a strengthener than pearlite. However, both phases have deleterious effects on uniform and total elongation. Hence, annealing was subsequently used to enhance plastic deformation. Furthermore, Li et al. [7] reported that an increase in the tempering temperature reduced the hardness and the yield and tensile strengths of low-carbon ferrite and martensite dual-phase steel. Additionally, a study performed by Varshney et al. [20] investigated the effects of high-temperature tempering on the tensile properties of low-alloy steel with a ferrite-pearlite-martensite triple-phase microstructure. It is interesting to note that the elongation increased significantly with variation in the tempered martensite content. Meanwhile, under this condition, the tensile strength increased with increasing tempered martensite content. Many studies have focused on low-carbon alloy steel, while few have focused on the heat treatment processing of high-carbon steels due to the complex phases and difficult control associated with such steels.
Thus, in this investigation, a combination of pearlite and tempered martensite phases was obtained by isothermal transformation of high-carbon steel in the austenite region followed by pre-quenching and subsequent ultrafast tempering (PQFT) at different temperatures. Furthermore, the microstructure transformation mechanism is discussed, including rapid heating to a peak temperature within the range of 500-700 °C and subsequent tempering. A theoretical analysis coupled with the acquired experimental data is then proposed to explain the evolution of microhardness softening.
Materials and Methods
In this study, the experimental material was SWRS82B steel wire with a diameter of 12.5 mm, the chemical composition of which is indicated in Table 1. Heat treatment experiments were performed using a DIL-805A/D dynamic and static dilatometer (BAEHR, Pirmasens, Germany) for precise control of the heating procedure of each phase. The specimen size for the heat treatments and the process curve are shown in Figure 1. After austenitizing at 880 °C for 600 s, the heat treatment schedules were designed to achieve multiphase microstructures with pearlitic colonies surrounded by tempered martensite microconstituent volume fractions. The specimens were quickly pre-quenched to different temperatures below the M_s point (120, 150, and 190 °C, held for 3 s) at a cooling rate of 100 °C/s. These rapid annealing cycles were characterized by an ultrafast heating rate of 200 °C/s to different temperatures at 50 °C intervals within the range of 550 °C-700 °C, subsequent isothermal treatment for 60 s, and final cooling to room temperature. Microhardness tests were performed under a load of 200 g on a microhardness tester (HV-1000) with the specimens processed by pre-quenching followed by ultrafast tempering under different temperature conditions. The average of five measurements was recorded as the result of each microhardness test.
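For reference, the PQFT thermal cycle described above can be written down compactly. This is a purely illustrative sketch of the time-temperature breakpoints; the final cooling rate is an assumption, as the text does not specify it:

```python
def pqft_profile(qt_c=120.0, tempering_c=600.0):
    """Return (time_s, temp_C) breakpoints of one PQFT cycle:
    austenitize 880 C / 600 s -> quench at 100 C/s to QT (hold 3 s)
    -> ultrafast heating at 200 C/s to the tempering temperature
    -> hold 60 s -> cool to room temperature."""
    t, points = 0.0, [(0.0, 880.0)]
    def seg(temp, dt):
        nonlocal t
        t += dt
        points.append((t, temp))
    seg(880.0, 600.0)                                # austenitizing hold
    seg(qt_c, (880.0 - qt_c) / 100.0)                # quench at 100 C/s
    seg(qt_c, 3.0)                                   # short hold at QT
    seg(tempering_c, (tempering_c - qt_c) / 200.0)   # ultrafast heating at 200 C/s
    seg(tempering_c, 60.0)                           # isothermal tempering
    seg(25.0, 30.0)                                  # final cooling (assumed duration)
    return points
```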
The samples were then mechanically polished and etched with 3% nitric acid in alcohol. The microstructure morphologies were examined using a ZEISS SUPRA 40 field emission scanning electron microscope (SEM, ZEISS, Oberkochen, Germany). To characterize the microstructure of the samples after the tempering process, TEM analysis was carried out on a Tecnai G2 F20 S-TWIN (FEI, Hillsboro, OR, USA) operated at a voltage of 200 kV. Samples were prepared by twin jet electropolishing in an alcohol solution of 7% HClO4 at a temperature of −20 °C and a current of 50 mA. Electron backscatter diffraction (EBSD, ZEISS) analysis was performed to study the crystallographic orientation and morphological characteristics before and after tempering; this was carried out using an HKL EBSD detector mounted on an FEI Quanta 650F with Channel 5 software for electron image capture at 20 kV and a probe current of 80 μA with a working distance of 18 mm. The diffraction data were acquired with a step size of 0.12 μm. The Tango and Mambo menus were used for data processing to obtain the IPF and PF maps. A noise reduction routine was applied to clean up bad points: standard noise reduction removes zero solutions and isolated points that have been incorrectly indexed and appear as wild spikes, and the removed points are filled in using copies of neighboring points. In this test, the orientation of each pixel was obtained for a neighboring pixel pair with 3 × 3 and the smoothing angle set to 5°. Specimens for EBSD characterization were electropolished in a solution containing 250 mL distilled water, 125 mL alcohol, 125 mL H3PO4, 25 mL isopropanol, and 2.5 g carbamide at an electric current density of 450 mA/cm² for 60 s.
Figure 2 shows the microhardness as a function of the annealing temperature. The values at each quenching temperature (QT) clearly show a similar and decreasing tendency with increasing temperature. Furthermore, when the QT is low, the hardness is high at the same tempering temperature. Samples quenched at 120 °C, 150 °C, or 190 °C and tempered at 600 °C were selected for comparison with the original pearlitic steel in terms of microhardness. Two other significant reasons for choosing the selected transformation temperature were to avoid the highest bainite start temperature (B_s) and for convenience in controlling the ultimate microstructure. Figure 3 shows the microstructure of the initial pearlitic wires as observed by SEM. The structure morphology is mainly pearlitic colonies, and exhibits a random orientation. Figure 4 shows the microstructure of three samples at different QTs after tempering at 600 °C for 60 s. The morphologies clearly indicate that the intended multiphase microstructure was attained, i.e., tempered martensite (TM) surrounding pearlitic colonies. The typical morphologies of TM exhibited dendritic features, as shown in Figure 4a. The pearlitic colony volume clearly decreased with the decline in QT. Furthermore, parts of the tempered martensitic structure maintained a divergent growth pattern at the triple junction that ran through all the prior austenite grains. Figure 5 shows the structure of the cementite morphology of TM, as well as the lamellar microstructure of the ferrite and cementite layers inside the pearlitic colonies, as observed by bright-field TEM. Regardless of the quenching temperature used, the cementite microstructure in the tempered martensite occurred in the form of elliptical particles or short rods, and was dispersed on the TM matrix. The measurement results clearly indicate that the interlamellar spacing (ILS) equaled 98 ± 10 nm and the minor axis of the cementite features equaled 45 ± 8 nm, where the parameters were measured under edge-on conditions [21][22][23][24]. Moreover, it is interesting to note that the lamellar orientation of the adjacent pearlitic colonies appeared to grow symmetrically at higher quenching temperatures (at 190 °C, for instance). As for the samples quenched at 120 …
EBSD Analysis
Electron backscatter diffraction (EBSD) imaging before and after the annealing of specimens was performed to investigate the orientation relationship of the martensitic phase during the transformation and reverse transformation processes. Figure 6d-g shows the inverse pole figure (IPF) maps and the corresponding pole figures (PF), shown on the right, obtained at different QTs. It can be clearly observed from the PF that the microstructure of the samples quenched at 120 °C has a distinct crystallographic orientation relative to that quenched at 190 °C. Thus, for the quenched microstructure, it can be inferred that the crystallographic orientation of the local microdomain became different as the QT increased, i.e., the microstructure showed isotropic behavior at 190 °C in this microregion.
Typical characteristics obtained under this condition can be interpreted from the symmetrical growth of martensite at prior austenite grain boundaries, as shown in Figure 6g. Additionally, the martensite units labeled "A" in Figure 6g appear to have a well-defined crystallographic boundary with units "B". The orientation relationship between these units was determined to be 45.8°, and the rotation axis/angle was determined to be 58.2° for units "C". The subcollection of planes {100} in the PF showed that the crystallographic orientation of the martensitic units was symmetrical. The same analytical method was applied to the samples quenched at different temperatures followed by ultrafast tempering at 600 °C for 60 s. Figure 7 shows the IPF maps and the crystallographic relationship of each grain as insets in the corresponding PFs. The figure clearly shows that as the QT increases, the symmetric orientation of each nodule gradually becomes more apparent. The typical morphology characteristics are labeled "A", "B", and "C" in Figure 4 for TM; these units belong to the same plate group and form a clearly featured coupling to the preceding unit, which may be of the kink or wedge type. This morphology is in stark contrast to the morphology of the pearlitic colonies after tempering observed via TEM, as described above.
Crystallographic Relationship with the Tempering Multiphase Microstructure
It is not possible to reliably use martensitic high-strength alloys in their as-quenched condition without tempering heat treatments. Even when reasonable toughness might be achieved without tempering, there is a tendency for static failure as a result of hydrogen embrittlement occurring during service. Thus, most high-strength steels are used in a tempered state. The entire heat treatment process leading from austenite to the multiphase microstructure can be divided into four stages: (1) a complete austenitizing stage, followed by (2) a rapid cooling stage, in which the steel is directly quenched below the Ms point after austenitization. Due to the large degree of supercooling, a large phase transformation driving force is produced. Nearly all of the sheared austenite phase is transformed into sheet martensite, the growth of which occurs along the original austenite habit plane. Then, (3) an ultrafast heating stage, during which two changes can be observed. First, carbides in martensite initially nucleate rapidly at the boundaries or in zones of high dislocation density; thus, the dislocation density and the microhardness decrease dramatically. However, it is not surprising that many reports have indicated that parts of the martensitic sheets are reversely transformed into austenite [3]. Finally, (4) an isothermal stage, in which the metastable austenite is further transformed into pearlitic colonies and the quenched martensite is decomposed into TM. Two distinct phenomena occur during this stage. First, the volume of pearlitic colonies increases with QT, and the structure of the TM surrounding the pearlitic colonies is maintained. Second, it is interesting to note that regardless of whether a phase is quenched martensite, ultimately annealed pearlite, or TM in a nodule unit, the crystallographic orientation remains symmetric with increasing quenching temperature, as described above.
The reason for this is that martensite sheets grow along the austenite habit plane during the shear transformation process, along <259>γ at low temperature and <225>γ at high QT [25]. Among the martensitic sheets, the first to form penetrates the entire austenite grain, and the size of the subsequent martensitic sheets gradually decreases. If the quenching temperature is low, the phase transformation driving force is large, strengthening the shearing ability of the martensite sheets along the habit plane and making the crystallographic orientation of the microregion more distinct, as shown in Figure 6. Notably, Albin et al. [26] analyzed the formation of plate martensite in high-carbon, low-alloy steels. It is worth noting that in addition to the formation of a [112]M twin structure, a small amount of [101]M twinning occurs inside the martensite sheets at low quenching temperature. Thus, due to the higher interfacial energy and the pinning effect of the [101]M twins, reverse transformation is difficult to achieve, and the phase ultimately transforms into TM. The secondary martensite structures, with small layers, a disordered orientation, and a low interface energy, are more prone to reverse transformation and eventually form a pearlite structure. Another reasonable explanation for the abovementioned transformation behavior is that the martensitic sheets travel along low-index crystal planes, e.g., along [225]γ, at high quenching temperature. Usually, only a [112]M twin orientation occurs inside the martensitic sheets. Similarly, researchers have indicated that the dislocation density and the quenching temperature are inversely related [27,28]. Hence, a low dislocation density and low surface energy allow for easier reverse transformation of the microstructure, and the volume of pearlitic colonies is high.
The symmetrical microstructure inside a nodule can be characterized by EBSD analysis. Samples pre-quenched at 120 °C, 150 °C, or 190 °C and subjected to ultrafast heating (600 °C, 60 s) were tested. Comparing the statistical results for the misorientation angle before and after tempering, the crystallographic orientation did not change significantly, and only the number of small angles decreased, as shown in Figure 8. It was verified that tempering only altered the dislocation density, whereas the misorientation structure did not change significantly. Thus, it can be inferred that although the martensite reversibly transformed into austenite, the parallel dislocation channels formed by shear deformation, as well as residual twins, were preserved in the austenite matrix. Therefore, during the diffusion-type growth transformation from austenite to pearlite, carbon atoms migrated more easily and formed cementite lamellae along the defects inside the austenite phase. Ultimately, a symmetrical pearlitic structure formed.
Evolution Model of the Multiphase Microstructure
Many studies have reported that new martensite variants are often nucleated via autocatalytic nucleation [20,29]. Furthermore, autocatalysis will generate well-defined kink-type crystallographic boundaries and form wedge-type secondary martensite variants based on primary martensite variants. Both these orientation relationships and their nonrandom nature have previously been investigated and discussed by Okamoto et al. and Stormvinter et al. [26,28]. Similarly, in this study, the abovementioned phenomenon was also observed at different QTs and tempering at 600 °C for 60 s by SEM, as shown in Figure 4. Interestingly, an obvious homologous orientation in microdomains, which is closely related to the orientation of martensite variants during the transformation process, could be observed in samples quenched at 120 °C. Due to the large degree of subcooling, the formation of martensitic variants with the same orientation was possible. A schematic of the evolution of quenched martensite to TM in this type of high-carbon, low-alloy steel is presented in Figure 9, where Figure 9a presents the martensitic structure after quenching, and Figure 9b presents the multiphase structure after tempering. Commonly, the martensite variant of the midrib type without reverse transformation is transformed into tempered martensite, the final morphology is lenticular, and the martensite undergoing reverse transformation forms a pearlite structure. Sometimes, the pearlitic colonies are separated by a structure of banded TM.
Softening Mechanism
The time-temperature-expansion curve of samples heat treated at a QT of 120 °C and a tempering temperature of 600 °C is presented in Figure 10. The black line represents the actual heat treatment temperature curve, and the red line represents the length change of the sample during heating and cooling. The length change curve is closely related to the microstructure transformation process. After annealing for 600 s, the expansion curve was nearly flat, demonstrating that the sample had completely transformed from pearlite to austenite. In the subsequent rapid heating process, the expansion was smaller than in the case of complete austenitization, which indicates that only part of the martensite structure was reverse-transformed into austenite. As shown in Figure 10b, the time required for complete pearlitic isothermal transformation was approximately 10 s. Furthermore, the microhardness of samples tempered for different durations was tested under these conditions, as shown in Figure 11; the microhardness decreased according to a negative exponential of the tempering time [30], following a fitted relation in which Φ is the microhardness value along the fitted curve and x is the tempering time.
Figure 9. Schematic of the microstructure transformation of quenched martensite into tempered martensite: (a) the martensitic structure after quenching and (b) the multiphase structure after tempering.
Researchers have reported that there are two stages of microstructure transformation in the fourth step, i.e., ultrafast heating and annealing [19,29,31]. The first stage is rapid heating. Xing et al. [32] investigated the effect of refined precipitation on the high-temperature rapid tempering process of SS400 steel. The results showed that cementite tended to be refined and dispersed if the heating rate exceeded 3 °C/s. During ultrafast heating in the tempering process, carried out in a thermomechanical simulation tester at a heating rate of 200 °C/s, the temperature rises so rapidly that there is insufficient time for cementite precipitates to grow along the boundaries [30]. This process was accompanied by rapid carbide nucleation on dislocations in less than one second [19]. Furthermore, parts of the martensitic sheets reversibly transformed into austenite. Hence, the softening mechanism in the first stage was related to the microstructure transformation as well as carbide nucleation. As expected, structural transformation continues in the subsequent isothermal tempering process, i.e., austenite transforms into pearlite and carbide coarsening continues in the martensitic matrix; this is the second stage of softening. Therefore, the microhardness decreases greatly in the early stage, at tempering times of less than 11 s; microstructure transformation is the dominant factor in this process. Elliot et al. [19] proposed that carbide-coarsening-induced softening decreases linearly with tempering time within 10 s. The slower microhardness reduction observed later was caused by the coarsening of cementite.
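The exact fitted formula is not reproduced in this excerpt. As a minimal sketch, assuming a negative-exponential softening law of the form Φ(x) = a·exp(−x/τ) + Φ∞ (the coefficients a, τ, Φ∞ and the sample data below are illustrative assumptions, not values from the paper), the fit could be performed as follows:

```python
import numpy as np
from scipy.optimize import curve_fit

def softening(x, a, tau, phi_inf):
    """Assumed negative-exponential softening law: microhardness Phi
    as a function of tempering time x (s)."""
    return a * np.exp(-x / tau) + phi_inf

# Illustrative placeholder measurements (tempering time in s, microhardness in HV);
# not values taken from the paper.
x_data = np.array([1, 3, 5, 11, 20, 40, 60], dtype=float)
phi_data = np.array([780, 640, 560, 470, 440, 420, 410], dtype=float)

# Initial guesses: amplitude, time constant, asymptotic hardness.
popt, _ = curve_fit(softening, x_data, phi_data, p0=(400.0, 5.0, 400.0))
a, tau, phi_inf = popt
print(f"fitted: a={a:.1f} HV, tau={tau:.1f} s, phi_inf={phi_inf:.1f} HV")
```

A fit of this form would reproduce the two-stage behaviour qualitatively: a steep initial drop dominated by the transformation, followed by a slow decay attributable to cementite coarsening.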
Conclusions
In this study, high-strength steels containing multiple phases consisting of pearlite surrounded by tempered martensite were formed via the PQFT heat treatment of high-carbon pearlitic steels. The evolution of the microstructure transformation was investigated, and the following results were obtained: (1) The microhardness values at each quenching temperature show a similar, decreasing tendency with increasing tempering temperature. When the quenching temperature was set to 120 °C and isothermal treatment was carried out at 600 °C for 60 s, the multiphase structure showed the highest strength, and the pearlite volume fraction after tempering was the lowest. (2) When the quenching temperature was higher, e.g., at 190 °C, the quenched martensite sheets nucleated via autocatalytic nucleation along the interface and showed an obvious symmetrical morphology. (3) After the heat treatment process, within a nodule containing pearlitic colonies and TM, the crystallographic orientation remained symmetric with increasing quenching temperature. (4) The microhardness of the tempered microstructure decreases with increasing quenching temperature and tempering temperature. In addition, the microhardness decreases according to a negative exponential for tempering times within 60 s.
"Materials Science"
] |
A Compact Two-Dimensional Varifocal Scanning Imaging Device Actuated by Artificial Muscle Material
This paper presents a compact two-dimensional varifocal scanning imaging device, with the capability of continuously variable focal length and a large scanning range, actuated by artificial muscle material. The varifocal function is realized by the principle of laterally shifting cubic phase masks, and the scanning function is achieved by the principle of the decentered lens. One remarkable feature of these two principles is that both are based on lateral displacements perpendicular to the optical axis. Artificial muscle material is emerging as a good choice for soft actuators capable of high strain, high efficiency, fast response speed, and light weight. Inspired by the artificial muscle, a dielectric elastomer is used as the actuator and produces the lateral displacements of the Alvarez lenses and the decentered lenses. A two-dimensional varifocal scanning imaging device prototype was established and validated through experiments to verify the feasibility of the proposed device. The results showed that the focal length variation of the proposed varifocal scanning device is up to 4.65-fold (31.6 mm/6.8 mm), and the maximum scanning angle was 26.4°. The rise and fall times were 110 ms and 185 ms, respectively. Such a varifocal scanning device has the potential to be used in consumer electronics, endoscopy, and microscopy in the future.
Introduction
A compact two-dimensional varifocal scanning imaging system, which exhibits the ability to provide detailed object information and to adjust the area of interest so that the object is centered in the field of view, plays a crucial role in the fields of robotics, aerospace, and biomedicine [1][2][3][4].
Up to now, various methods have been proposed to achieve two-dimensional varifocal scanning imaging. Based on the differences in their operating mechanisms, these methods can be broadly divided into two categories: mechanical and non-mechanical. Mechanical varifocal scanning methods include microelectromechanical systems [5][6][7][8], servo motors [9][10][11][12], piezoelectric elements [13,14], manual movement [15], and external force [16]. However, for the mechanical varifocal scanning methods, the focal length variation is small and the scanning range is difficult to expand [17]. The use of multiple optical components, and the requirement of longitudinal movement of them, often leads to large system sizes and difficulty in achieving high accuracy and rapid varifocal scanning [17][18][19]. In addition, it is difficult to insert external optical components into varifocal scanning systems with a limited working distance. A method of manual-movement actuation of the varifocal scanning device was proposed [20], but the tuning speed and precision could not be scaled to modern imaging applications [21,22]. Moreover, the focusing range and speed of varifocal scanning are constrained by the size of the microlenses and the mechanical characteristics of the substrate. Optical-phased-array technology is a typical non-mechanical varifocal scanning method [23,24]. It has the potential to address some of the issues posed by traditional varifocal scanning methods. However, limited processing technology restricts the scanning angle of optical-phased-array technology, resulting in relatively low scanning efficiency [25][26][27]. A confocal scanning device using the Alvarez-Lohmann lens was proposed; this device axially scans volumetric samples while preserving the locations of the initial point source, as well as that of the detector plane [28]. Therefore, a two-dimensional varifocal scanning element with a compact structure, fast response speed, and large varifocal and scanning angles is highly desirable.
In this paper, we propose a compact two-dimensional varifocal scanning device. The varifocal and scanning functions of the proposed device are realized by the varifocal principle of the Alvarez lenses and the scanning principle of the decentered lens, respectively.
This varifocal concept was rediscovered independently and simultaneously by Alvarez and Lohmann [29][30][31]. Different from the traditional varifocal lens that changes the focal length through the axial shifting of solid lenses, the Alvarez lenses can provide precise and rapid dynamic adjustment of optical power through the lateral displacement of two cubic phase masks [32][33][34]. The Alvarez lenses have recently been regarded as an attractive method to achieve varifocal function rapidly while still maintaining a compact structure [35]. The decentered lens method is a promising option to achieve the scanning function because of its simplicity, which is only composed of two lenses. One remarkable advantage of both the Alvarez lenses and the decentered lenses is that a small displacement perpendicular to the optical axis can realize a large varifocal range and large scanning angle, respectively [36,37]. The traditional methods to actuate the Alvarez lens and the decentered lenses include MEMS-driven units, motor, and manual movement [15]. However, these actuators have some drawbacks, such as small displacement, slow speed, and complex structure, which result in small varifocal and scanning ranges, slow response speed, and bulkiness in size.
Electroactive polymers are a class of materials that exhibit deformation on a large scale under an electric field [38,39]. Within the family of electroactive polymers, dielectric elastomer (DE) is rapidly becoming a preferred choice of soft actuators due to its high strain, energy density, efficiency, response speed, noise-free operation, resilience, and lightweight properties [40]. DE is well known as an 'artificial muscle', and it is suitable as an actuator for the application fields of bio-inspired robots, adaptive optics, energy harvesters, etc. Biomimetics is the process of deriving good design from nature. Benefiting from these distinct advantages of the DE, the Alvarez lenses for varifocal function, and decentered lenses for scanning function are both actuated by artificial muscle in this paper.
The rest of the paper is organized as follows: Section 2 describes the principle of the two-dimensional varifocal scanning imaging device actuated by artificial muscle material. Section 3 presents the design and fabrication process of the varifocal scanning element. The experimental results are presented in Section 4, and Section 5 is the conclusions.
Principle of the Proposed Compact Two-Dimensional Varifocal Scanning Device
As shown in Figure 1, the proposed compact two-dimensional varifocal scanning device comprises four identical decentered lenses, two identical Alvarez lenses, four DEs (artificial muscle materials), and compliant electrodes. The four decentered lenses have a plano-convex shape, i.e., plano-convex lens 1, plano-convex lens 2, plano-convex lens 3, and plano-convex lens 4. Each of the two Alvarez lenses, i.e., Alvarez lens 1 and Alvarez lens 2, has two cubic phase masks. Alvarez lens 1 is composed of cubic phase mask 1 and cubic phase mask 2. Alvarez lens 2 is composed of cubic phase mask 3 and cubic phase mask 4. The cubic phase masks are arranged in tandem, with free-form surfaces facing each other, which realizes the optical power tuning by slightly shifting them relative to each other in a direction transverse to the optical axis. The flat surfaces of the cubic phase masks are opposite to the plane surfaces of the corresponding plano-convex lenses and are mounted in the middle area of the four DEs. The four DEs are divided into two quadrants, and both sides of the two quadrants are coated with compliant electrodes. The DEs of cubic phase mask 1, cubic phase mask 2, plano-convex lens 1, and plano-convex lens 2 are coated with compliant electrodes along the y direction, and those of cubic phase mask 3, cubic phase mask 4, plano-convex lens 3, and plano-convex lens 4 are coated with compliant electrodes along the x direction. When an actuation voltage is applied across one quadrant of the dielectric elastomer through the compliant electrodes, the Coulomb force between free charges on the electrodes generates a Maxwell stress in the thickness direction. The Maxwell stress reduces the distance between the compliant electrodes; thus, the dielectric elastomer expands in the lateral directions, because it is an incompressible material. The relationship between the applied voltage (V) and the Maxwell pressure (p) can be expressed as p = ε0·ε·(V/d)², where ε0 and ε are the vacuum permittivity and the relative permittivity of the DE, respectively, and d is the thickness of the DE. The expansion in the lateral direction enables the lens elements to undergo radially uniform squeezing. Therefore, the decentered lenses and the cubic phase masks can be moved in the lateral directions by applying an actuation voltage to the compliant electrodes of one quadrant of the DEs. By applying different voltages to the different quadrants, the Alvarez lenses realize the varifocal function and the decentered lenses realize the scanning function through movement in the lateral direction. The varifocal principle based on Alvarez lenses is easily understood. Each cubic phase mask of the Alvarez lens has a plane surface and a free-form surface. The free-form surface is described by a cubic polynomial, which, in the standard Alvarez form, can be given by t(x, y) = A(xy² + x³/3) + Dx + E [32][33][34], where A, D, and E are constants to be determined, x and y are the transverse coordinates normal to the z-direction, and t is the phase profile of the Alvarez lens. Different from the traditional varifocal method that is based on mechanical movement along the optical axis, the Alvarez lenses provide an optical power-tuning range through small lateral displacements perpendicular to the optical axis.
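To make the actuation relation above concrete, the sketch below evaluates the Maxwell pressure p = ε0·ε·(V/d)² for representative values. The function name and the permittivity and film-thickness numbers are assumptions (illustrative values for a VHB-type elastomer), not parameters reported in the paper.

```python
# Maxwell pressure of a dielectric elastomer actuator: p = eps0 * eps_r * (V / d)**2
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def maxwell_pressure(voltage, thickness, eps_r):
    """Electrostatic (Maxwell) pressure in Pa for an applied voltage (V)
    across a DE film of the given thickness (m) and relative permittivity."""
    return EPS0 * eps_r * (voltage / thickness) ** 2

# Illustrative values only (assumed, not from the paper):
# 3.6 kV across a pre-stretched film of ~0.25 mm with eps_r ~ 4.7 (VHB-like).
p = maxwell_pressure(3.6e3, 0.25e-3, 4.7)
print(f"Maxwell pressure: {p / 1e3:.1f} kPa")
```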
Assuming the lateral displacement is δ, the focal length of the Alvarez lens (f) can be expressed, in the standard form, as f = 1/(4A(n − 1)δ), where n is the refractive index of the Alvarez lens material. The varifocal function can thus be achieved by applying actuation voltages to the compliant electrodes of the DEs adhered to the cubic phase masks. The scanning principle based on decentered lenses is also easily understood. The incoming collimated wavefront is focused to a point in the back focal plane of the first lens, while the second lens is situated so that its front focal plane coincides with the back focal plane of the first lens. The decentered second lens then re-collimates the exiting light, but the beam is directed to a non-zero steering angle. Based on this principle, the decentered lenses provide a viewing-direction transformation through small lateral displacements perpendicular to the optical axis.
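As a numerical illustration of both principles, the sketch below evaluates the Alvarez relation f = 1/(4A(n − 1)δ) quoted above and a simple decentered-lens steering relation θ = arctan(Δ/f₂). The parameters A = 0.075 mm⁻², n = 1.56 and f₂ = 6 mm are taken from the paper's fabrication section; the displacement values and function names are assumptions for illustration only.

```python
import math

def alvarez_focal_length(A, n, delta):
    """Focal length of an Alvarez pair, f = 1 / (4*A*(n-1)*delta);
    A in mm^-2, lateral displacement delta in mm, result in mm."""
    return 1.0 / (4.0 * A * (n - 1.0) * delta)

def decentered_steering_deg(decenter, f2):
    """Steering angle (deg) of a decentered lens pair: theta = arctan(decenter / f2)."""
    return math.degrees(math.atan2(decenter, f2))

# A = 0.075 mm^-2 and n = 1.56 (NOA83H) from the paper; deltas are assumed.
for delta in (0.2, 0.4, 0.9):  # mm
    print(f"delta = {delta} mm -> f = {alvarez_focal_length(0.075, 1.56, delta):.1f} mm")
print(f"1.5 mm decenter, f2 = 6 mm -> {decentered_steering_deg(1.5, 6.0):.1f} deg")
```

With these assumed displacements, the computed focal lengths (roughly 30 mm down to 7 mm) fall in the same range as the measured values reported below, which is consistent with the lateral-shift principle.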
Because the cubic phase masks and the decentered lenses can be moved by actuating the DEs, the two-dimensional varifocal scanning function can be achieved. The principle is described as follows: in the initial state, the four cubic phase masks are precisely aligned along the optical axis and the four plano-convex lens centers are aligned with the four cubic phase masks, respectively. The flat surfaces of the cubic phase masks are opposite to the plane surfaces of the corresponding plano-convex lenses. In the actuated state, the compliant electrodes of selected quadrants of the DEs are subjected to a controllable actuation voltage; for example, plano-convex lens 1 and cubic phase mask 1 are moved in the x direction, opposite to plano-convex lens 2 and cubic phase mask 2. Therefore, a translated displacement between plano-convex lens 1 and plano-convex lens 2 is generated, and a translated displacement between cubic phase mask 1 and cubic phase mask 2 is produced. According to geometrical optics, when plano-convex lens 1 and plano-convex lens 2 are decentered from the principal optical axis with a translated displacement, objects will be scanned along the x direction. When cubic phase mask 1 and cubic phase mask 2 are decentered from the principal optical axis with a translated displacement, objects will be magnified or demagnified along the displacement direction. Similarly, varifocal scanning in the y direction can be achieved by applying voltages to the compliant electrodes of the DEs of plano-convex lens 3, cubic phase mask 3, plano-convex lens 4, and cubic phase mask 4. Thus, the two-dimensional varifocal scanning function can be achieved.
In order to clearly describe the principle of the two-dimensional varifocal scanning device, four varifocal scanning states are shown in Figure 2. The compliant electrodes of the four quadrants of the DEs are subjected to four actuation voltages (V1, V2, V3, V4) to make the four plano-convex lenses and four cubic phase masks move in two dimensions and realize varifocal scanning. The red areas indicate that the compliant electrodes are active, i.e., the actuation voltage is not zero. As shown in Figure 2a, when the actuation voltage V1 is active, plano-convex lens 1 and cubic phase mask 1 move along the y+ direction, while plano-convex lens 2 and cubic phase mask 2 move along the y− direction. The active actuation voltage V1 makes the proposed varifocal scanning element scan the object in the y− direction with demagnification capacity. Similarly, as shown in Figure 2b, when the actuation voltage V2 is active, plano-convex lens 1 and cubic phase mask 1 move along the y− direction, while plano-convex lens 2 and cubic phase mask 2 move along the y+ direction. The actuation voltage V2 makes the proposed varifocal scanning element scan the object along the y+ direction with magnification capacity. As shown in Figure 2c, varifocal scanning in the x− direction with demagnification capacity can be realized through the active actuation voltage V3. Varifocal scanning along the x+ direction with magnification capacity can be realized through the active actuation voltage V4, as shown in Figure 2d. Hence, the proposed element has the ability of two-dimensional varifocal scanning by actuating the four DEs.
Fabrication of the Proposed Two-Dimensional Varifocal Scanning Device
The fabrication processes of the proposed varifocal scanning device are described in Figure 3. The structure of the varifocal scanning device includes four plano-convex lenses and four cubic phase masks, eight polymethyl methacrylate (PMMA) frames (inner diameter of 38 mm and outer diameter of 42 mm), four DEs (VHB 4905, 3M Company, Saint Paul, MN, USA), copper foils, and compliant electrodes, as shown in Figure 3a. From the architecture of the proposed two-dimensional varifocal scanning device, it is clear that the eight lenses (four plano-convex lenses and two Alvarez lenses) are the key optical elements. Four commercial 6 mm diameter lenses with a 6 mm focal length (GCL-010130A, Daheng Optics, Beijing, China) were employed as the plano-convex lenses. The four cubic phase masks were fabricated through a diamond-turning and replication molding process [41]. The Alvarez lens material was the UV-curable optical adhesive NOA83H (Norland, New York, NY, USA), with a refractive index of 1.56. The parameters of the four cubic phase masks were identical: A = 0.075 mm⁻², D = −0.175, and E = 1 mm in Equation (2).
Firstly, the eight PMMA frames were fabricated by a laser-engraving machine (4060, Ketailaser Company, Liaocheng, China). As shown in Figure 3a, the DE (VHB4905, 3M Company) was sandwiched between two PMMA frames, and the top and bottom sides of local areas of the DE along the x-axis of the cubic phase masks were coated with carbon powder (BP2000, Cabot, Boston, MA, USA) as compliant electrodes. With the help of self-designed fan-shaped masks, the carbon powder was printed onto the two quadrants of the DE using a brush. The DE was biaxially stretched by a factor of 200% to achieve large-strain performance. Secondly, to eliminate the effect of the DEs in the optical path on imaging, a cylindrical base was used to hold the cubic phase mask and the decentered lens in the center, and the DE under the cylindrical base was removed, as shown in Figure 3b. Because the DE (VHB4905) is a strongly adhesive tape, the cylindrical base and the acrylic frames could adhere directly to the VHB4905. The cylindrical base was fabricated using a 3D printer (Raise3D) with a printing accuracy of 0.01 mm. Thirdly, the cubic phase mask was precisely placed into the cylindrical base under a microscope camera (GP-530H, Gaopin Precision Instrument Company, Kunshan, China). The plane of the Alvarez lens faced outside the cylindrical base, and the cubic phase mask faced inside the cylindrical base. The plano-convex lens was also mounted into the cylindrical base, and the plane of the plano-convex lens was aligned to the plane of the cubic phase mask. These components were precisely assembled, as shown in Figure 3c. Fourthly, the same components were fabricated, rotated by 180°, and then combined with the components in Figure 3c to form the unit of the proposed varifocal scanning device that varifocal-scans objects in the x direction, as shown in Figure 3d. Lastly, the fabrication process of the unit of the varifocal scanning device that varifocal-scans objects in the y direction was the same as that in the x direction. The top and bottom sides of the local areas of the DE under the cubic phase mask were coated with carbon powder along the y-axis. The cubic phase masks of this unit, moving in the y direction, could varifocal-scan objects in the y direction. The fabricated structure of the proposed two-dimensional varifocal scanning device is shown in Figure 4.
Experiments and Discussion
The varifocal range is an important parameter for evaluating the varifocal scanning device. To quantitatively assess the varifocal performance, the focal length of the varifocal scanning device at the four states shown in Figure 2 was measured using the magnification method. The experimental schematic is shown in Figure 5a. A biological stomach tissue section was located at a fixed distance of 3.0 mm (D) from the proposed varifocal scanning device as the imaging object and was imaged by the microscope through the proposed varifocal scanning device. By applying an actuation voltage to the compliant electrodes using a voltage-stabilized source (UTP3315TFL-II, UNI-T Company, Dongguan, China), the Alvarez lenses could magnify the object and the decentered lenses allowed for the scanning of the object in different directions, which endowed the proposed device with the capacity for varifocal scanning. By measuring the size of the object in the captured image under different driving voltages, the focal length of the varifocal scanning device was obtained. The driving voltage, generated by the voltage-stabilized source and amplified 1200 times by the high-voltage converter, was applied to the compliant electrodes through copper foils. The focal length (f) was calculated by the following equation: f = DM/(M − 1), where M is the optical magnification of the object [42]. Focal lengths at the four states shown in Figure 2 are shown in Figure 5b,c (see Video S1 in Supplementary Materials). From Figure 5b, we can find that the focal length decreased from 30.1 mm to 8.9 mm (the black line) with an increase in the driving voltage from 0 kV to 3.6 kV when the proposed varifocal scanning device scans the object along the x+ direction with magnification capacity. On the other hand, the focal length decreased from −31.6 mm to −6.8 mm (the green line) with an increase in the driving voltage from 0 kV to 3.6 kV when the proposed varifocal scanning element scans the object along the x− direction with demagnification capacity. From Figure 5c, we can also find that the focal length decreased from 30.1 mm to 8.9 mm (the red line) with an increase in the driving voltage from 0 kV to 4 kV when the proposed varifocal scanning device scans the object along the y+ direction with magnification capacity. The focal length decreased from −31.0 mm to −7.1 mm (the blue line) with an increase in the driving voltage from 0 kV to 3.6 kV when the proposed varifocal scanning device scans the object along the y− direction with demagnification capacity. Therefore, the focal length variation of the proposed varifocal scanning device was up to 4.65-fold (31.6 mm/6.8 mm). The range of scanning is also an important parameter for evaluating the varifocal scanning device. We experimentally measured the varifocal scanning range of the proposed device. A biological stomach section was selected as the imaging object. The distance between the object and the varifocal scanning device was 3 mm (D). Under different actuation voltages on the different quadrants, the varifocal scanning images of the object are shown in Figure 6. From Figure 6, we can find that the object was scanned in two dimensions, including the x-axis (left and right) and the y-axis (top and bottom) directions. The scanning angle was calculated from the displacement distance (l) of the object from the center of the field of view by a simple triangle function, i.e., arctan(l/D). The scanning angle of the device under different applied voltages is shown in Figure 6e.
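A minimal sketch of the two measurement reductions used here: the focal length from the magnification method, f = DM/(M − 1), and the scanning angle from the image displacement, θ = arctan(l/D). The function names and the magnification value below are assumptions for illustration; l = 1.5 mm and D = 3.0 mm closely reproduce the reported maximum angle.

```python
import math

def focal_from_magnification(D, M):
    """Magnification method: f = D*M / (M - 1), with object distance D and magnification M."""
    return D * M / (M - 1.0)

def scan_angle_deg(l, D):
    """Scanning angle from lateral image displacement l at object distance D."""
    return math.degrees(math.atan2(l, D))

print(focal_from_magnification(3.0, 1.2))  # assumed M = 1.2 -> f = 18 mm
print(scan_angle_deg(1.5, 3.0))            # ~26.6 deg, close to the reported 26.4 deg
```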
The scanning angle increased with an increase in the amplitude of the actuation voltages. The displacement of the image was up to 1.5 mm. Therefore, the maximum scanning angle was calculated to be approximately 26.4° under an actuation voltage of 3.6 kV. The results showed that the scanning angle of the device differed slightly when the same voltages were applied to different quadrants of the DEs. This may be attributed to measurement error, misalignment of the Alvarez lenses and the decentered lenses, and non-uniform pre-stretch of the DEs. The response speed of the varifocal scanning device is an important parameter for evaluating its dynamic performance, and it was also tested. A green laser beam (λ = 532 nm) was generated by a laser (MGL-III-532, New Industries Optoelectronics Technology Company, Changchun, China) and focused onto a photodetector (PDA36A-EC, Thorlabs, Newton, NJ, USA) through the proposed varifocal scanning device. The beam was collimated by a beam expander (GCO02501, Daheng Optics, Beijing, China) and passed through a diaphragm with a 5 µm pinhole (GCT-060201, Daheng Optics, Beijing, China) to eliminate stray light. A square wave signal with a period of 1 s, a peak-to-peak amplitude of 2.0 V, and a duty cycle of 30% was amplified by a power amplifier (PA1011, RIGOL Technologies, Suzhou, China) and then applied to the compliant electrode (V1) of the DEs to change the focal length. The variable focal length changed the recorded light intensity and hence the recorded voltage. The experimental result is shown in Figure 7. The rise and fall times were taken as the time from the initially recorded voltage to 90% of the maximum recorded voltage, and from the maximum recorded voltage back to 90% of the initially recorded voltage, respectively [43]. The response time of the proposed varifocal scanning device was obtained from the locally magnified area in Figure 7a.
From Figure 7b, it can be observed that the rise and fall times of the device were 110 ms and 185 ms, respectively. The response time could be further decreased by using a lens material with low density and DEs with a high Young's modulus.
Conclusions
In summary, we present a novel two-dimensional varifocal scanning device that can both change the focal length continuously and scan the object in two dimensions. The varifocal function of the proposed device is realized by the principle of laterally shifting cubic phase masks, and the scanning function is realized by the principle of decentered lenses. Both functions were actuated by artificial muscle material (DEs). The focal length variation of the proposed varifocal scanning device was up to 4.65-fold, with a maximum focal length of 31.6 mm and a minimum focal length of 6.8 mm. The two-dimensional scanning angle of the proposed device was up to 26.4°. The response time was tested, and the results showed that the rise and fall times were 110 ms and 185 ms, respectively.
Author Contributions: C.C.: conception and design of the study, acquisition of data, analysis, and interpretation of data, drafting the article. Q.H. and J.C.: conception and design of the study. Y.X.: analysis and interpretation of data and modification of the article. L.L.: analysis and interpretation of data. Y.C.: conception and design of the study and modification of the article and revision of the article. All authors have read and agreed to the published version of the manuscript.
Conflicts of Interest:
The authors declare no conflict of interest.
"Engineering"
] |
INTEGRATION OF A GENERALISED BUILDING MODEL INTO THE POSE ESTIMATION OF UAS IMAGES
A hybrid bundle adjustment is presented that allows for the integration of a generalised building model into the pose estimation of image sequences. These images are captured by an Unmanned Aerial System (UAS) equipped with a camera flying in between the buildings. The relation between the building model and the images is described by distances between the object coordinates of the tie points and building model planes. Relations are found by a simple 3D distance criterion and are modelled as fictitious observations in a Gauss-Markov adjustment. The coordinates of model vertices are part of the adjustment as directly observed unknowns which allows for changes in the model. Results of first experiments using a synthetic and a real image sequence demonstrate improvements of the image orientation in comparison to an adjustment without the building model, but also reveal limitations of the current state of the method.
INTRODUCTION
The civil market for Unmanned Aerial Systems (UAS) is growing, as UAS are used in a wide range of applications, e.g. in 3D reconstruction for visualization and planning, monitoring, inspection, cultural heritage, security, search and rescue, and logistics. UAS offer a flexible platform for imaging complex scenes. In most applications the knowledge of the pose (position and attitude, exterior orientation) of the sensors in a world coordinate system is of interest. The camera on a UAS can be seen as an instrument to derive pose relative to objects in its field of view. However, as scale cannot be inferred from images alone, a camera is not able to deliver poses in a world coordinate system without the aid of additional sensors or ground control information. In addition, even if robust methods are applied, image-based parameter estimation suffers from accumulating errors caused by uncertain image feature positions that lead to block deformation (called "drift" in the following). Also, the limited payload capability of UAS and cost considerations constrain the selection of positioning and attitude sensors such as GNSS (Global Navigation Satellite Systems) receivers and IMUs (Inertial Measurement Units). As a result, directly measured data for the image pose are typically not accurate enough for precise positioning.
We propose a method to incorporate an existing generalised building model into pose estimation from images taken with a camera on board the UAS. Whereas both the geometric accuracy and the level of detail of such models may be limited, the integration of this information into bundle adjustment is helpful to compensate for inaccurate camera positions measured by GNSS, e.g. in the case of GNSS signal loss when the UAS flies through urban canyons, and for the drift effects of a purely image-based pose estimation. The integration of the building model into bundle adjustment is based on fictitious observations that require object points to be situated on building model planes.
This paper is structured as follows. The next section outlines related work in which a-priori knowledge about the objects visible to the sensor is introduced into the process of pose estimation. Section 3 introduces our scenario and outlines the mathematical model that is used to describe the related entities. Section 4 presents our hybrid bundle adjustment with a focus on fictitious observations, whereas Section 5 contains the overall workflow of sensor orientation. Experiments using synthetic and real data are presented in Section 6, before we conclude and give an outlook on future work in Section 7.
RELATED WORK
Reviews of UAS technology and applications in mapping and photogrammetry are given in (Colomina and Molina, 2014) and (Nex and Remondino, 2014). The integration of object knowledge into image pose estimation and 3D reconstruction processes beyond ground control points (GCP) has been dealt with in various applications and with different motivations. First, there is work on the integration of generic knowledge about the captured objects into bundle adjustment. McGlone et al. (1995) provide the generic mathematical framework for including geometric constraints in bundle adjustment. Based on this work, Rottensteiner (2006) reviews different approaches for that purpose, comparing two different strategies: in adjustment, one can use "hard constraints", involving constraints between the unknowns that will be fulfilled exactly, or "soft constraints" related to observation equations, which can thus be subject to robust estimation procedures for detecting outliers. Consequently, he uses soft constraints to estimate the parameters of building models from sensor data. Gerke (2011) makes use of horizontal and vertical lines to obtain additional fictitious observations as soft constraints in indirect sensor orientation including camera self-calibration.
Digital Terrain Models (DTM) provide knowledge of a scene that is useful in image orientation. Strunz (1993), Heipke et al. (2005) and Spiegel (2007) carry out hybrid bundle adjustment using image observations and a DTM to constrain the heights of object points for improving pose estimation. Geva et al. (2015) deal with the pose estimation of image sequences captured in nadir direction from a UAS flying at a height of 50 m in non-urban areas. Assuming the pose of the first frame to be known, they also derive surface intersection constraints based on DTM heights. Avbelj et al. (2015) address the orientation of aerial hyperspectral images. In their work, matches between building outlines extracted from a Digital Surface Model (DSM) of an urban area and lines in the images are combined in a Gauss-Helmert adjustment process.
Methods for integrating linear features are found in the field of texturing 3D models. Frueh et al. (2004) detect lines in oblique aerial imagery and match them against the outlines of a building model. The matches are used in image pose estimation by exhaustive search. Other authors make use of corner points (Ding et al., 2008) or plane features (Hoegner et al., 2007) for texture mapping of building models. Hoegner et al. (2007) outline two strategies for image-to-model matching: they search for horizontal and vertical edges in the image and use their intersections as façade corner points that are matched to corners of the model. Alternatively, if not enough such vertices are observed in the images, homographies based on interest points that lie in a plane are estimated to orient images relative to façades. Kager (2004) deals with airborne laser scanning (ALS) strip adjustment. He identifies homologous planar patches as tie features in overlapping ALS strips and uses these planar features to derive fictitious observations for the homogenisation of ALS strips. Hebel et al. (2009) find planes in laser scans acquired by a helicopter and match them to a database of planar elements (also from ALS) for terrain-based navigation. Matches are used to formulate constraint equations requiring the two planes to be identical, which are used to estimate the pose parameters.
Line matching between images and building models is also carried out with the direct goal of orientation improvement. Läbe and Ellenbeck (1996) use 3D wireframe models of buildings as ground control, matching image edges to model edges and carrying out spatial resection for the orientation of aerial images. Li-Chee-Ming and Armenakis (2013) improve the trajectory of a UAS by matching image edges to edges of rendered images of a Level of Detail 3 (LoD3) building model and performing incremental triangulation.
In this paper, we incorporate object knowledge in the form of a generalised building model represented by planes and vertices. Instead of matching points, lines or planes directly, we use the object coordinates of tie points reconstructed from an image sequence and assign them to model planes based on a 3D distance criterion. In bundle adjustment, this assignment is considered by fictitious observations of the point distances to the model planes, using a mathematical model that can handle planes of any orientation. These fictitious observations act as soft constraints that improve the quality of pose determination beyond what can be achieved with low-cost GNSS receivers.
MATHEMATICAL MODEL
We address the scenario of a moving camera that observes objects in a multi-view stereo configuration. Knowledge of the captured scene is given in the form of a generalised building model. The building model is represented by its vertices and its faces; the topology is given by a list of the indices of the vertices that belong to each model plane. Figure 1 depicts the relevant entities that we use to describe the building model and the cameras. In order to integrate the building model into bundle adjustment, we relate image coordinates to object points and assign these object points to planes of the building model. Note that there is no need to observe the vertices in the images, which would require solving a complex image interpretation task.
The mathematical model that relates the image coordinates u, v to the parameters of interior and exterior orientation and to the object coordinates X, Y, Z is given by the well-known collinearity equations (Eq. 1):

$$u = u_0 - c\,\frac{r_{11}(X - X_0) + r_{21}(Y - Y_0) + r_{31}(Z - Z_0)}{r_{13}(X - X_0) + r_{23}(Y - Y_0) + r_{33}(Z - Z_0)}, \qquad v = v_0 - c\,\frac{r_{12}(X - X_0) + r_{22}(Y - Y_0) + r_{32}(Z - Z_0)}{r_{13}(X - X_0) + r_{23}(Y - Y_0) + r_{33}(Z - Z_0)} \quad (1)$$

The exterior orientation (pose) of an image is given by the coordinates X₀, Y₀, Z₀ of its projection centre PC and the elements r_ij of a rotation matrix R, which are functions of three rotation angles ω, φ, κ. The coordinates of the principal point u₀ and v₀ (not shown in Figure 1) and the camera constant c are referred to as interior orientation parameters.
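To make the projection model concrete, the following Python sketch evaluates Eq. 1 for a single object point. The rotation order ω, φ, κ used here is one common photogrammetric convention and is an assumption, since the paper does not spell out its angle parameterisation:

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """R built as Rx(omega) @ Ry(phi) @ Rz(kappa); the order is an assumption."""
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, -so], [0, so, co]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def collinearity(P, PC, R, c, u0, v0):
    """Project object point P into the image according to Eq. 1."""
    d = R.T @ (P - PC)          # reduced point coordinates in the camera frame
    u = u0 - c * d[0] / d[2]
    v = v0 - c * d[1] / d[2]
    return u, v
```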
Similar to Kraus (1996), we use a local coordinate system attached to each plane, in which we formulate the fictitious observation equations for points situated on that plane. Six parameters describe the pose of this local plane coordinate system x, y, z: three rotation angles (used to parameterise a 3D rotation matrix R, not shown in Figure 1) and a 3D shift P₀ from the object coordinate system to the local one for each plane. P₀ is initialised in the centre of gravity of the building model vertices of the plane. Initially, the x-y plane of the local system corresponds to the model plane and the z-axis corresponds to the plane normal N.
To describe a plane within such a local system, it is parameterised by two angles α, β defining the direction of the normal and a translation δ along the (local) z-axis (see Figure 2). Using this parameterisation, the relation between a point and a plane is described by the point's orthogonal distance d to that plane following Eq. 2:

$$d = \mathbf{n}(\alpha, \beta)^{T}\, \mathbf{p}^{l} - \delta \quad (2)$$

where pˡ is the object point expressed in the local coordinate system and n(α, β) is the unit plane normal defined by the two angles. Note that whenever the parameters α, β and δ are changed, we use these values to adapt R and P₀, so that after the parameter update the adjusted plane again corresponds to the (slightly shifted and rotated) x-y coordinate plane of the local system.
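A minimal sketch of Eq. 2, assuming a spherical parameterisation of the normal n(α, β) that reduces to the local z-axis for α = β = 0 (the exact angular composition used in the original implementation may differ):

```python
import numpy as np

def point_plane_distance(P, R_j, P0_j, alpha, beta, delta):
    """Fictitious distance observation (Eq. 2) for object point P, given the
    world-to-local frame (R_j, P0_j) of plane j and its parameters."""
    p_local = R_j @ (P - P0_j)   # point expressed in local plane coordinates
    # Unit normal tilted away from the local z-axis by the two angles;
    # at alpha = beta = 0 it equals (0, 0, 1), i.e. the local z-axis.
    n = np.array([np.sin(alpha) * np.cos(beta),
                  np.sin(alpha) * np.sin(beta),
                  np.cos(alpha)])
    # Signed orthogonal distance; the fictitious observation requires d = 0.
    return n @ p_local - delta
```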
HYBRID BUNDLE ADJUSTMENT
We use various types of observations in our adjustment problem:
- image coordinates of homologous points (u, v)
- direct observations (X₀, Y₀, Z₀) for the projection centres of the cameras, obtained from low-accuracy GNSS receivers
- direct observations for the vertices of the building model (X, Y, Z)
- fictitious observations relating the object space coordinates of a tie point to the planes of the building model (d)
- fictitious observations relating the object space coordinates of a vertex of the building model to the planes of the building model (d)

These observations are used as inputs into a Gauss-Markov model to estimate the following unknowns:
- the pose parameters of each image (three rotation angles ω, φ, κ and projection centre coordinates X₀, Y₀, Z₀)
- the object space coordinates of the tie points (X, Y, Z)
- three parameters of each plane of the building model (α, β, δ)
- the object space coordinates of the vertices of the building model (X, Y, Z)

The latter two groups of unknowns reflect the fact that the building model is generalised. Due to the generalisation it is possible that the vertices of the building model do not correspond to real points on the object surface, so they might not be observable in an image. The direct observations of the vertex coordinates relate the estimated planes to the original building model.
Functional Model
The following observation equations are formulated in our model: for each tie point and for each vertex of the building model, one fictitious observation according to Eq. 2 relates the object space coordinates to a plane of the building model. The distance d between the point and the plane is assumed to be zero, i.e. the point is assumed to lie in the plane. For the vertices of the building model it is known exactly which plane they are situated in. In contrast, relations between tie points and model planes must be established first (see section 5).
The three parameters α, β and δ per plane are unknowns in the iterative adjustment. However, the rotation R and the translation P₀ are treated as constants during each iteration. As stated previously, R and P₀ are updated after each iteration using the estimated local plane parameters α, β and δ; α, β and δ are initialised as zero and reset to zero after updating R and P₀ in each iteration.
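One consistent way to realise this update step is sketched below; rot_z_to is a small Rodrigues helper introduced here purely for illustration, and the paper's actual update formulas are not reproduced verbatim:

```python
import numpy as np

def rot_z_to(n):
    """Rodrigues rotation mapping the unit z-axis onto the unit vector n."""
    z = np.array([0.0, 0.0, 1.0])
    v, c = np.cross(z, n), z @ n
    s = np.linalg.norm(v)
    if s < 1e-12:
        return np.eye(3)
    K = np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])
    return np.eye(3) + K + K @ K * (1 - c) / s**2

def absorb_plane_update(R_j, P0_j, alpha, beta, delta):
    """Fold the estimated parameters into the local frame (world-to-local R_j)
    and reset them, so the adjusted plane is again the local x-y plane."""
    n = np.array([np.sin(alpha) * np.cos(beta),
                  np.sin(alpha) * np.sin(beta),
                  np.cos(alpha)])              # adjusted normal, local coords
    dR = rot_z_to(n)
    R_new = dR.T @ R_j                          # new z-axis aligned with n
    P0_new = P0_j + delta * (R_j.T @ n)         # shift origin onto the plane
    return R_new, P0_new, 0.0, 0.0, 0.0         # alpha, beta, delta reset
```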
Stochastic Model
We assume uncorrelated observations and a constant a-priori level of accuracy for each observation type. This leads to a diagonal variance-covariance matrix of the observations (Eq. 3):

$$\Sigma = \mathrm{diag}\left(\sigma_{uv}^2 I,\; \sigma_{GNSS}^2 I,\; \sigma_{VT}^2 I,\; \sigma_{d}^2 I,\; \sigma_{d,VT}^2 I\right) \quad (3)$$

In Eq. 3, the variances of the measured image coordinates are denoted by σ²_uv. The variance of the GNSS receiver measurements is reflected by σ²_GNSS. The variance σ²_VT is related to the accuracy of the coordinates of the building model vertices. For the two groups of fictitious distance observations we introduce different variances, namely σ²_d for tie points and σ²_d,VT for vertices. The vertices are known to lie exactly on their planes; therefore, their fictitious distance observations conceptually must be zero (for numerical reasons we use a small variance σ²_d,VT, resulting in high weights of these observations). On the other hand, the variance σ²_d of the observed distance of tie points to their related planes mainly depends on the generalisation and the accuracy of the building model and has to be selected accordingly.
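The corresponding weight matrix could be assembled as in the sketch below; the group sizes and σ names are placeholders rather than values from the paper:

```python
import numpy as np

def observation_weights(n_img, n_gnss, n_vt, n_tie, n_vtfict,
                        s_img, s_gnss, s_vt, s_tie, s_vtfict, sigma0=1.0):
    """Diagonal Sigma of Eq. 3 and the resulting weight matrix
    P = sigma0^2 * Sigma^{-1}; counts are numbers of scalar observations."""
    var = np.concatenate([
        np.full(n_img,    s_img ** 2),     # image coordinates u, v
        np.full(n_gnss,   s_gnss ** 2),    # GNSS projection centre coordinates
        np.full(n_vt,     s_vt ** 2),      # building model vertex coordinates
        np.full(n_tie,    s_tie ** 2),     # tie-point-to-plane distances
        np.full(n_vtfict, s_vtfict ** 2),  # vertex-to-plane distances (small)
    ])
    return np.diag(sigma0 ** 2 / var)      # high weights for vertex distances
```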
PROCESSING STEPS
Our processing workflow consists of the steps listed in Table 1.
We first derive homologous points and estimate image poses and 3D object point coordinates based on a structure from motion (SFM) pipeline. Subsequently, we run a bundle adjustment including only images for which GNSS observations are available and without considering the building model (step 2). Images having GNSS coverage are assumed to be connected in the sequence (usually at the beginning or at the end of a flight).
Step 1: Image matching and SFM to derive tie points and image poses.
Step 2: Bundle adjustment including only images for which direct observations of the projection centres are available.
Step 3: Establishment of relations between tie points and model planes.
Step 4: Hybrid bundle adjustment including the planes (step 3 is carried out before each iteration).
Step 5: Hybrid bundle adjustment based on the images and planes already used in step 4 and including new images and new tie points (step 3 is carried out before each iteration, only considering planes already used in step 4).
Step 6: Hybrid bundle adjustment based on all images and planes considered in step 5 and new model planes for the points added in step 5 (step 3, considering all model planes, is carried out before each iteration).
Table 1: Workflow of pose estimation.
In step 3 we assign tie points to the planes of the building model on the basis of their estimated 3D positions. Note that both the observations of the image projection centres and the building model vertices must be given in the same coordinate system, here the coordinate system of the GNSS observations. The assignment of a point to a plane is based on a distance criterion.
In our current implementation, a tie point is related to the closest plane, provided that its Euclidean distance from that plane is below a given threshold. This threshold has to be selected in accordance with the accuracy and degree of generalisation of the building model.
Each tie point can add only one fictitious observation: we do not consider tie points to be related to more than one plane at the same time (e.g. points on plane intersections and corners). Only if the distance of a tie point to the nearest plane is below the threshold is the relation considered to be correct and a fictitious observation added to the adjustment. In contrast to the tie points, the relations of the vertices to the planes are known, and each vertex can be related to more than one plane.
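The assignment rule of step 3 can be sketched as follows, with each plane given as a (unit normal, offset) pair so that n · X = d₀ on the plane; checking whether the foot point actually lies inside the face polygon is omitted here:

```python
import numpy as np

def assign_points_to_planes(points, planes, threshold=2.0):
    """Assign each tie point to its closest model plane, keeping the
    assignment only if the point-to-plane distance is below the threshold.
    `planes` is a list of (unit_normal, offset) pairs; a sketch of the
    simple distance criterion described in the text."""
    assignments = {}
    for i, X in enumerate(points):
        dists = [abs(n @ X - d0) for n, d0 in planes]
        j = int(np.argmin(dists))
        if dists[j] < threshold:        # at most one plane per tie point
            assignments[i] = j
    return assignments
```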
In step 4, hybrid bundle adjustment is carried out with the additional observations and parameters for the adjusted planes and the tie points of step 2, as described in section 4. In each iteration, the assignment of the tie points to the planes of the building model used to set up the fictitious observations is recomputed based on the current parameter values (step 3). In contrast, the known relations of vertices to planes are not changed. Note that only planes containing more than a pre-defined minimum number of tie points are considered in the adjustment.
Step 5 is a hybrid adjustment that additionally includes the images having no direct observations of the projection centre; it is carried out to transfer the remaining images into the object coordinate system using the ground control information of the part of the block already utilised in step 4. In step 5, additional model planes are not considered, in contrast to step 6, where the results of step 5 are used to find assignments of the new tie points to those additional model planes. Finally, a hybrid adjustment with all images, including all planes that contain a sufficient number of tie points, is carried out, which delivers the final results of our method.
EXPERIMENTS
In our experiments, we show results achieved both for simulated data and for real images captured by a micro UAS. Both scenarios use a 3D city model with Level of Detail 2 (LoD2) of a part of our campus as ground control information. For both sequences the viewing directions of the cameras are approximately horizontal and orthogonal to both the flight direction and the façades. Both data sets have GNSS coverage for several images at the beginning of the image sequences. Both the GNSS observations and the building vertices are given in WGS84/UTM Zone 32, which, after applying a fixed offset to reduce the number of digits, serves as our world coordinate system. The a-priori standard deviations of all observation types, used to define the stochastic model (cf. Eq. 3), are set in accordance with the expected accuracies: σ_VT reflects the accuracy and the generalisation effects of the vertices of the building model, and σ_d describes the deviation of the model planes due to the generalisation. In step 3 of the processing pipeline, we take fictitious distance observations for points into account only if the point-to-plane distance is smaller than 2 m. The threshold is chosen in accordance with the GNSS accuracy to obtain as many correct assignments as possible with only few outliers. Planes are adjusted only if at least 20 points are assigned to them; we noticed that planes with fewer points have a high probability of being reconstructed incorrectly.
Simulation
For the simulation, a trajectory of 41 images with a length of 190 m along the LoD2 model is simulated. Object points are distributed randomly in the planes of the building model with a density of 0.4 points per m². The points are re-projected into the images to generate image coordinate observations, and random Gaussian noise with a standard deviation of 1 pixel is added to these image coordinates. The positions of the first 10 images serve as simulated GNSS observations for the projection centres; they are contaminated by white noise with σ_GNSS = 2 m.
Figure 3a shows the resulting camera positions and tie points.
The datum is defined by the GNSS observations of the first 10 images, which results in strong deviations of the block relative to the building model (highlighted by red ellipses in the figure). Figure 3b depicts the improvements of the tie point and camera positions after including the model planes that are visible in the images of the first sub-block (step 4). After this step, the datum of the block is defined by both the direct observations of the projection centres and the vertices of the building model. The adjusted points coincide very well with the building model planes. As expected, the changes of the vertices of the building model are in the range of just a few centimetres, due to the fact that the simulated points originally coincided exactly with the model planes. Figure 3c shows the tie points and camera positions of the last part of the simulated image sequence after adding images 11 to 41 to the hybrid adjustment in processing step 5; as ground control is only available for the first part of the sequence, there are considerable deviations of the resulting point cloud from the model. Figure 3d depicts the result after adding model planes to the hybrid adjustment in step 6. The hybrid adjustment is shown to be able to correct the deviations that were present after step 5 by moving cameras and tie points towards the model. The black dots denote estimated tie points, which can be seen to coincide with the walls in comparison to the magenta points estimated in step 5. The blue ellipses highlight areas where this improvement is most obvious.
Figure 4 compares the a-posteriori standard deviations of the estimated 3D tie points with and without considering the plane relations, with the first 10 images and with all images. Between steps 2 and 4, as well as between steps 5 and 6, the tie point precision improves clearly when adjusting the points with the building model; especially the Z-direction shows strong improvements. The relative differences in the precision of the points remain similar, as they depend mainly on the number of images observing a point. Adding images without new planes in step 5 yields higher standard deviations for the new tie points (note that the point indices are not ordered and change from the top figures to the bottom ones). All points are clearly improved by considering additional model planes in step 6, with an estimated precision of the tie points in the order of ±0.2 m.
Real Data
For the acquisition of a real image sequence we used a manually controlled DJI Matrice 100 quadrocopter with a gimbal-stabilised Zenmuse X3 camera. We used the same area as for the simulations, but with a different trajectory. The camera has a fixed focus, a focal length of 3.61 mm and a 1/2.3" CMOS sensor with 4000 × 3000 pixels and a pixel size of 1.5 μm. Images were taken automatically every 2 seconds. The image sequence consists of 183 images with an average ground sampling distance of 6 mm/pixel. On average there was a five-fold overlap, so that each tie point was observed in five images on average. The GNSS device used receives GPS, GLONASS and SBAS satellite signals. The flying height above ground was up to 20 m at the beginning, to obtain good GNSS signals, and about 2 m for the last part of the flight. The surrounding buildings are 4 to 30 m high. Even between the buildings, GNSS signals from at least 5 satellites were received at each camera position. To be able to also test our processing pipeline using images without GNSS coverage, we only considered GNSS observations for the first 110 images of the sequence.
Image distortion was eliminated prior to processing based on the available interior orientation parameters. Processing steps 1 and 2 were carried out using the commercial software Agisoft PhotoScan Pro (http://www.agisoft.com/). In the adjustment of step 2, the GNSS observations for the first 110 images were considered to define the datum. In the subsequent steps, image coordinates exported from PhotoScan were used as observations in our hybrid bundle adjustment (steps 3 and 4); similarly, exported orientation parameters and object point coordinates served as initial values for the unknowns. We only exploit tie points that are observed in at least three images and are considered inliers by PhotoScan. This is done to minimise the number of outliers, as at this stage our adjustment does not yet handle outliers in the observations. After eliminating points as described above, 5400 object points remain in the block.
We show the result for the first part (110 images with GNSS) of the image sequence when considering images and model planes (after step 4) in Figure 5. A comparison of the initial point positions with the building model shows that the distances of most points from their corresponding planes are below 2 m (i.e., below the expected accuracy of GNSS). As GNSS observations are available for all images of this sub-block, georeferencing is good enough to allow for the initialisation of the fictitious observations for our hybrid adjustment, despite the simple distance criterion used for assigning tie points to model planes.
Regarding the corrections to the planes, we often observe ground points being erroneously assigned to wall planes, or planes only partly covered by tie points, which results in adjusted walls that are no longer vertical. Furthermore, tie points on building details not contained in the model due to generalisation, or points on vegetation close to the building, also introduce errors into the parameters of these planes. The profile shown in Figure 5 (right) contains a plane (blue ellipse) that is affected by complex structures not represented in the generalised building model.
We observe several limitations of the current state of the method: there are a few remaining outliers and quite a few points on structures not represented in the building model, and the simple distance criterion leads to wrong assignments of points to planes that our method cannot yet handle adequately.
Figure 6 shows the results achieved after adding the remaining images in step 5 and including additional model planes in step 6; Figure 7 shows the improvement of the a-posteriori standard deviations of the tie points. Whereas the improvement between steps 2 and 4 is relatively small due to the relatively large number (110) of GNSS positions used in the adjustment, the tie points observed only in images without GNSS coverage profit most from the inclusion of the model planes: the corresponding object space coordinates have a standard deviation smaller by a factor of two for points in the range of point indices 3000 to 4000.
CONCLUSION AND FUTURE WORK
The method presented in this paper allows for the integration of a generalised building model into the pose estimation of image sequences captured by an UAS. The building model is integrated by fictitious observations of the distances between tie points and model planes. Points are assigned to model planes on the basis of a simple distance criterion.
Our experiments based on simulated data show that the inclusion of a building model results in a considerable improvement of the precision of the resultant 3D points and in a better alignment of the estimated object points with the model. On the other hand, the experiments based on real data show remaining challenges.
The main problem is finding correct matches between tie points and model planes; our simplistic technique based on a distance criterion proves not to be sufficient. Nevertheless, the adjustment procedure did result in an improvement of the estimated precision of the tie point coordinates.
In our future work we will address the problem of ambiguous assignments between points and model planes. A next step will be the implementation of robust estimation to detect outliers. The matching process between tie points and planes can be improved by considering the estimated precision of the tie point coordinates to adapt the distance threshold for assigning points to planes, replacing the decision by a hypothesis test. Further, to examine the influence of the LoD of the model on the results of our method, experiments with models of different degrees of generalisation will be carried out. Further developments will consist of a proper handling of occlusions, to reduce the number of plane candidates for each tie point, and the integration of a point cloud segmentation to detect planes that are not part of the model.
Figure 1: Relevant entities in our scenario. Two cameras i with identical camera constant c, image coordinate axes (uᵢ, vᵢ), projection centres PCᵢ and three rotation angles (ωᵢ, φᵢ, κᵢ), with i ∈ {1, 2}, represent the multi-view scenario in which the sensors capture an object point P in world coordinates X, Y, Z. The generalised building model is represented by corner points VT_k, with k ∈ {1, 2, …}, in world coordinates and by the planes they are situated in. Each plane j has a local coordinate system (x_j, y_j, z_j), where the local z_j-axis is the plane normal N_j and x_j, y_j are axes in the plane. The origin of the coordinate system of plane j is P₀,ⱼ, and each plane coordinate system is rotated relative to the world coordinate system by three angles that are not shown in the figure. The orthogonal distance of an object point P to a corresponding plane of the building model is denoted by d.
Figure 2: Local plane parameterisation with two angles α, β and a shift δ (bold arrow) along the local z-axis, which is the plane normal N; d: distance of a point P from the plane.
Figure 3: Results of adjustment after different processing steps. Black dots: estimated points; black asterisks: estimated camera positions; magenta: initial positions of points (dots) and camera positions (asterisks), typically the results of the previous step; red crosses: simulated noise-free camera positions. The building model is superimposed on these results. a) Adjustment without planes (step 2); the initial values of the camera positions correspond to the GNSS positions. The red ellipses indicate deviations of the results relative to the building model. b) Adjustment with planes (step 4). c) Adjustment including new images (step 5). d) Adjustment including new images and new planes (step 6). The blue ellipses highlight areas where the results of step 5 differ from the model and the adjusted black points coincide with a wall.
Figure 4: A-posteriori standard deviations of the estimated tie point coordinates in object space from the simulated data after steps 2 (top left), 4 (top right), 5 (bottom left) and 6 (bottom right).
Figure 5: Results of the hybrid adjustment for the real data after step 4 (left: overview; right: profile).
Figure 6: Results of two variants of hybrid adjustment. Black dots: estimated tie points; black asterisks: estimated camera positions; magenta dots/asterisks: initial positions of tie points/camera positions, i.e., results of the previous processing steps; red asterisks: GNSS observations of camera positions. Left: results of the hybrid adjustment with real data after step 5, including new images but no new planes. Right: results after step 6, including all planes.
Figure 7: A-posteriori standard deviations of the tie point coordinates in object space after steps 2 (top left), 4 (top right), 5 (bottom left) and 6 (bottom right).
Lobster Position Estimation Using YOLOv7 for Potential Guidance of FANUC Robotic Arm in American Lobster Processing
The American lobster (Homarus americanus) is the most valuable seafood on Canada's Atlantic coast, generating over CAD 800 million in export revenue alone for New Brunswick. However, labor shortages plague the lobster industry, and lobsters must be processed quickly to maintain food safety and quality assurance standards. This paper proposes a lobster orientation estimation approach using a convolutional neural network model, with the aim of guiding the FANUC LR Mate 200iD robotic arm for lobster manipulation. To validate this technique, four state-of-the-art object detection algorithms were evaluated on a dataset of American lobster images: YOLOv7, YOLOv7-tiny, YOLOv4, and YOLOv3. In comparison to the other versions, YOLOv7 demonstrated superior performance, with an F1-score of 95.2%, a mean average precision (mAP) of 95.3%, a recall rate of 95.1%, and 111 frames per second (fps). The object detection models were deployed on the NVIDIA Jetson Xavier NX, with YOLOv7-tiny achieving the highest fps rate of 25.6 on this platform. Due to its outstanding performance, YOLOv7 was selected for developing the lobster orientation estimation. This approach has the potential to improve efficiency in lobster processing and address the challenges faced by the industry, including labor shortages and compliance with food safety and quality standards.
Introduction
The American lobster (Homarus americanus) industry relies on various transformation processes to ensure the high quality of its products. Quality assurance is crucial in this sector, as Renaud et al. [1] reported that quality factors can fluctuate due to inconsistencies in labor practices and non-existent or unenforced procedures. Moreover, the limited American lobster fishing season imposes time constraints on processing, which are not always met, thus affecting the quality of the final product [1]. Furthermore, the Atlantic Canada Opportunities Agency highlighted a critical labor and skills shortage in Atlantic Canadian businesses, with American lobster processors experiencing production limitations as more workers leave the industry [2]. As a result, many processing plants in New Brunswick struggle to process the volume of American lobster which is caught. Addressing this issue requires expanding automation and implementing vision-guided robots in processing operations [2].
In recent years, there has been significant progress in computer-based vision and deep convolutional neural network (CNN) methods for applications in agriculture [3] and food processing [4], such as detection, recognition, and segmentation. Various studies have demonstrated the effectiveness of object detection techniques, including YOLO and Faster R-CNN, in tackling challenges across food processing and agriculture sectors. These techniques have been applied to various food items, such as carrots, ham, apples, mutton, fruits, shrimp, Nile tilapia, and Atlantic salmon [5][6][7][8][9][10][11][12].
Several other studies have applied deep learning and computer vision techniques to different targets in the food industry, reinforcing the significance of our work.
Robotics-Integrated Vision Systems
In recent years, there has been a growing interest in developing advanced vision systems for robotic applications. This is particularly useful for automating complex tasks such as identifying and manipulating objects of varying shapes and sizes. In a previous study [18], we investigated two distinct vision systems for enabling the FANUC robotic arm in Figure 1 to recognize and locate lobsters. The FANUC vision-based solution, the IRVision system, was assessed using two different tools: the Curved Surface Matching (CSM) Locator and the Geometric Pattern Matching (GPM) Locator.
The GPM Locator is a computer vision technology designed to identify and locate specific geometric shapes or patterns in an image, while the CSM Locator is a computer vision solution used to identify and locate curved surfaces on an object. These solutions are often used in manufacturing and quality control applications, where it is necessary to precisely locate and inspect components or products on a production line. However, the experiments conducted in the previous study showed that both tools exhibit limitations in detection effectiveness and speed: they are not effective for more complex shapes or for objects that do not have a distinct geometric pattern or feature, such as lobsters. In contrast, an object detection model based on the YOLOv4 algorithm showed promising results when implemented on the NVIDIA Jetson Xavier NX.
Building upon this foundation, the present study aims to further explore and enhance the capabilities of the YOLO-based vision system for lobster detection and orientation estimation. In this work, we have combined an object orientation estimation algorithm with the YOLOv7 model to improve the detection and identification of lobster body parts and their spatial orientation, evaluated the performance of this integrated approach, and assessed its effectiveness in accurately detecting lobster body parts while maintaining real-time processing speeds.
Data Pre-Processing
As far as we are aware, there is no open-source American lobster dataset providing representative samples that can be used for developing automated object detection models for lobster position estimation. Continuing our research from [18], 1000 images of cooked lobsters of various sizes were added to the dataset. These images capture variation in lighting conditions and lobster orientation. Furthermore, to avoid the problem of overfitting, the number of images was increased through data augmentation. This paper adopts a data augmentation strategy that includes the following operations (a code sketch follows below):
1. Reorienting angles: randomly rotating images within a specified range (−10 to 10 degrees) to create variations in lobster orientation.
2. Adjusting saturation: modifying the saturation levels in images to simulate different lighting conditions.
3. Flipping images: creating horizontal and vertical flips of the original images to introduce variations in the dataset.
4. Translating: shifting the images horizontally and vertically within a defined range to create positional variations.
With the four augmentation techniques applied to each of the 1300 original images, 5200 additional images were generated, resulting in a total dataset size of 6500 images.
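A sketch of the four augmentation operations using OpenCV; the saturation and translation ranges are illustrative assumptions, and the corresponding bounding-box labels would of course have to be transformed alongside the pixels:

```python
import random
import cv2
import numpy as np

def augment(image):
    """Return one randomly augmented copy, mirroring the four strategies
    described above; a sketch, not the exact pipeline used in the paper."""
    h, w = image.shape[:2]
    # 1. Rotate by a random angle in [-10, 10] degrees around the centre.
    M = cv2.getRotationMatrix2D((w / 2, h / 2), random.uniform(-10, 10), 1.0)
    out = cv2.warpAffine(image, M, (w, h))
    # 2. Scale saturation in HSV space to simulate lighting changes.
    hsv = cv2.cvtColor(out, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * random.uniform(0.7, 1.3), 0, 255)
    out = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
    # 3. Random horizontal (code 1) or vertical (code 0) flip.
    if random.random() < 0.5:
        out = cv2.flip(out, random.choice([0, 1]))
    # 4. Translate within +/-10% of the image size.
    tx = random.uniform(-0.1, 0.1) * w
    ty = random.uniform(-0.1, 0.1) * h
    T = np.float32([[1, 0, tx], [0, 1, ty]])
    return cv2.warpAffine(out, T, (w, h))
```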
As lobsters have one body, one head, one tail (folded or not), and two claws, the class proportions in the dataset are not equal. This uneven distribution could affect training and classification; the data augmentation techniques helped the model learn robust features for each class. Furthermore, the pre-trained YOLOv3, YOLOv4, and YOLOv7 models had been trained on the extensive MS COCO dataset, so they had already learned valuable features for detecting objects by leveraging knowledge from larger datasets.
Real-Time Object Detection
Object detection is a computer-vision task that involves classifying and locating multiple objects in a single image: the model predicts both the class of each object, as in image classification, and the coordinates of a bounding box that fits the detected object.
There are two categories of object detectors. The first category is single-stage detectors, which use a single convolutional neural network to detect objects in images. It includes YOLO (You Only Look Once), a popular object detection algorithm that has seen several versions [19][20][21][22][23], with YOLOv7 being the latest, as well as SSD, the Single Shot MultiBox Detector [24], Focal Loss for Dense Object Detection [25], and DetectoRS, Detecting Objects with Recursive Feature Pyramid and Switchable Atrous Convolution [26]. The second category comprises two-stage detectors, such as Fast R-CNN [27], Faster R-CNN [28], and Mask R-CNN [29]. Through a region proposal network (RPN), this category of models generates regions of interest in the first stage and then sends these region proposals to the second stage for object classification and bounding-box regression. Two-stage models are generally slower than single-stage detectors, which use a single neural network to output classification probabilities and bounding-box regressions.
Lobster Orientation Estimation Approach
In a controlled environment, where the workspace is well-defined with a known relationship between the camera's field of view and the robotic arm's workspace, the 2D bounding box coordinates and lobster orientation estimation provide precise information on the lobster's location and orientation, enabling a robotic arm to move quickly and accurately to pick up and manipulate the lobster for processing. This approach can lead to enhanced efficiency and increased productivity.
As illustrated in Figure 2, we developed an orientation estimation algorithm that combines a convolutional neural network model for object detection with an angle calculation between the detected parts of the lobster. The algorithm proceeds through the following steps (see the sketch after the next paragraph):
a. Output layers are obtained from the trained neural network model for object detection.
b. The algorithm iterates through each output layer, examining every detected object within it.
c. For each object, class scores are computed, and the class with the highest score is identified. This step identifies the specific part of the lobster that the bounding box corresponds to.
d. The detection confidence level is determined. If the confidence level is above a predefined threshold, the bounding box is considered reliable, and its coordinates are extracted.
e. The center point of each bounding box is calculated by averaging the x and y coordinates of the box's corners.
f. Using the center points of the bounding boxes, the angle θ between the line connecting the centers and the horizontal x-axis is computed using the arctan2 function, as shown in Equation (1); this angle represents the orientation of the lobster in the image:

$$\theta = \operatorname{arctan2}(y_2 - y_1,\; x_2 - x_1) \quad (1)$$

where (x₁, y₁) and (x₂, y₂) are the center points of the two bounding boxes.
The described algorithm efficiently narrows down candidate objects based on their class scores and confidence levels, ensuring that only relevant and reliable detections are considered. This filtering step minimizes false positives and maintains the overall performance. Once the bounding box coordinates are obtained, the center point is calculated, and the arctan2 function is employed to determine the angle; it is computationally efficient and provides a complete range of angles from −π to π. While this approach has its merits, it is important to consider that the accuracy and reliability of the estimated orientation are highly dependent on the quality of the object detection model and the precision of the calculated bounding box coordinates.
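The steps above can be condensed into a short sketch; the input layout (class name mapped to confidence and a corner-format box) is a hypothetical interface, not the exact one of the implementation, while "Head" and "Body" are class names taken from the evaluation:

```python
import numpy as np

def box_center(box):
    """Step e: center of a corner-format box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    return np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])

def lobster_orientation(detections, conf_thresh=0.5):
    """Steps c-f for two parts; `detections` maps class name ->
    (confidence, box). Returns the angle of Equation (1) in radians."""
    for name in ("Head", "Body"):
        if detections[name][0] < conf_thresh:   # step d: confidence filter
            raise ValueError(f"unreliable detection for {name}")
    c_head = box_center(detections["Head"][1])
    c_body = box_center(detections["Body"][1])
    v = c_head - c_body
    return np.arctan2(v[1], v[0])               # Equation (1), range (-pi, pi]
```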
In this work, we used YOLOv7. As a member of the YOLO (You Only Look Once) family, YOLOv7 is known for its real-time object detection capabilities, providing both high speed and accuracy in detecting objects within images. This advantage is crucial when working with a robotic arm, as the swift and precise localization of the lobster is required for effective manipulation. Additionally, the features of YOLOv7 are significantly improved compared to its predecessors in terms of performance, resulting in a better detection of small objects and reduced false positives. This enhanced detection quality is essential when estimating the orientation of lobsters as they may vary in size and shape. Furthermore, YOLOv7 handles a wide range of object categories, making it a versatile choice for various applications beyond lobster orientation estimation. By integrating YOLOv7 with the proposed orientation estimation algorithm, we can benefit from its real-time performance, improved accuracy, and versatility, leading to a more reliable and efficient system for guiding a robotic arm to locate and handle lobsters.
As shown in Figure 3, YOLOv7 is based on a single CNN, which is divided into three main parts: the backbone, the neck, and the head. The backbone is responsible for extracting features from the input image; in YOLOv7, the backbone is a combination of a lightweight and a deeper CNN, allowing a balance between accuracy and speed. The neck is responsible for fusing the features from the backbone, providing a higher-level representation of the input image; in YOLOv7, the neck comprises several convolutional and upsampling layers. The head is responsible for predicting the bounding boxes and class probabilities of the objects in the image; in YOLOv7, the head consists of several convolutional and fully connected layers and takes the features from the neck as input to produce the final predictions. YOLOv7 also employs anchor boxes, which are predefined bounding boxes with various aspect ratios, to improve detection accuracy. The model uses a prediction module that estimates class probabilities and bounding box coordinates for each anchor box.
Experimental Setup
Figure 4 shows the experimental environment for this study. Training was conducted on the Ubuntu operating system using an Acer computer equipped with an Intel Core i7-8750H @ 2.20 GHz, an Nvidia GeForce RTX 3060 Ti GPU, and 16 GB of RAM. The models were then deployed and tested on the embedded mobile platform Nvidia Jetson Xavier NX, which provides high artificial-intelligence performance with the power efficiency needed for modern AI networks; the platform is supported by the Nvidia JetPack SDK, which includes the CUDA Toolkit, cuDNN, OpenCV, TensorRT, and L4T with the LTS Linux kernel. Table 1 shows the hardware specifications of the Jetson Xavier NX.
Figure 5 illustrates the steps involved in implementing the object detection models for detecting American lobster parts. Initially, images were manually annotated using LabelImg, an open-source graphical image annotation tool, as demonstrated in Figure 6. The labeling results were saved directly in YOLO format, with a text file accompanying each image and sharing the same name as its corresponding image file. Each line within the text file represents the attributes of a single object (class number, object center in x, object center in y, object width, and object height). Subsequently, the image files were partitioned into two sets: 90% for training and 10% for testing. All text files and image sets were then input into the training process, where transfer learning was employed and the model hyperparameters were fine-tuned. The size of the input images was 640 × 640. The YOLOv7, YOLOv7-Tiny, YOLOv4, and YOLOv3 algorithms, which were previously trained on the MS COCO dataset [30], were retrained using the GPU. Following the training process, performance metrics and visual detections were evaluated to select the best-performing weights. For further testing and evaluation, the trained models were implemented on the Nvidia Jetson Xavier NX platform.
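A minimal sketch of reading one annotation file in the YOLO format described above (one line per object: class number, then center, width and height normalised to [0, 1]):

```python
def read_yolo_labels(label_path, img_w, img_h):
    """Parse a YOLO-format annotation file and return pixel-space boxes
    as (class_id, x1, y1, x2, y2) tuples."""
    boxes = []
    with open(label_path) as f:
        for line in f:
            cls, cx, cy, w, h = line.split()
            cx, cy = float(cx) * img_w, float(cy) * img_h
            w, h = float(w) * img_w, float(h) * img_h
            boxes.append((int(cls),
                          cx - w / 2, cy - h / 2,   # top-left corner
                          cx + w / 2, cy + h / 2))  # bottom-right corner
    return boxes
```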
Model Performance Metrics
The performance of the models was assessed using standard performance metrics commonly employed in object detection tasks [31]. These metrics are crucial for comparing different models and determining their effectiveness at detecting objects accurately. The following section defines some basic concepts used in the calculation of these performance metrics. To compute these values, the Generalized Intersection Over Union (GIOU) score is used to determine whether a detection is correct by comparing the GIOU score to a predefined threshold. The GIOU measures how well the predicted bounding box overlaps the ground-truth bounding box by taking into account the differences in size and aspect ratio between the predicted and ground-truth boxes, in addition to their overlap area and union area, as shown in Figure 7:

$$GIOU = IOU - \frac{C - (A \cup B)}{C}$$

where IOU is the Intersection Over Union score, A is the area of the predicted bounding box, B is the area of the ground-truth bounding box, C is the area of the smallest box that completely encloses both A and B, and (A ∪ B) is the area of the union of A and B. GIOU has shown itself to be more robust than IOU, especially when dealing with small or heavily overlapping objects.
The GIOU score ranges from −1 to 1, where 1 indicates a perfect match between the predicted and ground-truth bounding boxes, 0 indicates no match, and −1 indicates a complete mismatch.
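A direct transcription of the GIOU definition for two axis-aligned boxes with positive area is sketched below:

```python
def giou(box_a, box_b):
    """Generalized IoU of two corner-format boxes (x1, y1, x2, y2);
    returns a value in (-1, 1]."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection area.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    # Union area.
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    iou = inter / union
    # Smallest enclosing box C.
    area_c = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return iou - (area_c - union) / area_c
```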
Once the GIOU score is calculated for all images, the precision, recall, and F1-score metrics can be calculated. Precision, which measures the accuracy of the model in identifying a sample as positive, is computed with the following equation:

$$\text{Precision} = \frac{TP}{TP + FP}$$

Recall, which measures the ability of the model to identify all the positive samples, is computed with the following equation:

$$\text{Recall} = \frac{TP}{TP + FN}$$

High precision means a low false-positive prediction rate, and high recall means a low false-negative prediction rate. Hence, an accurate object detection model should keep a balance between precision and recall. The precision-recall curve is summarised by the Average Precision (AP) metric, evaluated at fixed recall levels over the interval [0, 1.0] with steps of 0.1, according to the 11-point interpolation method proposed by Gerard Salton [32]:

$$AP = \frac{1}{11} \sum_{R \in \{0,\, 0.1,\, \ldots,\, 1.0\}} P_{\text{interp}}(R), \qquad P_{\text{interp}}(R) = \max_{R' \geq R} P(R')$$

This means that rather than using the observed precision at each recall point R, the AP is calculated by taking the maximum precision at a recall that is greater than or equal to R. The F1-score is the harmonic mean of precision and recall; it maintains the balance between precision and recall [33]:

$$F1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}$$

The mean Average Precision (mAP) metric measures the object detector's accuracy over all classes; in other words, the mAP is the average AP over all classes [33]:

$$mAP = \frac{1}{N} \sum_{i=1}^{N} AP_i$$

where AP_i represents the AP of class i and N is the number of evaluated classes. Frames per second (fps) represents the number of images that can be processed per second and provides an evaluation of the detector's speed.
The precision, recall, F1 score, and AP were calculated independently for each class, treating each class as a positive class and the remaining classes as negatives. The overall performance of the model was then calculated by averaging these metrics across all classes.
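The 11-point interpolated AP can be sketched directly from the definitions above, given the precision and recall values of a ranked detection list:

```python
import numpy as np

def average_precision_11pt(precisions, recalls):
    """11-point interpolated AP: mean of max_{R' >= R} P(R') evaluated at
    R in {0.0, 0.1, ..., 1.0}."""
    precisions = np.asarray(precisions, dtype=float)
    recalls = np.asarray(recalls, dtype=float)
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 11):
        mask = recalls >= r
        p_interp = precisions[mask].max() if mask.any() else 0.0
        ap += p_interp / 11.0
    return ap
```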
Results and Discussion
An experimental evaluation of the YOLOv7 model was conducted on the lobster dataset. The results are illustrated in Figure 8, which shows a high degree of detection efficiency for the various target classes. A mean average precision (mAP) of 96.2% was calculated for the model. This high level of precision shows that YOLOv7 is capable of identifying and distinguishing lobster body parts with high accuracy, which is crucial for accurate orientation estimation and manipulation. The YOLOv7 model's accuracy was further assessed using a confusion matrix as a performance metric. As depicted in Figure 9, each column represents the predicted proportions for each class, and each row corresponds to the actual proportions of each class present in the data. As per the data presented in Figure 9, the model demonstrates a high accuracy in predicting the classes "Tail", "Claw", "Head", "Body", "Fore-Claw", and "Folded-Tail", with correct prediction rates of 97%, 98%, 97%, 96%, 96%, and 97%, respectively. This highlights the model's effectiveness in classifying and identifying the various lobster parts and suggests that YOLOv7 is well suited to applications requiring the precise detection and distinction of complex object classes.
As part of this study, the changes in the loss values, including the box loss, the objectness loss, and the classification loss, are presented in graphical format. YOLOv7 uses the GIOU loss as the bounding-box loss function; the box loss is calculated as the mean of the GIOU loss, and a lower box loss value indicates a higher accuracy. The objectness loss measures the difference between the predicted and ground-truth objectness scores, with a lower value indicating a higher accuracy. The classification loss measures the difference between the predicted and ground-truth class probabilities for each object, where a lower value again represents a higher accuracy. As shown in Figure 10, the loss values steadily decrease as iterations increase and eventually stabilize; convergence is achieved after 200 iterations.
Additionally, YOLOv7 was benchmarked against other well-known object detection models, including YOLOv3, YOLOv4, and YOLOv7-Tiny, to demonstrate its effectiveness in detecting lobster body parts. Training and testing were conducted using the lobster dataset. As part of the evaluation process, the performance metrics precision, recall, F1-score, and mAP@0.5 were evaluated. Based on the performance metrics presented in Table 2, it is evident that YOLOv7 outperforms the other object detection models by a considerable margin. According to the results, this model achieved scores of 95.5%, 95.1%, 95.2%, and 95.3%, respectively, for precision, recall, F1-score, and mAP@0.5, demonstrating its superior capability in detecting and identifying lobster body parts compared with the other models. In this study, the YOLOv7 object detection model was found to be the top performer and was therefore chosen for further experiments. On the GeForce RTX 3060 Ti, as shown in Table 3, the YOLOv7 detector achieved an impressive detection speed of 111 frames per second, demonstrating its ability to detect objects at high speed. This is slightly slower than YOLOv7-Tiny's 188.7 fps, but it is important to note that YOLOv7 still delivers a remarkable performance in terms of frame rate; the model can handle real-time applications effectively, even when compared to its faster counterpart, YOLOv7-Tiny.
The models were deployed on the NVIDIA Jetson Xavier NX embedded platform. A comparison of the trained models' inference times is presented in Table 3; to facilitate real-time evaluation, the inference time has been converted from milliseconds to frames per second. On the NVIDIA Jetson Xavier NX, YOLOv7-Tiny achieved 25.6 frames per second in real time. However, YOLOv7 was deemed unsuitable for deployment on mobile devices due to its high computational requirements: as can be seen from Table 3, YOLOv7 only achieved an average frame rate of 8.6, making real-time detection infeasible on the Jetson Xavier NX. In contrast, the real-time performance of YOLOv7-Tiny on the Jetson Xavier NX is quite promising, as it achieved a frame rate of 25.6 frames per second. This indicates that YOLOv7-Tiny may be a more suitable choice for real-time applications on resource-constrained platforms such as the Jetson Xavier NX. Figure 11 summarises the performance on the Jetson Xavier NX in terms of both accuracy (mAP@0.5) and inference time (fps). The evaluation results clearly indicate that YOLOv7-Tiny emerges with the highest score of 105.6, followed closely by YOLOv7 and YOLOv4, which score 103.9 and 97.3 points, respectively. This comparison emphasizes the balance between detection accuracy and speed. In an industrial context, with a distance between the camera and the processing line of about 50-100 cm, and the processing line normally running at about 30 m per second (m/s), the vision system should have a speed of 30 to 60 fps [33]. According to the results of the experiments, the Jetson Xavier NX can achieve real-time performance (25.6 fps) with YOLOv7-Tiny, but not with YOLOv7 (8.6 fps), which requires a high-performance computing device for real-time lobster processing applications.
The visual detection results illustrated in Figure 12 for the YOLOv7 and YOLOv7-Tiny models trained on American lobster images demonstrate that YOLOv7 identified all lobster body parts with high scores, showcasing the effectiveness of the full model for this specific task. YOLOv7-Tiny, a smaller and more compact version of the model, failed to detect some parts of the lobster. This discrepancy can be attributed to the reduced complexity and computational capacity of the YOLOv7-Tiny model, which sacrifices some accuracy for the sake of increased speed and reduced resource consumption. YOLOv7 was ultimately chosen for estimating lobster orientation, leveraging its proven ability to accurately detect lobster body parts. This approach used the center coordinates of the head and body bounding boxes to estimate the orientation: by calculating the relative positions and angle between these two points, the model was able to infer the overall direction in which the lobster was facing. The successful implementation of YOLOv7 in this task can be seen in the results, with the lobster orientation estimation and corresponding output vectors illustrated in Figure 13. These findings demonstrate the efficacy of YOLOv7 not only in detecting lobster body parts but also in extracting valuable information about their spatial orientation. In addition, the model's capability to process and analyze lobster structural details highlights its adaptability to similar challenges in other species or objects with complex morphologies.
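As a concrete illustration of the head/body-center approach, the sketch below computes a facing vector and angle from two detected bounding boxes. The (x1, y1, x2, y2) box format and the function names are assumptions for illustration, not code from this study.

```python
import math

def box_center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def lobster_orientation(head_box, body_box):
    """Facing vector and angle (degrees) from body center to head center."""
    hx, hy = box_center(head_box)
    bx, by = box_center(body_box)
    dx, dy = hx - bx, hy - by  # points from body toward head, i.e., facing direction
    angle = math.degrees(math.atan2(dy, dx))  # 0 deg along +x; image y-axis points down
    return (dx, dy), angle

# hypothetical detections: head above and to the right of the body center
print(lobster_orientation((120, 40, 160, 80), (80, 60, 180, 140)))
```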
Conclusions
This study demonstrated the integration of an orientation estimation algorithm with a convolutional neural network model, specifically YOLOv7, to estimate lobster orientation in images. Through rigorous comparison with other models, including YOLOv7-Tiny, YOLOv4, and YOLOv3, YOLOv7 emerged as the top performer in terms of accuracy and inference time, boasting a mean average precision (mAP) of 95.3% and 111 FPS on the GeForce RTX 3060 Ti. However, when deployed on the NVIDIA Jetson Xavier NX, YOLOv7's performance dropped to 8.6 FPS, rendering it unsuitable for real-time applications on this platform. Nevertheless, the study adopted YOLOv7 for lobster orientation estimation due to its superior performance, with the aim of guiding the FANUC LR Mate 200 iD robotic arm in lobster manipulation tasks within the robot's workspace. This novel approach has the potential to overcome the limitations of FANUC's IRVision system, which previously struggled to detect complex lobster body parts, and paves the way for more efficient and accurate lobster processing in the food industry.
Future work should explore several areas to build upon these findings, including experimenting with the FANUC LR Mate 200 iD robotic arm for lobster manipulation using YOLOv7-based orientation estimation. Optimizing YOLOv7 for deployment on platforms with limited computational resources, such as the NVIDIA Jetson Xavier NX, will help achieve high-quality real-time performance without sacrificing accuracy.
Funding: This research received no external funding.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. | 8,694 | 2023-05-23T00:00:00.000 | [
"Engineering",
"Environmental Science",
"Computer Science"
] |
Impact of Hygiene Intervention Practices on Microbial Load in Raw Milk
1Tropical Infectious Diseases Research and Education Centre, University of Malaya, 50603 Kuala Lumpur, Malaysia. 2Department of Medical Microbiology, Faculty of Medicine, University of Malaya, 50603 Kuala Lumpur, Malaysia. 3Institute of Bioscience, Faculty of Science, University of Malaya, 50603 Kuala Lumpur, Malaysia. 4Economics & Social Science Research Centre, Malaysia Agriculture Research and Development Institute (MARDI), 43400 Serdang, Selangor Darul Ehsan. 5MaxVet Enterprise, 19 GF, Jalan BB 1/5, Taman Banting Baru, 42700 Banting, Selangor Darul Ehsan.
In Malaysia, consumption of fresh milk has increased over the years. This is mainly attributable to increased awareness of the nutritional benefits of milk and dairy, coupled with increased consumer preference for dairy-derived products 1 . With the growing demand for milk and dairy products, food safety becomes of paramount importance in ensuring that milk and dairy products are safe to consume. As such, various programs, such as sustainable dairy farming along with good husbandry practices, have been implemented or recommended 1 to further improve the food processing line from farm to fork within the dairy chain and to obtain higher-quality dairy products for consumption. Detection of foodborne pathogens in food is a critical component of the surveillance system for food safety monitoring.
In Malaysia, agriculture contributed 8.9% to the Gross Domestic Product (GDP) in 2015, with oil palm, livestock, fishing, rubber, and forestry and logging as the main contributors 2 . Local demand for dairy products is largely satisfied by importation, and the country's self-sufficiency in milk demand has increased only from 5% in 2008 3 to 9.3% 4 . Various policies (Malaysia Plans and agricultural and food policies), programs (National Dairy Development Program), and strategies (production, trade, and integration) have been put in place by the Malaysian government to tackle the low local milk production since the early 1970s 5 . These governmental efforts were geared toward increasing local production of milk for self-sufficiency in the country 6 . The Department of Veterinary Services provided a range of services to assist small and medium dairy producers in increasing dairy production and enhancing the marketing of milk 7 . Milk producers sold their milk to government-established Milk Collection Centers based on the quality of the milk (3rd and 4th Malaysia Plans) 3 . The dairy sector in Malaysia continues to face challenges in meeting the demand of growing consumer taste and preference for milk and dairy products. These challenges include a lack of skills and training among small-holder farmers, low breed performance and inadaptability to local environmental conditions, poor dairy farm management and inadequate nutritious feed, and high input and feed costs 1 . Amongst these challenges, poor dairy management is deemed the most critical component in tackling food safety concerns. With the majority of farmers involved in managing dairy farms being small-scale holders 8 , tackling the food safety component becomes even more important and necessary, as the skills required for efficient milk production and technical knowledge of tropical dairy production are still lacking 9 . At the international level, dairy farming has been competitive, and selective breeding of high-yielding cows has also led to higher susceptibility to diseases 10 . Issues with farm management by the veterinary services have been reported, including disease and production constraints, with extension and education programs expected to be part of the integrated services provided by one department. Consequently, the outreach of these services will be limited, and small-holder farmers and medium dairy producers will be left out. Therefore, the skills required for efficient milk production and technical knowledge of tropical dairy production will remain underdeveloped 9 , particularly for small dairy producers, who were the target of the dairy initiative in the first place.
To address this concern, a program on adopting sustainable hygienic dairy practices was carried out to raise awareness within the small-scale dairy farming community in Malaysia on producing safe and quality milk. This paper reports on the impact of this intervention program in the dairy farming community.
Selection of Farms
Dairy farmers were considered for participation in this program based on their feedback during interviews and their motivation to improve their practices. Based on the willingness of the farm owners to allow the research team to visit, one farm was selected for this study. The selected dairy farm was located in the state of Negeri Sembilan, occupying an area of about 0.3 acres surrounded by oil palm plantation. The dairy farm, herein referred to as "Farm X", housed 50 dairy cows of various breeds, and these cows were milked twice a day (once early in the morning and again late in the evening). The cows were visually healthy with no sign of malnutrition and were released for grazing in the morning after milking.
Design of experiment
Collection of milk samples was carried out on Day 1 of the farm visit, observing the routine practices of the farm owner and their workers. Swab samples from the milk collection bucket and samples of the collected milk were taken. On Day 2, farmers were trained on hygienic dairy practices using appropriate sterilizing methods in the pre-milking (cleaning of milking equipment), milking, and post-milking processes (storage and transportation practices). Sampling of the collected milk was carried out at the same time of day to minimize variation in practices and time. During each sampling period, farmers were also interviewed using a structured questionnaire. Samples were processed immediately on site with proper aseptic practices through serial dilution of the milk samples with phosphate-buffered saline and plated on 3M Petrifilm for Staphylococcus aureus, Enterobacteriaceae, E. coli, as well as yeast and mold. Petrifilms were immediately transported to the laboratory for incubation at 37°C for 24 hours and enumerated the following day. Swab samples of the cleaning equipment were collected for laboratory analysis. Farmers and workers were allowed to complete the milking process, and a final pooled sample of milk was obtained for microbiological analysis. Swabs were immediately transported back to the laboratory under chilled conditions and stored at -80°C until further analysis using molecular methods for detection and quantification of microbial contamination.
Detection and Quantification of Contamination from Environment
Bacterial strains used in this study were E. coli ATCC 25922, ETEC (confirmed environmental strain), B. cereus (environmental strain), Salmonella paratyphi (environmental strain), and Vibrio parahaemolyticus (food isolate), all confirmed using PCR. Swab samples were taken from the surfaces of milking equipment (clusters, cups, and milk churns of pooled milk) pre- and post-intervention. Approximately 10 cm² of surface area was swabbed at the farm, transported back to the laboratory, and kept at -80°C until further analysis.
Swab samples were mixed and vortexed in 5 ml of PBS buffer and divided into three dilution levels of MPN tubes containing 1 ml each; a total of nine MPN tubes were used for each sample. A simple DNA extraction was conducted using crude cell lysis 11 . Briefly, bacterial cultures were streaked on nutrient agar and purified by selecting a single colony into brain heart infusion broth, followed by incubation at 37°C for 12-16 hours in a shaker. Post incubation, the cultures were harvested by transferring 1 ml of culture into microcentrifuge tubes and centrifuging at 12,000 ×g for 1 min. The supernatant was discarded, and the pellet was resuspended in 1 ml of 1× TBE buffer (pH 8.0) and vortexed for 30 seconds. Tubes were subjected to rapid heat (100°C) and freeze (-20°C) treatment for 20 minutes each, followed by a final centrifugation at 12,000 ×g for 2 minutes. The supernatant from the crude cell lysis was used as the DNA template for the study. Control cultures were extracted using crude cell lysis, and the DNA was quantified from absorbance readings on a UV-VIS spectrophotometer.
Primers used in this study were those previously described by Wang and Cao 12 . The original nucleic acid amplification protocol for the detection of 13 bacterial species was reduced to five species. Five sets of primers targeting ETEC, Salmonella spp., B. cereus, V. parahaemolyticus, and Escherichia coli were selected and optimized for multiplex MPN-PCR detection and enumeration. The total primer concentration, the magnesium chloride concentration, and the annealing temperature were optimized for this study. The optimized PCR profile was pre-denaturation at 94°C for 15 s; 35 cycles of denaturation at 94°C for 30 s, annealing at 56°C for 15 s, and extension at 72°C for 35 s; followed by post-extension at 72°C for 2 min and a final hold at 45°C for 2 min. The reaction mixture in each PCR tube contained 1× GoTaq buffer (PROMEGA, USA), 3 mM MgCl2, 0.5 mM dNTPs, 300 nM of each primer, 2.5 U of Taq polymerase (PROMEGA, USA), and 2.5 µl of crude DNA lysate in a total volume of 25 µl. Visualization of the PCR product was carried out using an AATI fragment analyzer (Advanced Analytical, USA) according to the manufacturer's protocol.
Analysis of Intervention Improvement using Relative Exposure
Improvement in Good Agricultural Practice at Farm X was assessed using the relative risk associated with the analytical results of the samples analyzed from the farm.
In this analysis, baseline exposure was defined as the representative level of toxins produced by the concentration of bacteria (log cfu/ml) in the milk sample, i.e., the pre-intervention data. Adapting the approach of 13 , which used relative risk estimates for Campylobacter in broiler meat as a microbiological criterion, this study applied relative exposure estimates for Staphylococcus aureus enterotoxin in milk based on the concentration data obtained in the present study. To estimate the level of toxin in the milk samples from the concentration of S. aureus, a constant relation between the toxin production model and cell numbers, developed using milk data by 14 , was used:

Tox = 0.9300751 × C − 6.662092

where Tox is the toxin production (log ng/ml) and C is the number of cells (log cfu/ml).
The relative exposure compares the pre- and post-intervention toxin levels produced by S. aureus in the milk, thereby indicating the effect of the intervention procedure on the quality of the milk. The model was described in 15 and modified to assess the toxin level reduction as a result of the intervention.
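A minimal sketch of this exposure calculation is given below. The cell counts in the example are hypothetical; they are chosen only to illustrate that a reduction of about 0.4 log cfu/ml in S. aureus corresponds to roughly the 2.4-fold reduction in toxin exposure reported in the Results.

```python
def toxin(c_log_cfu_per_ml):
    # toxin production model: Tox = 0.9300751 * C - 6.662092  (log ng/ml)
    return 0.9300751 * c_log_cfu_per_ml - 6.662092

def relative_exposure(c_pre, c_post):
    # ratio of pre- to post-intervention toxin concentrations (inputs in log units)
    return 10 ** (toxin(c_pre) - toxin(c_post))

# hypothetical counts: a 0.4 log-unit reduction gives ~2.4x lower exposure
print(relative_exposure(4.0, 3.6))  # ~2.36
```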
RESULTS AND DISCUSSION
Observation of the farm practices before the implementation of the intervention program occurred on the first day. The medium-sized dairy farm had 25 milking cows, with one to five workers present at the time of the visit. The farm milked the cows twice daily, using calves to briefly suckle the cows, followed by the milking process. The farm workers milked the cows using vacuum milking clusters. Equipment was washed and scrubbed prior to milking using tap water without soap or sanitizer, and after milking with disinfectant. The teats and hoses were not washed between each milking, and milk was collected into one steel churn for the entire farm, which was covered throughout the milking process. The collected milk was then pooled in a refrigerated bulk tank with a temperature monitor at 2-4°C. A sample was collected from this pool of milk as the pre-intervention sample.
In the intervention program, workers were trained to prepare sanitizing solution in hot water (65-70°C) to first rinse the interior and exterior parts of all equipment, followed by a second rinse with plain hot water. The udders and teats were also cleaned with sanitizing solution and a clean towel, with replacement of the solution once it became dirty. Workers were advised to wash their hands with sanitizing solution and wipe them dry prior to milking the cows. After the milking process, an additional disinfectant on the teats, such as Alfadin™, was recommended before allowing the cows out to graze. Milk should be stored at cold temperature or delivered to the vendor within 1 hour post-milking.
The results of the microbiological quality of milk are shown in Table 1. The aerobic plate count did not exceed the limits set by the Department of Veterinary Services, Malaysia for the price incentive 16 . Log reductions in S. aureus, Enterobacteriaceae, total coliforms, and mold were observed post-intervention. According to the ICMSF guideline, mesophilic aerobic microorganisms are generally used as an index of utility, or as indicators of general contamination, shelf life, or spoilage, and are not usually related to a health hazard. Although the product is not meant for international trade, this count can be used for verification of hygiene programs. The observed increase in the log percentage post-intervention (17%) shows that there may be other contributing factors, e.g., the environment, to contamination during the milking process that were beyond the control of the trainers. Using Enterobacteriaceae as indicators of the history of the hygiene of the food production process 17 , the 4% reduction shows that the hygiene practices per se that were emphasized in this intervention program could not, by themselves, deter contamination of the milk. To identify the source of contamination, swab samples collected from the milking equipment were tested semi-quantitatively using MPN-PCR. The detection limit for each targeted pathogen in this multiplex assay was 88.5 ng/µl of DNA template for Salmonella spp., 56 ng/µl for V. parahaemolyticus, and 0.24 ng/µl of template for B. cereus, ETEC, and E. coli. Swab samples collected from the equipment, milk, and clusters showed an absence of ETEC, Bacillus cereus, and V. parahaemolyticus. However, two swab samples, taken at the cluster of the milking equipment pre- and post-intervention, were positive for Salmonella spp., showing low-level contamination of less than 100 cfu/ml (8 and 23 MPN/ml, respectively). This finding again suggests, as reported above, that the hygiene practices emphasized in this intervention program could not, by themselves, deter contamination of the milk. The persistence of S. paratyphi in the clusters after the intervention, i.e., after sanitizing solution and hot water were applied to the interior and exterior followed by a hot-water rinse, may imply biofilm adherence to the interior part of the equipment where the milk flows. Therefore, future training related to hygiene practices should include extending the soaking and rinsing period, particularly for the interior part of the clusters, to tackle this contamination.
From the food safety perspective, the S. aureus log reduction of 40% was used for further characterization of the intervention program in terms of relative exposure, based on the toxin levels that may occur in the milk sample. This analysis showed that the hygiene practices emphasized in the training reduced the exposure to toxin in the milk samples by up to 2.4× compared with the pre-intervention samples. During the study, all collected milk was pooled and stored in a refrigerated tank with a temperature monitor at 2-8°C at all times. Since predictive studies of S. aureus have shown that low temperature has a strong inhibitory effect on growth rate 18 , the growth of nonpathogenic and pathogenic bacteria, as well as toxin production by S. aureus, was assumed to be negligible during storage.
CONCLUSION
The effectiveness of the intervention program focusing on hygienic dairy practices, coupled with science-based evidence, can be considered good. This is demonstrated by the low contamination of milk detected in the microbiological analyses, which can be further reduced by increasing the soaking and rinsing period for the clusters of the milking equipment. The intervention program was also effective in reducing risk factors concerning the growth of nonpathogenic and pathogenic bacteria. The findings of this study demonstrate that dairy hygiene practices should be stressed in training programs for dairy farmers in order to tackle food safety concerns while increasing local milk production. Additionally, as with other training programs, future dairy hygiene training programs should consider enhancing the scope of the training so as to increase its efficacy and effectiveness, i.e., by imparting science-based evidence and solutions.
Table 1. Quantification of microbial analyses using Petrifilm™ of milk samples collected pre- and post-intervention at Farm X | 3,626 | 2017-09-30T00:00:00.000 | [
"Materials Science"
] |
Decomposition-Free Al2TiO5-MgTi2O5 Ceramics with Low Thermal Expansion Coefficient
Solid solutions of (1 − x)Al2TiO5-xMgTi2O5 (x = 0-1) doped with alkali feldspar were prepared. Thermal decomposition depends strongly on the feldspar doping as well as on the x value. Decomposition-free ceramics that withstood over 500 hours of heat treatment at 1100 ̊C in an ambient atmosphere could be obtained for the feldspar-doped ceramics at x > 0.5, with a fracture strength of 33-40 MPa and a coefficient of thermal expansion of 2.4-4.1 × 10⁻⁶ K⁻¹. A partial decomposition was observed for a compositional range around x = 0.75. Both the composition of the solid solution and the addition of the alkali feldspar contributed synergistically to improving these thermal and mechanical properties. The decomposition-free Al2TiO5-MgTi2O5 ceramics are expected to serve a variety of high-temperature applications, including diesel particulate filters.
Introduction
Most ceramics for high-temperature applications, such as refractories and ceramic filters, are damaged by rapid temperature changes, which largely reduces their usability and productivity because the heating and cooling rates must remain below the critical rates for thermal shock [1]. In general, it takes several to tens of hours just to bring the furnaces or devices to the required temperatures. Different degrees of thermal expansion along the thermal gradient in the ceramics cause such thermal shocks; therefore, low-expansion ceramics are widely studied to reduce energy usage and to improve productivity.
Aluminum titanate (AT) ceramic is known as an excellent thermal-shock-resistant ceramic [2]. Therefore, many applications as refractory materials have been expected for AT ceramics. The use of AT ceramics as refractories is, however, restricted because of their low fracture strength and thermal decomposition in the temperature range from 800˚C to 1280˚C. Many attempts to improve the thermal stability have been made so far [3][4][5][6][7][8][9][10][11][12][13]. The present authors have reported that AT ceramics doped with alkali feldspar ((Na_y,K_1−y)AlSi3O8, y = 0.5-0.8) exhibited not only a low thermal expansion coefficient comparable to non-doped AT ceramics but also high thermal stability, high refractoriness, and relatively large fracture strength [14]. However, even with the feldspar-doped AT ceramics, prolonged thermal treatment in the decomposition temperature range over several hundred hours results in complete degradation into alumina and titania.
It has been reported that MgTi2O5 (MT) shows excellent resistance to thermal decomposition [15]. However, its thermal expansion coefficient is reported as ~5 × 10⁻⁶ K⁻¹, which is much larger than that of AT ceramics. Attempts have been made to reduce the thermal expansion coefficient without sacrificing thermal stability by forming AT-MT solid solutions [16][17][18][19][20]. Hereafter, the AT-MT ceramic is abbreviated as MAT in this article. Prolonged heat treatment for over several hundred hours at 1200˚C-1400˚C eventually induces complete decomposition of MAT ceramics. Accordingly, decomposition-free MAT ceramics with a low thermal expansion coefficient and high mechanical strength have been strongly desired. In the present study, the structure and properties of feldspar-doped MAT ceramics were investigated, and the effects of the feldspar addition on the improvement of mechanical strength and thermal stability are discussed.
Experimental Procedure
Starting powders of MgCO3 (Kamishima, Japan), TiO2 (rutile, Sakai Kagaku, Japan), and Al2O3 (corundum, Sumitomo Chemical, Japan) were weighed in a molar ratio of (1 − x)Al2TiO5-xMgTi2O5 (x = 0-1), and 4 wt% of alkali feldspar (Fukushima Choseki, Japan) was added to the mixture. The chemical composition of the alkali feldspar used in the present study was determined as (Na0.6,K0.4)AlSi3O8 by X-ray fluorescence analysis (Rigaku ZSX-100e, Rh source). Hereafter, (1 − x)Al2TiO5-xMgTi2O5 is abbreviated as MAT100x (x value in %, indicating the fraction of the MT component), and the feldspar-doped MAT ceramic as f-MAT100x. For example, f-MAT25 denotes the feldspar-doped 0.75Al2TiO5-0.25MgTi2O5 composition. Alkali feldspar is simply called feldspar for convenience. The mixture of the starting reagents with 0.5 wt% of the peptizer Aron A-6114 (Toa Gosei, Japan) and 30 wt% water was placed in an alumina pot with alumina balls and mixed for 5 hours using a planetary mill (Fritsch Pulverisette 5, Germany). After drying, the mixture with 10 wt% of a binder, M30 (Kyoeisha Kagaku, Japan), was molded into 10 × 10 × 40 mm bars under a pressure of 60 MPa and dried at 180˚C for 2 hours. The molded samples were calcined at 340˚C for 4 hours and successively at 700˚C for 2 hours to remove organic matter completely. The samples were then sintered at 1500˚C for 2 hours to form the MAT ceramics.
The degree of decomposition was estimated by the method reported in Ref. [14]. SEM-EDX analysis of f-MAT75 was carried out with a JEOL 6500F at 15 kV acceleration voltage. Three-point flexural strengths of the MAT and f-MAT ceramics were evaluated with a Minerva TG-10kN testing machine according to JIS-1601; the values were estimated by averaging five measurements, with an experimental error below 10%. Porosities were estimated by the conventional Archimedean method according to JIS R2205-74, with kerosene as the liquid medium; the values were estimated by averaging three measurements, with an error below 5%. Thermal expansion coefficients (CTE) of the ceramics were measured with a thermomechanical analyzer (Rigaku TMA 8227, Japan); the CTE value was estimated from the difference in sample length between 20˚C and 800˚C. Formation temperatures of MAT0, MAT50, f-MAT0, and f-MAT50 were measured with a differential scanning calorimeter (Rigaku DSC 8270, Japan).
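For reference, the mean linear CTE implied by this procedure is alpha = ΔL / (L0 × ΔT). A minimal sketch with hypothetical sample lengths:

```python
def mean_linear_cte(l_20c_mm, l_800c_mm):
    # alpha = dL / (L0 * dT) between 20 and 800 degrees C
    return (l_800c_mm - l_20c_mm) / (l_20c_mm * (800.0 - 20.0))

# a 40 mm bar that expands by 0.094 mm gives ~3.0e-6 K^-1 (hypothetical values)
print(mean_linear_cte(40.000, 40.094))
```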
Physical Properties of the Feldspar-Doped AT-MT Ceramics
Figure 1 shows the X-ray diffraction patterns of the MAT50 and f-MAT50 ceramics. The precipitated crystalline phases in all the prepared ceramics are assigned to MAT with a pseudobrookite structure (space group Cmnm) [14,16]. Small diffraction peaks due to a corundum impurity were observed in the XRD patterns of all the ceramics obtained. The coefficient of thermal expansion (CTE) from 30˚C to 800˚C, the fracture strength estimated by the three-point bending test, the porosity, and the AT-MT formation temperature of the representative compositions are summarized in Table 1. Although the CTEs are independent of the feldspar doping, the fracture strength and porosity depend strongly on it: the fracture strength increased almost twofold and the porosity decreased.
It has been reported that the formation temperature of the AT phase is decreased by the feldspar addition [14], as also shown in Table 1. The decrease of the formation temperature was explained by the entropic contribution of the liquid-phase feldspar present in the formation temperature range, resulting in denser and thermally more stable AT ceramics formed through liquid-phase sintering. On the other hand, the formation temperatures of the MAT50 and f-MAT50 ceramics are almost identical irrespective of the feldspar addition. This is explained by the formation temperature of the MAT50 ceramic (~1225˚C) being closer to the melting temperature of the feldspar (1130˚C) than that of the AT ceramic. Therefore, the effect of liquid-phase sintering for f-MAT ceramics would be smaller than that for feldspar-doped AT ceramics. This is consistent with the decrease of porosity of the f-MAT ceramics being smaller for higher MT ratios than for lower MT ratios.
Thermal Decomposition Behavior of MAT Ceramics Doped with Feldspar
Degrees of decomposition of MAT ceramics after heat treatment at 1100˚C for 500 hours are shown in Figure 2.
The thermal decomposition is most severe at 1100˚C, where modified AT ceramics are completely decomposed into alumina and titania in less than several tens of hours [14]. As shown in Figure 2, the MAT ceramics (without feldspar addition) showed almost complete decomposition below x = 0.8 (MAT80). On the other hand, f-MAT ceramics with a large MT ratio beyond x = 0.5 (MAT50) exhibited excellent thermal stability against the heat treatment. A partial decomposition was observed around f-MAT75; however, even the f-MAT75 ceramic retained 70% of its original phase after the heat treatment. We do not have a clear explanation for this behavior; it may be related to cation-site order-disorder. It has been reported that the cation ordering between the two kinds of cation sites in the pseudobrookite structure is related to the mechanical and thermal properties. At the f-MAT75 composition, exactly half of the cation sites are occupied by Al ions. This compositional regularity may affect the site ordering and thus the entropic stabilization.
Effect of Feldspar Addition
At larger MT ratios, the decrease of the formation temperature reduces the upper limit of the decomposition temperature range. Beyond x = 0.85, the ceramics exhibited excellent thermal stability irrespective of the feldspar addition. In the medium composition range from x = 0.5 to x = 0.85, on the other hand, the thermal stability was improved greatly by the feldspar addition. Figure 3 shows the SEM-EDX images of the fractured surface of the f-MAT75 ceramic. Si is found to be accommodated at the triple points of the ceramic grains and at grain boundaries. The glassy phase on the grain surfaces may suppress the nucleation of alumina at the grain boundaries, which reduces the thermal decomposition of f-MAT ceramics with larger MT ratios. This is the most plausible explanation for the improved thermal stability of the f-MAT ceramics. The glassy phase also contributes to reducing the pore volume and to the improved mechanical properties of the f-MAT ceramics.
Conclusion
Feldspar-doped (1 − x)Al2TiO5-xMgTi2O5 (x = 0-1) ceramics were prepared and showed excellent thermal stability under prolonged heat treatment at the decomposition temperature. Decomposition-free (1 − x)Al2TiO5-xMgTi2O5 ceramics could be obtained for compositions with x > 0.5. A partial decomposition was observed for f-MAT ceramics with an MT ratio around 75%; even for these ceramics (~MAT75), a great improvement of the thermal stability was realized. The thermal expansion coefficients of these ceramics were low, comparable to that of AT ceramics. The mechanical properties were also improved by the feldspar addition.
Figure 1. X-ray diffraction patterns of MAT50 and f-MAT50.
Figure 2. Degree of decomposition of MAT ceramics with and without feldspar addition after heat treatment at 1100˚C for 500 hours.
Figure 3. SEM-EDX images of the fractured surface of the f-MAT75 ceramic: (a) SEM image; EDX mapping images for (b) O; (c) Mg; (d) Al; (e) Si; and (f) Ti (white: larger amount of the element).
"Materials Science"
] |
Preparation of monolayers of [MnIII6CrIII]3+ single-molecule magnets on HOPG, mica and silicon surfaces and characterization by means of non-contact AFM
We report on the characterization of various salts of [MnIII6CrIII]3+ complexes prepared on substrates such as highly oriented pyrolytic graphite (HOPG), mica, SiO2, and Si3N4. [MnIII6CrIII]3+ is a single-molecule magnet, i.e., a superparamagnetic molecule, with a blocking temperature around 2 K. The three positive charges of [MnIII6CrIII]3+ were electrically neutralized by use of various anions such as tetraphenylborate (BPh4-), lactate (C3H5O3-), or perchlorate (ClO4-). The molecule was prepared on the substrates out of solution using the droplet technique. The main subject of investigation was how the anions and substrates influence the emerging surface topography during and after the preparation. On HOPG and SiO2, flat island-like and hemispheric-shaped structures, respectively, were created. We observed a strong correlation between the electronic properties of the substrate and the analyzed structures, especially in the case of mica, where we observed a gradient in the analyzed structures across the surface.
The strongest interaction is the antiferromagnetic coupling of the central Cr III ion with the six terminal Mn III ions, which results in a spin ground state of the molecule of S t = 21/2. This high-spin ground state, in combination with a strong easy-axis magnetic anisotropy and a C 3 symmetry, results in an energy barrier for spin reversal, which leads to a slow relaxation of the magnetization at low temperatures (single-molecule magnetism behavior, i.e., molecular superparamagnetism [10,11]). [Mn III 6 Cr III ] 3+ has a blocking temperature around 2 K [6,7]. Recent experimental spin-resolved photoemission results on the [Mn III 6 Cr III ] 3+ single-molecule magnet (SMM) [12], X-ray magnetic circular dichroism (XMCD) on an adsorbed Fe-SMM [13], and a cross-comparison between spin-resolved photoemission and XMCD in Mn-based molecular adsorbates have been published elsewhere [12]. The three positive charges of [Mn III 6 Cr III ] 3+ can be neutralized by various anionic counterions. Herein, the three salts [Mn III 6 Cr III ](BPh 4 ) 3 , [Mn III 6 Cr III ](C 3 H 5 O 3 ) 3 , and [Mn III 6 Cr III ](ClO 4 ) 3 were investigated, using as anions either tetraphenylborate (BPh 4 -), lactate (C 3 H 5 O 3 -), or perchlorate (ClO 4 -), respectively. Being able to choose between three different anions for the same core compound allowed us to study the influence of the anions on the whole molecule-substrate system. Investigation in this regime is best done via non-contact atomic force microscopy (AFM) [14,15]. Because [Mn III 6 Cr III ] 3+ simply physisorbs onto the surface, the use of non-contact (nc-)AFM allows us to observe the molecule with a reduced risk of manipulating it during the measurement. Of special interest are the thin layers of [Mn III 6 Cr III ] 3+ and whether these layers are crystalline or amorphous [16][17][18][19].
Experiment
Preparation was carried out in air at room temperature (21 ± 1°C) and at an air humidity between 40% and 60% via the droplet technique, using 10 μl of solution at a concentration of 10⁻⁵ mol/l. The selected concentration and amount of solution, i.e., the number of molecules, was sufficient for the creation of approximately one monolayer. During preparation the sample was held at an angle of 57°, which led to more homogeneous wetting. Substrates (10 × 10 mm²) were affixed onto Omicron carriers (Omicron NanoTechnology GmbH, Taunusstein, Germany).
The surface topography of the samples was analyzed by means of non-contact atomic force microscopy in ultra-high vacuum (UHV) (Omicron UHV-AFM/STM). The pressure of the vacuum chamber was approximately 10 -7 Pa and the measurements were taken at room temperature.
We used silicon non-contact cantilevers (NSC15, MikroMasch, San Jose, CA, USA) with a resonance frequency of approximately 325 kHz. The microscope was operated at a frequency shift between 20 and 80 Hz below the vacuum resonance frequency.
Image fields up to 720 × 720 nm² were recorded with a scan speed of approximately 350 nm/s and 300 lines per image. Standard image processing was performed using a polynomial background correction by means of Gwyddion (version 2.19) and SPIP (version 5.0.6), in order to flatten the image plane.
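A generic sketch of such a polynomial background correction (fit a low-order 2-D polynomial to the height map and subtract it) is shown below; it is not the routine used in Gwyddion or SPIP, and the polynomial order is an illustrative choice.

```python
import numpy as np

def flatten_polynomial(z, order=2):
    """Subtract a 2-D polynomial background from an AFM height map z (2-D array)."""
    ny, nx = z.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    # design matrix with all monomials x**i * y**j for i + j <= order
    cols = [(x ** i * y ** j).ravel()
            for i in range(order + 1) for j in range(order + 1 - i)]
    a = np.stack(cols, axis=1)
    coeffs, *_ = np.linalg.lstsq(a, z.ravel(), rcond=None)
    background = (a @ coeffs).reshape(z.shape)
    return z - background
```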
The X-ray photoelectron spectroscopy measurements were recorded using a PHI 5600ci multitechnique spectrometer (Physical Electronics, Chanhassen, MN, USA) with monochromatic Al K α (hν = 1,486.6 eV) radiation of 0.3 eV FWHM bandwidth. The sample was kept at room temperature. The resolution of the analyzer depends on the pass energy; during these measurements, the pass energy was 187.85 eV, giving a resolution of 0.44 eV. All spectra were obtained using a 400 μm diameter analysis area. During the measurements, the pressure in the main chamber was kept within the range of 10⁻⁷ Pa.
The samples were oriented at a surface-normal angle of 45° to the X-ray source and -45° to the analyzer for all core-level X-ray photoelectron spectroscopy (XPS) measurements.
HOPG
On HOPG, [Mn III 6 Cr III ](BPh 4 ) 3 forms flat, island-like structures with a height of about 2 nm. These structures appear in sizes from 10 nm diameter up to several hundred nanometers, with some covering nearly the whole scanned area. Two main structures can be distinguished:
The first and more common type of structure is shown in Figure 2. The islands cover approximately 30% of the surface and are mostly attached to an atomic step of HOPG. At the atomic step, an agglomeration of [Mn III 6 Cr III ](BPh 4 ) 3 with an average height of 2.2 nm occurs. The islands also show a height of 2.2 nm. It is not clear whether this corresponds to one layer of the stacking or two layers of [Mn III 6 Cr III ](BPh 4 ) 3 . The coverages can be divided into three groups:
1. Free islands without any lateral contact. These most often tend to appear in a circular shape.
2. Islands attached to a step edge. These again tend to form a circle-like structure but are hindered by the edge. The islands do not continue on the other side of the edge but appear cut off. No tendency can be seen as to whether these cut islands appear more often on the upper or lower side of the step edges.
3. Agglomerations along the step edges, with no preference for upper or lower step edges.
The second type of structure formed by [Mn III 6 Cr III ](BPh 4 ) 3 is shown in Figure 3, where 95% of the whole area is covered with molecules. Two layers can be seen; the upper layer covers 23% of the surface. The layer thicknesses were estimated from the histogram of heights by Gaussian fits. The lower layer shows a height of 2.1 nm (see Figure 3c), while the upper layer is about 1.1 nm high and shows a higher rms roughness. Although the coverage of the area is nearly complete and even a second layer emerges on top of the first one, holes with diameters from 20 to 50 nm can be seen in the film. Because of the decreased roughness in these holes, which becomes visible in the frequency-shift image (Figure 3), we expect the bare substrate to be exposed within the holes.
Mica
On mica with [Mn III 6 Cr III ](BPh 4 ) 3 , a stronger influence of the preparation is visible in the form of a structural gradient. The gradient runs horizontally across the surface; whether there is also a vertical gradient is unknown, owing to the limitations of the experimental setup. We divided this gradient into three stages:
1. In Figure 4 (left-hand side), 9.8% of the area was covered by 316 [Mn III 6 Cr III ](BPh 4 ) 3 particles. The average particle size was 11.9 nm, at 161 nm².
2. In Figure 4 (center), moving along the gradient, the number of particles dropped to 68, covering 8.4% of the surface. The mean particle size increased by a factor of 2 to 23.4 nm, the area per particle rose to 640 nm², and the particle height reached 1.1 nm.
3. In Figure 4 (right-hand side), [Mn III 6 Cr III ](BPh 4 ) 3 forms larger structures. The number of particles did not change, while the covered area rose to 17.1% and the average particle size reached 30.3 nm at 1270 nm². Again, the height of the particles reached 1.1 nm, leading to the conclusion that the gradient influences the covered area only and not the thickness of the layers.
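Coverage fractions, particle counts, and mean particle areas of this kind typically follow from a threshold-and-label analysis of the height map. The sketch below is a generic illustration, not the analysis code used here; the threshold and pixel size are hypothetical inputs.

```python
import numpy as np
from scipy import ndimage

def particle_stats(height_nm, threshold_nm, pixel_area_nm2):
    """Count particles, surface coverage (%), and mean particle area (nm^2)."""
    mask = height_nm > threshold_nm          # pixels belonging to particles
    labels, n = ndimage.label(mask)          # connected-component labeling
    coverage = 100.0 * mask.mean()
    if n == 0:
        return 0, coverage, 0.0
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1)) * pixel_area_nm2
    return n, coverage, float(areas.mean())
```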
Silicon (SiO 2 , Si 3 N 4 )
We observed no difference between the investigated silicon-based materials, SiO2 and Si3N4. Furthermore, we used SiO2 oxide layers of different thicknesses (200 and 500 nm) without any significant change. Large clusters appear, with heights from 10 to 100 nm; even higher clusters may exist, but these exceed the capabilities of the AFM in use. Clusters with a height of about 55 nm showed diameters of up to 130 nm, and clusters with a height of 80 nm showed diameters of nearly 300 nm, as shown in Figure 5. In general, the clusters have a hemisphere-like form. In contrast to HOPG or mica, there are almost no small particles in between the bigger ones.
Influence of the anions
Switching the anion to lactate on HOPG changes the emerging structures compared with those created with [Mn III 6 Cr III ](BPh 4 ) 3 . No islands are visible; instead, the whole surface appears to be coated. It was not possible to measure the height of this film because there were no trenches or other marks which would have allowed such an analysis. Given the absence of islands, it is likely that the film is neither ordered nor a well-defined monolayer.
The film-like structure also appears on mica, as shown in Figure 6. From Figure 7b, a distance of approximately 2.5 nm between the structures can be estimated.
Using perchlorate (ClO 4 -) as the anion, the structures on HOPG appear similar to those seen with BPh 4 -, but with fewer islands. These islands show a height of about 1.4 nm.
Nevertheless, parts of the sample are simply covered with randomly distributed small deposited particles (Figure 8). Most structures show a height of 1.1-1.4 nm.
The structures evolving on mica look similar to those created by [Mn III 6 Cr III ](C 3 H 5 O 3 ) 3 on mica: multistep clusters with step heights of 1.6 nm and trenches 0.3 nm deep occur (Figure 9).
Influence of the substrate
The adsorption of any [Mn III 6 Cr III ] 3+ salt is strongly influenced by the substrate on which it is prepared. Since [Mn III 6 Cr III ] 3+ is a cation, its electric charge must be neutralized. In solution, the neutralization occurs through the anions, which may move freely.
In the presence of a surface, we suggest that the [Mn III 6 Cr III ] 3+ trication could adsorb on the surface without the need for interaction with anions and bind to available adsorption sites on the substrate. An explanation for this speculation is the formation of mirror charges on the surface, which assume the function of the anions.
1. Molecule-substrate interaction being stronger than molecule-molecule interaction. 2. Molecule-substrate interaction being equal to or weaker than molecule-molecule interaction. 3+ . As the trications would experience a strong electrostatic repulsion without interstitial anions, the close proximity of the anions in these double-layers appears to be very likely.
The interaction between the bottom [Mn III 6 Cr III ] 3+ layer and the substrate may rely on the mirror charges created by the positive charge of the SMM. This system is already stable under ambient conditions at room temperature. On HOPG we observe different heights for the first and second layers. This may be due to different van der Waals or mirror-charge interactions between two SMM layers compared with the interaction between the substrate and the first SMM layer.
In the following, we present three models of how [Mn III 6 Cr III ] 3+ orders on top of HOPG (Figure 10).
Model #1 SMM-Anion stacking
The first layer of the SMM is stabilized through the mirror charge. Thus a layer of anions can place itself on top of the [Mn III 6 Cr III ] 3+ layer. By creating a negative charge at the surface, a second layer of [Mn III 6 Cr III ] 3+ SMMs is attracted. If this is the case, it is unclear why this only takes place for a second layer of [Mn III 6 Cr III ] 3+ . The anions can stabilize the SMM by themselves, so the mirror charge created in the HOPG may be needed only at the start of the process. In this case, a second layer of anions is needed on top (Figure 10a).
Model #2 Anions mixed with SMMs
It is more likely that a stronger interaction between the SMM and the anions leads to the anions being embedded inside a [Mn III 6 Cr III ] 3+ layer. This also leads to a lower energy and higher entropy inside the layer. However, we cannot distinguish whether the anions are needed in the bottom layer because of the mirror-charge effect. Nevertheless, we expect the anions to be in the top layer (Figure 10b).
Model #3 Anions mixed with SMMs without anions in the first layer
Our results have shown a significant difference in height between the first and the following layers. This difference can be explained by the charge of [Mn III 6 Cr III ] 3+ being neutralized by the mirror-charge effect in the first layer but by anions in the other ones (Figure 10c).
Mica, on the other hand, is an insulator, but upon cleaving, the K + ions in the crystal are separated owing to their weak binding to the adjacent aluminosilicate [21], leading to surface potentials of up to -130 V [22]. This potential becomes neutralized in air within a few minutes [22], but there are still enough negatively charged sites to allow [Mn III 6 Cr III ] 3+ to adsorb on the surface. Further layers neutralize their charge in the same way as on HOPG, with anions in between the SMMs within each layer.
Two scenarios appear plausible to explain the observed gradient on mica. During the dropping of [Mn III 6 Cr III ](BPh 4 ) 3 onto the mica substrate, the tilted sample may have caused the gradient through an increased or decreased flow of the solution over the surface. The other explanation involves the surface charges of cleaved mica (Figure 11). It is known that these charges are distributed irregularly [22]. When mica is cleaved using adhesive film, there is always one direction in which the film is ripped off. This may lead to a gradient in the K + ions left on the surface, which influences the surface potential. [Mn III 6 Cr III ](BPh 4 ) 3 follows the gradient of this distribution.
Using lactate or perchlorate as the anion, we have not yet been able to observe such a gradient. We expect the mobility of the anion to influence the way [Mn III 6 Cr III ] 3+ orders itself on the surface. The second kind of substrate does not allow neutralization of charge except by the anions. This results in [Mn III 6 Cr III ] 3+ minimizing its contact with the surface; the anions would minimize their contact with the surface for the same reason (Figure 12). The increased surface energy thus leads to [Mn III 6 Cr III ] 3+ and the respective anion sticking together. The stoichiometry of the overall [Mn III 6 Cr III ] 3+ salt, including the anions, may make it unfavorable for the ions to sit individually on the surface. In this respect, the most stable ordering appears to be in clusters.
Influence of the anions
The anions are crucial for the stability of the whole complex. As we have shown, changes in the anions may cause a drastic variation in the way [Mn III 6 Cr III ] 3+ is absorbed on top of the surface.
The biggest difference is seen between tetraphenylborate/perchlorate and lactate. The former show a strong influence of the substrate: depending on which substrate is used, various kinds of structures can be observed, namely flat islands, multistackings, big clusters, and even homogeneous coverage of large areas. The latter shows just one structure: coverage of the whole sample with an inhomogeneous but continuous film.
FFT performed on any of the systems did not reveal a crystalline structure of [Mn III 6 Cr III ] 3+ or its anions, which is why we expect no epitaxial growth. XPS data obtained on [Mn III 6 Cr III ](BPh 4 ) 3 confirmed the existence of a layer of the SMM on the HOPG surface.
The ratios between the elements, including four solvent molecules, are close to the expected values for [Mn III 6 Cr III ](BPh 4 ) 3 . The errors of the ratios given in Table 1 are mainly due to the uncertainty of background subtraction.
Summary
We have demonstrated a strong influence of the electric properties of the used substrates on the ordering of [Mn III 6 Cr III ] 3+ on the surface. Substrates allowing [Mn III 6 Cr III ] 3+ to neutralize its charge lead to flatter structures than the others, on which [Mn III 6 Cr III ] 3+ tends to form high clusters. Furthermore, we have investigated different anions used with [Mn III 6 Cr III ] 3+ and observed a drastic change in the structures occurring on the surfaces when lactate is used instead of tetraphenylborate or perchlorate.
Figure 12. Model of [Mn III 6 Cr III ] 3+ including its anion on a Si-based substrate. The substrate is an insulator and, unlike mica, does not offer any charges at the surface. This leads to the SMM and its anions minimizing contact with the surface, which results in hemispheric-shaped clusters.
"Materials Science",
"Physics",
"Chemistry"
] |
Molecular and cellular mechanisms underlying the antidepressant effects of ketamine enantiomers and its metabolites
Although the robust antidepressant effects of the N-methyl-d-aspartate receptor (NMDAR) antagonist ketamine in patients with treatment-resistant depression are beyond doubt, the precise molecular and cellular mechanisms underlying its antidepressant effects remain unknown. NMDAR inhibition and the subsequent α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor (AMPAR) activation are suggested to play a role in the antidepressant effects of ketamine. Although (R)-ketamine is a less potent NMDAR antagonist than (S)-ketamine, (R)-ketamine has shown more marked and longer-lasting antidepressant-like effects than (S)-ketamine in several animal models of depression. Furthermore, non-ketamine NMDAR antagonists do not exhibit robust ketamine-like antidepressant effects in patients with depression. These findings suggest that mechanisms other than NMDAR inhibition play a key role in the antidepressant effects of ketamine. Duman's group reported that activation of mammalian target of rapamycin complex 1 (mTORC1) in the medial prefrontal cortex is involved in the antidepressant effects of ketamine. However, we reported that mTORC1 serves a role in the antidepressant effects of (S)-ketamine, but not of (R)-ketamine, and that extracellular signal-regulated kinase may underlie the antidepressant effects of (R)-ketamine. Several lines of evidence have demonstrated that brain-derived neurotrophic factor (BDNF) and its receptor, tyrosine kinase receptor B (TrkB), are crucial in the antidepressant effects of ketamine and its two enantiomers, (R)-ketamine and (S)-ketamine, in rodents. In addition, (2R,6R)-hydroxynorketamine [a metabolite of (R)-ketamine] and (S)-norketamine [a metabolite of (S)-ketamine] have been shown to exhibit antidepressant-like effects in rodents through the BDNF-TrkB cascade. In this review, we discuss recent findings on the molecular and cellular mechanisms underlying the antidepressant effects of the enantiomers of ketamine and its metabolites. It may be time to reconsider the hypothesis of NMDAR inhibition and subsequent AMPAR activation in the antidepressant effects of ketamine.
Introduction
Antidepressants, including selective serotonin reuptake inhibitors (SSRIs) and selective noradrenaline reuptake inhibitors (SNRIs), are widely prescribed for the treatment of depression in patients with major depressive disorder (MDD). However, there is a significant time lag of weeks to months before the antidepressant effects of these drugs are achieved in patients with MDD 1 . In addition, approximately one-third of patients with MDD do not experience satisfactory therapeutic benefits following treatment with SSRIs or SNRIs 1 . Importantly, the delayed onset of these antidepressants is extremely harmful to patients with depression who experience suicidal ideation 2,3 . Therefore, the development of rapid-acting and robust antidepressants is imperative to relieve the symptoms of severe depression and suicidal ideation in patients with MDD or bipolar disorder (BD) 4-12 . In 2000, Berman et al. 13 demonstrated that a subanesthetic dose (0.5 mg/kg) of ketamine, an N-methyl-D-aspartate receptor (NMDAR) antagonist, produced rapid-acting and sustained antidepressant effects in patients with MDD. This was the first double-blind, placebo-controlled study of ketamine in depressed patients 13 . Subsequently, Zarate et al. 14 replicated the rapid-acting and sustained antidepressant effects of ketamine in patients with treatment-resistant MDD. In addition, ketamine has robust antidepressant effects in patients with bipolar depression [15][16][17][18] . Ketamine has also been shown to alleviate suicidal ideation in patients with treatment-resistant MDD [19][20][21] . Several meta-analyses have revealed that ketamine has robust antidepressant and anti-suicidal-ideation effects in depressed patients with treatment-resistant MDD or BD 2,3,22,23 .
The antidepressant effects of ketamine have attracted increasing academic attention due to their rapid onset and long duration in treatment-resistant depression 8,12,24 . Although ketamine has a robust antidepressant effect, its side effects may limit its widespread use for the treatment of depression 12,[25][26][27][28][29][30][31] . Ketamine has detrimental side effects, including psychotomimetic effects, dissociative effects, and abuse liability, which may be associated with the blockade of NMDAR 25,26,32 . It is known that dissociative symptoms following ketamine infusion are not associated with its clinical benefits 24 , suggesting that NMDAR inhibition may not serve a key role in the antidepressant effects of ketamine. Fava et al. 33 also reported that there were no statistically significant correlations between Clinician Administered Dissociative States Scale (CADSS) scores 40 min after the ketamine infusion and Hamilton Depression Rating Scale-6 (HAMD-6) scores at day 1 and day 3 in treatment-resistant patients with depression, in contrast to the hypothesis of Luckenbaugh et al. 34 . In addition, brain-imaging findings suggest that reduced subgenual anterior cingulate cortex activity is implicated in the antidepressant effects of ketamine in humans 35,36 . However, the precise molecular and cellular mechanisms underlying its antidepressant effects remain unclear. In this review article, recent findings on the molecular and cellular mechanisms underlying the antidepressant effects of the enantiomers of ketamine and its metabolites are summarized.
Enantiomers of ketamine
Ketamine (Ki = 0.53 μM for NMDAR) (Fig. 1) is a racemic mixture consisting of equal parts of (R)-ketamine (or arketamine) and (S)-ketamine (or esketamine). The binding affinity of (S)-ketamine (Ki = 0.30 μM) for NMDAR is ~4-fold greater than that of (R)-ketamine (Ki = 1.4 μM) (Fig. 1) 37. Furthermore, the anesthetic potency of (S)-ketamine is ~3-4-fold greater, and its undesirable psychotomimetic side effects are greater, than those of (R)-ketamine 38. We reported that (R)-ketamine has more potent and longer-lasting antidepressant-like effects than (S)-ketamine in neonatal dexamethasone-treated, chronic social defeat stress (CSDS), and learned helplessness (LH) models of depression 39,40. Subsequent studies have also shown that (R)-ketamine has more potent antidepressant-like effects than (S)-ketamine in rodents 41,42. A recent study showed that the order of antidepressant-like effects in a CSDS model following intranasal administration is (R)-ketamine > (R,S)-ketamine > (S)-ketamine 43, and that the order of side effects in rodents is (S)-ketamine > (R,S)-ketamine > (R)-ketamine 43. The side effects of (R)-ketamine in rodents were lower than those of (S)-ketamine 40,43-45. A positron emission tomography study showed a marked reduction in dopamine D2/3 receptor binding in the conscious monkey striatum following a single intravenous infusion of (S)-ketamine but not of (R)-ketamine, suggesting that (S)-ketamine-induced dopamine release may be associated with acute psychotomimetic and dissociative side effects in humans 46.
In 1995, Mathisen et al. 47 reported that the incidence of psychotomimetic side effects of (S)-ketamine in patients with orofacial pain was higher than that of (R)-ketamine, despite the dose of (S)-ketamine (0.45 mg/kg) being lower than that of (R)-ketamine (1.8 mg/kg). In addition, Vollenweider et al. 48 reported that (R)-ketamine did not produce psychotic symptoms in healthy subjects and that the majority experienced a state of relaxation, whereas the same dose of (S)-ketamine caused psychotic reactions including depersonalization and hallucinations. These findings suggest that (S)-ketamine contributes to the acute side effects of ketamine, whereas (R)-ketamine may not be associated with these side effects 49 . Importantly, non-ketamine NMDAR antagonists (i.e., memantine, traxoprodil, lanicemine, rapastinel, and AV-101) did not exhibit robust ketamine-like antidepressant effects in patients with MDD 12,22,23 . These clinical findings suggest that NMDAR may not be the primary target for the antidepressant effects of ketamine.
Taken together, (R)-ketamine is considered to be a safer antidepressant than (R,S)-ketamine and (S)-ketamine in humans 12,50-52. On March 5, 2019, the US Food and Drug Administration (FDA) approved (S)-ketamine nasal spray for treatment-resistant depression. However, it is only available through a restricted distribution system, under a Risk Evaluation and Mitigation Strategy, due to the risk of serious adverse outcomes. A clinical trial of (R)-ketamine in humans is currently underway by Perception Neuroscience, Inc. 12.
Mechanisms of ketamine's antidepressant action
NMDAR inhibition and subsequent AMPAR activation
In 1990, Skolnick's group reported antidepressant-like effects of NMDAR antagonists in rodents 53,54. Although the precise mechanisms underlying the antidepressant effects of ketamine and its metabolites remain unclear, their rapid antidepressant effects are considered to occur via the blockade of NMDARs located on inhibitory interneurons (Fig. 2). This blockade leads to the disinhibition of pyramidal cells, resulting in a burst of glutamatergic transmission. In 2008, Maeng et al. 55 reported that α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor (AMPAR) antagonists blocked the antidepressant-like effects of ketamine in rodents, suggesting a role of AMPAR activation in the antidepressant-like effects of ketamine. It has been suggested that increased glutamate release activates AMPARs, as AMPAR antagonists inhibit the antidepressant-like effects of ketamine and its two enantiomers 40-42,56-58. Collectively, it appears that AMPAR activation serves an important role in the antidepressant-like effects of ketamine and its enantiomers 5-10,40,58.
Fig. 1 The values in parentheses are the Ki values for the NMDAR 37,41.
In contrast, non-ketamine NMDAR antagonists did not produce robust ketamine-like antidepressant effects in depressed patients 12,22,23. In addition, (R)-ketamine has more potent antidepressant-like effects in rodents than (S)-ketamine, despite (R)-ketamine being less potent at NMDAR inhibition than (S)-ketamine. A recent functional MRI (fMRI) study in conscious rats demonstrated that, similar to the potent and selective NMDAR antagonist (+)-MK-801 (0.1 mg/kg), (R,S)-ketamine (10 mg/kg) and (S)-ketamine (10 mg/kg) produced a significant positive response in the cortex, nucleus accumbens, and striatum. In contrast, (R)-ketamine (10 mg/kg) produced a negative response in several regions 59. This study suggests that (R)-ketamine and (S)-ketamine induce completely different fMRI response patterns in the rat brain, and that the (S)-ketamine-induced pattern is similar to that of (+)-MK-801. Collectively, it is likely that, at the antidepressant-like dose (10 mg/kg), (R)-ketamine does not produce NMDAR antagonist-like activation in the brain. Therefore, it may be time to reconsider the hypothesis of NMDAR inhibition and the subsequent AMPAR activation in the antidepressant effects of ketamine and its two enantiomers. In addition to NMDAR inhibition and AMPAR activation, other important pathways, including the mechanistic target of rapamycin (mTOR) and brain-derived neurotrophic factor (BDNF)-tyrosine kinase receptor B (TrkB) pathways, may be involved in the antidepressant-like effects of ketamine, as discussed below.
Monoaminergic systems
A recent study using in vivo microdialysis showed that (R)-ketamine and (S)-ketamine acutely increased serotonin (5-HT: 5-hydroxytryptamine) release in the PFC in a dose-dependent manner, and the effect of (R)-ketamine was greater than that of (S)-ketamine 60. In contrast, (S)-ketamine caused a robust increase in dopamine release compared with (R)-ketamine. Differential effects between (R)-ketamine and (S)-ketamine were also observed in an LPS-induced model of depression. The AMPAR antagonist NBQX attenuated (S)-ketamine-induced, but not (R)-ketamine-induced, 5-HT release, whereas NBQX blocked the dopamine release induced by both enantiomers. This paper suggests differences between (R)-ketamine and (S)-ketamine in their abilities to induce prefrontal 5-HT and dopamine release 60. Furthermore, Zhang et al. 61 reported that 5-HT depletion did not affect the antidepressant-like effects of (R)-ketamine in a CSDS model, suggesting that 5-HT does not play a major role in the antidepressant-like effects of (R)-ketamine.
Fig. 2 Although (S)-norketamine does not activate AMPAR, it activates mTORC1 signaling, resulting in activation of BDNF-TrkB signaling 119. Right: (R)-Ketamine is metabolized to (2R,6R)-HNK. The antidepressant-like effects of (R)-ketamine in rodents are more potent than those of (S)-ketamine, and the antidepressant-like effects of (2R,6R)-HNK are inconsistent. (R)-Ketamine activates AMPAR and might subsequently activate MEK-ERK signaling, resulting in activation of BDNF-TrkB signaling 40,72. AMPAR activation may be necessary for the antidepressant-like actions of (2R,6R)-HNK 41. mTORC1 signaling and BDNF-TrkB signaling may play a role in the antidepressant effects of (R)-ketamine 40,72.
A recent study showed that dopamine D1 receptor activation in the medial PFC may play a role in the antidepressant-like effects of ketamine 62. However, Chang et al. 63 reported that pretreatment with a dopamine D1 receptor antagonist did not block the antidepressant-like effects of (R)-ketamine in a CSDS model, suggesting that dopamine D1 receptors may not play a major role in the antidepressant-like actions of (R)-ketamine, consistent with a previous report 64.
Collectively, it is unlikely that monoamines such as 5-HT and dopamine play a key role in the antidepressant-like effects of ketamine and its enantiomers, although the monoaminergic system may play a role in their other pharmacological effects. Further detailed studies are needed.
Mechanistic target of rapamycin complex 1 (mTORC1)
mTOR is an atypical serine/threonine protein kinase consisting of 2549 amino acids that belongs to the phosphatidylinositol 3-kinase-related kinase family and combines with several proteins to form two different complexes, mTORC1 and mTORC2 65. The signaling pathway controlled by mTOR regulates physiological functions in the central nervous system, such as neuronal development, synaptic plasticity, memory storage, and cognitive function 66.
In 2010, Li et al. 56 demonstrated that ketamine rapidly activates the mTORC1-signaling pathway in the medial prefrontal cortex (PFC), increasing the number of synaptic proteins and the synaptic spine density, and that rapamycin, an mTOR inhibitor, blocked the antidepressant-like effects of ketamine in rodents. In a forced swimming test, ketamine decreased the immobility time and increased the levels of hippocampal mTOR and BDNF, suggesting that the antidepressant-like effects of ketamine may be associated with increased hippocampal levels of mTOR and BDNF 67. Furthermore, tramadol, an analgesic agent, enhanced the antidepressant-like effects of ketamine by increasing mTOR levels in the rat hippocampus and medial PFC 68. In addition, ketamine and its metabolites [i.e., norketamine and (2S,6S)-hydroxynorketamine (HNK)] may produce antidepressant-like effects by increasing the phosphorylation of mTOR and its downstream targets 69. By contrast, rapamycin can cause neurobehavioral changes, including anxiety-like behavior, in rats and can impede the antidepressant-like effects of ketamine 70. In addition, knockdown of the neuropeptide VGF (non-acronymic) attenuated the rapid antidepressant-like effects of ketamine by reducing mTOR phosphorylation 71. In dorsal raphe neurons, ketamine transiently increased spontaneous AMPAR-mediated neurotransmission via mTOR signaling 72. Furthermore, activation of mTOR in the PFC was involved in the antidepressant-like effects of ketamine, whereas inhibition of this pathway may protect the brain from oxidative stress or endoplasmic reticulum stress 73,74. The mood stabilizer lithium, a GSK-3 inhibitor, can indirectly activate mTORC1 signaling, thereby enhancing the antidepressant-like effects of ketamine 75. In addition, we previously reported that ketamine-induced antidepressant-like effects are associated with AMPAR-mediated upregulation of mTOR and BDNF in the hippocampus and PFC 76. Collectively, it is likely that mTORC1 signaling serves an important role in the mechanism underlying the antidepressant-like effects of ketamine.
Although the aforementioned studies support a role of mTORC1 in the antidepressant-like effects of ketamine, inconsistent results have emerged in subsequent studies. Autry et al. 57 showed that the level of phosphorylated mTOR was not altered in the hippocampus of control and Bdnf-knockout mice following acute administration of ketamine, and that the antidepressant-like effects of ketamine in wild-type mice were not affected by rapamycin. In addition, another study showed no significant changes in the levels of phosphorylated mTOR in the hippocampus and prefrontal cortex of mice following administration of ketamine or (2R,6R)-HNK, whereas the levels of phosphorylated eEF2 and BDNF were significantly increased in the hippocampus following administration of ketamine or (2R,6R)-HNK 41. This increase may partially explain the mechanisms underlying the sustained antidepressant-like effects of ketamine 41. Of note, we reported that mTORC1 serves a major role in the antidepressant effects of (S)-ketamine, but not of (R)-ketamine, in a CSDS model 77. The antidepressant effects of (R)-ketamine may instead be mediated by the activation of ERK, as pretreatment with SL327 (an ERK inhibitor) blocked the antidepressant effects of (R)-ketamine 77.
There are few clinical studies reporting the role of mTORC1 in the antidepressant effects of ketamine in depressed patients. Denk et al. 78 reported the first evidence of increased phosphorylated mTOR protein in the blood from a patient with MDD following a single injection of (S)-ketamine. Furthermore, we reported that the plasma levels of phosphorylated mTOR, GSK-3β, and eEF2 were significantly increased following a single injection of ketamine 79 . It is, therefore, of interest to investigate whether (R)-ketamine can influence ERK and its phosphorylation in the blood from patients with MDD or BD.
A recent randomized, placebo-controlled clinical study demonstrated that pretreatment with rapamycin did not alter the acute effects of ketamine in patients with treatment-resistant MDD, whereas its combination with ketamine prolonged the antidepressant effects of ketamine and increased the response rate 2 weeks following treatment 80. At present, there is no evidence that a low dose of rapamycin can achieve sufficient brain levels to inhibit mTOR. It has also been suggested that rapamycin may produce beneficial effects through the inflammatory system in the periphery, although further investigation is required. Taken together, the role of mTORC1 in the antidepressant effects of ketamine in patients with MDD remains controversial. Further investigation using larger sample sizes is required to determine the role of mTORC1 in the antidepressant effects of ketamine and its metabolites in patients with MDD.
BDNF
Multiple lines of evidence show that BDNF and its receptor TrkB serve a critical role in the pathogenesis of depression and in the therapeutic mechanisms of antidepressants 81-87. In 2011, Autry et al. 57 reported that the rapid-acting antidepressant effects of ketamine depend on the rapid synthesis of BDNF, as ketamine did not elicit antidepressant-like effects in inducible Bdnf-knockout mice, indicating a key role of the BDNF-TrkB cascade in the antidepressant effects of ketamine. Subsequent studies have supported the role of the BDNF-TrkB cascade in the antidepressant effects of ketamine 67,76. In addition, the TrkB inhibitor ANA-12 significantly inhibited the rapid and long-lasting antidepressant effects of (R)-ketamine and (S)-ketamine in a CSDS model 40. Furthermore, (R)-ketamine produced more marked beneficial effects on reduced synaptogenesis and the BDNF-TrkB cascade in the PFC and hippocampus (i.e., CA3 and DG) of CSDS-susceptible mice than (S)-ketamine 40. It has also been reported that the regulation of glutamate transporter 1 on astrocytes through the activation of TrkB is involved in the beneficial effects of ketamine on behavioral abnormalities and morphological changes in the hippocampus of rats exposed to chronic unpredictable mild stress (CUMS) 88. A recent study showed that ketamine ameliorates depression-like phenotypes in CUMS-exposed vulnerable rats by rescuing the dendritic trafficking of Bdnf mRNA 89. In addition, the ketamine-induced regulation of TrkB is independent of HNK 90. Collectively, it is likely that long-lasting activation of the BDNF-TrkB cascade in the PFC and hippocampus may be implicated in the long-lasting antidepressant effects of ketamine and its enantiomers.
Synaptogenesis
Preclinical studies have shown that ketamine rapidly induces synaptogenesis and reverses the synaptic deficits caused by chronic stress, resulting in its antidepressant-like effects 56,80-94. We reported that ketamine and its two enantiomers improved the decreased spine density in the medial PFC of CSDS-susceptible mice 7 or 8 days following a single dose 40,95, suggesting long-lasting effects on synaptogenesis. A recent study using single-cell two-photon calcium imaging in awake mice showed that the effects of ketamine on spine formation in the PFC were slower: spine formation rates were not significantly altered at 3-6 h following a single injection of ketamine, but were markedly altered at 12-24 h 96. This suggests that dendritic spine formation in the PFC is required for the sustained antidepressant effects of ketamine but not for its acute antidepressant effects. By contrast, Zhang et al. 97 reported that (R)-ketamine rapidly (<3 h) ameliorated the decreased spine density in the medial PFC and hippocampus of CSDS-susceptible mice, resulting in its rapid-acting antidepressant-like effects in rodents. In addition, a recent study showed that (S)-ketamine rapidly (<1 h) reversed dendritic spine deficits in CA1 pyramidal neurons of Flinders Sensitive Line rats with a depression-like phenotype 98. Therefore, further investigation of the acute effects of ketamine and its enantiomers on the dendritic spine deficits of rodents with a depression-like phenotype is required.
Opioid system
It is well known that ketamine can interact with opioid receptors. The order of affinity for the opioid receptor subtypes is mu > kappa > delta. The binding of (S)-ketamine to mu and kappa receptors is also known to be ~2-4-fold stronger than that of (R)-ketamine 38,99. In addition, ketamine has been reported to exert antagonistic effects at both mu and kappa opioid receptors, suggesting that ketamine use does not lead to opioid addiction 99. Recently, pretreatment with the opioid receptor antagonist naltrexone (50 mg) significantly inhibited the antidepressant and anti-suicidal effects of ketamine, but not its dissociative effects, in patients with treatment-resistant MDD, suggesting that activation of the opioid system is necessary to produce the rapid-acting antidepressant effects of ketamine 100,101. By contrast, Yoon et al. 102 demonstrated that pretreatment with naltrexone did not affect the antidepressant effects of ketamine in depressed patients with alcohol use disorder. Furthermore, ketamine had antidepressant efficacy in patients concurrently taking high-affinity mu opioid receptor ligands (i.e., buprenorphine, methadone, or naltrexone), suggesting that the chronic use of such agents is not a contraindication for ketamine treatment for depression 103.
Therefore, the role of the opioid system in the antidepressant effects of ketamine is controversial.
Recently, we reported that pretreatment with naltrexone did not inhibit the antidepressant-like effects of ketamine in a CSDS model and inflammation-induced model of depression, suggesting that the opioid system may not serve a role in the antidepressant-like effects of ketamine 104 . However, further clinical trials with a large sample size are required to better understand whether opioid receptor activation is necessary for the antidepressant and anti-suicidal effects of ketamine in patients with MDD and BD.
(2R,6R)-Hydroxynorketamine
In 2016, Zanos et al. demonstrated that the generation of (2R,6R)-HNK (Ki > 10 μM for NMDAR) (Fig. 1) in the body was essential for the antidepressant-like effects of (R,S)-ketamine in rodents, and that NMDAR may not be involved in the antidepressant-like effects of (2R,6R)-HNK 41. Of note, (2R,6R)-HNK did not produce the detrimental side effects of ketamine (i.e., hyperlocomotion, pre-pulse inhibition deficits, motor incoordination, and abuse liability) in rodents, even at a high dose 37. Subsequently, several groups have replicated the antidepressant-like effects of (2R,6R)-HNK in rodents 106,107. Furthermore, Lumsden et al. 108 demonstrated that antidepressant-relevant concentrations of (2R,6R)-HNK did not inhibit NMDAR function, whereas a high concentration (50 μM) of (2R,6R)-HNK inhibited NMDAR synaptic function 109. It has also been suggested that metabotropic glutamate mGlu2 receptors are involved in the antidepressant-like effects of (2R,6R)-HNK, as these effects were absent in mice lacking the Grm2 gene, but not the Grm3 gene 110. It is currently unknown whether mGlu2 receptors play a role in the antidepressant-like effects of (R)-ketamine in rodents.
By contrast, our group found that (2R,6R)-HNK did not exhibit antidepressant-like effects in rodent models of depression, whereas its parent compound (R)-ketamine exhibited robust antidepressant-like effects in the same models 111-116. Pretreatment with two CYP inhibitors (ticlopidine hydrochloride and 1-aminobenzotriazole) prior to (R)-ketamine (3 mg/kg) injection increased the levels of (R)-ketamine in the blood, whereas (2R,6R)-HNK was not detected in the blood. In the presence of these CYP inhibitors, (R)-ketamine (3 mg/kg) exhibited antidepressant-like effects, although the same dose did not do so in their absence 117. In addition, we reported that direct infusion of (R)-ketamine into brain regions produced antidepressant-like effects in a rat LH model, suggesting that (R)-ketamine itself, and not its metabolite, produces the antidepressant-like effects 118. These data suggest that the metabolism of (R)-ketamine to (2R,6R)-HNK is not essential for the antidepressant-like effects of (R)-ketamine 119,120. Furthermore, the US FDA approved (S)-ketamine, from which (2R,6R)-HNK is not produced, indicating that (2R,6R)-HNK is not essential for the antidepressant effects of ketamine 12. A recent study from Zanos et al. 121 showed that (R)-ketamine may exert antidepressant-like effects partly via conversion to (2R,6R)-HNK; this conclusion is more measured than that of the first (2R,6R)-HNK publication three years earlier 41.
It has also been demonstrated that (2R,6R)-HNK exerts antidepressant effects through AMPAR activation, as an AMPAR antagonist inhibited the antidepressant effects of (2R,6R)-HNK 41. By contrast, at clinically relevant unbound brain concentrations (0.01-10 μM), (2R,6R)-HNK did not bind orthosterically to, or directly functionally activate, AMPARs 122. Furthermore, (2R,6R)-HNK failed to evoke AMPAR-centric changes in any electrophysiological endpoint in adult rodent hippocampal sections 122. Unfortunately, the AMPAR potentiator Org 26576 did not have antidepressant effects in depressed patients 123. At present, a clinical trial of TAK-653, an AMPAR potentiator with minimal agonistic effects, is underway in patients with treatment-resistant depression (NCT03312894). Further investigation of the role of AMPAR in the action of the enantiomers of ketamine and its metabolites (norketamine and HNK) is required.
A recent study demonstrated that a single injection of (2R,6R)-HNK (1-10 mg/kg), but not (2S,6S)-HNK, increased aggressive behaviors through AMPAR-dependent mechanisms in the ventrolateral periaqueductal gray matter 124. A clinical trial of (2R,6R)-HNK in humans is currently underway at the National Institute of Mental Health, USA 12. The potential aggression-promoting effects of (2R,6R)-HNK in humans therefore warrant investigation. In addition, it is of interest to compare the antidepressant effects of (R)-ketamine and its final metabolite (2R,6R)-HNK in patients with MDD.
(S)-Norketamine
(S)-Ketamine is metabolized to (S)-norketamine (Ki = 1.70 μM for NMDAR) by CYP enzymes (Fig. 1). We reported that (S)-norketamine, but not (R)-norketamine, exhibits rapid and sustained antidepressant-like effects in CSDS and inflammation models of depression. The potency of the antidepressant-like effects of (S)-norketamine is similar to that of its parent compound (S)-ketamine, although these effects are less potent than those of (R)-ketamine 125. Unlike for (R,S)-ketamine and its enantiomers, AMPAR antagonists do not inhibit the antidepressant effects of (S)-norketamine, suggesting that AMPAR activation is unnecessary for the antidepressant-like effects of (S)-norketamine 125. Therefore, it is unlikely that a rapid increase in glutamate due to the direct inhibition of NMDARs localized to interneurons is involved in the antidepressant-like effects of (S)-norketamine (Fig. 2) 125. Furthermore, we reported that, similar to (S)-ketamine, BDNF-TrkB and mTOR signaling might play a role in the antidepressant-like effects of (S)-norketamine in rodents 125. Interestingly, the side effects of (S)-norketamine in rodents are significantly lower than those of (S)-ketamine; ketamine-induced side effects may be associated with NMDAR inhibition. Taken together, (S)-norketamine appears to be a safer alternative antidepressant, lacking the side effects of (S)-ketamine, in humans 12,125,126. Of note, unlike (S)-ketamine, (S)-norketamine is not a scheduled compound.
Conclusions
The discovery of the antidepressant effects of ketamine in depressed patients was serendipitous 24. The mechanisms underlying the antidepressant effects of ketamine have been investigated for almost 20 years; however, the precise molecular and cellular mechanisms remain to be fully elucidated. Although NMDAR inhibition is considered to serve a key role in the antidepressant effects of ketamine, clinical data on non-ketamine NMDAR antagonists (i.e., memantine, traxoprodil, lanicemine, rapastinel, and AV-101) 12 and preclinical data using the two ketamine enantiomers suggest that mechanisms other than NMDAR inhibition may be involved in the antidepressant effects of ketamine. For example, a randomized, placebo-controlled study using a large sample demonstrated that lanicemine did not exert antidepressant effects in patients with MDD with a history of inadequate treatment response 127, supporting the lack of antidepressant-like effects of lanicemine in a CSDS model 128. On March 6, 2019, Allergan announced Phase III results for rapastinel as an adjunctive treatment of MDD: in three acute trials, rapastinel did not meet the primary or key secondary endpoints compared with the placebo group. By contrast, rapastinel exerted rapid-acting antidepressant-like effects in a CSDS model, although, unlike (R)-ketamine, it did not exhibit long-lasting antidepressant effects 129. Collectively, non-ketamine NMDAR antagonists did not produce robust ketamine-like antidepressant effects in patients with MDD, although certain NMDAR antagonists may exhibit rapid ketamine-like antidepressant-like effects in rodents. There is no guarantee that preclinical data will translate to humans 130. At present, the general consensus is that NMDAR inhibition and the subsequent AMPAR activation serve a role in the antidepressant-like effects of ketamine and its two enantiomers. However, the precise molecular and cellular mechanisms underlying ketamine's antidepressant actions are complex 5-10,94,131. Considering the clinical data and new preclinical data using the ketamine enantiomers, it is time to reconsider the current hypothesis for the antidepressant effects of ketamine. Recently, Heifets and Malenka 130 suggested a need to conceptualize molecular mechanisms with more nuance than action at a single, broadly distributed glutamate receptor.
A number of researchers have used control stress-naive rodents to investigate the antidepressant-like effects of ketamine and its metabolites. Healthy control subjects showed significant increases in depressive symptoms for up to 1 day following a single ketamine infusion 132 , suggesting that ketamine does not produce antidepressant effects in healthy control subjects. It is also well known that ketamine can produce schizophrenia-like symptoms (i.e., positive symptoms, negative symptoms, cognitive impairment) in healthy control subjects 32,38,133 . Therefore, the use of control naive rodents may contribute to discrepancies in the antidepressant-like effects of ketamine and its metabolite HNK 12,134 . Collectively, rodents with depression-like phenotypes should be used to investigate the antidepressant effects of ketamine and its metabolites, although it is clear that animal models of depression cannot fully represent the complexity of human depression 134,135 .
On March 5, 2019, the US FDA approved (S)-ketamine nasal spray (Spravato™) for treatment-resistant depression. Clinical studies of (R)-ketamine and (2R,6R)-HNK in humans are currently underway 12. Therefore, it is of interest to compare the antidepressant effects of (R)-ketamine and (S)-ketamine [or (2R,6R)-HNK] in patients with MDD or BD. Finally, the identification of novel molecular and cellular targets responsible for the rapid and sustained antidepressant effects of the enantiomers of ketamine and its metabolites will be useful for the development of novel antidepressants without the detrimental side effects of ketamine.
| 6,791.4 | 2019-11-07T00:00:00.000 | ["Biology", "Psychology"] |
Dynamic Control of Förster Energy Transfer in a Photonic Environment
In this study, the effect of a modified optical density of states on the rate of Förster resonant energy transfer between two closely-spaced chromophores is investigated. A model based on a system of coupled rate equations is derived to predict the influence of the environment on the molecular system. Due to the near-field character of Förster transfer, the corresponding rate constant is shown to be nearly independent of the optical mode density. An optical resonator can, however, effectively modify the donor and acceptor populations, leading to a dramatic change in the Förster transfer rate. Single-molecule measurements on the autofluorescent protein DsRed using a λ/2-microresonator are presented and compared to the theoretical model's predictions. The observed resonator-induced dequenching of the donor subunit in DsRed is accurately reproduced by the model, allowing a direct measurement of the Förster transfer rate in this otherwise inseparable multichromophoric system. With this accurate yet simple theoretical framework, new experiments can be conceived to measure normally obscured energy transfer channels in complex coupled quantum systems, e.g. in photovoltaics or light-harvesting complexes.
Introduction
Förster resonance energy transfer (FRET) between donor and acceptor chromophores plays a central role in many photophysical and biological processes. 1-9 The efficiency of the energy transfer depends on the spectral overlap between the emission of the donor chromophore and the absorption of the acceptor chromophore, as well as on the distance and the mutual orientation of their respective transition dipole moments. While it is easily possible to design and prepare synthetic FRET pairs and study the optical properties of the individual chromophores separately, this is not possible for many biological molecules such as the red fluorescent protein DsRed from the Discosoma reef coral. 10-12 These spectral properties, along with the steric composition derived from X-ray data, suggest a non-radiative Förster energy transfer within a tetrameric unit, which has indeed been experimentally proven by different spectroscopic approaches using single-molecule and ensemble techniques. 2,11,13,14 However, it is not possible to separate the tetramers into functional monomers by chemical or biochemical means to make the isolated chromophoric species addressable for further investigation.
A promising approach to spectrally isolating individual chromophoric subunits in biological FRET systems is to modify the local photonic mode characteristics and density by using a λ/2-microresonator. We have previously demonstrated the optical confinement effect on both the fluorescence spectrum and the emission rate of single (synthetic) dye molecules by embedding them in a transparent polymer between two planar silver mirrors separated by half of the emission wavelength. 15,16 In this article, we report the first investigation of the autofluorescent protein DsRed embedded in a λ/2-microresonator by steady-state and time-resolved spectroscopy down to the single-molecule level. We use a novel microresonator design that allows coupling the fluorescence of individual chromophores to on- and off-axis cavity modes while maintaining physiological conditions for the embedded biomolecules. We show that, in this way, it is possible to spectrally isolate the two coupled chromophoric subunits of DsRed without destroying the composition of the tetrameric protein complex.
Rate equation model
To study the effect of a photonic environment on a FRET-coupled system, we introduce a rate equation model describing the energetic processes of the system. Shown in Fig. 1, this model comprises two three-level subsystems D and A representing the donor and acceptor molecules, respectively. Each subsystem X (X = D, A) can be excited at the rate X_0 k_exc = X_0 J σ, where X_0 is the probability of finding the subsystem in its electronic ground state, k_exc is the excitation rate constant, J is the incident illumination photon flux at the absorption wavelength and σ is the corresponding absorption cross section. Optical excitation of X_0 leads to a vibronic level in the first electronically excited state, X_1', which thermally relaxes rapidly within some picoseconds to X_1, from which it may decay nonradiatively or radiatively at the rates X_1 k_nr and X_1 k_rad, respectively. Here, k_nr and k_rad are the nonradiative and radiative decay rate constants. In addition, the subsystems D and A are coupled via a nonradiative channel representing Förster resonant energy transfer, described by the rate constant k_T as a measure of the dipole-dipole coupling strength. This Förster transition rate is defined as the probability per time interval that an acceptor molecule is transferred from its ground state A_0 to its electronically excited state A_1 by absorbing a photon of the optical near-field of the donor chromophore in state D_1. The transition rate is then given by D_1 A_0 k_T. The population probability dynamics for the excited states D_1 and A_1 of the subsystems can be written as a system of coupled differential equations,

dD_1/dt = D_0 k_exc^D - D_1 (k_rad^D + k_nr^D) - D_1 A_0 k_T,
dA_1/dt = D_1 A_0 k_T - A_1 (k_rad^A + k_nr^A),   (1)

where superscripts D and A denote values corresponding to the donor and acceptor subsystems, respectively (the acceptor is not directly excited in our study, so its excitation term is omitted). In equilibrium, the populations are described by the steady-state solution to eqn (1), obtained for dD_1/dt = dA_1/dt = 0. With X_0 + X_1 = 1, the acceptor excited-state population follows as A_1 = D_1 k_T/(D_1 k_T + k_rad^A + k_nr^A), and the donor population D_1 is fixed by the remaining steady-state condition (1 - D_1) k_exc^D = D_1 (k_rad^D + k_nr^D) + D_1 A_0 k_T. (2) A numerical sketch of this model is given below. When placed in a modified photonic environment, e.g. a resonant cavity, the parameters in eqn (2) can change. First, the intensity of the incident light can be enhanced or suppressed when the cavity is excited on or off resonance, varying the incident photon flux J and thus k_exc. With the intensity enhancement factor F_exc = I(r)/I_fs(r) denoting the incident intensity at the position r of the quantum system in a photonic environment relative to free space, the modified excitation rate constant can be expressed as k_exc = F_exc k_exc,fs.
Second, the radiative decay rate constant k_rad is proportional to the local density of optical states (LDOS) ρ corresponding to the transition energy. In a photonic background, ρ is a function of space and of the emitter's orientation and can vary by many orders of magnitude, dramatically changing the behavior of the coupled quantum system. Introducing the LDOS enhancement factor F_rad = ρ(r)/ρ_fs(r) induced by the photonic environment at the position of the emitter, the radiative decay rate constant can be expressed as k_rad = F_rad k_rad,fs. The value F_rad is also known as the Purcell factor.
Finally, the Förster transfer rate constant k_T can be influenced by the photonic background as well. While FRET is a nonradiative process, often described as a near-field dipole-dipole interaction, it is nevertheless influenced by modifications to the electromagnetic field: if a photonic system enhances the donor dipole's near-field, it will equally enhance the induced dipole moment in the acceptor, thus increasing the FRET rate. The photonic enhancement of the donor's dipole field intensity at the position of the acceptor relative to free space thus also describes the enhancement of the FRET channel, k_T = F_T k_T,fs, assuming that there is no change in polarization.
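To make the rate-equation picture concrete, the following is a minimal numerical sketch of eqn (1) that integrates the coupled populations to a steady state. The decay rates are derived from the lifetimes and quantum yields quoted later in this article; the excitation rate is an assumed placeholder, and the code (Python/SciPy) is illustrative rather than the analysis actually used in this work.

```python
# Minimal sketch of the coupled rate equations, eqn (1): the donor is pumped,
# decays radiatively/nonradiatively, and transfers energy to the acceptor
# with rate constant k_T; the acceptor is not directly excited.
from scipy.integrate import solve_ivp

k_exc = 1e6                    # donor excitation rate (1/s); assumed placeholder
kD_rad, kD_nr = 6.5e7, 2.9e8   # donor rates from tau_D = 2.8 ns, QY = 18.1%
kA_rad, kA_nr = 9.7e7, 2.9e8   # acceptor rates from tau_A = 2.6 ns, QY = 25.2%
k_T = 2.1e9                    # Foerster transfer rate constant (1/s)

def rhs(t, y):
    D1, A1 = y                      # excited-state populations
    D0, A0 = 1.0 - D1, 1.0 - A1     # ground-state populations
    dD1 = D0 * k_exc - D1 * (kD_rad + kD_nr) - D1 * A0 * k_T
    dA1 = D1 * A0 * k_T - A1 * (kA_rad + kA_nr)
    return [dD1, dA1]

# integrate long enough (1 us >> the ns-scale decay times) to reach steady state
sol = solve_ivp(rhs, [0.0, 1e-6], [0.0, 0.0], method="LSODA",
                rtol=1e-9, atol=1e-15)
D1, A1 = sol.y[:, -1]
print(f"D1 = {D1:.3e}, A1 = {A1:.3e}")
print(f"D/A fluorescence ratio = {D1 * kD_rad / (A1 * kA_rad):.3f}")
```

In the low-power limit the printed ratio reflects only the quantum yields and the transfer rate; the power-dependent effects discussed later appear once the populations begin to saturate.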
The radiative and FRET enhancement factors can be conveniently computed given the photonic system's electromagnetic response in the form of its dyadic Green's function G. This 3 × 3 tensorial function describes the electric field at an arbitrary position r' induced by a single dipole emitter in the photonic system,

E(r') = ω² μ(r) G(r',r)·p. (3)

Here, r is the position of the dipole emitter, p is its dipole moment, ℏω is the transition energy and μ(r) is the magnetic permeability at the position of the emitter. The LDOS can directly be computed from the imaginary part of the projected Green's function,

ρ_p̂(r) = (6ω/(π ℏ c²)) Im[G_p̂(r,r)], (4)

where G_p̂ = p̂·G·p̂ and p̂ = p/|p| is a unit vector in the direction of the emitter's dipole moment. In free space, eqn (4) results analytically in ρ_fs(r) = ω²/(π² ℏ c³); the LDOS is then homogeneous and isotropic. With eqn (4), the radiative enhancement factor can then be written as

F_rad = ρ(r)/ρ_fs(r) = Im[G_p̂(r,r)]/Im[G_p̂,fs(r,r)]. (5)

The FRET enhancement factor F_T can also be derived from eqn (3), with r and r' describing the positions of the donor and the acceptor, respectively:

F_T = |μ(r)/μ_fs(r)|² |G(r',r)·p̂|²/|G_fs(r',r)·p̂|². (6)

The first term in eqn (6) can usually be neglected as the magnetic permeability is seldom changed in a photonic system.
In the second term, the absolute value of the donor's dipole moment cancels out, and so F_T depends solely on |G|² in the direction of the donor's dipole moment.
The dyadic Green's function G can be obtained using a number of analytical or numerical approaches. For the simple case of an ideal Fabry-Pérot microresonator, analytical calculations have been presented. 27 For more complex resonator geometries including multiple layers and interfaces, the transfer matrix method (TMM) provides a quasi-analytical solution. For arbitrary photonic systems, numerical methods such as the finite-difference time-domain (FDTD) 28 or surface integral equation (SIE) 29 approach may be required for satisfactory results.
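As an illustration of the transfer-matrix idea, the sketch below computes the normal-incidence transmission of a simple mirror-spacer-mirror stack, of the kind used to model a planar microresonator. The layer thicknesses and the (wavelength-independent) silver refractive index are crude assumed values for demonstration only, not the parameters of the experiment described here.

```python
# Normal-incidence transfer-matrix method (TMM) for a planar layer stack.
# Conventions: field amplitudes ordered (forward, backward); t = 1/M[0,0].
import numpy as np

def transmission(n_list, d_list, lam):
    """Intensity transmission at vacuum wavelength lam. n_list includes the
    semi-infinite input/output media; d_list holds interior thicknesses."""
    k0 = 2 * np.pi / lam
    M = np.eye(2, dtype=complex)
    for i in range(len(n_list) - 1):
        n1, n2 = n_list[i], n_list[i + 1]
        r = (n1 - n2) / (n1 + n2)        # Fresnel coefficients, normal incidence
        t = 2 * n1 / (n1 + n2)
        M = M @ (np.array([[1, r], [r, 1]], dtype=complex) / t)
        if i + 1 < len(n_list) - 1:      # propagate through interior layer i+1
            phi = k0 * n_list[i + 1] * d_list[i]
            M = M @ np.array([[np.exp(-1j * phi), 0], [0, np.exp(1j * phi)]])
    t_tot = 1 / M[0, 0]
    return abs(t_tot) ** 2 * n_list[-1].real / n_list[0].real

n_Ag = 0.05 + 4.0j                       # crude silver index (assumed constant)
stack_n = [1.5, n_Ag, 1.5, n_Ag, 1.5]    # glass | Ag | polymer spacer | Ag | glass
stack_d = [40e-9, 190e-9, 40e-9]         # mirror / spacer / mirror thicknesses

lams = np.linspace(400e-9, 800e-9, 401)
T = np.array([transmission(stack_n, stack_d, lam) for lam in lams])
print(f"transmission peak (cavity resonance) near {lams[np.argmax(T)] * 1e9:.0f} nm")
```

Scanning the spacer thickness in such a model mimics the tunable mirror separation L(x,y) used experimentally below.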
λ/2-Microresonator
Due to the simple geometry of a λ/2-microresonator, its electromagnetic response can be calculated analytically. 27 The angular dependence of its modes' resonances limits the Purcell factor of a planar Fabry-Pérot-type resonator to at most F_rad = 3, even for perfectly reflecting mirrors. Emission inhibition, on the other hand, can be very effective, approaching F_rad = 0. The size of a λ/2-microresonator is on the order of the emitted light's wavelength, thus only the far field of an embedded emitter can populate its modes: as the near field's amplitude decays with R⁻³, it will have nearly vanished even before reaching the resonator's mirrors for the first time. A comparison of a dipole emitter's far field to its near field shows that the intensity of the far field only one wavelength λ away is more than 8 orders of magnitude weaker than that of the near field at a distance of λ/100. One can thus see that even a large resonant enhancement of the cavity modes will have only a minuscule effect on the FRET rate constant k_T.
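This order-of-magnitude claim is easy to verify with the leading radial terms of the dipole field, |E_near| ∝ 1/(kr)³ and |E_far| ∝ 1/(kr); the common prefactors cancel in the ratio. A minimal check:

```python
# Order-of-magnitude check: near-field intensity at r = lambda/100 versus
# far-field intensity at r = lambda, keeping only the leading field terms.
k = 2 * 3.141592653589793   # wavenumber in units of 1/lambda
r_near, r_far = 0.01, 1.0   # lambda/100 and one wavelength, in units of lambda

I_near = (1.0 / (k * r_near) ** 3) ** 2   # intensity ~ |E|^2, near-field term
I_far = (1.0 / (k * r_far)) ** 2          # far-field term

print(f"I_near / I_far = {I_near / I_far:.1e}")   # ~6e8, i.e. >8 orders of magnitude
```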
Changes in the FRET rate D_1 A_0 k_T in a λ/2-microresonator will therefore not be caused by a change in the rate constant, but instead by changes in the donor and acceptor populations D_1 and A_0, respectively. 30 In particular, efficient inhibition of emission at the donor's emission wavelength can effectively increase its excited-state population D_1, leading to an increase in FRET. Similarly, inhibiting acceptor fluorescence can lead to a depletion of the ground-state population A_0, preventing Förster transfer.
Experimental results
To observe the effects predicted by our model, we experimentally studied the fluorescence of a Förster-coupled system in a λ/2-microresonator. As a FRET system, we chose the autofluorescent protein DsRed, a complex molecule containing two spectrally isolated chromophoric subunits with fluorescence maxima at 505 nm and 580 nm, respectively. These two subunits can couple nonradiatively, allowing energy to be transferred from the energetically higher subunit to the lower one via FRET. The gray shaded area in Fig. 2(b) shows the free-space emission spectrum of DsRed when illuminated at 473 nm, clearly displaying the two fluorescence peaks.
The photonic background in our study was defined by a λ/2 Fabry-Pérot microresonator enclosing the DsRed molecules.
A schematic diagram of the sample-microresonator system is shown in Fig. 3. While one of the resonator's mirrors is flat, the other is minimally curved with a radius of R = 150 mm. This curvature is slight enough that the mirrors can still be assumed to be parallel, yet causes a well-defined variation in the mirror separation L(x,y) in the resonator plane. The longitudinal resonance wavelength can thus be tuned by scanning the detection point across the resonator.
The blue and red solid lines in Fig. 2(b) show the fluorescence spectra of DsRed in the microresonator for two different mirror separations L. The corresponding white-light transmission spectra, indicating the resonator's longitudinal resonances for the two mirror separations, are shown by the shaded dashed lines of the same color. The amplitudes of these spectra are not shown to scale but are magnified to aid interpretation. Immediately, one can see that by choosing the correct resonance wavelength, one emission peak can be greatly enhanced while the other is nearly completely suppressed. For the blue curve, the normally dominant peak at 580 nm is so effectively suppressed by the resonator that it is visible only as a slight hump on the blue peak's flank. For the red curve, the off-resonance peak at 505 nm has completely disappeared. In both curves, the resulting peaks are asymmetric, showing a steep flank on the red side and a slow roll-off on the blue side. This is typical for emitters in a λ/2-resonator, as the longitudinal resonance wavelength also corresponds to the resonator's cutoff wavelength: light with a wavelength longer than the longitudinal resonance cannot populate any mode in the resonator. Shorter wavelengths, however, can populate off-axis modes which are no longer parallel to the z-axis but which can nevertheless be collected by the high-NA objective used.
The spectra of single DsRed tetramers shown in Fig. 2(a) in blue (donor resonant) and red (acceptor resonant) illustrate that the influence of the resonator on transfer-coupled systems is observable even at the single-particle level. This enables precise control and study of individual chromophores within one distinct transfer-coupled complex, whose optical properties may vary due to, e.g., induced environmental influences. To verify that the influence of the resonator on the molecules' emission spectra is indeed an effect of their varied emission rates and not simply a filtering of the emitted light, the acceptor fluorescence lifetime τ_A was studied as a function of the cavity resonance wavelength, viz. Fig. 2(c). The points show measured lifetimes and the curve is a calculation using the transfer matrix method (TMM) assuming a free-space fluorescence lifetime of τ_rad,fs^A = 6.7 ns and an emission quantum yield of Φ_rad,fs^A = 25.2%. The dramatic change in the measured lifetime agrees perfectly with the calculation's prediction. The red and blue arrows indicate the two resonator configurations at which the spectra in Fig. 2(b) were recorded, corresponding to the cases of inhibition (blue) and enhancement (red) of the strong acceptor emission.
To study the resonator's effect quantitatively and to verify the rate-equation model presented above, we study the resonator-induced dequenching of the donor chromophore: when the resonator is tuned to the emission peak at 505 nm, Fig. 2(b) shows that besides amplifying the donor emission, the acceptor fluorescence is effectively suppressed. If the quantum yield of the acceptor chromophore is sufficiently high, the lifetime of the A_1 state will then be considerably increased. From eqn (1) it follows that an excited acceptor chromophore cannot participate in FRET, and so this decay channel is lost to the donor. The resulting shift in the relative transition efficiency causes an increase in donor emission intensity compared to acceptor fluorescence. Fig. 4 shows the donor-to-acceptor fluorescence ratios for DsRed in a microresonator tuned to 505 nm (circles) and in free space (triangles), measured for increasing excitation power. In free space, this ratio remains on the order of 0.5 for all illumination powers. With the acceptor fluorescence suppressed by the resonator, however, the donor dominates the fluorescence by a ratio of up to 10/1 in the measured range.
Typically, this behavior is difficult to observe in free space: the fluorescence lifetime of a typical acceptor dye is rather short, while that of a typical (unquenched) donor lies in the same range. Hence, the acceptor has already relaxed to the ground state while the donor is still excited, allowing for another energy transfer which quenches the emission of the donor. Using a microresonator system, however, it is possible to precisely adjust the radiative rates of the respective chromophores. Thus, one can significantly shorten the lifetime of the donor chromophore while the lifetime of the acceptor chromophore is lengthened.
One might argue that the larger D/A fluorescence ratio in the resonator is simply due to the fact that the donor fluorescence is enhanced and the acceptor fluorescence is suppressed by the resonator, even without a change in the FRET efficiency. This static effect, however, should not depend on the illumination power P_exc. In fact, the effect of static fluorescence enhancement can be observed for P_exc → 0. The modified radiative rate k_rad thus causes a change from f_D/f_A ≈ 0.5 to f_D/f_A ≈ 2.0, while the illumination-dependent modification of the FRET efficiency increases the ratio to f_D/f_A ≈ 10.
The dynamic behavior observed in the measurement is accurately reproduced by the rate equation model presented in this paper. The blue line in Fig. 4 shows the donor-to-acceptor fluorescence ratio D_1 k_rad^D/(A_1 k_rad^A) predicted by our model for the decay efficiencies Φ_x = k_x/k_tot given in Table 1. These values correspond to excited-state fluorescence lifetimes of 2.8 ns and 2.6 ns for the uncoupled donor and acceptor, respectively, with fluorescence quantum yields (without FRET) of 18.1% and 25.2%. The FRET rate constant k_T then corresponds to a value of 2.1 GHz, in agreement with previously measured data. 31 With these values, the model's predictions are in excellent agreement with our experimental results.
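These numbers can be tied together with the standard rate relations (a short worked check; the relations are textbook definitions, not equations quoted from this article):

```latex
k_{\mathrm{tot}}^{D} = \frac{1}{\tau_D} = \frac{1}{2.8\,\mathrm{ns}} \approx 0.36\,\mathrm{GHz},
\qquad
k_{\mathrm{rad}}^{D} = \Phi_D\, k_{\mathrm{tot}}^{D} \approx 0.181 \times 0.36\,\mathrm{GHz} \approx 65\,\mathrm{MHz},
```

and analogously k_rad^A ≈ 0.252 × (1/2.6 ns) ≈ 97 MHz. With k_T = 2.1 GHz, the implied low-power donor FRET efficiency in free space is E_T = k_T/(k_T + k_tot^D) = 2.1/(2.1 + 0.36) ≈ 0.85, consistent with the strong donor quenching observed in the uncoupled-cavity case.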
Having confirmed the accuracy of our model, we can now explore the parameter space of the studied system. In Fig. 5, we plot the donor-to-acceptor fluorescence ratio (red surface, left scale) and the FRET efficiency (green surface, right scale) for typical values of the resonator mirror separation L and illumination power P_exc. Many interesting features can be observed in this representation. First, one can see that the large increase in the D/A fluorescence ratio is only possible if the resonator is tuned to the correct wavelength. A large enhancement can be seen if the acceptor fluorescence is effectively inhibited while donor emission is allowed, or even enhanced. For larger L, both donor and acceptor emissions are allowed, and so the D/A ratio is similar to that in free space (cf. triangles in Fig. 4). For very small L, both donor and acceptor emissions are suppressed. While the D/A ratio is not strongly enhanced in this case, it shows a saturation onset at very low power P_exc. This is due to the fact that, with fluorescence being inhibited, Förster transfer plays the dominant role in the energy dynamics of the coupled system. As the resonator modes prevent the acceptor from decaying radiatively, the resulting FRET inhibition is clearly visible already at very low power. Finally, one can see that the FRET efficiency varies greatly across the shown parameter space. Depending on the incident power, tuning the resonator mirrors allows us to reduce the FRET efficiency by between 50% and 75%. It should be pointed out that this is not a modification of the FRET rate constant k_T via the factor F_T (here, F_T = 1), as the resonator is not capable of sufficiently modifying the near field of the donor dipole. Rather, it is an active modification of the other transition parameters D_1 and A_0, allowing us to selectively change the rate and efficiency of the Förster transfer.
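The power dependence underlying Figs. 4 and 5 can be reproduced qualitatively from the steady state of eqn (1). In the sketch below, donor enhancement and acceptor inhibition in the cavity are represented by Purcell factors applied to the radiative rates; the factors F_D = 2 and F_A = 0.1 are assumed illustrative values, not fitted parameters.

```python
# Steady-state D/A fluorescence ratio versus excitation rate, in free space
# and with the acceptor's radiative rate inhibited by the cavity (cf. Fig. 4).
from scipy.optimize import brentq

k_T = 2.1e9                     # Foerster transfer rate constant (1/s)
kD_rad, kD_nr = 6.5e7, 2.9e8    # donor rates (tau_D = 2.8 ns, QY = 18.1%)
kA_rad, kA_nr = 9.7e7, 2.9e8    # acceptor rates (tau_A = 2.6 ns, QY = 25.2%)

def da_ratio(k_exc, F_D=1.0, F_A=1.0):
    kD = F_D * kD_rad + kD_nr   # total donor decay rate (without FRET)
    kA = F_A * kA_rad + kA_nr   # total acceptor decay rate
    def residual(D1):           # steady-state condition for the donor
        A0 = kA / (D1 * k_T + kA)           # from dA1/dt = 0
        return (1 - D1) * k_exc - D1 * kD - D1 * A0 * k_T
    D1 = brentq(residual, 0.0, 1.0)
    A1 = D1 * k_T / (D1 * k_T + kA)
    return (D1 * F_D * kD_rad) / (A1 * F_A * kA_rad)

for k_exc in [1e6, 1e8, 1e9]:
    print(f"k_exc = {k_exc:.0e} 1/s: free space {da_ratio(k_exc):.2f}, "
          f"acceptor inhibited {da_ratio(k_exc, F_D=2.0, F_A=0.1):.2f}")
```

With these placeholder factors, the inhibited-cavity ratio rises markedly with power while its low-power (static) value stays close to 2, mirroring the dequenching trend discussed above.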
Fig. 1 System used to model the FRET-coupled system. Radiative transitions are shown as solid lines, nonradiative transitions as dashed lines. In our study, the acceptor is not directly excited, hence its excitation channel is drawn in gray.
Fig. 2 Measured spectra: (a) single DsRed tetramers in a λ/2-microresonator tuned to the donor emission wavelength (blue curve) and acceptor emission (red curve). (b) Ensemble DsRed in free space (gray shaded area) and in a λ/2-microresonator tuned to the donor emission wavelength (blue curve) and acceptor emission (red curve), along with the respective white-light transmission spectra (shaded dashed lines). (c) Ensemble DsRed acceptor fluorescence lifetime τ_A for different cavity resonance wavelengths. Blue and red arrows correspond to the two resonator configurations shown in (a) and (b).
Fig. 3 Experimental setup consisting of a λ/2-microresonator with embedded DsRed molecules in a physiological environment. The resonator is placed in a confocal laser microscope with an additional white-light source for measuring its transmission spectra.
Fig. 5 Simulated D/A ratio (red surface) and FRET efficiencies (green surface) for different resonator widths d and excitation powers P_exc.
Table 1 Free-space decay efficiencies used in our model to reproduce the measured behavior
| 4,730.6 | 2014-06-05T00:00:00.000 | ["Physics"] |
Making the leap from structure to mechanism: are the open states of mammalian complex I identified by cryoEM resting states or catalytic intermediates?
Respiratory complex I (NADH:ubiquinone oxidoreductase) is a multi-subunit, energy-transducing mitochondrial enzyme that is essential for oxidative phosphorylation and regulating NAD+/NADH pools. Despite recent advances in structural knowledge and a long history of biochemical analyses, the mechanism of redox-coupled proton translocation by complex I remains unknown. Due to its ability to separate molecules in a mixed population into distinct classes, single-particle electron cryomicroscopy has enabled identification and characterisation of different complex I conformations. However, deciding on their catalytic and/or regulatory properties to underpin mechanistic hypotheses, especially without detailed biochemical characterisation of the structural samples, has proven challenging. In this review we explore different mechanistic interpretations of the closed and open states identified in cryoEM analyses of mammalian complex I.
Introduction to complex I
Respiratory complex I is a key metabolic enzyme in mammalian mitochondria [1,2]. Due to its defining roles in NADH homeostasis, respiration and oxidative phosphorylation, defects in complex I are associated with a wide range of clinical mitochondrial diseases [3-5]. Complex I catalyses oxidation of NADH and reduction of ubiquinone-10, coupled to transfer of four protons across the inner mitochondrial membrane to generate the proton-motive force (Δp) that powers ATP synthesis and transport processes. Furthermore, mammalian complex I is a thermodynamically reversible catalyst, switching cleanly into reverse (Δp-driven ubiquinol-10:NAD+ oxidoreduction or 'reverse electron transfer') when Δp is high enough [6,7].
Recent developments in single-particle electron cryomicroscopy (cryoEM) have led to an explosion of complex I structures from mammals, plants, single-cell eukaryotes and bacteria [8-22], providing unprecedented opportunities for understanding its mechanisms of catalysis and regulation. They have revealed the conserved architecture of complex I and key elements of its catalytic machinery [Figure 1a]. Redox catalysis occurs by fast and reversible NADH oxidation by a flavin mononucleotide at the top of the hydrophilic domain; electron transfer by a chain of iron-sulphur clusters; and reduction of ubiquinone-10 in a long, amphipathic channel at the interface of the hydrophilic and membrane domains.
Proton transfer occurs against Δp in a series of four modules; it is most likely powered by energy transfer along a chain of charged residues from the ubiquinone-binding site. The coupling mechanism, the least understood component, is intimately linked to ubiquinone reduction, but there is no consensus on the molecular coupling mechanisms that use redox catalysis to trigger and drive proton transfer and conserve the energy released. Notably, two recent publications [11,12] have both been entitled "the coupling mechanism of mammalian [respiratory/mitochondrial] complex I", but the two mechanisms proposed have little in common and neither has been substantiated by coherent biochemical and biophysical analyses. Nevertheless, contrasting them highlights a key issue at the heart of the debate: are the conformational states of mammalian complex I identified by cryoEM (in both these and earlier studies) off-cycle resting states or on-cycle catalytic intermediates?
Observation of multiple states in cryoEM analyses of mammalian complex I
As soon as cryoEM particle classification was employed to investigate the homogeneity of 'as-prepared' samples of mammalian complex I (grouping protein molecules predominantly according to their global conformations), it became obvious that more than one class is typically present. We described three major classes in our preparations of bovine (Bos taurus) complex I [8,23] but only one in the mouse (Mus musculus) enzyme [24]. The ovine (Ovis aries) and porcine (Sus scrofa) enzymes were resolved into four [11,25] and two [12] major classes, respectively.
One of the major classes, described as the 'closed' state as it exhibits the smallest apparent angle between the hydrophilic and membrane domains that form the complex I L-shape, is common to all the analyses. As expected for a catalytically relevant state, the loops that compose the ubiquinone-binding channel at the domain interface in the closed state are well ordered [Figure 1b]. Common to work on the bovine and porcine enzymes is also a well-defined 'open' state characterised by disorder in the ND3-transmembrane helix (TMH) 1-2, ND1-TMH5-6 and NDUFS2-β1-β2 loops that form the ubiquinone-binding channel, changes to the NDUFS7-Arg77 loop, as well as associated changes in the membrane domain involving the ND6-TMH3 and ND1-TMH4 helices [Figure 1c and d]. As a result of the disordered ubiquinone-binding channel in the open state, the domain interface has relaxed and the apparent hinge angle has opened, in a movement most clearly visualised by the relative positions of subunits NDUFA5 and NDUFA10 on the hydrophilic and hydrophobic domains, respectively [Figure 1c].
Figure 1 Architecture of complex I and key elements of the active/closed-deactive/open transition. a) A schematic overview of mammalian complex I. Key subunits involved in the active/closed-deactive/open transition are shown in colour, with the remaining core and supernumerary subunits outlined in black or shaded in grey, respectively. The ubiquinone-binding channel is indicated with a box and four proton transfer routes are shown schematically with dotted lines. b) A cartoon representation of the ubiquinone-binding channel in the active/closed state [PDB: 7QSL (protein), 7QSK (Q10)], outlining ND1-TMH4 and the ND1-TMH5-6, ND3-TMH1-2, NDUFS2-β1-β2, and NDUFS7-Arg77 loops.
Due to variations between species, preparations, and classification strategies, different numbers of open states have been observed. An additional open class identified for the bovine enzyme has been named the 'slack' state because disorder in specific elements of the membrane domain opens the ND2-ND4 interface and further relaxes the global structure [8,23]. Similar characteristics were observed in the major class from a cryoEM analysis of complex I from macaque (Macaca mulatta), which displayed only very low activity [26]. The functional competence of the slack state is thus uncertain, and we do not consider it further here. For simplicity, we focus only on the two (closed and open) states described above. Finally, we note that multiple open classes of ovine complex I have been described [11,25]. They have much in common with the open states described for the bovine and porcine enzymes, but have not been distinguished structurally in the same way. Instead, they have been described as a distribution that can be further divided by more detailed classification [25], consistent with a progressive relaxation of structural restraints (that extends to include slack-like states). The status of the distinguishing elements in 'as-prepared' samples of the four mammalian species discussed is summarised in Table 1. Next, we consider how the closed and open states of mammalian complex I observed in cryoEM may be interpreted and reconciled with what is known about its activity and behaviour.
Interpretation 1: the open state as the deactive resting state
The active/deactive transition of complex I was first described by Vinogradov and co-workers [27], and later proposed to be prominent in ischaemia-reperfusion injury [28-30]. When mammalian complex I stops catalysing, it adopts the so-called 'active' resting state, a ready-to-catalyse resting state. The active resting state gradually converts to the 'deactive' resting state, a pronounced resting state that requires reduction by NADH and reactivation by ubiquinone to return to catalysis [Scheme 1a] [27,29,31]. The two resting states can be differentiated biochemically by their sensitivity to N-ethylmaleimide (NEM), which prevents reactivation of the deactive state by derivatising ND3-Cys39 on the ND3-TMH1-2 loop [Figure 1d] [32-34]. On this basis, preparations of mammalian complex I typically comprise a mixture of the active and deactive resting states, so we proposed [23] that the closed structure corresponds to the active resting state, and the open structure to the deactive resting state. The tightly defined closed structure and its well-ordered ubiquinone-binding channel suggest it is ready to begin catalysing immediately, whereas the disordered elements of the open state are consistent with its need for restructuring and reactivation. Furthermore, ND3-Cys39 is occluded in the closed structure, but in the open structure the ND3-TMH1-2 loop is disordered, consistent with exposure of Cys39 [Figure 1d]. Our initial assignment was substantiated by cryoEM analysis of a sample of bovine complex I prepared from purposefully deactivated membranes, which displayed deactive biochemical characteristics (NEM sensitivity and a catalytic lag phase during reactivation [27,33]) and which was highly active following reactivation [35]. CryoEM revealed the open structure described above as the dominant state in the deactivated sample, and a matching structure was determined similarly for purposefully deactivated mouse complex I [24].
We propose that the structural elements that change during deactivation [Figure 1 and Table 1] are unstable in the active resting state, perhaps because they are conformationally mobile during catalysis and/or because they are destabilised when the resting binding channel is occupied by water molecules or fatty acids [36] instead of ubiquinone. Therefore, they slowly relax in the resting enzyme, in a coordinated transition to the open/deactive resting state. Their closed/active conformations are recovered when substrates stimulate and template their restructuring [Scheme 1a]. Therefore, the open/deactive resting state does not feature on the catalytic cycle and sustained catalysis, including ubiquinone binding and ubiquinol release, occurs within a set of closed intermediates. Several mechanistic proposals are consistent with this interpretation [12,15,37-40], which provides a broad structural rationale for long-standing biochemical observations on the deactive state, including its sensitivity to NEM, catalytic lag phase during reactivation, and slow but spontaneous formation in ischaemic tissue that is protective against ischaemia-reperfusion injury [10,28-30].
Interpretation 2: opening and closing as an intrinsic feature of catalysis

Kampjut and Sazanov recently proposed an alternative interpretation of the mixture of open and closed states that they observe in their 'as-prepared' resting ovine complex I [11,41]. Based on the fact that the preparation is catalytically competent, they proposed that all the states observed by cryoEM (open and closed alike) are catalytic intermediates. Consequently, they proposed that opening and closing is an intrinsic and essential part of the catalytic cycle, in which ubiquinone binding is initiated in the open state, the enzyme closes for ubiquinone reduction, and then reopens as ubiquinol is released [Scheme 1b]. The closed enzyme is predicted not to exist without bound ubiquinone.

Table 1. The status of structural features identified to differ between the mammalian closed/active and open/deactive states in preparations of complex I from different species. All the structures listed are 'as-prepared' enzymes that have not been treated with substrates or inhibitors, or to activate or deactivate them. In S. scrofa and T. thermophila the enzyme is contained in a supercomplex. Each structural feature is compared to its structure defined in this laboratory in the 'active' resting state.

Similarly, opening either occurs slowly, during deactivation, or rapidly, during every turnover cycle.
In Interpretation 1, a mixture of active and deactive (and inactive) enzymes will immediately begin to turn over upon substrate addition due to the ready-to-catalyse active molecules, then increase its rate upon reactivation of the deactive molecules (inactive molecules will remain inactive). Therefore, the active/deactive model can explain the catalytic competence of a mixed population. To capture known biochemical active/deactive behaviour for ovine complex I in their proposal, Kampjut and Sazanov invoked additional structural changes in ND6 that are not observed in the deactivated bovine and mouse enzymes [24,35]; these changes to ND6, together with loss of density for nearby subunit NDUFA11, may instead reflect the known instability of ovine complex I in the absence of complex III [25], exacerbated by the incubation in detergent. It is possible that ovine ND6 recovers its native structure, together with the structural elements highlighted in Figure 1, during global reactivation of the deactive enzyme. Alternatively, its altered structure may not affect catalysis, as suggested by the different positions/disorder of ND6-TMH4 in structures of all the non-mammalian species listed in Table 1.
The challenges of combining catalysis and cryoEM
Experiments to freeze complex I onto cryoEM grids while it is catalysing have been carried out in a quest to observe the structures of the intermediates present directly, perhaps catching the elements discussed above in different conformations. For complex I, the experiment is technically demanding due to challenges in providing sufficient electron acceptor (ubiquinone or O2) to sustain catalysis in a high-concentration sample undergoing rapid turnover for long enough for grid preparation and freezing. We consider it likely that the structural elements that change during deactivation [Figure 1 and Table 1] are conformationally mobile during catalysis, and note that movement of the ND3-TMH1-2 loop has been suggested on the basis that restricting it by cross-linking was observed to decouple catalysis [44]. In the future, using the same method to restrict opening and closing movements may provide an alternative approach to evaluate their catalytic relevance.
Insights from cryoEM analyses of nonmammalian complexes I
The fourteen 'core' subunits of complex I are conserved in all species and considered sufficient for catalysis. Therefore, we expect the mechanism to be conserved, and all species to catalyse via a matching set of catalytic intermediates and transition states. In contrast, different enzymes, with different thermal stabilities, supernumerary subunits and physiological environments, and isolated using different procedures, may relax differently in the absence of substrates and rest in different conformations. Therefore, we surveyed structures of complex I from plants, single-celled eukaryotes, bacteria and archaea to evaluate the status of the key structural elements that change in the mammalian complex during deactivation/opening [see Table 1].
It is clear that the simple binary nature of the active/closed and deactive/open resting mammalian enzymes does not extend to the other organisms, with many exhibiting mixed characteristics. Only one, from the ciliate Tetrahymena thermophila, is observed by cryoEM in a homogeneous active/closed state under resting conditions. This structure argues against opening and closing during catalysis, as it contains species-specific supernumerary subunits that lock the substrate-binding site in the active/closed conformation [17]. Common to all others is the deactive-type π-bulge in ND6-TMH3 that disrupts the proposed connection between the ubiquinone-binding site and proton-translocating modules [8,11,38]. Nearby is ND1-TMH4, in the deactive-type straighter form in most species. The active-type bent form correlates partially with ordered active/active-like conformations of the ND3-TMH1-2, ND1-TMH5-6, NDUFS2 and NDUFS7 loops that form the ubiquinone-binding channel [Fig. 1b]. These varied combinations of active- and deactive-type elements may suggest each species of enzyme relaxes differently, and to a different extent, from the same initial resting state when catalysis stops. For example, Y. lipolytica complex I has a lower energy barrier for its (limited) active-to-deactive transition than the mammalian enzyme [45,46]. In contrast, other enzymes show no evidence of undergoing a transition: T. thermophila complex I has been proposed to be structurally trapped in the closed state [17], and Paracoccus denitrificans complex I (which contains a Cys39 equivalent) displays no sensitivity to NEM [47]. Between the mammalian enzymes, the relative proportions of open and closed states observed suggest that the mouse enzyme (predominantly closed) has the highest barrier for deactivation/opening and the ovine enzyme (mostly open) the lowest. Alternatively, the different combinations of conformations may result from different species adopting different initial resting states (at different stages around the cycle) when catalysis stops, so they might provide insights into how different mobile elements change their conformations individually during catalysis.
The ND6-P25L mouse model: rapid deactivation and unidirectional catalysis
"Chemistry"
] |
Integration of simulated annealing into pigeon inspired optimizer algorithm for feature selection in network intrusion detection systems
In the context of the 5G network, the proliferation of access devices results in heightened network traffic and shifts in traffic patterns, so network intrusion detection faces greater challenges. A feature selection algorithm based on an improved binary pigeon-inspired optimizer (SABPIO) is proposed for network intrusion detection systems to tackle the challenges posed by the high dimensionality and complexity of network traffic, which result in complex models, reduced accuracy, and longer detection times. First, the raw dataset is pre-processed by one-hot encoding and standardization. Next, feature selection is performed using SABPIO, which employs simulated annealing and a population decay factor to identify the most relevant subset of features for subsequent evaluation. Finally, the selected subset of features is fed into decision tree and random forest classifiers to evaluate the effectiveness of SABPIO. The proposed algorithm has been validated through experimentation on three publicly available datasets: UNSW-NB15, NSL-KDD, and CIC-IDS-2017. The experimental findings demonstrate that SABPIO identifies the most indicative subset of features through rational computation. This method significantly shortens the system's training duration and enhances detection rates; compared to using all features, it reduces the training and testing times by factors of about 3.2 and 0.3, respectively. Furthermore, it improves the F1-score of the selected feature subset compared to the CPIO and XGBoost algorithms, with improvements ranging from 1.21% to 2.19% and from 1.79% to 4.52%, respectively.
INTRODUCTION
As 5G networks continue to advance and the number of access devices increases, network traffic has also increased significantly. With higher bandwidth, lower latency, and greater connection density, 5G networks are more vulnerable to insidious and efficient network attacks. To address network security concerns, it is recommended to implement a network intrusion detection system (NIDS) (Tsai & Lin, 2010) on computer systems to scan for any signs of unauthorized intrusion. The connection of a large number of devices to the 5G network requires NIDS to be capable of handling such a large-scale operation. Nevertheless, network data is characterized not only by its substantial volume but also by its high-dimensional nature (Ganapathy et al., 2013), resulting in prolonged model training times and diminished predictive performance (Hastie et al., 2009). Hence, the significance of feature selection algorithms in NIDS is self-evident (Alazab et al., 2012). Feature selection offers a means of identifying significant features and eliminating extraneous ones from a dataset. The objective is to choose the most indicative subset of features from the initial dataset, thereby reducing model complexity and enhancing predictive performance. Feature selection reduces model complexity, improves predictive performance, and enhances the accuracy and reliability of intrusion detection by minimizing false alarms and preventing missed alarms (Thakkar & Lohiya, 2022). NIDS that use feature selection algorithms have been extensively researched and implemented. They are a critical technical tool for ensuring network security.
The aim of feature selection is to identify a subset of features that closely approximates the optimal feature subset within a reasonable timeframe. The inclusion of feature selection has greatly improved the effectiveness of NIDS, the aim being to identify a more suitable solution rather than a provably optimal one. At present, bio-inspired algorithms utilizing feature selection techniques exhibit superior performance when compared to other methods. Bionic algorithms draw inspiration from the collective behaviors of various animals (such as fireflies, wolves, fish, and birds), and researchers have introduced diverse computational approaches that emulate these species' behaviors, such as foraging, for problem optimization. These approaches include the Chaotic Firefly Algorithm, Grey Wolf Optimization, the Artificial Fish Swarm Algorithm, and the Bird Swarm Algorithm, among others (Shoghian & Kouzehgar, 2012). Each member within a swarm intelligence algorithm embodies a potential solution, generating fresh individuals through continuous mutation and crossover. The Pigeon-Inspired Optimizer (PIO) algorithm is an emerging swarm intelligence algorithm with clear advantages in global search ability, convergence speed, and robustness compared with other swarm intelligence algorithms.
Effective feature selection algorithms can enhance the detection capabilities and efficiency of NIDS. Scientific and efficient decision-making in feature selection has emerged as a critical method to guarantee the operational security of networks. However, feature selection algorithms currently face several issues, including excessive feature pruning (Zhou et al., 2020), disregard for inter-feature correlations (Li et al., 2020), susceptibility to anomalous traffic, and difficulty in handling large datasets (Jaw & Wang, 2021). These challenges can lead to a decline in the model's generalization ability, increased complexity, and reduced stability, ultimately impacting the model's detection performance and efficiency (Rashid et al., 2022). To tackle the aforementioned issues, this article presents a feature selection algorithm for NIDS based on an improved binary pigeon-inspired optimization algorithm, aiming to enhance the accuracy and efficiency of feature selection in the context of network intrusion detection. The goal is to reduce the false positive rate and false negative rate in NIDS. This approach utilizes mutation and simulated annealing mechanisms during the map and compass operator phase to expand the search scope and prevent the feature subset from being stuck in local optima. Furthermore, it introduces a population decay factor in the landmark operator phase to control rapid population decline and regulate the algorithm's convergence rate. The article presents a method that selects the most representative feature subset through reasonable computation. This leads to a significant reduction in model training and testing times, while enhancing the model's detection rate and accuracy. The key contributions of this study include: (1) we conduct an investigation and analysis of existing NIDS feature selection algorithms, leading to the proposal of an improved NIDS feature selection algorithm based on enhancements to the binary PIO algorithm; (2) during the map and compass operator phase, a mutation mechanism is introduced to increase the diversity of the population, thereby expanding the search space of the algorithm, and a simulated annealing approach is incorporated to accept new solutions that are worse than the current solution with a certain probability, facilitating escape from local optima; (3) during the landmark operator phase, a population decay factor is proposed to dynamically adjust the population size for each iteration based on the fitness distribution of the population, in order to regulate the convergence speed of the algorithm; (4) the improved PIO algorithm was combined with a classifier and applied to NIDS, and the algorithm was evaluated against state-of-the-art feature selection algorithms using the UNSW-NB15, NSL-KDD, and CIC-IDS-2017 datasets.
The remaining sections of this article are organized as follows. "Related Work" provides an overview of previous related work conducted by other researchers. In "Continuous Pigeon Inspired Optimizer", we present the architecture and formulation of the continuous PIO algorithm. "Proposed Improvement of PIO" describes the model of the proposed feature selection algorithm and provides detailed information on the updating steps. In "Experiments and Results", we conduct simulation experiments and evaluate the performance of our approach. Finally, in "Conclusion", we conclude and discuss future research directions.
RELATED WORK
The classification performance of network intrusion detection system models is significantly constrained by the high dimensionality and sheer volume of network traffic data. In light of the increasing volume of data, researchers have investigated sample selection methods to enhance the efficiency of the model training process. Feature selection algorithms have been developed to tackle challenges associated with high data dimensionality, as well as the presence of irrelevant and redundant features (Alazab et al., 2012) in datasets. Feature selection is crucial for enhancing model performance by eliminating irrelevant and redundant information from the dataset. By selecting only the most significant features for model training, it helps prevent overfitting and reduces feature dimensionality, thereby improving the efficiency of model training and prediction processes.
Traditional feature selection algorithms can be categorized into three types: filtered, wrapper, and embedded methods (Di Mauro et al., 2021). Filtered feature selection operates independently of the classifier, while wrapper methods involve evaluating the classifier during feature selection. Embedded methods integrate feature selection directly into the training process of the classifier. Each type has distinct benefits and is appropriate for different situations depending on the specific needs of the task. Filtered algorithms are computationally efficient but do not guarantee optimal feature selection. Embedded algorithms perform feature selection during intrusion model training and are computationally expensive for large datasets. Conversely, wrapper algorithms exhibit higher accuracy than the previous two but are sensitive to the quality of the training data. Achieving high accuracy is crucial for NIDS, and training time on offline data is not a significant concern. Therefore, this article uses the wrapper approach as the preferred method for feature selection, as it has been shown to provide the best results.
Table 1 provides a summary of the performance of different feature selection methods on different datasets, categorizing them into filtering methods, embedding methods, and wrapping methods.It includes details such as the number of features selected, the detection rate, and the false alarm rate for each method on each dataset.This table serves as a comprehensive overview of how these methods perform in the context of feature selection for intrusion detection.
Filtered feature selection method
Filtered feature selection algorithms do not use explicit criteria to determine the size of the subset. Instead, they rank features based on various evaluation metrics and select the top N features with the highest scores. This selection process is based on the intrinsic characteristics of the dataset and does not consider feedback from classification results for the features already selected. By focusing on feature ranking and selection independently of the classification model, filtered feature selection algorithms aim to identify the most relevant features for the given dataset without being influenced by the performance of a specific classifier. Amiri et al. (2011) introduced a mutual information-based feature selection (MIFS) technique for NIDS. However, the accuracy of mutual information estimation may be compromised in scenarios with limited data, resulting in the identification of suboptimal sets of features. Ambusaidi et al. (2016) proposed a mutual information-based method to select the optimal feature subset for classification from linearly and nonlinearly correlated data. The ARM feature selection model proposed by Moustafa & Slay (2017) focuses on enhancing detection performance by filtering out irrelevant features, retaining only significant ones, and leveraging association rule mining to identify feature combinations with strong correlations. The comprehensive results show that ARM effectively minimizes false alarms and significantly reduces processing time while maintaining accuracy. Stiawan et al. (2020) conducted experiments using the mutual information selection technique with a NIDS on 20% of the streams from the CIC-IDS-2017 dataset. By reducing the number of features selected, the accuracy decreased, but the execution time also decreased significantly.

Wrapper feature selection method

One related approach hybridized two optimization algorithms, including the whale optimization algorithm (WOA), to address their respective limitations. The experimental findings indicate that when combined with the Artificial Neural Network Weighted Random Forest (AWRF), the OWSA achieved an accuracy of 99.92% on the NSL-KDD dataset and 98% on the CICIDS2017 dataset. Zorarpaci (2024) presented a rapid wrapper feature selection technique, termed DBDE-QDA, which integrates two-class binary differential evolution (DBDE) and quadratic discriminant analysis (QDA) to accelerate the process of wrapper feature selection. This approach aims to swiftly identify the optimal prediction features with minimal dimensions, thereby reducing the computational time needed. The experimental results demonstrate that DBDE-QDA offers decreased computational costs and effectively shortens the classification algorithm's computational time for network intrusion detection systems (NIDS). However, it may lead to a slight reduction in the detection rate for certain intrusion detection datasets.
Embedded feature selection method
Embedded feature selection algorithms are integrated with the machine learning model training process in a seamless manner. This approach offers the advantage of performing feature selection and model training simultaneously, resulting in optimized performance in both aspects. Embedded feature selection algorithms view the feature selection process as an integral part of model training: feature weights are assigned concurrently with model training, all within a unified framework. Yulianto, Sukarno & Suwastika (2019) sought to enhance machine learning-based NIDS by incorporating principal component analysis and ensemble feature selection techniques for feature selection.

In conclusion, while existing feature selection algorithms have partially addressed the challenges of intrusion detection systems (IDS) in 5G network environments, they still suffer from issues such as excessive feature selection, disregard for feature correlations, sensitivity to abnormal traffic, and difficulty in processing large-scale data. These challenges can reduce the model's generalisation ability, increase complexity, and decrease stability, thereby affecting detection performance and efficiency. To address these challenges, this article proposes a feature selection method based on an improved binary pigeon-inspired optimization algorithm. In comparison to existing methods, the proposed approach incorporates mutation and simulated annealing mechanisms in the map and compass operator stage. These modifications are designed to enhance population diversity, expand the search space, and allow the acceptance of new solutions that are worse than the current solution with a certain probability, thereby facilitating escape from local optima. Furthermore, the proposed algorithm incorporates a population attenuation factor in the landmark operator stage. This factor dynamically adjusts the population size of each iteration based on the fitness distribution of the population, thus controlling the algorithm's convergence speed. The objective is to achieve improvements in key performance indicators such as detection rate, false alarm rate, and processing time.
CONTINUOUS PIGEON INSPIRED OPTIMIZER
In 2014, Duan & Qiao (2014) researched pigeon behavior. They found that pigeons use geomagnetic cues and landmarks to navigate, determine direction, and find their nests. Based on these findings, the PIO algorithm was developed to imitate pigeons' migration behaviors and help find optimal solutions through communication and cooperation. The algorithm includes the map and compass operator phase and the landmark operator phase.
Map and compass operator phase
The map and compass operator phase emulates how the sun and geomagnetic forces influence pigeon navigation. Pigeons assess the sun's position and geomagnetic cues to make real-time adjustments to their flight direction and strategize optimal routes. As pigeons approach their destination, they rely less on solar and geomagnetic guidance. During this phase, each pigeon is characterized by its positional and velocity data.
The PIO algorithm defines V_i^t as the velocity of the i-th pigeon in the t-th iteration, and P_i^t as its position. In each iteration, every pigeon adjusts its position P_i^t and velocity V_i^t according to Eqs. (1) and (2) (Duan & Qiao, 2014). In Eq. (1), R represents the map and compass operator, t denotes the current iteration number, rand ∈ [0, 1] is a random function, and P_global stands for the globally optimal position obtained by comparing the positions of all pigeons in the (t−1)-th iteration.
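Equations (1) and (2) are not reproduced in the extracted text; the sketch below assumes the standard continuous PIO update from Duan & Qiao (2014), with illustrative array shapes and constants:

```python
import numpy as np

def map_compass_update(P, V, P_global, R, t, rng):
    """One map-and-compass step of continuous PIO (standard formulation).

    P: (n, d) positions, V: (n, d) velocities, P_global: (d,) best
    position from the previous iteration, R: map and compass operator,
    t: current iteration number (1-based).
    """
    r = rng.random((P.shape[0], 1))              # rand in [0, 1], per pigeon
    V = V * np.exp(-R * t) + r * (P_global - P)  # Eq. (1): decaying inertia plus pull toward the best
    P = P + V                                    # Eq. (2): move along the new velocity
    return P, V

rng = np.random.default_rng(0)
P = rng.random((128, 42))        # 128 pigeons in a 42-dimensional search space
V = np.zeros_like(P)
P_global = P[0].copy()
P, V = map_compass_update(P, V, P_global, R=0.2, t=1, rng=rng)
```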
Landmark operator phase
The landmark operator mimics how navigational landmarks affect pigeons. Pigeons have the ability to rapidly store details about surrounding landmarks during navigation. As they approach the target location, pigeons rely on nearby landmarks to construct a mental map and fine-tune their position and speed in response to these landmarks until they reach the intended destination. If a pigeon is unfamiliar with the local landmarks, it will adjust its flight based on the flight patterns of nearby pigeons that are familiar with them. During the iterative process of the landmark operator phase, pigeons are eliminated based on their fitness disparity, removing the less adapted half of the pigeons. The central position of the remaining, more adept pigeons is then computed as the reference direction for the population. The position of each pigeon is updated in this phase based on Eqs. (3)-(5) (Duan & Qiao, 2014).

The iteration of the center position of the pigeon group is described by Eq. (3), where Num_pigeon^t denotes the number of pigeons in the t-th iteration, t signifies the present iteration number, and the fitness function Fitness adopts distinct valuation methods for different problems (for minimization problems, its reciprocal is used); P_center^(t−1) represents the center position of the pigeon group (the desired destination) in the (t−1)-th iteration.

The sorting function sort orders the pigeon group according to fitness. The iterative decay of the population is described by Eq. (4).

Equation (5) describes how the remaining flock adjusts its position relative to the center position of the flock by incorporating the random function rand ∈ [0, 1].
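Again assuming the standard landmark-phase equations from Duan & Qiao (2014), a minimal sketch of one landmark iteration is:

```python
import numpy as np

def landmark_update(P, fitness_fn, rng):
    """One landmark step of continuous PIO: discard the worse half of the
    flock (Eq. 4), compute the fitness-weighted centre (Eq. 3), and move
    the survivors toward it (Eq. 5). Assumes a minimisation fitness."""
    fit = np.array([fitness_fn(p) for p in P])
    order = np.argsort(fit)                            # best (smallest) first
    P = P[order][: max(1, len(P) // 2)]                # Eq. (4): halve the population
    w = 1.0 / (fit[order][: len(P)] + 1e-12)           # reciprocal weights for minimisation
    P_center = (P * w[:, None]).sum(axis=0) / w.sum()  # Eq. (3): weighted centre
    r = rng.random((len(P), 1))                        # rand in [0, 1]
    return P + r * (P_center - P)                      # Eq. (5): move toward the centre
```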
The PIO algorithm is logically coherent, easy to understand, robust, and has significant research implications. It has been shown to be effective in addressing various challenges, including unmanned aerial vehicle path planning (Yuan & Duan, 2024), the security concerns associated with medical image encryption (Geetha et al., 2022), and the optimization of large-scale hydroelectric short-term generation (Tian et al., 2020). While the PIO algorithm exhibits superior performance compared to other population intelligence algorithms, it still suffers from drawbacks such as overly rapid convergence and a tendency to become trapped in local optima. To address the issue of rapid iteration and susceptibility to local optima in the PIO algorithm, this study introduces mutation and simulated annealing techniques to broaden the search scope. Additionally, a population decay factor is suggested to regulate the algorithm's convergence rate, thereby enhancing its overall performance, reducing the dimensionality of the selected feature data, and boosting the efficiency of intrusion detection.
PROPOSED IMPROVEMENT OF PIO
This article proposes a method that integrates simulated annealing into the binary PIO (SABPIO) algorithm for feature selection in NIDS. The approach incorporates simulated annealing and mutation into the conventional PIO algorithm, expanding the search scope and mitigating the risk of local optima. Additionally, a population decay factor is introduced to regulate the algorithm's convergence speed. The proposed SABPIO feature selection algorithm is shown in Fig. 1.
The proposed method generates the initial positions of the pigeons by utilizing randomly chosen features from the dataset, establishing the initial population. Decision tree (DT) and random forest (RF) classifiers are used to determine the search subject, which is the position of the pigeon closest to the target. These classifiers evaluate the fitness of each pigeon position within the population, and the positions of the remaining pigeons are adjusted based on the optimal solution. Following this, the pigeon swarm undergoes probabilistic positional adjustments utilizing simulated annealing. This mechanism aids in steering clear of locally optimal solutions and enhances solution diversity within the search process. Finally, the population attenuation factor is used to decrease the pigeon population, which improves the exploration of solutions within the search space. The output of each iteration serves as the input for the following iteration until the optimal feature subset is identified.
Pigeon encoding
The pigeon position symbolizes a potential selection of features, with a single pigeon representing a particular feature subset. As shown in Fig. 2, the upper vector in the encoding denotes the feature's order number (dimension), while the lower vector indicates the pigeon's binary position within each dimension. The spatial dimension d explored by the pigeon corresponds to the number of network features. In the binary vector P_i = (p_i1, p_i2, …, p_id), p_ij = 1 signifies that feature j within the feature subset represented by pigeon i is chosen. Conversely, p_ij = 0 indicates that feature j in the feature subset represented by pigeon i is not selected, meaning it is excluded from the optimal feature subset.
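As a concrete illustration of this encoding (with made-up values, not taken from the paper), a pigeon is simply a binary mask over the feature columns:

```python
import numpy as np

pigeon = np.array([1, 0, 1, 0, 0, 1, 0, 0])  # d = 8 features; features 0, 2 and 5 selected
selected = np.flatnonzero(pigeon)            # -> array([0, 2, 5])

X = np.random.rand(100, 8)                   # toy dataset with 8 feature columns
X_subset = X[:, pigeon.astype(bool)]         # columns kept for classifier training
```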
Fitness function
The fitness function evaluates the fitness of every individual. It is formulated considering the individual's traits and the specifications of the given problem, converting the individual into a numerical value that reflects its suitability for problem-solving. Given that the true positive rate (TPR) and false positive rate (FPR) serve as effective gauges for assessing the model's efficacy in identifying attacks and managing false positives in routine activities, a majority of researchers opt to employ TPR and FPR (Thakkar & Lohiya, 2023) as the fitness criteria (Louk & Tama, 2023) in their calculations.
Equations (6) and (7) provide the calculation formulas for TPR and FPR. In the feature selection problem of a NIDS, TP refers to the system identifying abnormal traffic as attack events, and TN refers to the system identifying normal traffic as non-attack events. FP refers to the system identifying normal traffic as an attack event, and FN refers to the system identifying abnormal traffic as a non-attack event. The SABPIO algorithm incorporates the ratio of selected features into the fitness function to account for their potential impact on intrusion detection time. This adjustment aims to eliminate features within the subset that do not significantly contribute to detection accuracy. The present study also introduces a fitness function formula, shown in Eq. (8), which reframes the optimization of feature selection as a minimization task.
Here, Num_SF denotes the number of selected features, and k ∈ (0, 1) is a weighting factor. In Eq. (8), the numerator considers the impact of the selected feature quantity on the fitness, while the denominator accounts for the influence of the NIDS's performance. Through the fitness function, the SABPIO algorithm strikes a balance between feature quantity and classification performance, effectively enhancing classification efficiency while ensuring the accuracy of NIDS detection.
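Equation (8) itself is not reproduced in the extracted text. The sketch below therefore uses an assumed form consistent with the description — a numerator that grows with the number of selected features and a denominator that grows with detection quality (high TPR, low FPR) — and should not be read as the paper's exact expression; the default k = 0.0075 matches the weight factor reported later in the experiments:

```python
def fitness(num_selected, d, tpr, fpr, k=0.0075):
    """Assumed minimisation fitness: feature-count penalty over a
    detection-quality term built from TPR and FPR (illustrative form only)."""
    penalty = 1.0 + k * (num_selected / d)  # numerator: grows with subset size
    quality = 1e-12 + tpr * (1.0 - fpr)     # denominator: grows with detection quality
    return penalty / quality                # smaller is better
```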
Binary mapping strategy
The continuous pigeon-inspired optimizer (CPIO) algorithm involves a process of continual spatial repositioning for the pigeon, enabling it to traverse any point within the space. However, in certain discrete scenarios such as feature selection, the pigeon's position, representing a solution matrix, consists of binary values of 0 and 1. Therefore, updating continuous values requires the application of appropriate position adjustment techniques in addition to discretization operations.
In the context of feature selection, the pigeon's position within each dimension of the search space is constrained to 0 or 1. However, the velocity associated with each dimension is not subject to such limitations. Therefore, the integration of a conversion function becomes essential to effectively map the position variables onto binary values. After conducting experimental analysis, we selected the Tanh function to map pigeon velocities into the binary space. The Tanh function formula (Sood et al., 2023) is shown in Eq. (9), and the positions of the individual pigeons are updated using a uniform random number r ∈ (0, 1), together with the Tanh value, through Eq. (10).
The individual pigeon's position is updated according to Eq. (10). In this process, for each dimension of the position, the mapped velocity is evaluated against a randomly generated number r. When Tanh(V_i^t[j]) > r in the ongoing velocity iteration, there is a strong positive correlation between velocity and position, and the position from the previous iteration in the current dimension is preserved. If Tanh(V_i^t[j]) < −r, there is a strong negative correlation between velocity and position, and a reverse operation is applied to the position from the prior iteration in the current dimension. In all other scenarios, where there is only a weak correlation between velocity and position, the optimal position value from the previous iteration is used directly.
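A minimal sketch of this mapping rule, assuming Eq. (9) is the plain tanh of the velocity, might look like:

```python
import numpy as np

def binarize(P_prev, V, P_best, rng):
    """Map continuous velocities to binary positions (Eqs. 9-10): a strong
    positive correlation keeps the previous bit, a strong negative one flips
    it, and a weak correlation copies the global-best bit instead."""
    r = rng.random(V.shape)          # uniform random numbers in (0, 1)
    m = np.tanh(V)                   # Eq. (9): map velocities into (-1, 1)
    P_new = np.where(m > r, P_prev,                 # keep previous bit
                     np.where(m < -r, 1 - P_prev,   # flip previous bit
                              P_best))              # otherwise copy the best bit
    return P_new.astype(int)
```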
Improved map and compass operator phase
(1) Simulated annealing
To tackle the issue of rapid convergence observed in conventional PIO algorithms, the proposed approach introduces a simulated annealing mechanism during this phase to prevent premature trapping in locally optimal solutions. In the map and compass operator phase, each pigeon undergoes adjustments to both velocity and position in every iteration. The influence of the map and compass operator R on the population decreases as the algorithm approaches the later stages of iteration. At this point, the algorithm relies mainly on the current globally optimal position P_global.
This approach integrates simulated annealing to enhance the inner loop within each iteration. During the loop, a random pigeon undergoes perturbation, resulting in the modification of one value in the vector, such as changing a 1-value to a 0-value. Then, the fitness is recalculated, and the new feature subset is accepted with a probability determined by the Metropolis criterion (Hao et al., 2023). The purpose of this criterion is to determine whether to accept a new state based on the change in energy value before and after the state modification. The study employs the Metropolis criterion outlined in Eq. (11), where p(P_global → P_i′) represents the probability of accepting the new solution P_i′, and ΔE represents the energy difference, defined here as Fitness(P_i′) − Fitness(P_global). In this article, the fitness is formulated as a minimization problem. If the fitness of the new solution P_i′ is lower than that of the globally optimal solution P_global, the feature subset represented by P_i′ is superior to P_global, and P_i′ is accepted as the current globally optimal solution with a probability of 1. Otherwise, the probability p(P_global → P_i′) is used to determine whether the new solution should be accepted.
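A small sketch of this acceptance rule for a minimisation fitness follows; the temperature parameter T and its schedule are assumptions, as the text does not specify them:

```python
import numpy as np

def metropolis_accept(new_fit, best_fit, T, rng):
    """Metropolis criterion (Eq. 11) for minimisation: always accept an
    improvement; accept a worse solution with probability exp(-dE / T)."""
    dE = new_fit - best_fit
    if dE < 0:
        return True                        # better solution: accept with p = 1
    return rng.random() < np.exp(-dE / T)  # worse solution: accept probabilistically
```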
(2) Multi-dimensional similarity strategy During the map and compass operator phase, the white pigeon adjusts its flight position by tracking the position of the best pigeon (blue pigeon), as shown in Fig. 3.
In the continuous problem, the pigeon computes its velocity by subtracting its own position vector from the globally optimal vector. In a discrete problem, it is not feasible to subtract the pigeon's position vector directly as in the continuous case, due to the nature of discrete variables. This article therefore introduces a multi-dimensional similarity strategy for computing pigeon velocities. The strategy includes metrics such as the Pearson correlation coefficient (Saviour & Samiappan, 2023), cosine similarity (Alazzam, Sharieh & Sabri, 2020), and the Jaccard similarity coefficient (Yin et al., 2023), as shown in Eqs. (12)-(14). All three similarity indicators have limitations. To balance these limitations, this article employs weighted calculations to avoid relying on a single indicator. In this phase, each pigeon updates its velocity and position for the iteration based on Eqs. (15) and (10).
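A sketch of the weighted similarity in Eq. (15) is given below; equal weights are used purely for illustration, since the paper's weighting coefficients are not reproduced here:

```python
import numpy as np

def similarity_velocity(p, g, w=(1/3, 1/3, 1/3)):
    """Weighted similarity between a binary pigeon position p and the global
    best g (Eqs. 12-15). Jaccard is rescaled from [0, 1] to [-1, 1] so the
    three indicators share a common range."""
    pearson = np.corrcoef(p, g)[0, 1]                                 # Eq. (12)
    cosine = p @ g / (np.linalg.norm(p) * np.linalg.norm(g) + 1e-12)  # Eq. (13)
    union = np.logical_or(p, g).sum()
    jaccard = np.logical_and(p, g).sum() / union if union else 1.0    # Eq. (14)
    jaccard = 2 * jaccard - 1                                         # rescale to [-1, 1]
    return w[0] * pearson + w[1] * cosine + w[2] * jaccard            # Eq. (15)
```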
Equation (15) requires normalizing the Jaccard similarity coefficient to the range [−1, 1], given that the Pearson correlation coefficient and cosine similarity have a value range of [−1, 1], while the Jaccard similarity coefficient ranges over [0, 1]. Here, x_1, x_2 and x_3 represent the weighting coefficients of the Pearson correlation coefficient, cosine similarity, and Jaccard similarity coefficient, respectively, with x_1 + x_2 + x_3 = 1.

(3) Mechanism of mutation
When the initial number of pigeons is high, there is a greater chance that two pigeons will represent the same solution, which reduces the search ability of the algorithm. Therefore, this approach includes a mutation mechanism in the flock's position updates. It checks for the existence of a solution with an identical position before adding the updated pigeon to the flock. If such a solution is found, all dimensions of the current pigeon undergo mutation based on the probability derived from a uniformly distributed random number r ∈ [0, 1], which expands the search space.
Improved landmark operator phase
During each iteration of the landmark operator phase, the pigeons are sorted based on their fitness values. Then, half of the pigeons with lower fitness values are eliminated. The current center position of the remaining dominant flock is considered the desired destination. The remaining flock adjusts its flight position towards the desired destination, also known as the blue pigeon, as shown in Fig. 4. The population decay factor a is proposed to regulate the decay rate of the population, because the traditional pigeon-inspired algorithm tends to converge too quickly and fall into locally optimal solutions during the landmark operator phase.
Equation (16) defines b as a constant in (0, 1), and t as the number of iterations of the landmark operator. The SABPIO algorithm improves on the traditional pigeon swarm algorithm, which simply halves the population with each iteration, by updating the population size according to Eq. (17), thus preventing rapid population decay in the early stages. During the map and compass operator phase, all pigeons calculate their speed and position using Eqs. (15) and (10).
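Equations (16)-(17) are not reproduced in the extracted text; the sketch below assumes a decay factor of the form a = b^t applied to the previous population size, which matches the description of a gentler, iteration-dependent decline, but the exact expression may differ in the paper:

```python
def next_population_size(num_pigeon, t, b=0.9):
    """Assumed population decay (Eqs. 16-17): shrink the population by an
    iteration-dependent factor a = b**t instead of halving it outright."""
    a = b ** t                          # assumed decay factor, with b in (0, 1)
    return max(1, int(num_pigeon * a))  # always keep at least one pigeon
```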
Algorithm 1 outlines the procedure for the feature selection algorithm SABPIO, which is based on the improved binary PIO framework. The upgraded algorithm introduces a simulated annealing loop within the initial phase of the main iteration loop to extend the exploration of the global search space. Additionally, the secondary while loop uses a population decay factor to regulate the pace of population reduction and mitigate premature convergence of the algorithm.
EXPERIMENTS AND RESULTS

Experimental dataset
(1) UNSW-NB15 dataset
The UNSW-NB15 dataset represents network traffic data collected by a cybersecurity research laboratory in Australia utilizing the IXIA Perfect Storm tool. It comprises four CSV files encompassing 254,047 data entries, featuring nine attack classifications, 43 descriptive attributes, and two classification labels for each entry. The detailed feature attributes of this dataset are outlined in Table 2.
(2) NSL-KDD dataset
The NSL-KDD dataset serves as an updated iteration of the renowned KDD99 dataset, comprising 148,517 entries. Each entry is composed of 41 descriptive attributes and one class label, encompassing a total of 39 attack classifications. Within the training set, there are 125,973 data points encompassing 22 distinct attack types, while the test set consists of 22,544 entries featuring a further 17 attack categories. The defining attributes within this dataset are detailed in Table 3.
(3) CIC-IDS-2017 dataset
The CIC-IDS-2017 dataset encompasses network traffic data gathered by the Canadian Institute for Cybersecurity (CIC) from authentic network scenarios. This dataset is constructed using actual network traffic captures, encompassing a broad spectrum of network intrusions and regular activities. It comprises real network traffic observed across various laboratory network environments designed to replicate the network traits found in commercial and industrial entities. The CIC-IDS-2017 dataset encompasses a diverse array of network intrusion behaviors and standard network operations. It aligns with the traits of contemporary networks and stands as one of the presently recommended datasets. Featuring 2,830,743 records, each entry comprises 78 defining attributes and one class label, covering a total of eight attack classifications. The detailed characteristics of the dataset are presented in Table 4.

(4) Data preprocessing
1) Data conversion
During the algorithm's execution, solely numerical data is utilized for training and testing purposes. Hence, the initial step involves transforming non-numeric data within the dataset into numerical format. Taking the UNSW-NB15 dataset as a case in point, out of the 45 descriptive attributes, three are non-numeric and necessitate conversion via one-hot encoding. For instance, consider the "proto" attribute containing 133 distinct values such as "tcp", "udp", and "sctp". These values are encoded into numerical representations ranging from 0 to 132. Subsequently, the "service" and "state" attributes undergo a similar transformation into numerical format utilizing the aforementioned method.
2) Normalization
Normalization is a crucial data preprocessing technique that facilitates the comparison and analysis of data by standardizing data with varying scales and distributions onto a uniform scale (Devendiran & Turukmane, 2024). This process enhances the accuracy and efficiency of data analysis and machine learning algorithms while mitigating biases stemming from variations across different variables. The normalization formula is illustrated in Eq. (18).
Here, x_max is the maximum of the feature values, x_min is the minimum of the feature values, and x_norm is the output value, which lies between [0, 1].
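A brief sketch of both preprocessing steps, using a toy frame with illustrative column names rather than the real dataset schema:

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Toy records standing in for UNSW-NB15-style traffic features.
df = pd.DataFrame({
    "proto": ["tcp", "udp", "tcp", "sctp"],
    "dur": [0.12, 3.4, 0.5, 1.1],
    "sbytes": [496, 1762, 200, 1024],
})

# 1) Data conversion: one-hot encode the non-numeric column(s).
df = pd.get_dummies(df, columns=["proto"])

# 2) Normalization, Eq. (18): x_norm = (x - x_min) / (x_max - x_min).
df[["dur", "sbytes"]] = MinMaxScaler().fit_transform(df[["dur", "sbytes"]])
```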
Evaluation indicators
Multiple metrics are available for evaluating feature selection algorithms. In this study, we use evaluation metrics derived from the confusion matrix, including detection rate (DR), false alarm rate (FAR), accuracy (Acc), precision (Pre), and F1-score (Thakkar & Lohiya, 2023), as shown in Eqs. (19)-(23). Table 5 shows the confusion matrix.

(1) Detection rate (DR)
The detection rate, also known as the true positive rate (TPR), signifies the capacity to accurately recognize all true positive samples. It represents the proportion of positive samples correctly identified by the model. In the realm of network intrusion detection, it signifies the percentage of intrusion events effectively identified by the model. A heightened detection rate implies the model's enhanced ability to accurately identify potential intrusions.

(2) False alarm rate (FAR)
The false alarm rate, denoted as the false positive rate (FPR), represents the ratio of negative samples that the model inaccurately identifies as positive samples. In the context of network intrusion detection, it signifies the percentage of normal behaviors erroneously identified as intrusions by the model. A diminished FPR indicates the model's efficacy in minimizing false alarms.

(3) Accuracy
Accuracy refers to the proportion of correctly predicted samples by the model, serving as a measure of the model's overall predictive precision.

(4) Precision
Precision is the ratio of correctly identified positive samples among all samples the model flags as positive. In the context of network intrusion detection, it reflects the accuracy of the model in identifying all samples flagged as intrusions. Enhanced precision indicates greater reliability of the model in reporting alarms and a reduced occurrence of false alarms.

(5) F1-score
The F1-score is a measure that assesses the balance between precision and recall while taking into account the model's accuracy and comprehensiveness. It is calculated as the harmonic mean of precision and recall.
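Since Eqs. (19)-(23) are standard confusion-matrix formulas, they can be sketched directly:

```python
def confusion_metrics(tp, fp, tn, fn):
    """Evaluation metrics used in the paper (Eqs. 19-23)."""
    dr = tp / (tp + fn)                    # detection rate (TPR / recall)
    far = fp / (fp + tn)                   # false alarm rate (FPR)
    acc = (tp + tn) / (tp + fp + tn + fn)  # accuracy
    pre = tp / (tp + fp)                   # precision
    f1 = 2 * pre * dr / (pre + dr)         # F1: harmonic mean of precision and recall
    return dr, far, acc, pre, f1
```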
Experimental results
In this study, we conducted experiments to evaluate the proposed approach using Python 3.11.2 on a 64-bit Windows 11 operating system. The experiments were carried out on an Intel(R) Core(TM) i5-11400H processor with 16.00 GB of RAM. All feature selection algorithms were assessed using the decision tree (DT) and random forest (RF) classifiers from the scikit-learn library. Compared to alternative base classifiers, DT and RF are less sensitive to missing values and more robust against outliers and noise, which makes them well suited for assessing feature selection. Table 6 delineates the parameter configurations of the SABPIO algorithm. Through rigorous experimental analysis, we found that the performance of the algorithm is optimal when the number of individuals in the pigeon swarm is within the range [80, 150]. Consequently, we set the number of pigeons to 128. It is important to note that while a larger number of pigeons allows for a broader exploration of the search space, it also increases computational complexity. In the inner loop of the annealing iterations, only the new solution and the current local optimal solution are compared. As such, the number of iterations has a minimal impact on time complexity. Therefore, we set the number of iterations in the simulated annealing inner loop to 100. In the calculation of the fitness function, we considered the number of selected features. If the weight factor is too large, the pigeon swarm may overly pursue feature subsets with fewer elements rather than optimal feature subsets. To prevent the fitness calculation from ignoring the impact of TPR and FPR, we set the weight factor for the number of selected features to 0.0075.
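As an illustration of this evaluation setup (with synthetic data and a random mask standing in for a pigeon's feature subset), the scikit-learn classifiers can be applied as follows:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

X = np.random.rand(1000, 42)       # synthetic traffic features
y = np.random.randint(0, 2, 1000)  # synthetic attack / normal labels
mask = np.random.rand(42) < 0.3    # a candidate feature subset (one pigeon)

Xtr, Xte, ytr, yte = train_test_split(X[:, mask], y, test_size=0.3, random_state=0)
for clf in (DecisionTreeClassifier(random_state=0),
            RandomForestClassifier(random_state=0)):
    clf.fit(Xtr, ytr)
    print(type(clf).__name__, f1_score(yte, clf.predict(Xte)))
```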
(1) Results of UNSW-NB15
The performance, convergence, and efficiency of SABPIO were compared to those of the CPIO, SPIO, XGBoost, PSO, and ARM algorithms using the UNSW-NB15 dataset. Figure 5 shows the convergence curves of SABPIO, CPIO, SPIO, and PSO during the feature selection process under the random forest classifier. In "Proposed Improvement of PIO", fitness is defined as a minimization problem. The data suggest that SABPIO converges faster than the SPIO and PSO algorithms and achieves better fitness values with each iteration compared to the CPIO, SPIO, and PSO algorithms.
In Fig. 5, it is evident that the SABPIO algorithm exhibits the swiftest rate of fitness decay within the initial 30 iterations. Conversely, the SPIO algorithm showcases a rapid decay rate within the first 10 iterations; however, subsequent iterations reveal that the SPIO algorithm becomes ensnared in a locally optimal solution, impeding the exploration of a superior solution. By the 50th iteration, the SABPIO algorithm embraces a suboptimal solution based on the Metropolis criterion probability, resulting in a slight fitness increase. By the time all algorithms reach convergence at 100 iterations, it is apparent that the solution derived by SABPIO exhibits the lowest fitness. The experimental findings affirm that SABPIO boasts enhanced convergence efficiency compared to SPIO and PSO, along with greater efficacy in the selected feature subset than CPIO, SPIO, and PSO. In Fig. 6, the detection rate (DR) and false alarm rate (FAR) of the SABPIO algorithm, assessed on DT and RF classifiers alongside the feature subsets selected by the other algorithms, are presented. Each bar in the figure represents the results and standard deviation obtained from 100 repeated runs of the feature subset selected by the algorithm in the DT and RF classifiers. In Fig. 6A, it is observed that SPIO achieves the highest DR among the DT classifiers, slightly surpassing the performance of the SABPIO algorithm. However, it is crucial to acknowledge that DR is not the sole metric utilized in this study for evaluating the feature subset in network intrusion detection. Moving on to Fig. 6B, SABPIO exhibits a 2% lower FAR compared to SPIO, while only experiencing a marginal 0.2% reduction in DR. In comparison to the CPIO, XGBoost, PSO, and ARM algorithms, the proposed SABPIO algorithm demonstrates advantages in both DR and FAR. Notably, the ARM algorithm prioritizes a high detection rate as the optimization objective, neglecting the impact of the false alarm rate on NIDS, thereby resulting in an elevated false alarm rate on this dataset. Within the RF classifiers, SABPIO showcases more significant improvements than the other algorithms. The mean performance of the proposed algorithm across 100 repeated experiments surpasses that of the other algorithms, with the standard deviation consistently maintained at a low level.
Figure 7 displays the accuracy and precision test results for the six algorithms on UNSW-NB15. Each bar represents the result and standard deviation obtained from 100 repeated runs of the feature subset selected by the algorithm using the DT and RF classifiers. The experimental results show that the SABPIO algorithm improves the accuracy rate by 0.12% to 4.89% and the precision rate by 0.19% to 5.98% compared to the other algorithms, for an equivalent number of iterations. It is important to consider both accuracy and precision, as well as other relevant indicators. The PSO and ARM algorithms are highly accurate but have low precision, indicating a higher likelihood of false predictions among samples identified as cyber-attacks. This tendency often leads to higher misclassification rates within the models, resulting in more instances of misclassifying normal traffic as attacks.
Figure 8 shows the mean and standard deviation of the F1-score from 100 repeated experimental tests for the feature subsets selected by the SABPIO algorithm and the other algorithms using the DT and RF classifiers. The results indicate that the F1-score achieved by SABPIO is 0.920 with the DT classifier and 0.927 with the RF classifier, demonstrating superior performance compared to the other five algorithms. Furthermore, the lower standard deviation highlights the improved performance of the SABPIO algorithm, indicating better consistency and stability. The selected feature subset demonstrates superior feature representation capabilities and heightened performance stability.
Figure 9 shows a comparison of training and testing times before and after feature selection for the different feature subsets selected by the various feature selection algorithms on UNSW-NB15. The results demonstrate that the number and quality of features have a significant impact on the model's training and testing times. The SABPIO feature selection algorithm can significantly reduce model training time and improve efficiency without compromising detection results. The experiment evaluated the training time using the RF classifier. The training time of the RF classifier before feature selection was 1.21 s. After SABPIO feature selection, the training time was reduced to 0.29 s, about 3.2 times faster than using all the features. Additionally, the testing time decreased from 0.096 to 0.068 s.
(2) Results of NSL-KDD
The NSL-KDD dataset was utilized to evaluate the detection performance of various algorithms, including SABPIO, CPIO, SPIO, IG, PSO, and ARM. Figure 10 illustrates the DR and FAR of 100 repeated tests on DT and RF classifiers using the SABPIO algorithm and the feature subsets selected by the other algorithms. As shown in Fig. 10A, SABPIO outperforms the other algorithms on the DT classifier with a DR of 90.2% (±1.3%), an improvement of approximately 3.6% over the next best, CPIO. In Fig. 10B, the SABPIO algorithm prioritizes the balance of DR and FAR. Although its FAR is slightly higher compared to algorithms such as SPIO, IG, PSO, and ARM, those algorithms trail the proposed algorithm significantly in terms of DR. Similar to the DT classifier experiments, the RF classifier using the SABPIO algorithm showed a significantly better mean DR than the other algorithms over 100 repetitions, with a slightly higher FAR. The proposed algorithm also maintained a low standard deviation, demonstrating the robustness and interpretability of the selected feature subset.
Figure 11 displays the results of 100 repeated experiments on NSL-KDD for the feature subsets selected by the six feature selection algorithms. The experimental results indicate that the SABPIO algorithm outperforms the other five algorithms in terms of accuracy and precision, achieving 90.6% (±0.7%) and 91.5% (±0.6%), respectively, under the same number of iterations, whether using a DT classifier or an RF classifier. By comparison, the other algorithms achieved 87.6% (±1.4%) and 83.2% (±1.2%).
Figure 12 displays the mean and standard deviation of the F1-score from 100 repeated experimental tests on the DT and RF classifiers for the feature subsets selected by the six algorithms.

(3) Results of CIC-IDS-2017
On the CIC-IDS-2017 dataset, SABPIO achieved detection rates of up to 99.897% in the DT and RF classifiers. It is important to note that all evaluations are objective and based on empirical evidence. Figure 13B indicates that SABPIO is at the optimal level of FAR in both classifiers, except for a slightly higher FAR in the DT classifier compared to the IG algorithm. Figure 14 displays the accuracy and precision test results of the feature subsets selected by the six feature selection algorithms on 20% of CIC-IDS-2017, repeated 100 times. The experimental results indicate that, under the same number of iterations, the SABPIO algorithm outperforms the other five algorithms in terms of accuracy and precision for both DT and RF classifiers, achieving 99.72%, 99.80%, and 99.38%, respectively, with an overall accuracy of 99.44%.
Figure 15 displays the mean and standard deviation of the F1-score from 100 repeated experimental tests on DT and RF classifiers for the feature subsets selected by the SABPIO algorithm and the other algorithms.
CONCLUSION
Network intrusion detection detects attacks by monitoring traffic. However, the large volume and high dimensionality of network data pose challenges to intrusion detection, and redundant and irrelevant features seriously affect detection performance. To address these issues, this article proposed the SABPIO feature selection algorithm, incorporating mutation and simulated annealing into the map and compass operator phase and introducing a population decay factor in the landmark operator phase.
Experimental results indicate that the SABPIO algorithm effectively improves the detection rate and reduces false alarms, as well as training time.
However, it should be noted that SABPIO is subject to limitations that depend on the quality and completeness of the data. In the event that there are a significant number of missing or outlier values in the dataset, SABPIO may not be able to achieve optimal performance. In our future work, we will investigate how to improve the SABPIO algorithm to handle incomplete data. Meanwhile, the numbers of network attack samples and normal traffic samples are unbalanced in real network environments, so in further research we will consider the impact of sample distribution imbalance on the feature selection algorithm.
Table 1 Summary of related works.

Algorithm 1 Simulated Annealing Binary Pigeon Inspired Optimizer (SABPIO).
Input: number of pigeons Num_pigeon, number of iterations Num_t, fitness function Fitness, number of annealing iterations Num_at
04: Check for duplicate items at each pigeon's position [P_1, P_2, …, P_Num_pigeon]
05: Calculate the fitness Fitness(P_i) of each pigeon's position [P_1, P_2, …, P_Num_pigeon]
06: Find the global optimal solution P_global = min{Fitness(P_i) | i ∈ [0, Num_pigeon]}
// Landmark operator phase (while Num_pigeon ≥ 1)
14: Update the center position of all pigeons, P_center, by Eq. (3)
15: Update the number of pigeons Num_pigeon by Eq. (17)
Table 2 UNSW-NB15 dataset features and types.

Table 3 NSL-KDD dataset features and types.

Table 6 Detailed parameters of SABPIO.
"Computer Science",
"Engineering"
] |
Exploration of the Characteristics of Intestinal Microbiota and Metabolomics in Different Rat Models of Mongolian Medicine
Background Mongolian medicine is a systematic theoretical system, which is based on the balance among Heyi, Xila, and Badagan. However, the underlying mechanisms remain unclear. This study aimed to explore the characteristics of intestinal microbiota and metabolites in different rat models of Mongolian medicine. Methods After establishing rat models of Heyi, Xila, and Badagan, we integrated 16S rRNA gene sequencing and metabolomics. Results Heyi, Xila, and Badagan rats had significantly altered intestinal microbial composition compared with rats in the MCK group. They showed 11, 18, and 8 significantly differential bacterial biomarkers and 22, 11, and 15 differential metabolites, respectively. The glucosinolate biosynthesis pathway was enriched only in Heyi rats; the biosynthesis of phenylpropanoids pathway and phenylpropanoid biosynthesis pathway were enriched only in Xila rats; the isoflavonoid biosynthesis pathway, the glycine, serine, and threonine metabolism pathway, and the arginine and proline metabolism pathway were enriched only in Badagan rats. Conclusions The intestinal microbiota, metabolites, and metabolic pathways significantly differed among Heyi, Xila, and Badagan rats compared with control group rats.
Introduction
Traditional Mongolian medicine is an indigenous medicine system widely practiced in China, especially in the Inner Mongolia region [1]. The systematic theoretical system of Mongolian medicine is based on the balance among three roots: Heyi, Xila, and Badagan. Generally, the ratio of the three roots in different individuals depends on genetic and environmental factors. An imbalance in these roots results in disease. The Heyi, Xila, and Badagan rat models constructed based on the "Four-Part Medicine Classics" [2] would enhance our understanding of Mongolian medicine. Current research on Mongolian medicine has mainly focused on clinical practice or drugs prescribed in Mongolian medicine [3][4][5]. However, the underlying mechanisms based on Heyi, Xila, and Badagan are unclear. The digestive system status usually plays an important role in the diagnosis of different diseases in Mongolian medicine, among which the gut environment and intestinal microbiota carry significant importance [6,7]. Therefore, it is important to understand the underlying pathogenesis of different diseases by comparing the composition of the intestinal microbiota in Heyi, Xila, and Badagan rat models. The intestinal microbiota refers to the various microorganisms present in the gastrointestinal tract, including bacteria, fungi, and viruses [7]. About 3 × 10^6 genes exist in microbial genome sequences [8]. With the development of next-generation sequencing technology, 16S rRNA sequencing has made it possible to link the intestinal microbiota to various diseases [9]. Dysbiosis is known to cause diseases such as hypertension [10], inflammatory bowel diseases [11], and type 2 diabetes [12]. To the best of our knowledge, no previous study has linked the intestinal microbiota to the different aspects of Heyi, Xila, and Badagan in Mongolian medicine. After oral administration, drugs used in traditional medicine often work by interacting with the intestinal microbiota [9,13], implying that the intestinal microbiota is important in traditional Chinese medicine and Mongolian medicine. Metabolomics analysis has emerged as an effective tool to study pathogenetic mechanisms [14]. Some recent studies have combined metabolomics analysis with intestinal microbiota analysis [15][16][17]. A recent study in a twin model demonstrated complex links between host phenotypes and intestinal microbiota based on metabolic profiling [18].
Thus, metabolomics analysis and determination of the composition of the intestinal microbiota would help us better understand different diseases in Mongolian medicine.
In the current study, we explored the characteristics of intestinal microbiota and metabolites via an integrated analysis of 16S rRNA sequencing and metabolomics in three Mongolian medicine rat models (Heyi, Xila, and Badagan rat models) to further understand the possible mechanisms underlying diseases related to the three roots in Mongolian medicine.
The results would also provide deeper insights into the function of the intestinal microbiota.

Materials and Methods

There was no intervention applied to rats in the control group (MCK), and all rat models were constructed mainly based on the "Four-Part Medicine Classics" [2]. When the rats showed the corresponding characteristics of Heyi, Xila, and Badagan described in the "Four-Part Medicine Classics," the model was considered to be constructed successfully (Supplementary Material 1). The Heyi rat model was constructed as follows. First, diet intervention: drinking water was replaced by cold black tea (5 g tea + 100 mL distilled water) and the rats were fed buckwheat (8.5 g/day). Second, behavior intervention: the rats were exposed to continuous cat audio at 70 decibels. Third, Mongolian medicine intervention: the rats were given a dose of 1 mL/100 (g·d) Gaburi by gavage. Finally, 0.1 mL tail vein bloodletting was performed on the rats at 5 pm every two days. It took 31 days to construct the Heyi rat model. The Xila rat model was constructed as follows. First, diet intervention: the rats were given 1 mL liqueur by gavage once every other day, were given 0.7 g/kg fruit oil at 6 am every day, and were fed yellow rice (15 g/day). Second, behavior intervention: the rats were kept in an environment of 29 ± 2°C. Third, Mongolian medicine intervention: the rats were given 0.7 g/kg pepper by gavage daily at 12 noon. It took 21 days to construct the Xila rat model. The Badagan rat model was constructed as follows. First, diet intervention: the rats were fed lard and wheat flour (mixed in a ratio of 1:4). Second, behavior intervention: the rats were kept in an environment of 60 ± 5% humidity. Third, Mongolian medicine intervention: the rats were given 4 mL dandelion (200% decoction) by gavage. It took 49 days to construct the Badagan rat model.
Fecal Sample Collection and DNA Extraction.
Fecal samples were collected in a sterile conical tube and stored at −80°C. According to the manufacturer's instructions, DNA was extracted using an E.Z.N.A. feces DNA kit (Omega Bio-Tek, Norcross, GA, USA). The DNA quality was determined by 1% agarose gel electrophoresis, and the DNA concentration was determined using a NanoDrop 2000 spectrophotometer (Thermo Fisher Scientific, USA).
16S rRNA Microbial Community Analysis.
The primers 341F (5′-CCTAYGGGRBGCASCAG-3′) and 806R (5′-GGAC-TACNNGGGTATCTAAT-3′) were used to amplify the V3-V4 region of the bacterial 16S rRNA gene, and sequencing was performed on the Illumina HiSeq platform (Illumina, USA). For the paired-end sequences obtained, the primer adapter sequences were removed, and the various samples were distinguished based on the barcode tag sequences. The valid data of the samples were obtained after quality-control filtering. FLASH software (version 1.2.11) [19] was used to splice the paired-end sequences, and Trimmomatic software (version 0.33) [20] was used to filter the spliced sequences. UCHIME software (version 8.1) [21] was used to remove the chimera sequences in order to obtain valid data for further analysis. Based on 97% similarity, all sequences were clustered into operational taxonomic units (OTUs) using USEARCH software (version 10.0) [22], which were filtered with 0.005% of all sequences as a threshold. In order to determine the classification, RDP Classifier software (version 2.2) (http://rdp.cme.msu.edu/classifier/classifier.jsp) [23] was used to compare the representative sequence of each OTU with the Silva database (https://www.arb-silva.de/) [24]. The alpha diversity of the microbiota was calculated using mothur software (version 1.30) [25], including the Chao1, ACE, Shannon, and Simpson indexes. The beta diversity was estimated according to the Bray-Curtis distance algorithm and then visualized using nonmetric multidimensional scaling (NMDS). The differential biomarkers between different groups were identified according to linear discriminant analysis effect size (LEfSe).
Serum Sample Preparation for Metabolome.
Blood samples were collected from the abdominal aorta, and the serum was separated and stored at −80°C. Then, 200 μL of the serum was taken and 3 volumes of precooled acetonitrile solution were added. The sample was vortexed and mixed, and then placed in a refrigerator at −20°C for 30 min. Subsequently, the sample was centrifuged (14,000 g, 4°C, 15 min) and the supernatant was transferred to a new centrifuge tube for concentration and drying. A 1:1 (v/v) mixture of mobile phase A (ammonium acetate) and mobile phase B (acetonitrile) was used to redissolve the sample; after high-speed centrifugation, the sample was used for HPLC-MS analysis. The target compounds were separated on an Accucore HILIC (100 × 2.1 mm, 2.6 μm) liquid chromatography column, using a Vanquish (Thermo Fisher Scientific) ultra-performance liquid chromatography system. The mobile phases consisted of 10 mM ammonium acetate as phase A and acetonitrile/10 mM ammonium acetate (9:1) as phase B. Gradient elution was used: 0-1 min, 100% A; 1-9 min, 0%-100% B; 9-12 min, 100% B; and 12.1-15 min, 100% A. The flow rate of the mobile phase was 0.35 mL/min; the column temperature was 35°C; the sample tray temperature was 4°C; and the injection volume was 2 μL.
Metabolite Analysis.
The serum metabolites were analyzed using a Thermo Scientific Q Exactive mass spectrometer. The positive and negative ions were each scanned once: the positive ion scan was performed first, after which the negative ion scan was performed. The full scan range was 70-1050 m/z. In the full scan, the precursor ions with the top 10 ion intensities were selected for secondary MS identification. The precursor ion was fragmented according to the HCD method, which was used for secondary mass spectrometry sequence determination and then generated the raw mass spectrometry detection file. Subsequently, the raw data were transformed into mzML format using ProteoWizard software, and XCMS was used to perform retention time correction, peak identification, peak extraction, peak integration, peak alignment, etc. Then, identification of metabolites was based on Compound Discoverer V3.0 (CD) and the mzCloud database. SIMCA-P software was used for orthogonal partial least squares discriminant analysis (OPLS-DA). In order to select different variables as potential markers, a VIP plot (VIP > 1) was obtained from the OPLS analysis. The differential metabolites with VIP > 1 and P value < 0.05 were screened.
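The VIP > 1 and P < 0.05 screen described above can be expressed compactly; the sketch below assumes a precomputed VIP vector (the study obtained it from SIMCA-P's OPLS-DA) and uses a rank-sum test as the P-value source, with synthetic data and names that are purely illustrative:

```python
# Minimal sketch of the differential-metabolite screen described above:
# keep metabolites with VIP > 1 and P < 0.05 from a two-group comparison.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n_metab, n_per_group = 50, 8
# Rows = metabolites; columns = samples in the model vs. control group.
model_grp = rng.lognormal(mean=0.0, sigma=0.3, size=(n_metab, n_per_group))
ctrl_grp = rng.lognormal(mean=0.0, sigma=0.3, size=(n_metab, n_per_group))
vip = rng.uniform(0.2, 2.0, size=n_metab)   # stand-in for SIMCA-P VIP scores

# Rank-sum (Mann-Whitney U) test per metabolite between the two groups.
pvals = np.array([stats.mannwhitneyu(model_grp[i], ctrl_grp[i]).pvalue
                  for i in range(n_metab)])

hits = pd.DataFrame({"VIP": vip, "P": pvals})
hits = hits[(hits.VIP > 1) & (hits.P < 0.05)]
print(f"{len(hits)} candidate differential metabolites")
```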
The OPLS-DA model was validated using 7-fold cross-validation. Then, R²Y (model explainability of the categorical variable Y) and Q² (predictability of the model) were used to determine the validity of the model. Finally, a permutation test was performed to further test the validity of the model, which was done by randomly changing the arrangement of the categorical variable Y (n = 200 times) and obtaining random Q² values.
Statistical Analysis.
The Wilcoxon rank-sum test (R software v3.6.2) was used to compare the alpha diversity (ACE index, Chao1 index, Shannon index, and Simpson index) and microbiota between the various groups, and P < 0.05 was considered the significance threshold. Analysis of similarities (ANOSIM) was used to analyze the differences between and within groups. The Kruskal-Wallis rank-sum test was used to determine the alterations in abundance between different groups in the LEfSe analysis, and |LDA score| > 3 and P < 0.05 were taken as the difference screening thresholds.
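A minimal sketch of these nonparametric screens, assuming per-sample abundance arrays for each group; the |LDA score| > 3 step of LEfSe is not reproduced, only the Kruskal-Wallis and Wilcoxon rank-sum filters:

```python
# Sketch of the nonparametric tests named above, on toy abundance data.
import numpy as np
from scipy.stats import kruskal, ranksums

rng = np.random.default_rng(1)
# Toy abundances of one genus in four groups (MCK, Heyi, Xila, Badagan).
groups = [rng.poisson(lam, size=8) for lam in (20, 35, 22, 18)]

h_stat, p_kw = kruskal(*groups)              # multi-group difference (LEfSe step)
stat, p_rs = ranksums(groups[0], groups[1])  # pairwise MCK-vs-Heyi comparison

print(f"Kruskal-Wallis P = {p_kw:.3f}; Wilcoxon rank-sum P = {p_rs:.3f}")
```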
Changes in Intestinal Microbiota Diversity among Different Mongolian Medicine Rat Models. Based on the results of the 16S rRNA sequence analysis, the changes in the intestinal microbiota of the three different Mongolian medicine rat models and the control group rats were investigated. The results were clustered into operational taxonomic units (OTUs) based on over 97% similarity. The rarefaction curves, based on the number of sample reads and OTUs, tended to be flat (Figure S1), indicating that the amount of sequencing data was sufficient to reflect the species diversity in all samples. The ACE, Chao1, Shannon, and Simpson indexes were used to evaluate microbial alpha diversity.
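For reference, the alpha-diversity indexes named above can be computed from a sample's OTU count vector with the standard formulas; this is a generic sketch, not the study's mothur pipeline, and note that Simpson's index is sometimes reported as 1 − Σp² rather than Σp²:

```python
# Standard Shannon and Simpson alpha-diversity indexes from OTU counts.
import numpy as np

def shannon(counts):
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())     # H = -sum p ln p

def simpson(counts):
    p = counts / counts.sum()
    return float((p ** 2).sum())             # D = sum p^2 (one common convention)

otu_counts = np.array([120, 45, 30, 8, 3, 1, 1])
print(f"Shannon = {shannon(otu_counts):.3f}, Simpson = {simpson(otu_counts):.3f}")
```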
Compared with the MCK group, the ACE and Chao1 indexes of intestinal microbiota in the Heyi rat model significantly decreased (P value < 0.05), but the Shannon and Simpson indexes did not significantly differ. Compared with the MCK group, the ACE and Chao1 indexes of intestinal microbiota in the Xila rat model did not significantly differ, but the Shannon index increased significantly and the Simpson index decreased significantly. Compared with the MCK group, the ACE index of intestinal microbiota in the Badagan rat model significantly decreased, but the Chao1, Shannon, and Simpson indexes did not significantly differ (Figures 1(a)-1(d)). The results showed that compared with the MCK group, the richness of the intestinal microbiota in Heyi rats and Badagan rats was decreased, but the diversity did not significantly differ; however, the richness of the intestinal microbiota in Xila rats did not significantly differ, but the diversity increased.
Compared with Heyi and Badagan rat models, the ACE, Chao1, and Shannon indexes of intestinal microbiota in the Xila rat model significantly increased, but the Simpson index decreased significantly. Compared with the Heyi rat model, the Shannon index of intestinal microbiota in the Badagan rat model decreased significantly, but the ACE, Chao1, and Simpson indexes did not significantly differ (Figures 1(a)-1(d)).
The results above indicated that the richness and diversity of the intestinal microbiota in Xila rats both increased significantly compared with Heyi and Badagan rats. Moreover, compared with Heyi rats, the richness of the intestinal microbiota in Badagan rats did not significantly differ, but the diversity decreased significantly.
According to the results of the beta diversity analysis, significant differences were noted between rats in the Heyi and MCK groups, and the microbial structures of the Xila and Badagan groups were likewise significantly different from that of the MCK group. Collectively, the intestinal microbial structure of Heyi, Xila, and Badagan rats significantly differed from that of the control group rats.
To investigate the specific intestinal bacterial biomarkers at the genus level, linear discriminant analysis (LDA) effect size (LEfSe) analysis was performed on all three Mongolian medicine rat models. The bacterial abundance of 23 genera in the Heyi rat model was significantly higher than that in MCK rats (LDA > 3, P value < 0.05) (Figure 3(d)). The bacterial abundance of 30 genera in the Xila rat model was significantly higher than that in MCK rats (LDA > 3, P value < 0.05) (Figure 3), and the bacterial abundance of a number of genera in the Badagan rat model was also significantly higher than that in MCK rats (LDA > 3, P value < 0.05) (Figure 3(g)). Compared with the MCK group, the abundance of 3 KEGG pathways significantly increased and that of 10 pathways significantly decreased in the Heyi rat model; in total, 13 KEGG pathways were significantly different between the Heyi and MCK groups (Figure 4(a)). Compared with the MCK group, the abundance of 4 KEGG pathways significantly increased and that of 13 pathways significantly decreased in the Xila rat model, and 17 KEGG pathways were significantly different between the Xila and MCK groups (Figure 4(b)). Compared with the MCK group, the abundance of 6 KEGG pathways significantly increased and that of 14 pathways decreased significantly in the Badagan rat model, and 20 KEGG pathways were significantly different between the Badagan and MCK groups (Figure 4(c)).
Prediction of the Function of the Intestinal Microbiota in Different Mongolian Medicine Rat Models
Moreover, the abundance of 4 KEGG pathways was significantly different between all three experimental groups (Heyi, Xila, and Badagan rat models) and the control group (Figure 4(d)). Among these, the abundance of the "global and overview maps" pathway and the "energy metabolism" pathway was significantly increased in the experimental groups (Figure 4(e)), but that of the "digestive system" pathway and the "endocrine and metabolic diseases" pathway was significantly reduced (Figure 4(f)). Furthermore, the abundance of some pathways was changed only in one particular model. The abundance of the "cardiovascular diseases" pathway and the "neurodegenerative diseases" pathway was significantly decreased only in the Heyi rat model. The abundance of the "metabolism of cofactors and vitamins" pathway was significantly increased, and the abundance of the "nucleotide metabolism," "replication and repair," "infectious diseases: parasitic," "cancers: overview," and "translation" pathways was decreased, only in the Xila rat model. The abundance of the "cellular community - prokaryotes" pathway was significantly increased, and the abundance of the "transport and catabolism," "glycan biosynthesis and metabolism," "cell motility," "environmental adaptation," "excretory system," "aging," "immune system," "signal transduction," and "membrane transport" pathways was decreased, only in the Badagan rat model. Our data implied that the abundance of various pathways was altered in the different models compared with the MCK group.
Identification of Serum Metabolic Profile and Metabolic Markers in Different Mongolian Medicine Rat Models.
According to the results of the metabolite profile analysis, the Heyi, Xila, and Badagan rat models presented significant differences compared with the MCK group (Figures 5(a)-5(f)).

In the OPLS-DA multivariate model, metabolites with VIP scores > 1 and P value < 0.05 were considered differential metabolites. Compared with the MCK group, 30 differential metabolites were detected in the Heyi rat model after the positive ion scan and 64 after the negative ion scan (Table S4), resulting in 94 differential metabolites involved in 7 metabolic pathways (Figure 5(g)). Compared with the MCK group, 35 differential metabolites were detected in the Xila rat model after the positive ion scan and 51 after the negative ion scan (Table S5), resulting in 86 differential metabolites involved in 8 metabolic pathways (Figure 5(h)). Compared with the MCK group, 37 differential metabolites were detected in the Badagan rat model after the positive ion scan and 63 after the negative ion scan (Table S6), resulting in 100 differential metabolites involved in 8 metabolic pathways (Figure 5(i)).

In addition, 22, 11, and 15 differential metabolites were detected only in the Heyi, Xila, and Badagan rat models, respectively (Figure 5(j), Table S7). Moreover, the "glucosinolate biosynthesis" pathway was enriched only in the Heyi rat model; the "biosynthesis of phenylpropanoids" pathway and "phenylpropanoid biosynthesis" pathway were enriched only in the Xila rat model; and the "isoflavonoid biosynthesis" pathway, the "glycine, serine, and threonine metabolism" pathway, and the "arginine and proline metabolism" pathway were enriched only in the Badagan rat model.
Discussion
We explored the characteristics of the intestinal microbiota and metabolomics in different Mongolian medicine rat models. In this study, a joint analysis of 16S rRNA gene sequencing and metabolomics was conducted to investigate the potential mechanisms underlying diseases related to Heyi, Xila, and Badagan. Our results indicated that the intestinal microbiota of the Heyi, Xila, and Badagan rat models significantly differed from that of control rats. Metabolites and metabolic pathways in Heyi, Xila, and Badagan rats were also significantly different from those in control rats. The alpha and beta diversity and the intestinal microbiota composition were investigated, as links between intestinal microbiota changes and many metabolic diseases have been reported [28,29]. Our results showed that compared with the MCK group, the alpha diversity of the intestinal microbiota in Xila rats was increased, but that in Heyi and Badagan rats showed no significant difference. The alpha diversity of the intestinal microbiota in Xila rats was also higher than that in Heyi and Badagan rats. The results of the beta diversity analysis indicated that there was a significant dissimilarity between the control group and the Mongolian medicine rat models. The intestinal microbiota of Heyi, Xila, and Badagan rats was significantly altered compared with that of control group rats, which was probably an important contributor to different diseases in Mongolian medicine. Furthermore, the intestinal microbiota composition was also investigated. Firmicutes and Bacteroidetes were found to be dominant in all rats, consistent with former studies reporting that Firmicutes and Bacteroidetes are the two main phyla comprising the gut microbiota in healthy humans [30]. Moreover, the abundance of some phyla in our disease models was significantly different from that in control group rats. Verrucomicrobia, which usually colonizes the mucosal layer, is considered a promising probiotic [31,32]. The abundance of Verrucomicrobia was increased in Heyi rats but decreased in Badagan rats compared with control group rats, which might be responsible for the contrary manifestations of Heyi disease and Badagan disease. The abundance of Proteobacteria was increased in both Xila and Badagan rats, and it has been reported that an increase in the abundance of Proteobacteria results in an imbalanced gut microbiota composition and consequently metabolic disorders [33]. Furthermore, we found that bacterial biomarkers of some genera existed specifically in certain diseases in Mongolian medicine. There were 11, 18, and 8 bacterial biomarkers that were increased only in Heyi, Xila, and Badagan rats, respectively. Some of these bacterial genera have been associated with diseases, such as Faecalibacterium [34], Prevotella_1 [35], Prevotellaceae [36], Alistipes [37], Bilophila [38], Hungatella [39], and Sellimonas [40], which might contribute to diseases in Mongolian medicine. In addition, differential KEGG pathways were found between the Mongolian medicine rat models and control group rats. The abundance of the "global and overview maps" pathway and the "energy metabolism" pathway was significantly increased in the disease models, but the abundance of the "digestive system" pathway and the "endocrine and metabolic diseases" pathway was significantly decreased. We suspect that these pathways might play an essential role in the three Mongolian medicine rat models. However, further studies should be conducted to better understand the intestinal microbiota in different diseases in Mongolian medicine.
Furthermore, investigations of metabolites and metabolic pathways in Heyi, Xila, and Badagan rats revealed 22, 11, and 15 differential metabolites specific to Heyi, Xila, and Badagan rats, respectively. The "glucosinolate biosynthesis" pathway was enriched only in Heyi rats, whereas the "biosynthesis of phenylpropanoids" pathway and "phenylpropanoid biosynthesis" pathway were enriched only in Xila rats.
The "isoflavonoid biosynthesis" pathway, the "glycine, serine, and threonine metabolism" pathway, and the "arginine and proline metabolism" pathway were enriched only in Badagan rats. These specific metabolic pathways probably play an important role in the characteristics of the various Mongolian medicine rat models. Moreover, we noticed that most of these pathways were related to the metabolism of certain amino acids, which might be correlated with the diseases in Mongolian medicine. For example, it has been reported that glycine-conjugated metabolites are involved in chronic kidney disease and hypertension in rats [41]. Gut-derived D-serine has renoprotective effects on the kidney in acute kidney injury [42]. L-arginine protects the intestinal barrier by promoting the expression of tight junction proteins in rats [43]. Therefore, the kidney and gut are probably particularly important for the manifestations of diseases in Mongolian medicine. Collectively, although metabolomics analysis helps us better understand the three disease aspects of Mongolian medicine, the specific role of each pathway in the various diseases of Mongolian medicine remains to be further studied.
Nevertheless, there are several limitations to our present study. First, only 16S rRNA sequencing and metabolomics were included in our research, which might bias the results. Multi-omics data, such as methylation data, could be studied further in the future. Moreover, the detailed reasons underlying the pathway changes should be explored further.
Conclusions
In conclusion, we have explored, for the first time, the characteristics of the intestinal microbiota and metabolomics in different Mongolian medicine rat models by integrating 16S rRNA sequencing and metabolomics approaches. Our data showed that the intestinal microbiota, metabolites, and metabolic pathways in Heyi, Xila, and Badagan rats were significantly different from those in control group rats. Although the detailed mechanisms remain to be clarified, our research provides more reference information on diseases in Mongolian medicine.
Data Availability
The data sets of this study are available on request to the corresponding author.

Acknowledgments

This study was also supported by the National Natural Science Foundation of China (No. 82060912, to Eerdunchaolu).

Supplementary Materials

Figure S1: the rarefaction curves of all samples. Table S1: relative abundance of microbial phyla (percentage) in the Heyi rats and control rats. Table S2: relative abundance of microbial phyla (percentage) in the Xila rats and control rats. Table S3: relative abundance of microbial phyla (percentage) in the Badagan rats and control rats. Table S4: differential metabolites of Heyi rat samples compared with the control group. Table S5: differential metabolites of Xila rat samples compared with the control group. Table S6: differential metabolites of Badagan rat samples compared with the control group. Table S7: differential metabolites only present in one group of rats.
"Biology",
"Environmental Science",
"Medicine"
] |
Non-Stationary Bandit Strategy for Rate Adaptation With Delayed Feedback
Rate adaptation is an efficient mechanism to utilize the channel capacity by adjusting the modulation and coding scheme in a dynamic wireless environment. Channel feedback, such as acknowledgment/negative acknowledgment (ACK/NACK) messages, or channel measurements, such as the received signal strength indicator (RSSI), can be applied to rate adaptation. Existing rate adaptation algorithms are mainly driven by heuristics and cannot achieve satisfactory transmission rates in a time-varying environment. In this paper, we focus on the rate adaptation problem in a time-division duplex (TDD) system. A multi-armed bandit (MAB) strategy is applied to learn the changes in the channel condition from both RSSI and ACK/NACK signals. A discounted upper confidence bound based rate adaptation (DUCB-RA) algorithm is proposed. We show with mathematical proofs that the performance of the proposed algorithm converges to the optimum. Simulation results demonstrate that the proposed algorithm can adapt to the time-varying channel and achieve better transmission throughput compared to existing rate adaptation algorithms.
I. INTRODUCTION
Compared to wired communication systems, wireless communication systems suffer from time-varying channels caused by channel fading or interference [1]-[4]. These stochastic effects become more severe when the environment changes, for example, with the movements of mobile stations [5], [6]. Rate adaptation (RA) is necessary to meet the channel changes by adjusting the modulation and coding schemes of the transmitter.
In order to determine the appropriate transmission rate, the channel condition needs to be evaluated. The channel state information (CSI), which contains all information about the channel properties, is the most accurate measure of the channel. However, complete CSI feedback is costly and usually infrequent [7]. Other performance metrics, such as the signal-to-noise ratio (SNR), the received signal strength indicator (RSSI), and acknowledgement/negative acknowledgement (ACK/NACK) signals, can be used to select appropriate transmission
rates [8], [9]. Based on the selection of channel condition evaluation metrics, the RA schemes can be classified as frame-level and measurement-based schemes [10].
Frame-level RA schemes determine the transmission rate of the current packet from the knowledge of previous transmissions. Such knowledge is available in the form of ACK/NACK signals. Frame-level RA schemes usually cannot respond to channel variations that occur on short timescales. In comparison, measurement-based schemes can respond to fast channel variation. These schemes determine the transmission rate based on channel measurements, such as SNR and RSSI. The mapping between the rate and the channel measurement can change when the channel condition changes [11].
Auto rate fallback (ARF) adjusted the transmission rate intuitively: it decreased the transmission rate by one gear when missing 2 ACKs consecutively and increased the transmission rate by one gear when receiving 10 ACKs consecutively [8]. Adaptive ARF (AARF) increased the time interval between attempts at a higher rate when encountering successive probe failures [12]. ARF and AARF classified channel conditions as either "good" or "bad" based on received ACK signals, and adjusted the rate accordingly. This binary classification was not efficient in converging to the optimal rate, especially when facing a large collection of available rates or a rapidly varying channel. Minstrel utilized a mechanism called a multi-rate retry chain to update the rate adaptation strategy [13]. The retry chain consisted of four different rates and the corresponding numbers of attempts. Minstrel allocated 90% of transmissions to normal transmission and the remaining 10% to probing other rates. SampleRate attempted the available rates in order from the highest to the lowest [14]. SampleRate allocated a fixed percentage of packets to probe other rates. Due to the fixed exploration ratio, the SampleRate and Minstrel algorithms could not adapt quickly to a fast time-varying channel, and their performance degraded in a static channel. The SNR-guided rate adaptation (SGRA) algorithm set up the relationship between SNR and frame delivery ratio (FDR), and utilized forced probes to calibrate this relationship in a real-world channel [9]. SGRA ignored the fact that the mapping from SNR to the transmission rate is not deterministic in practical channel conditions. In [15] and [16], the authors acknowledged that RSSI or SNR alone does not accurately capture the changes in wireless channels.
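The ARF rule quoted above (down one rate after 2 consecutive missed ACKs, up one after 10 consecutive ACKs) is simple enough to restate as a toy controller; the sketch below is a generic restatement, not the original implementation:

```python
# Toy restatement of the ARF rule: fall back one rate after 2 consecutive
# missed ACKs, step up one rate after 10 consecutive ACKs. Rate indices
# follow the paper's convention (a higher index means a higher rate).
class ARF:
    def __init__(self, num_rates, start=0):
        self.k, self.K = start, num_rates - 1
        self.acks = self.misses = 0

    def feedback(self, ack: bool):
        if ack:
            self.acks, self.misses = self.acks + 1, 0
            if self.acks >= 10:                 # 10 ACKs in a row: probe up
                self.k, self.acks = min(self.k + 1, self.K), 0
        else:
            self.misses, self.acks = self.misses + 1, 0
            if self.misses >= 2:                # 2 misses in a row: fall back
                self.k, self.misses = max(self.k - 1, 0), 0
        return self.k

arf = ARF(num_rates=15, start=7)
for ack in [True] * 10 + [False, False]:        # ten successes, two misses
    rate = arf.feedback(ack)
print("current rate index:", rate)
```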
Besides, the ACK/NACK signals are always delayed feedback on previous transmissions [17], while the impact of this delay is ignored in existing work. The delayed feedback may cause incorrect rate selection and thus degrade the overall throughput.
To address the above issues, reinforcement learning algorithms are appropriate tools. All RA strategies have to trade off between exploitation and exploration in the dynamic wireless environment. It is straightforward to map the exploitation and exploration phases of the multi-armed bandit (MAB) algorithm onto the RA framework, and it is easy to control the switches between the exploitation and exploration phases [18]. In addition, in the RA problem, the channel state transition does not depend on the rate selection (action). The action reward does not depend on the previous channel state either. Thus, a MAB is sufficient and effective to model the RA problem.
In this paper, we model the RA problem as a MAB problem. A discounted upper confidence bound based rate adaptation (DUCB-RA) algorithm is proposed. Our contributions can be summarized as follows. First, both RSSI and ACK/NACK signals are adopted to determine appropriate transmission rates, whereas most existing work only deals with one of them. Second, we treat the ACK/NACK signals as delayed responses to the quality of the transmission, which is typical in most wireless systems such as the long term evolution (LTE) system [17]. Thus, our model is more accurate than other works. Third, we model rate adaptation as a MAB problem and show theoretically that the proposed algorithm is asymptotically optimal.
The rest of the paper is organized as follows. Section II introduces the system model. Section III presents the proposed DUCB-RA algorithm, and Section IV gives a theoretical analysis of the proposed algorithm's performance. Section V discusses the RA performance under various simulation scenarios. Finally, Section VI concludes the paper.
II. SYSTEM MODEL
For a wireless communication system, the capacity of an additive white Gaussian noise (AWGN) channel depends on the channel bandwidth and the SNR. Thus the mapping from SNR to the optimal transmission rate is fixed. When channel fading is taken into consideration, this relationship is no longer fixed; the relation between the optimal rate and the SNR can be highly dynamic when the channel condition changes. In addition, it is not trivial to obtain a reliable estimate of the SNR of a link. Many radio interfaces only provide the RSSI as an uncalibrated SNR estimate.
In this work, we consider a time-division duplex (TDD) based system. Fig. 1 depicts the simplified transmission between the transmitter and the receiver. At time slot 1, the transmitter sends a packet to the receiver. The receiver reports an ACK/NACK signal to the transmitter. The transmit process and receive process occur alternately. In this system, we assume that the channel condition is "slow-varying"; in other words, the channel condition does not change within a packet transmission, while it can change from packet to packet. For each packet, the transmitter needs to select a rate from the set of available rates {R_k}_{k=1}^{K}. In this set, we assign a larger index to a larger rate, i.e., R_1 is the lowest rate and R_K is the largest one.
In the RA problem, performance metrics that measure the channel conditions can be considered. The RSSI can be measured when a packet is received. The RSSI of the received signal can serve as a measure of the SNR of the transmit channel, considering the reciprocity of the TDD system. The ACK/NACK signal, usually provided as control information, can be obtained by the transmitter as an indicator.
The ACK/NACK signals, in general, are delayed responses to the quality of the previous transmissions. For example, in the time division LTE (TD-LTE) system, the ACK/NACK responds to the signal transmitted 3 time slots earlier [17]. Without loss of generality, we assume the ACK/NACK signals are delayed by D time slots, where D ≥ 3 and D is odd due to the nature of the TDD system. If the ACK signal is received, the transmission is successful. If the NACK signal is received or the ACK/NACK signal is lost, the transmission fails. Let θ_{t,k} denote the probability of a successful transmission in time slot t with rate R_k.
Next, we consider the channel conditions. According to the variation of the channels, we classify them into two categories: the stationary channel environment and the non-stationary channel environment.
A. STATIONARY CHANNEL ENVIRONMENT
In a stationary channel environment, the channel is considered to be static. A stationary channel implies that θ_{t,k} does not evolve over time. In a stationary channel environment, a larger rate may incur more errors and thus may have a lower successful transmission probability than a smaller rate, i.e., θ_{t,k} ≤ θ_{t,k−1}.
B. NON-STATIONARY CHANNEL ENVIRONMENT
In practice, channel conditions are always non-stationary, i.e., θ_{t,k} may evolve over time. Due to the movement of users or objects, the wireless communication system suffers from fading effects, and the RSSI varies over time.
In addition, if we consider the changes of the environment, the mapping between the RSSI and the optimal transmission rate may also change.
We assume that the RSSI can be divided into L discrete levels {S_l}_{l=1}^{L} based on the fading status, and that the channel states are discretized into M states {C_m}_{m=1}^{M} based on the noise level, to simplify the discussion. Furthermore, we assume that the channel state becomes worse from C_M to C_1. In a worse channel, a higher RSSI is required to achieve the same rate. The channel state transitions can be modeled as a hidden Markov model (HMM) [19]. The transition probability from channel state C_i to state C_j is defined as

p_{ij} = Pr(c_{t+1} = C_j | c_t = C_i),   (1)

where c_t denotes the channel state in time slot t. In this work, we assume the channel changes slowly, meaning a channel state transition only occurs between adjacent channel states. Clearly, when the channel state changes, the probability of successful transmission θ_{t,k} may also change.
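The slowly varying channel assumed here is straightforward to simulate as a Markov chain with transitions restricted to adjacent states; the sketch below mirrors that neighborhood-transition structure with placeholder probabilities (0.04 matches the value used later in Section V, the rest is illustrative):

```python
# Illustrative simulator for the slowly varying channel described above:
# a Markov chain over M discrete states in which transitions occur only
# between adjacent states.
import numpy as np

def simulate_channel(M=10, T=1000, p_move=0.04, seed=0):
    rng = np.random.default_rng(seed)
    states = np.empty(T, dtype=int)
    s = M // 2
    for t in range(T):
        states[t] = s
        u = rng.random()
        if u < p_move and s > 0:
            s -= 1                     # degrade to the adjacent worse state
        elif u < 2 * p_move and s < M - 1:
            s += 1                     # improve to the adjacent better state
    return states

print(np.bincount(simulate_channel(), minlength=10))  # time spent per state
```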
In order to determine the appropriate transmission rate, RA algorithms can be applied. For example, the LA algorithm compares the average RSSI, RSS_avg, with the threshold Th_k for rate R_k to select the appropriate rate [20]. The average RSSI is updated as

RSS_avg = a_1 · RSS_avg + (1 − a_1) · RSS_i,   (2)

where RSS_avg on the right-hand side is the average RSSI of the previous time slot, RSS_i is the RSSI value observed in time slot i, and a_1 ∈ [0, 1] is a time decaying factor. When the transmission fails, the LA algorithm updates the threshold Th_k associated with the rate as

Th_k = a_2 · RSS_i + (1 − a_2) · Th_k,   (3)

where a_2 ∈ [0, 1] is the weight of the RSS_i value observed in time slot i. The RSSI is not a perfect indicator of the channel condition. When the noise or interference changes, the RSSI may stay unchanged; in this case, the LA algorithm may be stuck at a low rate.
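A compact restatement of the LA-style updates in (2) and (3) as reconstructed above; the weights a1 and a2 are illustrative defaults, and the exact weighting used by the original LA algorithm may differ:

```python
# Exponentially weighted RSSI averaging, plus a per-rate threshold that is
# pulled toward the RSSI observed when a transmission at that rate fails.
def update_rssi_avg(rss_avg, rss_i, a1=0.9):
    # Eq. (2)-style smoothing: old average weighted by a1, new sample by 1-a1.
    return a1 * rss_avg + (1.0 - a1) * rss_i

def update_threshold_on_failure(th_k, rss_i, a2=0.5):
    # Eq. (3)-style update: raise the rate's threshold toward the failing RSSI.
    return a2 * rss_i + (1.0 - a2) * th_k

avg = update_rssi_avg(rss_avg=-70.0, rss_i=-65.0)
th = update_threshold_on_failure(th_k=-68.0, rss_i=-65.0)
print(f"RSS_avg = {avg:.1f} dBm, Th_k = {th:.1f} dBm")
```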
The enhanced history-aware robust rate adaptation algorithm (HA-RRAA) proposed in [21] selects a new rate for the packet transmissions in the next adaptive time window T_R. HA-RRAA observes the ACK/NACK information in the time window and calculates the packet loss ratio of these frames. It decreases the rate to the next lower one if the loss ratio is greater than a threshold, or increases it to the next higher one if the loss ratio is smaller than another threshold.
RSSI and ACK/NACK provide information about the channel condition from different perspectives. The RSSI is an estimate of the SNR, which gives a direct suggestion for the transmission rate. ACK/NACK signals record the authentic channel response of each packet transmission. Conventional RA algorithms generally perform adaptation based on only one of these signals. In our proposed algorithm, we take both RSSI and ACK/NACK signals as inputs. The proposed algorithm is discussed in detail in the next section.
III. DISCOUNTED UPPER CONFIDENCE BOUND BASED RATE ADAPTATION ALGORITHM
In this section, we present a discounted upper confidence bound based rate adaptation (DUCB-RA) algorithm that adopts both RSSI and ACK/NACK information. The rate selection problem can be modeled as a MAB problem. Under each RSSI level S_l, there are K possible transmission rates R_k. The K transmission rates can be considered as arms of the MAB problem, so that a total of L MAB problems can be formulated. When a transmission with rate R_k at RSSI level S_l occurs, the corresponding arm of the MAB problem is pulled. A reward that evaluates the performance of the current selection can be calculated when the corresponding ACK/NACK signal is received. The next transmission rate can then be determined based on the RSSI measurement and the updated reward estimates. The overall algorithm is detailed as follows.
To begin with, for RSSI level S_l, an initial guess of the transmission rate can be obtained, for example during the random access phase. Then, each rate R_k is assigned an initial performance estimate r_{0,k}, which is the instantaneous reward obtained from each transmission. If the transmission rate is smaller than the initial guess, the instantaneous reward is assigned as r_{0,k} = R_k/R_K. If the transmission rate is larger than the initial guess, r_{0,k} = 0. The accumulated reward estimate r̄_k is initialized with the instantaneous reward, i.e., r̄_k = r_{0,k}.
In the subsequent transmissions, the estimate r̄_k is updated based on the ACK/NACK feedback. The instantaneous reward of the selected rate R_k is given by r_{t,k} = ACK_t · R_k/R_K, where ACK_t = 1 when an ACK signal is received at time slot t, and ACK_t = 0 otherwise. As discussed earlier, the ACK/NACK feedback is delayed.
The ACK/NACK received in time slot t gives a quantified evaluation of the transmission with rate R_k that occurred in time slot (t − D). Since R_K is the largest rate in the available set, the instantaneous reward is bounded, r_{t,k} ∈ [0, 1]. Based on the recorded historical rewards and the instantaneous reward r_{t,k}, the reward estimate r̄_k at time slot t can be updated as

r̄_k = (1/N_{t,k}) Σ_{s=1}^{t} γ^{t−s} δ_{s,k} r_{s,k},   (4)

where γ is a forgetting factor and δ_{t,k} is an indicator parameter.
In (4), N_{t,k} is the discounted number of times R_k has been chosen, which is given by

N_{t,k} = Σ_{s=1}^{t} γ^{t−s} δ_{s,k}.   (5)

When the channel is non-stationary, the parameter γ in (4) limits the influence of outdated observations. The parameter δ_{t,k} indicates which rate is selected at time slot t: δ_{t,k} = 1 if rate R_k is selected in time slot t, and δ_{t,k} = 0 otherwise. Since an initial guess of the rate is performed for the system, the parameter δ_{t,k} can be initialized by setting δ_{0,k} = 1. The estimate r̄_k is updated after the ACK/NACK signal is fed back. If no ACK/NACK signal is received, r̄_k can still be updated by assuming that a NACK signal was received. In this case, however, the current estimate of the RSSI cannot be obtained, and the transmitter maintains the previous RSSI estimate to perform the rate selection.
For the next transmission, the rate associated with the maximum reward estimate r̄_k is selected, i.e.,

k_{t+1} = argmax_k r̄_k.   (7)

When the channel condition changes, (7) may not be able to track the optimal rate. For example, when the channel condition becomes better, this rule will continue to select the previous rate, as it yields the best reward estimate; the new best rate is not chosen due to the lack of exploration. A bias term c_{t,k} can be introduced to increase the probability of exploring other rates. With exploration, more rates can be probed and their performance estimates updated, so the rate adaptation decision is more appropriate, especially in a time-varying channel. Based on the one-sided confidence interval derived from the Chernoff-Hoeffding bound [22], [23], c_{t,k} is set as

c_{t,k} = 2 √(ξ log n_t / N_{t,k}),   (8)

where ξ is an adjustable parameter to control the exploration. In (8), n_t represents the total discounted number of times all rates R_k have been chosen,

n_t = Σ_{k=1}^{K} N_{t,k}.   (9)

Thus the new rate selection mechanism is given by

k_{t+1} = argmax_k (r̄_k + c_{t,k}).   (10)

With the bias term c_{t,k} in (8), the new rate selection mechanism implies that if a rate is selected less often, a larger bias is applied to its reward estimate, so its probability of being selected increases. The parameter ξ in (8) controls the exploration ratio. In a fast-varying channel, more exploration is needed and a larger ξ can be applied; otherwise, a smaller ξ can be applied.
The proposed DUCB-RA algorithm is summarized in Algorithm 1. The transmitter selects the rate according to the initial channel interaction at the beginning, then updates the estimate r̄_k based on the ACK/NACK feedback. The rate for the next transmission is determined with (10). With the proposed DUCB-RA algorithm, the transmitter can track the time-varying channel through both RSSI and ACK/NACK information and provide an appropriate rate for transmission.

Algorithm 1: DUCB-RA
1: Select the initial rate according to the initial channel interaction.
2: for each time slot t do
3:   Obtain the ACK/NACK signal.
4:   Update the reward estimate r̄_k associated with S_l using (4).
5:   if no ACK/NACK signal then
6:     Keep the previous RSSI as S_l for the next rate selection.
7:   else
8:     Measure the current RSSI as S_l for the next rate selection.
9:   end if
10:  Select the rate for the next time slot (t + 1) using (10) according to S_l.
11: end for
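A hedged sketch of the DUCB-RA bookkeeping for a single RSSI level, following (4)-(10) as reconstructed above: discounted sums stand in for the reward estimate, and the argmax of estimate plus bias picks the next rate. Delayed-ACK queuing and per-RSSI-level state are simplified away, and all parameter values are illustrative:

```python
# Minimal discounted-UCB rate selector for one RSSI level.
import math
import numpy as np

class DUCBRA:
    def __init__(self, rates, gamma=0.95, xi=0.65):
        self.R = np.asarray(rates, dtype=float)
        self.gamma, self.xi = gamma, xi
        self.S = np.zeros(len(rates))   # discounted reward sums
        self.N = np.zeros(len(rates))   # discounted selection counts, Eq. (5)

    def update(self, k, ack):
        r = (self.R[k] / self.R[-1]) if ack else 0.0  # instantaneous reward
        self.S *= self.gamma                          # forget old evidence
        self.N *= self.gamma
        self.S[k] += r
        self.N[k] += 1.0

    def select(self):
        n_t = self.N.sum()
        mean = np.where(self.N > 0, self.S / np.maximum(self.N, 1e-12), 0.0)
        bias = np.where(self.N > 0,
                        2.0 * np.sqrt(self.xi * math.log(max(n_t, 1.0))
                                      / np.maximum(self.N, 1e-12)),
                        np.inf)                       # force initial probes
        return int(np.argmax(mean + bias))            # Eq. (10)

agent = DUCBRA(rates=list(range(1, 16)))
k = agent.select()
agent.update(k, ack=True)
print("next rate index:", agent.select())
```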
IV. PERFORMANCE ANALYSIS
In this section, we study the performance of the proposed DUCB-RA algorithm. We show theoretically that the regret achieves a sub-linear order; in other words, the proposed algorithm is asymptotically optimal.
Ideally, if a rate adaptation mechanism had perfect knowledge of the current channel condition, an appropriate rate that maximizes the throughput could be selected. The accumulative regret R_T is introduced to measure the performance loss of the proposed algorithm compared to such an omniscient RA mechanism: R_T is the gap between the accumulative reward obtained by the proposed algorithm and that of the omniscient RA mechanism. We show next that, with properly selected parameters, the regret of the proposed DUCB-RA algorithm grows at a sub-linear order. The sub-linear growth of the accumulative regret demonstrates that the proposed algorithm is asymptotically optimal [22].
We first analyze the performance of the algorithm assuming that the RSSI is constant and the ACK/NACK signals are fed back without delay.
Let T_s represent the time slots in which transmission occurs and T_r the time slots in which reception occurs. In the subsequent notation in this section, t ∈ T_s unless otherwise stated.
The expectation of R_T is given by

E[R_T] = E[ Σ_{t∈T_s} Σ_{k=1}^{K} δ_{t,k} 1{µ_{t,k} < µ_{t,k*_t}} (r_{t,k*_t} − r_{t,k}) ],   (11)

where E[·] denotes the expectation and 1{·} is an indicator function, which is 1 if the statement in the braces is true and 0 otherwise. In (11), µ_{t,k} = θ_{t,k} R_k/R_K is the expected reward of choosing R_k, and k*_t denotes the optimal rate in time slot t. If µ_{t,k} < µ_{t,k*_t}, R_k is not the optimal choice. The term δ_{t,k} 1{µ_{t,k} < µ_{t,k*_t}} in (11) represents the case in which a sub-optimal rate is selected in time slot t, and the term r_{t,k*_t} − r_{t,k} represents the reward loss due to this selection.
Let M_T(k) denote the number of times the sub-optimal rate R_k is chosen in the first T time slots,

M_T(k) = Σ_{t∈T_s} δ_{t,k} 1{µ_{t,k} < µ_{t,k*_t}}.   (12)

Since r_{t,k} ∈ [0, 1] and r_{t,k*_t} > r_{t,k}, we have

r_{t,k*_t} − r_{t,k} ≤ 1.   (13)

Substituting (12) and (13) into (11), we obtain

E[R_T] ≤ Σ_{k=1}^{K} E[M_T(k)].   (14)

According to (14), if we find an upper bound on the expected number of times that sub-optimal rates are selected, we can establish an upper bound on the expected regret.
In (12), M_T(k) can be further divided into two parts by a control parameter A for the subsequent proof:

M_T(k) = Σ₁ + Σ₂, where
Σ₁ = Σ_{t∈T_s} δ_{t,k} 1{µ_{t,k} < µ_{t,k*_t}, N_{t,k} < A},
Σ₂ = Σ_{t∈T_s} δ_{t,k} 1{µ_{t,k} < µ_{t,k*_t}, N_{t,k} ≥ A}.   (15)
According to Lemma 1 in [23], the first part Σ₁ in (15) is bounded by a term proportional to ⌈T(1 − γ)⌉ A, where ⌈·⌉ denotes the round-up operation. If the channel state changes, the reward estimate for R_k can be poor for D(γ) rounds; as shown in [23], D(γ) depends on γ and on n_K, which is calculated according to (9). Throughout this paper, log denotes the natural logarithm (the base e is omitted). Let T' denote the remaining rounds excluding these D(γ) rounds. The second part Σ₂ in (15) is then bounded by the sum of ϒ D(γ) and a term Σ₃ counting the rounds in T' in which a sub-optimal rate with N_{t,k} ≥ A is selected, where ϒ denotes the number of channel changes within T'. The term Σ₃ requires the intersection of two conditions: δ_{t,k} 1{k ≠ k*_t} and N_{t,k} ≥ A. The first condition holds in the following cases. First, µ_{t,k} and µ_{t,k*_t} are so close to each other that the bias term c_{t,k} cannot discriminate between them, so (10) may choose the sub-optimal rate. Second, the reward estimate r̄_{k*_t} of the optimal rate is under-estimated. Third, the reward estimate r̄_k of the sub-optimal rate is over-estimated. These cases can be summarized as

µ_{t,k*_t} − µ_{t,k} < 2c_{t,k},   (19a)
r̄_{k*_t} ≤ µ_{t,k*_t} − c_{t,k*_t},   (19b)
r̄_k ≥ µ_{t,k} + c_{t,k}.   (19c)

The union of the above three events gives the necessary and sufficient conditions for δ_{t,k} 1{k ≠ k*_t}. Combining (19) with N_{t,k} ≥ A, we obtain the requirement for the term Σ₃. Next, we analyze (19a) to (19c) in detail.
Let Δµ represent the minimum gap between µ_{t,k} and µ_{t,k*_t}. If c_{t,k} ≤ Δµ/2, (19a) never occurs. Substituting c_{t,k} ≤ Δµ/2 into (8), we obtain the value of the control parameter A = 16ξ log n_t / (Δµ)². For terms (19b) and (19c), the proof in [23] shows that their probabilities are equal and are bounded as shown there. Combining the bounds on Σ₁, Σ₂, and (19), we conclude that E[M_T(k)] is bounded by an expression that depends on T' and ϒ. Setting γ appropriately as a function of T and ϒ, the resulting growth rate √T log T is lower than the linear growth rate T. Hence the growth rate of E[M_T(k)] achieves a sub-linear order, and from (14) we conclude that the growth rate of E[R_T] also achieves a sub-linear order.
The above proofs are conducted under the assumption that the RSSI remains the same. When the RSSI varies while the channel state remains the same, the worst case is that every RSSI level is excited in all ϒ channel state changes; the expected regret is then bounded by at most L times the previous bound, which has the same sub-linear growth characteristic as the fixed-RSSI case. Next, we consider the performance loss due to the delayed feedback. We first consider a stationary MAB problem in which the reward is delayed by a fixed one time slot. In other words, there is another selection between the action selection and the reward reception, and the intercalary rate selection does not receive any guidance as a result of the delayed reward [24]. In a stationary MAB problem, the only performance loss comes from the possible sub-optimal action selection caused by the first delayed feedback. In the system model formulated in Section II, the ACK/NACK signals are delayed by D time slots, so the delayed feedback causes (D − 1)/2 rate selections to be unguided. The expectation of the accumulative regret E[R̃_T] in the delayed model is therefore bounded by the non-delayed bound plus an additional term accounting for these unguided selections. Next, we consider the changes in the RSSI: since there are L MAB problems, i.e., the RSSI varies over time, this additional loss is multiplied by at most L. We further consider the performance loss caused by channel state changes. The previously obtained knowledge is outdated whenever the channel state changes, and the RA mechanism needs to learn the new condition; thus the extra loss caused by the delayed feedback must be included whenever the channel state changes. From (28), we conclude that E[R̃_T] also achieves a sub-linear order, and thus the proposed algorithm is asymptotically optimal. This theoretical analysis sets an upper bound on the performance loss of the proposed DUCB-RA algorithm in the time-varying channel. In Section V, we provide simulation results to demonstrate the performance of the proposed algorithm.
V. SIMULATION RESULTS
In this section, we present numerical results for the proposed DUCB-RA algorithm. A system with 15 different transmission rates {R_k}_{k=1}^{15} is considered. The channel states are divided into 10 states {C_m}_{m=1}^{10} depending on the interference and noise level. A path-loss model and Rician fading are introduced to simulate the time-varying RSSI. The RSSI is quantized into 35 levels {S_l}_{l=1}^{35}. In our experiment, the transmitter sends packets to the receiver from time slot 1. For each packet transmission, the transmitter selects the rate from the set {R_k}_{k=1}^{15} based on the RSSI evaluation and the reward estimates.
We examine the performance of DUCB-RA compared with the other RA algorithms, ARF, LA, Minstrel, and HA-RRAA, in both stationary and non-stationary radio environments. In the simulation, the multi-rate retry chain in Minstrel is updated after every 10 data packets are sent, to match the simulation conditions of the DUCB-RA algorithm. The initial threshold of LA is set according to the best estimate of the initial channel condition.
We first consider a static channel condition. Both the channel state and the RSSI are constant. The simulations are conducted with different states and RSSIs. Since the channel is static, we set the forgetting factor γ to 1, meaning all previous samples are counted, and ξ to 0.1 since there is no need to explore much.
The performance of the above algorithms under different channel settings is recorded in Table 1, where the throughputs are normalized by the optimal throughput under the selected channel conditions. From Table 1, we observe that all algorithms achieve satisfactory performance in all cases. Due to their fixed exploration ratios, the ARF and Minstrel algorithms probe other rates frequently in the stationary channel condition, and their performance degrades in this case. Comparing the performance in channel 3 to the other channel conditions, we observe that the performance of all RA algorithms becomes better. In this case, the channel condition is the best and the optimal selection is the highest rate. There are no higher rates to explore, so the potential performance loss caused by exploration is reduced.
Let us look into the rate selections in different time slots in one of the simulations. Fig. 2 shows the actual rate that the transmitter chooses with the ARF and DUCB-RA algorithms in a fixed channel condition (channel 2 in Table 1). The optimal rate is selected by the omniscient strategy with perfect knowledge of the channel condition. It can be observed that the proposed DUCB-RA algorithm always picks the optimal rate except for several probes. On the other hand, ARF, which is driven by heuristics, probes the higher rate frequently in the stationary channel condition, resulting in considerable performance loss. Next, we study the RA performance in a time-varying channel. In this simulation, 500 packets are sent within T = 1000. The channel state transitions are generated by the HMM model with a neighborhood transition probability of 0.04. There are 9 channel state transitions in the specific realization. To track the time-varying channel, the parameters γ and ξ are set to 0.95 and 0.65, respectively. The throughputs obtained by the different RA algorithms are shown in Fig. 3. From this figure, we observe that the proposed DUCB-RA algorithm can track the time-varying channel better than the other algorithms and provides the best average throughput among all RA algorithms. We also observe that all RA algorithms can sense the degradation of the channel better than its improvement. This is natural, as packet loss is easy to observe; when the channel condition improves, an appropriate exploration mechanism is needed to track the changes. Let us further study how the RA algorithms adapt to the time-varying channel by examining their real-time rate selection in one realization of the simulations. In Fig. 4, we show the real-time rate selection results of the proposed algorithm and the Minstrel algorithm in the above simulation. From Fig. 4, we observe that the proposed DUCB-RA algorithm can track the time-varying channel better than the Minstrel algorithm. When the variation of the current channel is gentle, the DUCB-RA algorithm does not explore other rates as aggressively as Minstrel. Minstrel randomly selects a rate in exploration transmissions that is higher than or equal to the previous best one. This random probe mechanism causes performance loss in the stationary channel and is inefficient when the channel condition improves quickly.
In the third case, we study the impact of delayed feedback. We compare the simulation results with the cases in which the ACK/NACK signals are fed back without delay. The normalized throughputs of the different RA algorithms are shown in Table 2. We observe that, compared with timely ACK/NACK feedback, all RA algorithms perform worse when the ACK/NACK signals are delayed. The performance degradation with delayed feedback is small for the Minstrel and the proposed DUCB-RA algorithms, suggesting that these two algorithms are robust against feedback delays. In the last simulation, we study the regret performance of the proposed DUCB-RA algorithm. The accumulated regret of the proposed algorithm in different channel conditions is shown in Fig. 5. In this simulation, channel 1 is the static channel, channel 2 is the time-varying channel with timely feedback, and channel 3 is the time-varying channel with delayed feedback. 2500 packets are sent within T = 5000. The channel state transitions are generated by the HMM model with a neighborhood transition probability of 0.04. There are 100 channel state transitions in channel 2 and channel 3. The parameter settings are γ = 1, ξ = 0.1; γ = 0.9, ξ = 0.7; and γ = 0.9, ξ = 0.7, respectively, in these three channel conditions. We observe that the regret of the proposed DUCB-RA algorithm grows at a sub-linear order in all three cases, indicating asymptotically optimal throughput.
VI. CONCLUSION
In this paper, a robust rate adaptation algorithm, DUCB-RA, for time-varying channels is proposed. The rate selection problem is formulated as a non-stationary MAB problem. The proposed algorithm utilizes both ACK/NACK and RSSI information to select the appropriate rate. Specifically, the ACK/NACK signals are treated as delayed responses to the quality of the transmission, which is typical in practice yet ignored in most studies; thus our algorithm is more accurate when choosing the appropriate rate. We show with mathematical proofs that the regret of the proposed algorithm is upper bounded by a sub-linear order. Simulation results demonstrate that the proposed algorithm can achieve better performance than existing rate adaptation schemes in both static and time-varying channel conditions.
"Computer Science"
] |
Temperature Statistics in Turbulent Rayleigh-Bénard Convection

Rayleigh-Bénard convection in the turbulent regime is studied using statistical methods. Exact evolution equations for the probability density function of temperature and velocity are derived from first principles within the framework of the Lundgren-Monin-Novikov hierarchy known from homogeneous isotropic turbulence. The unclosed terms arising in the form of conditional averages are estimated from direct numerical simulations. Focusing on the statistics of temperature, the theoretical framework allows one to interpret the statistical results in an illustrative manner, giving deeper insight into the connection between the dynamics and statistics of Rayleigh-Bénard convection. The results are discussed in terms of typical flow features and the relation to the heat transfer.
Introduction
Rayleigh-Bénard convection is a paradigm of a pattern-forming system far from equilibrium. Convective fluid motion in a vessel is induced by a vertical temperature gradient between the bottom and top boundaries due to buoyancy forces. Depending on this temperature gradient, the geometry of the experiment, and the fluid properties, a whole zoo of instabilities has been observed, ranging from laminar, spatially coherent convective motion through spatially ordered but temporally chaotic flows to highly turbulent fluid motion. We refer the reader to the reviews available on the topic [1,2].
Recently, much effort has been devoted to the analysis of turbulent Rayleigh-Bénard (RB) convection by both experimental and theoretical means [3,4]. Direct numerical simulations allow one to consider the dynamical and statistical properties of RB turbulence and the transitions between different types of flows in fine detail.
It is obvious that the analysis of turbulent convective fluid motion has to be based on a combination of tools from dynamical systems theory, statistical physics, and the theory of stochastic processes. A necessary step is the statistical formulation of the underlying basic fluid dynamic equations, which for the most simple case are the Oberbeck-Boussinesq equations for the velocity field u(r, t), the temperature field T(r, t), and the pressure field p(r, t):

$$\left(\frac{\partial}{\partial t} + \mathbf{u}\cdot\nabla\right) T(\mathbf{r}, t) = \Delta T(\mathbf{r}, t)$$
$$\left(\frac{\partial}{\partial t} + \mathbf{u}\cdot\nabla\right) \mathbf{u}(\mathbf{r}, t) = -\nabla p(\mathbf{r}, t) + \Pr\left[\Delta \mathbf{u}(\mathbf{r}, t) + \mathrm{Ra}\, T(\mathbf{r}, t)\,\mathbf{e}_z\right] \qquad (1.1)$$
$$\nabla\cdot\mathbf{u}(\mathbf{r}, t) = 0$$

The equations have been nondimensionalized using the Rayleigh number $\mathrm{Ra} = \frac{\alpha g \Delta T h^3}{\nu\kappa}$, which is a dimensionless measure of the temperature gradient across the fluid layer (with thermal expansion coefficient α, gravitational acceleration g, outer temperature difference ∆T, and distance h between top and bottom plate), as well as the Prandtl number $\Pr = \frac{\nu}{\kappa}$ as the ratio of kinematic viscosity ν to heat conductivity κ of the fluid. Thus, the vertical spatial coordinate obeys z ∈ [0, 1], and the boundary conditions of the temperature at the bottom and top plate are T(z = 0) = 1/2 and T(z = 1) = −1/2. For the velocity, no-slip boundary conditions u(z = 0) = u(z = 1) = 0 are assumed. These equations are solved numerically by a suitably designed penalization approach, described in section 5. A snapshot of the temperature field is exhibited in figure 1. The statistical analysis is based on joint probability density functions (PDFs) for the temperature and the velocity at a single point in space and time. The basic fluid dynamic equations require the validity of certain relations among these PDFs. From these relations, corresponding expressions relating the various moments of the fields can be derived. For the case of incompressible turbulence, these relations have been formulated by Lundgren, Monin and Novikov [6,7,8], and Ulinich and Lyubimov [9], and are sometimes known as the Lundgren-Monin-Novikov (LMN) hierarchy. They are directly related to Hopf's functional equation, which can be viewed as the basic statistical formulation of the Navier-Stokes equation in the Eulerian framework [10]. Similar relations can be derived for the corresponding Lagrangian quantities [11]. It is evident that an analogous treatment is feasible for the Oberbeck-Boussinesq equations.
In the present article we will use this approach in order to analyze the single-point temperature probability density function for stationary turbulent RB convection. Our analysis combines direct numerical simulations with the relation of the LMN hierarchy for the single-point PDF. The result is a partial differential equation for the temperature PDF. The derivation of this relation is outlined in section 2.
The aim is to formulate an equation which characterizes this PDF. Starting from the Oberbeck-Boussinesq equations, we derive an evolution equation for the single-point joint probability density function of velocity and temperature along the lines of Lundgren, Monin and Novikov [6,7,8] for the case of incompressible turbulence. This equation is unclosed due to the fact that it contains unclosed expressions which can be related to fluid pressure, viscous dissipation and heat diffusion. However, these expressions can be treated by introducing conditional averages, which can be extracted from direct numerical simulations. This leads to a partial differential equation for the joint temperature-velocity PDF. A similar approach has been performed by Novikov [12,13], and more recently by Wilczek et al. for the PDFs of vorticity [14] and velocity [15] for stationary, isotropic turbulence. On the other hand, modeling of the unclosed terms is also a possible method, as performed by e.g. Pope [16,17]. As we shall indicate, the analysis of the evolution equation for the temperature PDF yields a comprehensive description of the dynamical processes in RB convection.
The article is structured as follows: In section 2 we will derive the evolution equation for the temperature-velocity joint PDF. In section 3, we reduce the joint PDF to the temperature PDF and make use of statistical symmetries to cut down the complexity of the evolution equation. Then we present a descriptive way to deal with this equation involving the method of characteristics. Section 4 briefly discusses connections to the Nusselt number, relevant for the heat transport. These theoretical results are complemented by results from direct numerical simulations, which will be discussed in section 5, followed by a summary in section 6.
Single-Point PDF
We are interested in the joint temperature-velocity probability distribution f(τ, v; r, t) and want to derive the corresponding evolution equation. Formally, the probability density function is obtained as a suitable average over the so-called fine-grained probability distribution

$$\hat{f}(\tau, \mathbf{v}; \mathbf{r}, t) = \delta(\tau - T(\mathbf{r}, t))\, \delta(\mathbf{v} - \mathbf{u}(\mathbf{r}, t)). \qquad (2.1)$$

It is important to distinguish between the sample space variables τ, v and the corresponding realizations of the temperature and velocity fields T(r, t), u(r, t). Therefore, one could think of f̂ as the PDF of one particular realization of the fields. Also, the notation of the arguments in f(τ, v; r, t) emphasizes the difference between the sample space variables τ, v and the coordinates r, t: the PDF is normalized with respect to the sample space variables, whereas the coordinates are just parameters. The full probability density function is now obtained as an ensemble average over all possible realizations of the temperature and velocity fields:

$$f(\tau, \mathbf{v}; \mathbf{r}, t) = \langle \hat{f}(\tau, \mathbf{v}; \mathbf{r}, t) \rangle. \qquad (2.2)$$

The brackets ⟨·⟩ denote the ensemble average, in contrast to the spatial averages ⟨·⟩_V and ⟨·⟩_A over the whole fluid volume, or a horizontal plane at height z, respectively. The definition of the fine-grained PDF (2.1) can be differentiated with respect to the space and time variables, giving

$$\nabla \hat{f} = -\nabla T\, \frac{\partial \hat{f}}{\partial \tau} - \nabla u_j\, \frac{\partial \hat{f}}{\partial v_j} \qquad (2.3)$$

as the spatial gradient, and an analogous equation for the temporal derivative ∂f̂/∂t. Note that the operators ∂/∂τ and ∇_v act on f̂. Now, multiplying (2.3) by u_i and adding the temporal derivative allows us to make use of the basic Oberbeck-Boussinesq equations (1.1). This results in the desired evolution equation for the fine-grained PDF:

$$\frac{\partial}{\partial t}\hat{f} + \mathbf{v}\cdot\nabla \hat{f} = -\frac{\partial}{\partial \tau}\left(\Delta T\, \hat{f}\right) - \nabla_{\mathbf{v}} \cdot \left(\left[-\nabla p + \Pr\left(\Delta \mathbf{u} + \mathrm{Ra}\,\tau\, \mathbf{e}_z\right)\right] \hat{f}\right). \qquad (2.4)$$

Performing the ensemble average of this equation in order to arrive at an equation for the full PDF (2.2), one encounters the closure problem of turbulence, since the unclosed averages ⟨∆T f̂⟩, ⟨−∇p f̂⟩ and ⟨∆u f̂⟩ show up. The LMN hierarchy ansatz would mean to treat these terms via a coupling to the two-point PDF; the evolution equation of this two-point PDF would in turn introduce a coupling to the three-point PDF, and so on [6].
Instead of introducing this hierarchy of coupled evolution equations for the multipoint PDFs, our strategy [14,15] is to express the unclosed terms as conditional averages, since these are accessible to direct numerical simulations. The result is a partial differential equation governing the joint temperature-velocity PDF and relating its shape as a function of space point r to the conditional averages. The functional form of these conditional averages is a signature of the underlying dynamical processes of RB convection.
Introducing the conditional averages, we arrive at the following relation for the single-point probability distribution f(τ, v; r, t):

$$\frac{\partial}{\partial t} f + \mathbf{v}\cdot\nabla f = -\frac{\partial}{\partial \tau}\left(\langle \Delta T | \tau, \mathbf{v}, \mathbf{r}, t \rangle f\right) - \nabla_{\mathbf{v}} \cdot \left(\left[\langle -\nabla p | \tau, \mathbf{v}, \mathbf{r}, t \rangle + \Pr\left(\langle \Delta \mathbf{u} | \tau, \mathbf{v}, \mathbf{r}, t \rangle + \mathrm{Ra}\,\tau\,\mathbf{e}_z\right)\right] f\right). \qquad (2.6)$$

Since at the boundaries of the RB cell the velocity and temperature fields are statistically sharp quantities, the probability distribution has to obey the conditions

$$f(\tau, \mathbf{v}; x, y, z=0, t) = \delta\!\left(\tau - \tfrac{1}{2}\right)\delta(\mathbf{v}), \qquad f(\tau, \mathbf{v}; x, y, z=1, t) = \delta\!\left(\tau + \tfrac{1}{2}\right)\delta(\mathbf{v})$$

for arbitrary x, y. We note in passing that a different version of the evolution equation can be obtained by introducing the Laplacian of the PDF. Specializing to the case Pr = 1, it is possible to re-express the conditional averages ⟨∆T|τ, v, r, t⟩ and ⟨∆u|τ, v, r, t⟩ via the relation

$$\Delta \hat{f} = -\frac{\partial}{\partial \tau}\left(\Delta T\, \hat{f}\right) - \frac{\partial}{\partial v_j}\left(\Delta u_j\, \hat{f}\right) + \frac{\partial^2}{\partial \tau^2}\left((\nabla T)^2\, \hat{f}\right) + 2\frac{\partial^2}{\partial \tau\, \partial v_j}\left(\nabla T \cdot \nabla u_j\, \hat{f}\right) + \frac{\partial^2}{\partial v_i\, \partial v_j}\left(\nabla u_i \cdot \nabla u_j\, \hat{f}\right). \qquad (2.7)$$

Here and in the following, Einstein's summation convention over repeated indices is used. Again, we can introduce conditional expectations, where the arguments of the conditional averages have been abbreviated, ⟨ · | ⟩ ≡ ⟨ · | τ, v, r, t⟩. The resulting evolution equation reads

$$\frac{\partial}{\partial t} f + \mathbf{v}\cdot\nabla f = \Delta f - \frac{\partial^2}{\partial \tau^2}\left(\langle (\nabla T)^2 | \rangle f\right) - 2\frac{\partial^2}{\partial \tau\,\partial v_j}\left(\langle \nabla T \cdot \nabla u_j | \rangle f\right) - \frac{\partial^2}{\partial v_i\,\partial v_j}\left(\langle \nabla u_i \cdot \nabla u_j | \rangle f\right) - \nabla_{\mathbf{v}}\cdot\left(\left[\langle -\nabla p | \rangle + \mathrm{Ra}\,\tau\,\mathbf{e}_z\right] f\right). \qquad (2.8)$$

A somewhat more complicated equation holds for Pr ≠ 1. This relationship shows that the single-point joint temperature-velocity PDF f(τ, v; r, t) is essentially determined by the conditionally averaged dissipation-like terms ⟨(∇T)²| ⟩, ⟨∇T · ∇u_j| ⟩ and ⟨∇u_i · ∇u_j| ⟩ as well as the conditional pressure gradient ⟨−∇p| ⟩.
Single-Point Temperature PDF and Implications of Statistical Symmetries
In the following we shall restrict our attention to the reduced temperature probability distribution and its evolution equation. As will turn out, this equation already gives insight into the connection between RB dynamics and temperature statistics, besides obviously describing the temperature statistics itself. Also, because the final evolution equation of the temperature PDF involves scalar-valued functions only, a numerical approach is easily feasible.
The reduced temperature PDF is obtained by integrating out the velocity part:

$$h(\tau; \mathbf{r}, t) = \int \mathrm{d}\mathbf{v}\; f(\tau, \mathbf{v}; \mathbf{r}, t). \qquad (3.1)$$

Starting from equation (2.6) we obtain the simple equation

$$\frac{\partial}{\partial t} h + \nabla\cdot\left(\langle \mathbf{u} | \tau, \mathbf{r}, t \rangle h\right) = -\frac{\partial}{\partial \tau}\left(\langle \Delta T | \tau, \mathbf{r}, t \rangle h\right), \qquad (3.2)$$

where we have performed the integration with respect to the velocity v. Thereby, we had to introduce the conditional averages ⟨u|τ, r, t⟩ and ⟨∆T|τ, r, t⟩. Alternatively, one could re-enact the derivation performed in section 2 for the fine-grained PDF of the temperature ĥ(τ; r, t) = δ(τ − T(r, t)). Analogous to (2.8), we can derive a further relation using the identity

$$\Delta \hat{h} = -\frac{\partial}{\partial \tau}\left(\Delta T\, \hat{h}\right) + \frac{\partial^2}{\partial \tau^2}\left((\nabla T)^2\, \hat{h}\right). \qquad (3.3)$$

This relation also follows from (2.7). With this equation, we can now summarize the equation for the temperature PDF h(τ; r, t) in the form

$$\frac{\partial}{\partial t} h + \nabla\cdot\left(\langle \mathbf{u} | \tau, \mathbf{r}, t \rangle h\right) = \Delta h - \frac{\partial^2}{\partial \tau^2}\left(\langle (\nabla T)^2 | \tau, \mathbf{r}, t \rangle h\right). \qquad (3.4)$$

Here, the conditional average of the term (∇T)² comes up, which is related to the Nusselt number. Details will be discussed in section 4. The evaluation of the conditional averages appearing in (3.2) is greatly simplified by considering convection that is statistically stationary in time and has periodic horizontal boundaries (i.e. is homogeneous with respect to the horizontal coordinates). Under the assumption of a statistically stationary flow, the PDFs and therefore the conditional averages cannot depend on the time variable; also the dependency on the horizontal coordinates drops out. So instead of dealing with statistical quantities that depend on r and t, we simply have to retain the z-dependence.
Let us first consider the determining equation for the temperature PDF h(τ; z) in the form obtained from (3.2),

$$\frac{\partial}{\partial z}\left(\langle u_z | \tau, z \rangle h\right) + \frac{\partial}{\partial \tau}\left(\langle \Delta T | \tau, z \rangle h\right) = 0. \qquad (3.5)$$

This equation in principle has to be solved together with the boundary conditions $h(\tau; 0) = \delta(\tau - \frac{1}{2})$ and $h(\tau; 1) = \delta(\tau + \frac{1}{2})$ and with appropriately modeled expressions for the conditional averages ⟨u_z|τ, z⟩ and ⟨∆T|τ, z⟩. This direct approach will not be conducted in the present paper though, but can be taken as a starting point for future modeling.
Instead, this first order partial differential equation (PDE) can be analyzed with the help of the method of characteristics [18]. Applying this method, one can find curves τ (s), z(s) in the τ -z-phase space parameterized by s along which the PDE (3.5) transforms into an ordinary differential equation which can be integrated. This approach will be sketched in the following.
Writing h(s) = h(τ(s); z(s)) and calculating its derivative gives

$$\frac{\mathrm{d}}{\mathrm{d}s} h(s) = \frac{\partial h}{\partial \tau}\frac{\mathrm{d}\tau}{\mathrm{d}s} + \frac{\partial h}{\partial z}\frac{\mathrm{d}z}{\mathrm{d}s}. \qquad (3.6)$$

The PDE (3.5) is re-expressed in the form

$$\langle \Delta T | \tau, z \rangle \frac{\partial h}{\partial \tau} + \langle u_z | \tau, z \rangle \frac{\partial h}{\partial z} = -\left[\frac{\partial}{\partial \tau}\langle \Delta T | \tau, z \rangle + \frac{\partial}{\partial z}\langle u_z | \tau, z \rangle\right] h. \qquad (3.7)$$

Comparing these two equations identifies the characteristic curves as solutions of

$$\frac{\mathrm{d}\tau}{\mathrm{d}s} = \langle \Delta T | \tau, z \rangle, \qquad \frac{\mathrm{d}z}{\mathrm{d}s} = \langle u_z | \tau, z \rangle. \qquad (3.8)$$

Along these curves, the PDE (3.5) becomes

$$\frac{\mathrm{d}}{\mathrm{d}s} h(s) = -\left[\frac{\partial}{\partial \tau}\langle \Delta T | \tau, z \rangle + \frac{\partial}{\partial z}\langle u_z | \tau, z \rangle\right] h(s), \qquad (3.9)$$

which can be integrated to

$$h(s) = h(s_0)\, \exp\left\{-\int_{s_0}^{s} \mathrm{d}s' \left[\frac{\partial}{\partial \tau}\langle \Delta T | \tau, z \rangle + \frac{\partial}{\partial z}\langle u_z | \tau, z \rangle\right]_{\tau(s'),\, z(s')}\right\}. \qquad (3.10)$$

This equation describes the evolution of the PDF along a trajectory (τ(s), z(s)) starting at the point (τ(s₀), z(s₀)) in phase space. A particularly appealing property of this formalism is that it allows one to interpret the statistical results in an illustrative manner, because the characteristics, i.e. trajectories in τ-z-phase space, show the evolution of the "averaged" physical process.
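As a concrete illustration of how (3.8) can be evaluated in practice, the following is a minimal sketch that integrates a characteristic curve through tabulated conditional averages, assuming ⟨∆T|τ, z⟩ and ⟨u_z|τ, z⟩ are given on a regular τ-z grid (e.g. estimated from DNS data as in section 5); the bilinear interpolation, explicit Euler stepping, and all names are our own illustrative choices, not the actual post-processing used for this study.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def trace_characteristic(tau0, z0, tau_grid, z_grid, cond_dT, cond_uz,
                         ds=1e-3, n_steps=5000):
    """Integrate dtau/ds = <Delta T|tau,z>, dz/ds = <u_z|tau,z> (eq. 3.8)
    with explicit Euler steps, starting from (tau0, z0)."""
    f_dT = RegularGridInterpolator((tau_grid, z_grid), cond_dT)
    f_uz = RegularGridInterpolator((tau_grid, z_grid), cond_uz)
    tau, z = tau0, z0
    traj = [(tau, z)]
    for _ in range(n_steps):
        dtau = f_dT([[tau, z]])[0]
        dz = f_uz([[tau, z]])[0]
        tau, z = tau + ds * dtau, z + ds * dz
        # keep the trajectory inside the tabulated phase-space domain
        tau = min(max(tau, tau_grid[0]), tau_grid[-1])
        z = min(max(z, z_grid[0]), z_grid[-1])
        traj.append((tau, z))
    return np.array(traj)
```

The divergence term in the exponent of (3.10) can be accumulated along the same trajectory with finite differences of the interpolated fields, turning the PDE into a line integral as described above.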
It is tempting to interpret the characteristics as a kind of Lagrangian dynamics of a tracer particle inside the RB cell. However, the dynamics of a tracer particle are stochastic, whereas the characteristics defined by (3.8) describe purely deterministic trajectories and thus take the stochastic properties into account only in an averaged way. In a sense, the characteristics describe the averaged evolution of an ensemble of fluid particles that are defined by their initial condition in the τ-z-plane.
Thinking of turbulent RB convection with some physical intuition, one can expect certain features from the statistical quantities introduced in this section. The conditionally averaged vertical velocity ⟨u_z|τ, z⟩ should show positive correlation with the temperature, i.e. it should mirror the well-known fact that hot fluid rises up and cold fluid sinks down. Also the no-slip boundaries should be recognizable for z ≈ 0 and z ≈ 1, respectively. The absolute value of the heat diffusion term ⟨∆T|τ, z⟩ should be highest near the boundaries because of the sharp change of the temperature profile. As the characteristics in a way describe the average path a fluid particle takes through τ-z-phase space, the typical Rayleigh-Bénard cycle of fluid heating up at the bottom, rising up, cooling down at the top and sinking down again should find its correspondence in the statistical quantities describing the evolution of the PDF. Actual numerical studies of these quantities will be discussed in detail in section 5.
In the 1990s, an approach similar to the one presented here was undertaken by Yakhot and coworkers, in which PDFs of various quantities in a stationary flow were expressed as integrals over conditionally averaged variables. The quantities considered included passive scalars by Sinai and Yakhot [19], active scalars such as temperature fluctuations by Yakhot [20] and temperature increments by Ching [21], and even general functions of an arbitrary quantity measured in the flow by Pope and Ching [22,23]. The conditional averages included spatial and temporal derivatives of these quantities.
In contrast to these works, we do not assume homogeneity in all spatial coordinates, i.e. we still have the z-dependency present in our PDF equations. This allows us to discuss the PDFs with respect to the z-coordinate and to observe qualitatively different statistics in different regions of the flow. As a result, we are able to see the differing behaviour of the bulk and boundary parts of the convection cell and how these are connected to the conditional averages and their dependency on the vertical position. This has not been addressed in the literature.
Connection to the Heat Transport
The Nusselt number Nu, as the ratio of convective to conductive heat transfer, plays a key role in the analysis of RB convection. It serves as a measure of how efficiently heat can be transported through the convection cell.
Though we did not consider the Nusselt number so far, we note that there is an interesting connection between Nu and the conditional averages that appear in our calculations. Because the Nusselt number may be defined as the volume average Nu = ⟨(∇T)²⟩_V in non-dimensional units, the following expression comes up:

$$\mathrm{Nu} = \left\langle \int \mathrm{d}\tau \; \langle (\nabla T)^2 \,|\, \tau, \mathbf{r} \rangle\, h(\tau; \mathbf{r}) \right\rangle_V. \qquad (4.1)$$

Therefore, ⟨(∇T)²|τ, r⟩ can be viewed as a conditional Nusselt number density. We point out that this quantity should be of considerable interest for the evaluation of theories concerning the Rayleigh number dependency of the Nusselt number based on a decomposition of the heat transport into bulk and boundary contributions, which underlies the Grossmann-Lohse theory [24] outlined in the review of Ahlers et al. [4]. Also, this term is linked with the temperature dissipation rate, which is discussed in [25] with respect to temperature PDFs. In a similar manner, we can employ the temperature-velocity joint PDF f(τ, v; r) to derive an equation relating the Rayleigh and Nusselt numbers. From the relation Ra(Nu − 1) = ⟨(∇u)²⟩_V it is straightforward to see that

$$\mathrm{Ra}(\mathrm{Nu} - 1) = \left\langle \int \mathrm{d}\tau\, \mathrm{d}\mathbf{v} \; \langle (\nabla \mathbf{u})^2 \,|\, \tau, \mathbf{v}, \mathbf{r} \rangle\, f(\tau, \mathbf{v}; \mathbf{r}) \right\rangle_V.$$

These two exact relations underline the importance of the conditional averages of (∇T)² and (∇u)² that naturally come up in our derivations.
Numerical Results
The benefit of our theoretical approach is that we can easily provide it with measurements and data in the form of numerical results. To this end, we solve the basic Oberbeck-Boussinesq equations (1.1) with a standard dealiased pseudospectral code on a three-dimensional equidistant Cartesian grid with periodic boundary conditions. For an introduction into this topic the reader is referred to [26,27]. Periodic boundaries are required in the horizontal directions, but in the vertical direction Dirichlet conditions for velocity and temperature (i.e. no-slip boundaries of constant temperature) are needed. They are enforced by a volume penalization ansatz [28,29,30]: the fluid domain Ω is embedded into a slightly larger, vertically extended computational domain Ω_c. Inside the fluid domain Ω ⊂ Ω_c the unaltered Oberbeck-Boussinesq equations are solved, while in the appended extra regions Ω_c\Ω a strong exponential damping (−(1/η)u and −(1/η)θ, respectively, with η ≪ 1) is added to the evolution equations (1.1) of velocity and temperature that damps the fields to zero. By simulating the deviation from the linear temperature profile, θ(r, t) := T(r, t) + z − 1/2, instead of the temperature itself, the desired boundary conditions read u = 0 and θ = 0 for z = 0 and z = 1. This change of variables T → θ allows us to make use of the volume penalization approach in a straightforward manner.
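To illustrate the penalization idea, the following is a minimal sketch of how such a damping term can be added to the right-hand side of a discretized evolution equation; the mask construction, the field names, and the value of η are illustrative assumptions, not the actual code used for this study.

```python
import numpy as np

def penalized_rhs(rhs_u, u, z, eta=1e-3, z_min=0.0, z_max=1.0):
    """Add the volume-penalization damping -(1/eta)*u inside the solid
    regions (outside [z_min, z_max]) so the field is driven to zero there."""
    solid = (z < z_min) | (z > z_max)         # mask of the appended wall layers
    rhs_u = rhs_u.copy()
    rhs_u[..., solid] -= u[..., solid] / eta  # strong damping, eta << 1
    return rhs_u
```

The same damping is applied to θ; because the damped variable θ vanishes in the wall layers, the physical temperature there follows the linear conductive profile, consistent with the Dirichlet conditions above.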
The reason we choose this numerical scheme over the often-used Chebyshev-based codes, described for example in [26] and used in e.g. [31,32], is that it allows for almost arbitrarily shaped boundaries and sidewalls. Although this feature is not used in the present paper due to the required horizontal homogeneity, it even allows one to simulate cylindrical vessels on a (numerically cheap) Cartesian grid. A more detailed report on this will be published in the future.
Our theoretical derivation relies on the concept of ensemble averages. Of course, through our numerics we can only access a finite subset of all possible ensemble members. So due to the statistical symmetries and by assuming ergodicity, the ensemble average is substituted by a combined volume-time average in the numerics. Likewise, the volume averages ⟨·⟩_V and ⟨·⟩_A introduced earlier are actually evaluated as combined volume and time averages. Here, time averaging means averaging over 1250 statistically independent snapshots of the fields. The line plots below only show the parts of the statistical quantities where the statistics converged, i.e. where a significant number of events was obtained.
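As an illustration of how such conditional averages can be estimated from snapshots, the following is a minimal binning sketch, assuming the fields are stored as arrays of shape (nx, ny, nz); the binning strategy and all names are our own stand-ins for the actual post-processing used here.

```python
import numpy as np

def conditional_average(T, Q, tau_bins):
    """Estimate <Q | tau, z> by averaging Q over temperature classes,
    separately for every horizontal plane z (horizontal homogeneity)."""
    nx, ny, nz = T.shape
    nb = len(tau_bins) - 1
    sums = np.zeros((nb, nz))
    counts = np.zeros((nb, nz))
    for z in range(nz):
        idx = np.digitize(T[:, :, z].ravel(), tau_bins) - 1
        valid = (idx >= 0) & (idx < nb)
        np.add.at(sums[:, z], idx[valid], Q[:, :, z].ravel()[valid])
        np.add.at(counts[:, z], idx[valid], 1.0)
    return sums / np.maximum(counts, 1.0)  # conditional mean per (tau, z) bin
```

Accumulating the sums and counts over many independent snapshots before dividing implements the combined volume-time average; the same histogram counts, normalized per plane, yield the temperature PDF h(τ; z).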
The simulation was conducted for the parameters Ra = 4.33 × 10⁷ and Pr = 1. The computational domain Ω_c is resolved with N_x × N_y × N_z = 256 × 256 × 192 gridpoints on an equidistant Cartesian grid, where the fluid domain Ω ⊂ Ω_c is represented by 256 × 256 × 128 gridpoints. Thus, the aspect ratio is Γ = 2, with the two horizontal dimensions being identical. The Nusselt number is estimated to be Nu = 24.2. We mention that we repeated the analysis presented below with a simulation of aspect ratio Γ = 4 and found basically the same features, due to the fact that for sufficiently large Γ the periodic boundaries do not have a significant influence on the flow. Figure 2(a) shows the temperature PDF as a color plot, and figure 2(b) shows slices in τ-direction indicated by the dashed lines in figure 2(a). In the color plot, one clearly observes the sharp change of the temperature PDF from a δ-function at the boundaries across the boundary layer to a shape exhibiting larger tails in the bulk. In addition to these tails, another feature of the PDF is the hump close to the τ = 0-line. This hump corresponds to the most probable value of the temperature. One expects two different dynamical features to be responsible for this special shape; a tempting explanation would be to attribute the hump to the background field of mean temperature, and the wings at large |τ| to plumes that carry fluid that is much colder or hotter than the surrounding fluid. Evidence for this is the shape of the PDF close to the bottom or top boundaries (but still outside the boundary layers): at z = 4δ_T, though the most probable temperature value is moved slightly towards lower temperatures, the PDF exhibits a large tail at high temperatures. The interpretation is that mostly cold fluid gathers in the lower regions of the bulk, being almost at rest (compare the region of ⟨u_z|τ, z = 4δ_T⟩ in figure 3(b) corresponding to the hump), while very hot fluid is a rarer event, because hot fluid is convected away quickly due to plume dynamics. The reason why the hot fluid takes greater temperature values than the cold fluid (in terms of absolute value) is that very cold fluid detaching from the top plate already heats up on its way down. PDFs of the same shape for Rayleigh numbers of the same order are reported in [25], where also the dependence on the vertical coordinate is taken into account. The experimental data in [33,21] show a more pronounced exponential shape of the temperature PDF, which can be attributed to the difference in the Rayleigh numbers, which are several orders of magnitude above ours; the numerical data in [25] suggest that the PDFs become more exponential with increasing Rayleigh number. Figures 3 and 4 exhibit the conditional averages introduced in section 3. One can clearly observe the features that were suggested in the aforementioned section. The conditional vertical velocity ⟨u_z|τ, z⟩ is high (low) for hot (cold) fluid, respectively, and the no-slip boundary conditions manifest in the fact that ⟨u_z|τ, z⟩ is close to zero for z ≈ 0 and z ≈ 1. Additionally, one observes a stripe close to the τ = 0-line of almost vanishing vertical velocity which coincides with the reddish core (the hump, i.e. the most probable value) of the temperature PDF in figure 2(a). The interpretation is that fluid that is as hot as the mean temperature is neutrally buoyant and neither moves up nor down. Another striking feature is the sudden increase of the vertical velocity for high τ near the boundary layer, i.e.
for z = δ_T, which we attribute to rising plumes that detach from the hot bottom plate. Again, it must be stressed that these interpretations hold in an averaged sense. Figure 4 shows that the conditional heat diffusion term ⟨∆T|τ, z⟩ is (in terms of absolute value) highest at the boundaries, with the term being positive (negative) at the hot bottom (cold top) plate. On the contrary, in the bulk the absolute value is high (low) for very cold (hot) fluid, i.e. in the wings of the temperature PDF. Additionally, the τ-slice near the boundary in figure 4(b) shows an under- and overshoot. The connection of these unique features to the RB dynamics has yet to be understood. By combining the two aforementioned conditional averages into the vector field (3.8) that defines the characteristics as suggested in section 3, one arrives at the vector field depicted in figure 5, one of our central results. It is easy to interpret this graph by tracing the vector field; one can qualitatively reconstruct the typical RB cycle of fluid heating up at the bottom, rising up while starting to cool down, cooling down drastically at the top plate, falling down towards the bottom plate while warming up a bit, and heating up again at the bottom. It is especially illustrative to see that the main contribution of cooling and heating (i.e. the biggest movement in τ-direction of phase space) takes place near the boundaries, highlighting the importance of the boundary layers, while obviously the biggest movement in z-direction occurs in the bulk.
Yet one has to consider, for example, that although hot fluid rises up very quickly (referring to the vectors pointing upwards at the right side in figure 5), this does not contribute much to heat transport because these events occur rarely, as indicated by the temperature PDF shown along with the vector field governing the characteristics.
In figure 6, the conditional heat dissipation rate ⟨(∇T)²|τ, z⟩ is shown, which can be interpreted as a Nusselt number density according to (4.1). The notable features are a pronounced minimum near the most probable temperature, again coinciding with the reddish core of the PDF. Also, in the boundary layer this quantity attains huge values (note the logarithmic scaling in the color plot), which highlights the fact that the boundary layer contributes much to the heat transport. A similar shape of a related quantity is reported in [25] (using the deviation from the mean temperature profile instead of the temperature itself), though there the conditional average is taken over the whole fluid volume and hence lacks the z-dependency.
Summary
In the present work, we have analyzed the single-point temperature PDF on the basis of the Lundgren-Monin-Novikov hierarchy by truncating the hierarchy on the first level via the introduction of conditional averages. We have first derived the evolution equation of the full joint PDF of temperature and velocity. Then we focused on the temperature PDF only, which is the central point of our paper, and obtained an evolution equation for it by reducing the joint PDF equation. We assumed rather weak symmetry conditions of statistical stationarity in time and homogeneity in the lateral spatial directions; these conditions should be fulfilled at reasonably high aspect ratios even for closed vessels, i.e. they are a good approximation of experimental setups in the bulk of the flow. Under these symmetry considerations, the evolution equation of the temperature PDF becomes fairly simple. The arising conditional averages of temperature diffusion ⟨∆T|τ, z⟩ and vertical velocity ⟨u_z|τ, z⟩ are estimated by direct numerical simulations using a suitably designed penalization approach, and their features are discussed. It turns out that expected features such as properties of the temperature and velocity boundary layers, the correlation of temperature and velocity, and so on are related to the form of these conditional averages that naturally come up in our derivations.
The evolution equation of the temperature PDF is readily treated by the method of characteristics. Due to the applied symmetry conditions, the phase space which describes our system becomes two-dimensional, spanned by temperature τ and vertical coordinate z. Because of this reduced dimensionality of the system, the method of characteristics yields a descriptive view of the RB dynamics, resulting in the vector field describing the evolution in τ-z-phase space. The characteristics, i.e. trajectories in τ-z-phase space, are found to reproduce the typical cycle of a fluid parcel. The regions of the main transport in τ-direction have been identified as the boundary layers, while the major movement in z-direction takes place in the bulk. This highlights the importance of the boundary layers to the heat transport. The relation of the heat transport in terms of the Nusselt number to the conditional averages introduced in our derivation is briefly discussed, leading us to the definition of a Nusselt number density. It would be very interesting to obtain the statistical quantities describing the evolution of the PDF directly from experiments, e.g. from measurements of instrumented particles as described in [34]. Future efforts will be to not only use the characteristics as an illustrative way to describe the mean movement in phase space, but to actually calculate the PDF of temperature from the integral representation (3.10). Also, modeling of the conditional averages, which are up to now estimated from direct numerical simulations, might be feasible; an intermediate step would be to discuss the quantities not in the turbulent case, but close above the bifurcation from heat conduction to convection, where analytical solutions of the temperature and velocity fields are available. Though an easy illustration in the form of trajectories in two-dimensional space will not be achievable in the case of the joint PDF, this approach is nevertheless promising and planned for the future, because we hope that already the form of the conditional averages will give insight into the connection of the statistics to the RB dynamics. An intermediate step would be to concentrate on the joint PDF of temperature and vertical velocity, which should among others relate to the dynamics of plumes.
Intranasal nanoemulsion adjuvanted S-2P vaccine demonstrates protection in hamsters and induces systemic, cell-mediated and mucosal immunity in mice
With the rapid progress made in the development of vaccines to fight the SARS-CoV-2 pandemic, more than 90% of vaccine candidates under development and 100% of the licensed vaccines are delivered intramuscularly (IM). While these vaccines are highly efficacious against COVID-19 disease, their efficacy against SARS-CoV-2 infection of the upper respiratory tract and transmission is at best temporary. Development of safe and efficacious vaccines that are able to induce robust mucosal and systemic immune responses is needed to control new variants. In this study, we have used our nanoemulsion adjuvant (NE01) to intranasally (IN) deliver stabilized spike protein (S-2P) to induce immunogenicity in mouse and hamster models. Data presented demonstrate the induction of robust immunity in mice, resulting in 100% seroconversion and protection against SARS-CoV-2 in a hamster challenge model. There was a significant induction of mucosal immune responses, as demonstrated by IgA- and IgG-producing memory B cells in the lungs of animals that received intranasal immunizations compared to an alum-adjuvanted intramuscular vaccine. The efficacy of the S-2P/NE01 vaccine was also demonstrated in an intranasal hamster challenge model with SARS-CoV-2 and conferred significant protection against weight loss and lung pathology, with enhanced viral clearance from both the upper and lower respiratory tract. Our findings demonstrate that the intranasal NE01-adjuvanted vaccine promotes protective immunity against SARS-CoV-2 infection and disease through activation of three arms of the immune system: humoral, cellular, and mucosal, suggesting that an intranasal SARS-CoV-2 vaccine may play a role in addressing a unique public health problem and unmet medical need.
Introduction
The respiratory virus causing COVID-19 is a zoonotic betacoronavirus known as SARS-CoV-2 (Severe Acute Respiratory Syndrome Coronavirus 2). The S-2P protein used in this study contains residues 1-1208 of the spike with a C-terminal T4 fibritin trimerization domain, an HRV3C cleavage site, an 8X His-tag and a Twin-Strep-tag. The stabilized S-2P form was achieved by mutation of the S1/S2 furin-recognition site 682-RRAR-685 to GSAS to produce a single-chain S protein, while 986-KV-987 was mutated to PP. The protein was produced in Expi-CHO-S cells as described previously [16,17].
Nanoemulsion adjuvant and vaccine preparation
The 60% NE01 was prepared by high-shear homogenization of water, ethanol, cetylpyridinium chloride, Tween-80 (non-ionic surfactant), and highly refined soybean oil to form an oil-in-water nanoemulsion with a mean particle size of ~400 nm, as described previously [18]. The 60% NE01 used in the preparation of vaccines for both mouse and hamster studies was produced under GMP conditions with an endotoxin level of <8 EU/mL. The vaccine was prepared by mixing S-2P with NE01 adjuvant for a final concentration of 2.5 μg of S-2P (mouse studies) or 10 μg of S-2P (hamster studies) with 20% NE01/dose, which corresponds to <0.03-0.05 EU/dose. The alum-adsorbed intramuscular vaccine was prepared by mixing 2.5 μg of S-2P with 30 μg of aluminium hydroxide (Al(OH)3) (Croda, Cat# AJV3012) in a 50 μL dose volume. The prepared vaccine was mixed thoroughly before administration to animals.
Mouse study
Mouse immunization studies were performed at IBT Bioservices, Rockville, MD, USA. The mice used in this study were housed in compliance with the Integrated BioTherapeutics IACUC under protocol approval number AP-160805. Animals were housed in groups of four in individually vented Innovive disposable IVC rodent cages in climate-controlled conditions with a 12/12 light/dark cycle. All animals in the study were monitored once daily for clinical observations, body weight, morbidity and mortality. The animals were fed a commercially prepared mouse diet, and water was available ad libitum. Anesthesia was used to minimize pain and distress during the study. Animals were first anesthetized via isoflurane inhalation and then terminally bled (exsanguination). Death was confirmed by pinching of the rear paw, with a non-reactive foot reflex confirming death.
Six- to eight-week-old female CD-1 mice were randomly assigned to each of five groups of 8 animals, except for the group receiving two intranasal vaccinations, which had 7 animals. Mice were immunized intranasally with S-2P/NE01 either three or two times, or three times with S-2P alone, or intramuscularly with S-2P/alum, alongside an unimmunized control group. All vaccinated animals received 2.5 μg of S-2P protein/dose either in 12 μL (intranasal dose) or 50 μL (intramuscular dose). Vaccines were administered three weeks apart, and blood was collected two weeks post last vaccination. Bronchoalveolar lavage (BAL) was collected prior to collection of lungs on week 8 (day 56), followed by collection of lungs and spleens.
Hamster study
Hamster challenge studies were performed at the Testing Facility for Biological Safety, TFBS Bioscience Inc., Taiwan, and Academia Sinica, Taiwan. The hamsters used in this study were housed in compliance with the Institutional Animal Care and Use Committee under study protocol approval number TFBS2020-019 and Academia Sinica approval number 20-10-1526. Animals were housed in groups of three to six in individually vented cages indoors in climate-controlled conditions with a 12/12 light/dark cycle. All animals in this study were monitored once daily for clinical observations, body weight, morbidity and mortality, as in the mouse study described above. The animals were fed a commercially prepared hamster diet, and water was available ad libitum through polycarbonate bottles attached to the cages. Anesthesia was used to minimize pain and distress during the study. 2% isoflurane was used during blood collection, and challenge was performed under intraperitoneal anesthesia with Zoletil 50 (5 mg/kg). Death of the animals was confirmed by cardiac and respiratory arrest following carbon dioxide overdose.
Six- to nine-week-old female golden Syrian hamsters were randomized into four groups. Groups of 12 hamsters were immunized with either three or one intranasal dose, while the group vaccinated two times had 10 animals. Six animals were assigned to the negative control group (PBS vaccination). Hamsters were immunized three weeks apart with 10 μg/20 μL (10 μL/nare) of S-2P/NE01 per dose. Animals were challenged with SARS-CoV-2 32 days post last dose (as described below), and they were bled three weeks after the last vaccine dose.
Hamster challenge with SARS-CoV-2. Hamsters were challenged 4-5 weeks after the last dose with 1 × 10⁴ PFU of SARS-CoV-2 as described previously [17]. In brief, hamsters in each group were divided into two cohorts and sacrificed three or six days post-challenge for viral load and pathology in lungs, along with collection of nasal wash for upper respiratory viral load. Body weight and survival for each hamster were recorded daily post challenge until sacrifice. Euthanasia, viral load determination, and histopathological examination were performed as described earlier [17].
Quantification of viral titer by cell culture infectious assay (TCID50)
Viral titer determination from lung tissue was performed as described previously [17]. In brief, the lungs were homogenized and clarified by centrifugation, and the supernatant was diluted 10-fold and plated onto Vero cells in quadruplicate for live virus estimation. Similarly, for nasal wash, the sample was centrifuged, diluted, and plated onto Vero cells. Cells were fixed and stained, and TCID50/mL was calculated by the Reed and Muench method [19].
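For readers unfamiliar with the Reed and Muench method, the following is a minimal sketch of the standard calculation, assuming a dilution series scored for cytopathic effect; the function name and the conventions shown are our own illustration of the general method, not code from this study.

```python
def reed_muench_log10_tcid50(dilution_exponents, infected, total):
    """Return log10 of the 50% endpoint dilution.

    dilution_exponents: e.g. [-1, -2, -3, ...], ordered from most
    concentrated to most dilute; infected/total: wells with CPE and
    wells tested at each dilution.
    """
    n = len(infected)
    # infected wells accumulate upward from the most dilute row
    cum_inf = [sum(infected[i:]) for i in range(n)]
    # uninfected wells accumulate downward from the most concentrated row
    cum_uninf = [sum(total[j] - infected[j] for j in range(i + 1))
                 for i in range(n)]
    pct = [100.0 * ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]
    for i in range(n - 1):
        if pct[i] >= 50.0 > pct[i + 1]:
            pd = (pct[i] - 50.0) / (pct[i] - pct[i + 1])  # proportionate distance
            step = dilution_exponents[i + 1] - dilution_exponents[i]
            return dilution_exponents[i] + pd * step
    raise ValueError("50% endpoint not bracketed by the dilution series")

# Illustrative example: 8 wells per 10-fold dilution
print(reed_muench_log10_tcid50([-1, -2, -3, -4, -5],
                               [8, 8, 6, 2, 0], [8] * 5))  # -> -3.5
```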
Real-time PCR for SARS-CoV-2 RNA quantification
SARS-CoV-2 RNA levels were measured using an established RT-PCR method to detect the envelope gene of the SARS-CoV-2 genome. RNA obtained from both lungs and nasal washes was analyzed for SARS-CoV-2 RNA levels as described previously [17,20].
Histopathology
As described previously [21,22], the left lungs of the hamsters were fixed with 4% paraformaldehyde for 1 week. The lungs were trimmed, processed, paraffin-embedded, sectioned, and stained with Hematoxylin and Eosin (H&E), followed by microscopic scoring. The assessment of the pathological changes was done using the scoring system from the previous experiments, in which nine different areas of the lung sections are scored individually and averaged. In brief, a score of 0 was given to sections with no significant findings; a score of 1 for minor inflammation with slight thickening of alveolar septa and sparse monocyte infiltration; a score of 2 for apparent inflammation with alveolar septa thickening and interstitial mononuclear inflammatory infiltration; and a score of 3 and above for diffuse alveolar damage with increased infiltration [17].
Determination of serum and BAL S-2P specific IgG and IgA by ELISA
Serum and bronchoalveolar lavage (BAL) samples were evaluated for S-2P-specific IgG and IgA antibody responses by ELISA. Briefly, 96-well Immulon 4HBX plates (Thermo Scientific, Cat# 3855) were coated with 1 μg/mL of S-2P and blocked using 5% BSA in PBS, and two-fold serially diluted serum or BAL samples were added onto the plate. Titers were determined using Sheep Anti-Mouse IgG-HRP (Jackson Immunoresearch, Cat# 515-035-071) or Rabbit Anti-Mouse IgA-HRP (Rockland, Cat# 610-4306). The endpoint titer (EPT) was determined by extrapolating from the closest OD values above and below the cutoff value (three times the mean background) and calculating the average of these two values.
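As an illustration of the endpoint-titer rule described above, the following is a minimal sketch, assuming a two-fold dilution series with one OD reading per dilution; the averaging of the two bracketing dilutions follows the description in the text, while the names and example values are our own.

```python
def endpoint_titer(dilutions, od_values, mean_background):
    """Return the endpoint titer as the average of the two dilutions whose
    OD readings bracket the cutoff (three times the mean background)."""
    cutoff = 3.0 * mean_background
    for (d_hi, od_hi), (d_lo, od_lo) in zip(
            zip(dilutions, od_values), zip(dilutions[1:], od_values[1:])):
        if od_hi >= cutoff > od_lo:  # signal crosses the cutoff here
            return (d_hi + d_lo) / 2.0
    return None  # no crossing found within the tested range

# Illustrative 2-fold series starting at 1:100
dils = [100 * 2**i for i in range(8)]
ods = [2.1, 1.8, 1.2, 0.7, 0.35, 0.15, 0.08, 0.05]
print(endpoint_titer(dils, ods, mean_background=0.04))  # -> 4800.0
```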
Neutralization assays
The SARS-CoV-2 VSV pseudotype neutralization assay was performed at IBT Bioservices. In brief, serum samples from the mouse immunogenicity study were serially diluted two-fold and mixed with 10,000 RLU of rVSV-SARS-CoV-2 pseudovirus, in which the G gene of VSV is replaced with the firefly luciferase reporter gene and the S protein of SARS-CoV-2 is incorporated as the membrane protein on the surface of the VSV pseudotyped virus. The mixture was incubated at 37˚C for 1 hour. Following incubation, the mixture was added to monolayers of Vero cells in triplicate and incubated for 24 hours at 37˚C. After 24 h, firefly luciferase activity was detected using the Bright-Glo™ luciferase assay system (Promega Corporation, Cat# E2610). ID50 values were calculated using the XLfit dose-response model.
The serum samples from hamsters were analyzed for neutralizing antibody titers using a lentivirus expressing the full-length wild-type Wuhan-Hu-1 strain SARS-CoV-2 spike protein as described previously [16]. Briefly, serum samples were heat-inactivated, serially diluted 2-fold in MEM with 2% FBS, and mixed with equal volumes of pseudovirus. The samples were incubated at 37˚C for 1 hour before being added to plated HEK293-hACE2 cells. Cells were lysed 72 hours post incubation, and relative luciferase units (RLU) were measured. ID50 and ID90 (50% and 90% inhibition dilution titers) were calculated, deeming uninfected cells as 100% and the virus-transduced control as 0%.
Lung and spleen cytokine assay
Lungs and spleens were dissected and manually disrupted to generate single-cell suspensions to be used in the Luminex and ELISpot assays. Contaminating red blood cells were lysed using 0.8% ammonium chloride with EDTA. The lymphocytes were washed with media, resuspended, and plated at 5 × 10⁵ cells per well in a 96-well flat-bottom plate. The cells were stimulated with or without S-2P (5 μg/mL) and incubated at 37˚C with 5% CO₂. After a 72-hour incubation, the culture supernatants were collected and the Luminex assay was performed according to the manufacturer's protocol (EMD Millipore, Cat# MCYTOMAG-70K).
Lung and spleen B-cell ELISpot
Single cell suspensions from lungs and spleens of mice were stimulated with mouse IL-2 (R & D Systems, Cat # 402-ML; 0.5 μg/mL) and RD848 (Mabtech, Cat # 3611-5X; 1 μg/mL) for 3 days to induce nonspecific polyclonal expansion. At the end of 3 days, the cells were washed and plated onto PVDF ELISpot filter plates coated with anti-mouse IgG or IgA capture antibody (Mabtech, Cat# BASIC 3825-2H and BASIC 3835-2H). The plates were incubated at 37˚C for 24 hours, following which the cells were stained with biotinylated S-2P antigen. Antigen-specific IgG-or IgA-producing B cells were detected using streptavidin-HRP. The spots were counted in AID ELISpot reader and expressed as spot forming units/million cells.
Statistical analysis
Data were compared between groups using GraphPad Prism software. Unpaired Mann-Whitney nonparametric tests, one-way ANOVA with post hoc Tukey-Kramer corrections, and Kruskal-Wallis tests with corrected Dunn's multiple comparison test were used to assess statistical significance. Data are presented as mean and 95% CI.
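For reproducibility outside Prism, the following is a minimal sketch of equivalent tests in Python, with illustrative example data; scipy's Mann-Whitney and Kruskal-Wallis functions mirror the tests named above, while the Dunn's post hoc step would require an additional package (e.g. scikit-posthocs) and is omitted here.

```python
from scipy import stats

group_a = [120, 340, 560, 210, 480, 390, 270, 650]  # illustrative titers
group_b = [45, 80, 60, 110, 95, 70, 55]
group_c = [400, 520, 610, 380, 450, 500, 470, 530]

# Unpaired, nonparametric two-group comparison
u_stat, p_mw = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

# Nonparametric comparison across three or more groups
h_stat, p_kw = stats.kruskal(group_a, group_b, group_c)

print(f"Mann-Whitney p = {p_mw:.4f}, Kruskal-Wallis p = {p_kw:.4f}")
```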
Intranasal immunization with S-2P/NE01 induces humoral immune response in mice
Data presented in Fig 1 show a significant induction of serum S-2P-specific IgG after either intranasal or intramuscular vaccination. The route of vaccination did not impact seroconversion, as all animals generated similar levels of anti-S-2P antibodies. However, increased levels of IgA were detected only after intranasal vaccination, with no detectable levels of antigen-specific IgA in any of the IM-vaccinated animals. A cell-based neutralization assay utilizing an rVSV-pseudotyped SARS-CoV-2 (Table 1) revealed that after 3 IN immunizations with S-2P/NE01, neutralizing antibodies were generated in the sera of all mice (8/8) with a GM IC50 > 8000. Additionally, all mice (7/7) from the 2 IN S-2P/NE01 immunization group generated neutralizing antibodies but had a substantially lower GM IC50 of 1375. Antibodies generated from 3 IM S-2P/alum vaccinations had neutralizing activity equivalent to 3 IN immunizations.
Intranasal immunization induces mucosal immunity in mice
Mucosal immunity is defined by the induction of secretory IgA at mucosal surfaces and the homing of immune cells to these tissues. Antigen-specific homing of B cells to mouse lungs and spleens was measured by ELISpot assay. There was a significant increase in homing of S-2P-specific IgG-producing B cells to the lungs after intranasal vaccination (2.5-fold increase in spot-forming units) and spleens (over 3.5-fold increase) compared to intramuscularly S-2P/alum-vaccinated animals. In addition, only intranasal immunizations selectively produced B cells secreting S-2P-specific IgA in both spleens and lungs, suggesting a tissue-resident memory B-cell response to the antigen, which supports a strong mucosal immune response conferred by this adjuvanted vaccine (Fig 2).
Balanced Th1/Th2 and Th17 immune response induced by intranasal immunization in mice
Cell-mediated immune responses were assessed in lung cells stimulated with S-2P antigen in a cytokine release assay. Th1 immune responses were evaluated by measuring IFNγ and TNFα production. IL-4 and IL-5 levels were used to assess Th2 responses. Th17 activity was measured by the release of IL-17A, the hallmark of mucosal immunity. As seen in Fig 3A, a significant induction of IFNγ was seen in lung tissue from S-2P/NE01 IN-immunized animals. These levels were statistically significant when compared to the levels in the lungs of S-2P/alum-immunized mice. The Th2 immune response was significantly increased by both intranasal and intramuscular immunizations (Fig 3B). However, a statistically significant induction of IL-4 in the lungs was seen in S-2P/alum-immunized mice (p < 0.05) compared to S-2P/NE01, although this result is not surprising, as alum is known as a strong Th2-stimulating adjuvant. Mucosal immunity in the lungs was significantly stimulated by intranasal vaccination, as evidenced by increased IL-17A levels in the immunized animals (Fig 3C): S-2P/NE01-immunized mice demonstrated a more robust IL-17A response, with a 100-fold higher increase compared to S-2P/alum-immunized mice. Together, these data suggest that intranasal immunization with the NE01-adjuvanted vaccine elicited a balanced Th1/Th2/Th17 immune response with production of tissue-resident memory T cells in the lung, which will be beneficial for strong mucosal immunity.
Intranasal immunizations induce highly efficient neutralizing antibodies in hamsters
To examine the vaccine efficacy of IN S-2P/NE01, a Syrian hamster model was selected due to its SARS-CoV-2 pathogenesis and clinical symptoms of weight loss and fulminant pneumonia [23]. In this study, the same immunization protocol used in the mouse study was followed, with dosing three weeks apart. Only IN immunizations were performed in this study, comparing the efficacy of one versus two and three doses. Hamsters were challenged intranasally four weeks post last dose with 10⁴ PFU/hamster of SARS-CoV-2 isolate hCoV-19/Taiwan/4/2020. Animals were bled for serology prior to viral challenge to determine the systemic immune response. Although two different animal models were assessed, the pattern of the immune response was similar, as both models developed neutralizing antibodies after at least two doses. Statistically significant induction of neutralizing antibodies was seen in hamsters that received either two or three S-2P/NE01 immunizations, with a GMT for the fifty-percent inhibition dose (ID50) of 825 after three IN vaccinations and 493 after two, and GMTs for ID90 of 195 and 104, respectively, as assessed by pseudovirus neutralization assay. No induction of neutralizing antibodies was seen after one intranasal dose of the S-2P/NE01 vaccine (Fig 4).
Intranasal immunizations protect hamsters from SARS-CoV-2 challenge
Protection in the hamster challenge model is measured as a change in body weight after SARS-CoV-2 infection. In this study, hamsters that received either 2 or 3 IN doses of S-2P/NE01 gained between 1 and 2% of body weight, measured every day until six days post-challenge. In contrast, animals immunized with 1 dose of S-2P/NE01 showed a weight loss similar to the control animals. Lung viral load at three and six days post-challenge, measured by RT-PCR to detect viral RNA and by cell culture infectious assay (TCID50), showed a significant decrease in hamsters that received 2 or 3 IN doses of S-2P/NE01. Upper respiratory tract infection was measured in nasal washes collected at 3 and 6 days post-challenge. Both two- and three-dose immunized hamsters showed a two-fold decrease in viral load as measured by TCID50 at 3 days post-challenge compared to control. However, six days post-challenge, the viral loads were below the limit of detection even in the control. A significant decline in the number of copies of the viral genome was observed six days post-challenge in the nasal washes collected from the three-dose group (Fig 5). These results correlated with the body weight changes and the levels of neutralizing antibodies, indicating that two or three intranasal doses can protect hamsters from both upper and lower respiratory tract infection by SARS-CoV-2.
Intranasal immunizations do not induce lung pathology
Lung sections from the hamsters were scored and analyzed for pathological changes after infection. No differences in pathology were seen between the immunized groups and control at three days post-challenge. At 6 days post-challenge, animals immunized either 2 or 3 times still had no detectable lung abnormalities, while animals in the control group and the 1-dose immunized group showed significantly increased lung pathology with extensive immune cell infiltration and diffuse alveolar damage (Figs 6 and S1). These results indicate that two or three doses of the S-2P/NE01 vaccine induce a robust systemic immune response, in addition to local immunity, thereby enhancing viral clearance from the lungs and nasal cavity and protecting hamsters from SARS-CoV-2 infection.
Discussion
Licensed SARS-CoV-2 vaccines have shown remarkable efficacy against infection and hospitalization. However, an increased rate of infections has been observed in vaccinated people, contributing to the rise of a fourth and fifth wave of infections in countries that achieved high rates of vaccination after second or third immunizations. The rise of infections coincided with reduced SARS-CoV-2 antibody titers as well as the spread of new variants of concern, especially the highly contagious Omicron (BA.1) variant, in addition to other localized variants: Alpha (B.1.1.7), Beta (B.1.351), Gamma (P1), and Delta (B.1.617.1). Immune evasion can be observed with these variants through antigenic drift in the receptor-binding domain, leading to reduced efficacy of vaccine-induced neutralizing antibodies. In spite of these observations, all COVID-19 vaccines still exhibit high efficacy against hospitalization and severe disease [24,25]. Administration of a booster dose to those vaccinated six months or more after the last dose has been proposed as a remedy to boost serum antibodies, which in turn could reduce SARS-CoV-2 infections and transmission, thus reducing the chances for the emergence of new variants [26]. We believe that any proposed solution based on boosting serum antibodies by administration of a third vaccination to influence nasal colonization and spread of the virus is a temporary one, as intramuscular immunization does not elicit mucosal immunity, the only permanent and efficient solution to the problem. Our mucosal adjuvant NE01 offers a potentially long-lasting induction of mucosal and systemic immunity, achieved by intranasal administration of an NE01-formulated/adjuvanted vaccine. Intranasal vaccination using the nasal NE01 adjuvant/delivery system has shown unique attributes, including elicitation of mucosal Th17 responses, IgA, serum IgG, and homing of IgG- and IgA-producing B and T cells to reside in mucosal tissues. These attributes were absent when vaccines were delivered intramuscularly. In addition, our adjuvant induces IL-17. Current clinical evidence has shown that Th17 polarization in COVID-19 patients can be associated with poor disease outcomes facilitated by eosinophilic infiltrates in the lungs [27]. However, NE01-intranasal vaccines have been previously evaluated in primary animal models for RSV (cotton rats) and pandemic flu (ferrets), eliciting mucosal and systemic immunity that not only prevented disease, but also prevented nasal colonization following intranasal and intratracheal viral challenge [28,29]. In these studies, local, but not systemic, increases of IL-17 were observed in the lung without co-expression of IL-13, which has been associated with severe COVID-19 disease progression in mouse models [30]. Moreover, pre-clinical mouse studies using nanoemulsion-inactivated RSV demonstrated no immunopotentiation, with an absence of mucus hypersecretion and a lack of airway eosinophilia [31]. Since our vaccine platform consistently contributes to balanced T-cell immunity (Th1/Th2/Th17), skewed and potentially damaging T-cell polarizations are likely negated due to NE01's unique adjuvant mechanism of action, which induces homing of memory cells and induction of mucosal immunity at distant mucosal tissues. Our previously reported data showed that intranasal immunization with a bivalent gD2/gB2/NE01 vaccine elicited mucosal immunity that prevented colonization and infection following intravaginal HSV2 challenge in a guinea pig model [32].
Data presented in the current study show that formulation of the SARS-CoV-2 S-2P antigen in NE01 elicited protective immune responses against lung infection and disease, as evidenced by histopathologic scoring. Further, intranasally vaccinated animals exhibited an enhanced reduction of SARS-CoV-2 viral load in the lungs and nasal washes. With the caveat that IM vaccination temporarily reduced nasal colonization following vaccination, our intranasal vaccination outcomes were in line with other data generated in the same hamster model using an S-2P vaccine adjuvanted with a combination of alum and CpG 1018 [16,17], suggesting that intranasal immunization could be as efficient as intramuscular vaccination, with the potential advantage of inducing mucosal immunity that would eliminate the virus at its port of entry.
The NE01 adjuvant is a clinical-stage adjuvant and has been evaluated in several clinical trials, including a phase 1 anthrax vaccine trial and a seasonal flu trial [33]. NE01-adjuvanted vaccines demonstrated a remarkable safety profile and robust mucosal and systemic immunity. Additionally, the exceptional stability (at 5˚C) and ease of administration reduce the complexities involved with ultra-low cold chain storage and allow needle-free administration, making this vaccine attractive to low-income countries [34]. We believe our NE01 technology can play a role in providing a safe and efficacious standalone vaccine to protect against infection and disease. In light of the fact that billions of people have already received IM vaccines and that many vaccines are already licensed and in use, our future development plan includes using this unique intranasal vaccine as a booster for those who have received IM vaccines, in order to boost their systemic immunity and to confer complementary mucosal immunity, with the ultimate goal of eliciting immunity for the prevention of colonization, spread, infection, and disease caused by SARS-CoV-2.
The $P$-wave charmonium annihilation into two photons $\chi_{c0, c2}\rightarrow \gamma\gamma$ with high-order QCD corrections
In this paper, we present a new analysis of the $P$-wave charmonium annihilation into two photons up to next-to-next-to-leading order (NNLO) QCD corrections by using the principle of maximum conformality (PMC). The conventional perturbative QCD prediction shows strong scale dependence and deviates largely from the BESIII measurements. After applying the PMC, we obtain a more precise scale-invariant pQCD prediction, which also agrees with the BESIII measurements within errors, i.e. $R={\Gamma_{\gamma\gamma}(\chi_{c2})} /{\Gamma_{\gamma\gamma}(\chi_{c0})}=0.246\pm0.013$, where the error corresponds to $\Delta\alpha_s(M_\tau)=\pm0.016$. By further considering the color-octet contributions, even the central value can be brought into agreement with the data. This shows the importance of a correct scale-setting approach. We also give a prediction for the corresponding ratio for $\chi_{b0, b2} \to\gamma\gamma$, which could be tested at the future Belle II experiment.
Charmonium decays have been widely used to explore the interplay between perturbative and nonperturbative dynamics due to their relatively clean platform, and they also play important roles in establishing the asymptotic freedom of quantum chromodynamics (QCD) [1,2]. Among them, much attention has been paid to the electromagnetic decays χ_{c0,c2} → γγ. They have been measured by the CLEO and BESIII collaborations [3,4]; in particular, in 2017 the BESIII collaboration issued their measured value for the R-ratio,

$$R_{\rm exp} = \frac{\Gamma_{\chi_{c2}\to\gamma\gamma}}{\Gamma_{\chi_{c0}\to\gamma\gamma}} = 0.295 \pm 0.014 \pm 0.007 \pm 0.027, \qquad (1)$$

where the errors are statistical, systematic, and the associated errors of the branching fraction B(ψ(3686) → γχ_{c0,c2}) and the total decay width Γ_{χ_{c0,c2}}, respectively. On the other hand, these decays have been calculated using various approaches, such as the nonrelativistic potential model, the nonrelativistic QCD theory (NRQCD), the relativistic quark model, and lattice QCD, cf. Refs. [5-16] and references therein. Within the framework of NRQCD factorization theory, it has been observed that the leading-order (LO) prediction is close to the experimental measurements, but this process is extremely sensitive to high-order QCD corrections and relativistic corrections, due to the fact that the typical magnitude of the strong coupling constant and the squared relative velocity of the charm quark in charmonium, $\alpha_s(m_c) \sim v_c^2 \sim 0.3$, are comparatively large. It is important to compute as many perturbative terms as possible so as to achieve a more accurate pQCD prediction. And in order to obtain a convincing fixed-order prediction, the influence of high-order corrections on χ_{c0,c2} → γγ must be carefully analyzed.
At present, the QCD corrections to the S-wave heavy quarkonium electromagnetic/leptonic decays have been calculated up to next-to-next-to-leading order (NNLO). The spin-singlet decays $\eta_c\to\gamma\gamma$ and $\eta_b\to\gamma\gamma$ were computed to NNLO in Refs. [17,18], and the spin-triplet decays $J/\psi\to e^+e^-$ and $\Upsilon\to e^+e^-$ to NNLO in Refs. [19,20]. In 2016, the NNLO QCD corrections to the P-wave charmonium decays $\chi_{c0,c2}\to\gamma\gamma$ were computed in Ref. [16]; these, however, show large renormalization scale dependence, and the predicted R-ratio cannot explain the BESIII value quoted above. It is important to identify the reason for this discrepancy.
Within the framework of NRQCD, the decay width can be factorized into non-perturbative matrix elements and perturbatively calculable short-distance coefficients, and the R-ratio can be written in terms of the helicity amplitudes $A^{\chi_{cJ}}_{\lambda_1,\lambda_2}$ of the process, with $\lambda=|\lambda_1-\lambda_2|$, $\lambda_{1,2}=\pm 1$, and $J=0,2$ [Eq. (2)]. The amplitude of the P-wave quarkonium electromagnetic decays $\chi_{c0,c2}\to\gamma\gamma$ can be expressed as in Ref. [16], where the color-singlet P-wave long-distance matrix element is related to the first derivative of the radial wave function at the origin; here $N_c=3$ is the $SU_c(3)$ color number and $R'_{\chi_{c0,c2}}(0)$ are the first derivatives of the $\chi_{c0,c2}$ radial wavefunctions at the origin. The spin-splitting effects on the radial wavefunctions of $\chi_{c0,c2}$ are small, and by defining the R-ratio (2) the uncertainties caused by the matrix elements are greatly suppressed. The perturbative part $C^{\chi_{c0,c2}}_{\lambda}(m_c,\mu_r,\mu_\Lambda)$ up to NNLO can be read from Ref. [16], where $\mu_r$ and $\mu_\Lambda$ are the renormalization and factorization scales, respectively.
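For orientation, the explicit equation relating the P-wave color-singlet matrix element to the radial wavefunction derivative is elided in the extracted text above; the standard NRQCD normalization, which we assume but which may differ from the exact convention of Ref. [16], reads:

$$% Hedged reconstruction: textbook normalization of the P-wave color-singlet LDME.
\langle \chi_{cJ} | \mathcal{O}_1(^3P_J) | \chi_{cJ} \rangle \;\simeq\; \frac{3 N_c}{2\pi}\, \big| R'_{\chi_{cJ}}(0) \big|^2 , \qquad N_c = 3 .$$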
In dealing with the perturbative series for the R-ratio, one usually sets $\mu_r=m_c$ so as to eliminate large logarithmic terms in powers of $\ln(\mu_r^2/m_c^2)$, and then varies it within a certain range to estimate the uncertainty. This simple method causes a mismatch between $\alpha_s$ and its perturbative coefficients at each order, breaks renormalization group invariance [21], and leads to the conventional renormalization scheme-and-scale ambiguities. Such ambiguities can be softened to some degree by including higher-order terms. However, owing to its complexity, the exact N$^3$LO corrections to $\chi_{c0,c2}\to\gamma\gamma$ will not be available in the near future, so it is important to find a way to achieve a reliable and accurate prediction from the known NNLO series.
The renormalization scale-setting problem is one of the most important issues in pQCD, with a long history; cf. the review [22]. To address it, we adopt the single-scale approach [23] of the principle of maximum conformality (PMC) to analyze the decay widths of $\chi_{c0,c2}\to\gamma\gamma$ up to NNLO QCD corrections. By using the renormalization group equation recursively, the PMC determines the precise $\alpha_s$ value of the process from the non-conformal $\beta$-terms in the pQCD series [24-28]. After applying the PMC, the resulting series becomes conformal, the magnitude of $\alpha_s$ and the perturbative coefficients become well matched, and exact values are obtained at each order. PMC predictions are renormalization scheme- and scale-independent [29], so the conventional scale-setting ambiguities are eliminated. Due to the perturbative nature of pQCD, a residual scale dependence remains because of unknown higher-order terms [30]; for a PMC series, however, such residual dependence is generally highly suppressed, even for low-order predictions [31].
The R-ratio (2) for $\chi_{c0,c2}\to\gamma\gamma$ can be rewritten in a perturbative form in $a_s=\alpha_s/4\pi$, with an overall prefactor $\Omega$ built from the squared radial wavefunction derivatives [Eq. (5)]. The perturbative coefficients $r_i$ can be decomposed into conformal terms $r_{i,0}$ and non-conformal terms $r_{i,j\neq 0}$ by using the degeneracy relations among different orders [32], where $\beta_0 = 11 - \frac{2}{3}n_f$ is the one-loop $\beta$-function coefficient and $n_f$ is the number of active flavors. Using the NNLO results given in Ref. [16], we obtain the explicit coefficients [Eq. (10)], in which $e_c$ denotes the charm-quark electric charge.
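The explicit degeneracy-relation equations are elided in the extracted text; for reference, the generic PMC decomposition of an NNLO series conventionally takes the following form, which we assume matches the paper's labeling up to notation:

$$% Hedged sketch of the standard PMC decomposition at NNLO.
r_1 = r_{1,0}, \qquad r_2 = r_{2,0} + \beta_0\, r_{2,1}, \qquad \beta_0 = 11 - \tfrac{2}{3}\, n_f .$$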
After applying the standard procedures of the PMC single-scale approach [23] to the pQCD series (5), we obtain a conformal series in which all non-conformal terms are absorbed into a single effective scale $Q_*$, the PMC scale, determined by requiring all non-conformal terms to vanish. It replaces the individual PMC scales of each order in the PMC multi-scale approach [27,28] in the sense of a mean value theorem. At present, using the known NNLO series, the PMC scale can be fixed at leading-log accuracy [Eq. (12)], in terms of $\hat r_{i,j} = r_{i,j}|_{\mu_r=m_c}$. Note that $Q_*$ is independent of any choice of renormalization scale $\mu_r$; together with the scale-invariant conformal coefficients, this eliminates the conventional renormalization scale ambiguity. Using Eq. (12), we obtain $Q_* = 0.250$ GeV, which is close to the QCD asymptotic scale. Because the effective momentum flow $Q_*$ of the process is close to the asymptotic scale $\Lambda$, we need a low-energy model for $\alpha_s$ to achieve a reliable prediction. A variety of low-energy $\alpha_s$ models have been suggested in the literature [33-41], and a comparison is given in Ref. [42]. In the present paper, for clarity, we adopt the CON model. It is derived from continuum theory [41]: the exchanged gluons carry an effective dynamical mass $m_g$, and the non-perturbative dynamics of the gluons is determined via the Dyson-Schwinger equation. The explicit CON low-energy $\alpha_s$ is given in Eq. (13), with $m_g = 500 \pm 200$ MeV [35]. The asymptotic scale $\Lambda$ is fixed from $\alpha_s$ measured at a typical energy scale: using $\alpha_s^{\overline{\rm MS}}(M_\tau) = 0.325 \pm 0.016$ [43], we obtain $\Lambda|_{n_f=3} = 0.383^{+0.029}_{-0.031}$ GeV, $\Lambda|_{n_f=4} = 0.324^{+0.029}_{-0.029}$ GeV, and $\Lambda|_{n_f=5} = 0.223^{+0.022}_{-0.023}$ GeV. Fig. 1 shows the running of $\alpha_s$ at different scales; the CON model with $m_g = 500^{+200}_{-200}$ MeV is adopted in the low-energy region, shown by the shaded band. A smooth connection between the low-energy and high-energy regions is obtained with the matching scheme of Ref. [44]: requiring the first derivatives of $\alpha_s$ to agree at the crossing point of the two regions fixes the transition scale at $0.933^{+0.183}_{-0.191}$ GeV. For the numerical calculation, we take the c-quark pole mass $m_c = 1.68$ GeV [45] and set the factorization scale $\mu_\Lambda = 1$ GeV. Table I lists the contributions of each loop order to $R_c$ under the conventional and PMC scale-setting approaches. The conventional predictions are highly $\mu_r$-dependent, both order by order and in total; e.g., as shown by Eqs. (14)-(16), the renormalization scale uncertainty of $R^{\rm Conv.}_{c,\rm total}$ within the range $\mu_r \in [1\,{\rm GeV}, 2m_c]$ is about $\left({}^{+57\%}_{-99\%}\right)$: $R^{\rm Conv.}_{c,\rm total}|_{\mu_r=m_c} = 0.067$ and $R^{\rm Conv.}_{c,\rm total}|_{\mu_r=2m_c} = 0.105$.
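The explicit CON expression for Eq. (13) is elided in the extracted text above. For orientation, a minimal numerical sketch is given below, assuming the commonly quoted Cornwall-type form $\alpha_s^{\rm CON}(Q^2) = 4\pi/\{\beta_0\ln[(Q^2+4m_g^2)/\Lambda^2]\}$ with a constant effective gluon mass; this functional form and the function names are our illustrative assumptions, not necessarily the exact expression of Ref. [41]:

```python
import numpy as np

def alphas_con(Q, Lam=0.383, mg=0.5, nf=3):
    """Hedged Cornwall-type low-energy coupling (assumed form of Eq. (13)):
    the effective gluon mass mg (GeV) freezes alpha_s as Q -> 0,
    instead of letting the one-loop coupling blow up at Q = Lambda."""
    beta0 = 11.0 - 2.0 / 3.0 * nf
    return 4.0 * np.pi / (beta0 * np.log((Q**2 + 4.0 * mg**2) / Lam**2))

# The coupling freezes in the infrared: alpha_s(Q -> 0) stays finite,
# which is what makes an evaluation at the PMC scale Q* = 0.250 GeV possible.
for Q in (0.05, 0.25, 0.5, 1.0, 2.0):
    print("Q = %4.2f GeV  alpha_s = %.3f" % (Q, alphas_con(Q)))
```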
These values deviate from the BESIII measurement [4] by at least $\sim 3.9\sigma$. Moreover, the separate scale uncertainties of the NLO and NNLO terms are about $\left({}^{+59\%}_{-27\%}\right)$ and $\left({}^{+12\%}_{-60\%}\right)$, respectively. After applying the PMC, the conventional scale uncertainty is removed: we obtain $R^{\rm PMC}_{c,\rm total} \equiv 0.246$ for any choice of $\mu_r$, which agrees with the BESIII measurement within errors. Further taking the shift $\Delta\alpha_s(M_\tau) = \pm 0.016$ into account, we obtain $\Delta R^{\rm Conv.}_{c,\rm total}|_{\mu_r=1\,{\rm GeV}} = {}^{+0.026}_{-0.029}$, $\Delta R^{\rm Conv.}_{c,\rm total}|_{\mu_r=m_c} = \pm 0.013$, $\Delta R^{\rm Conv.}_{c,\rm total}|_{\mu_r=2m_c} = \pm 0.007$, and $\Delta R^{\rm PMC}_{c,\rm total} = \pm 0.013$. Fig. 2 shows explicitly how the $R_c$-ratio changes with the choice of $\mu_r$: after applying the PMC, the perturbative series is independent of $\mu_r$, and the conventional renormalization scale ambiguity is removed. The PMC conformal series ensures the scheme independence of the pQCD prediction, and, together with the scale invariance, its behavior reflects the intrinsic perturbative behavior of the series. The NNLO conformal coefficient $r_{2,0} = 126.74$ is larger than the conventional coefficient $r_2 = -59.93^{+50.84}_{-67.92}$ for $\mu_r \in [1\,{\rm GeV}, 2m_c]$, which explains why the PMC magnitude of the NNLO terms is larger than the conventional one. The smaller conventional NNLO coefficient $r_2$ results from an accidental cancellation between conformal and non-conformal terms; this cancellation is highly scale dependent, leading to a scale-dependent series.
A physical observable should not depend on choices of artificially introduced parameters such as the renormalization and factorization scales. We have shown above that applying the PMC removes the conventional renormalization scale dependence from the NNLO R-ratio. Due to heavy-quark spin symmetry, the matrix elements for $\chi_{c0}$ and $\chi_{c2}$ are the same, so the R-ratio avoids the uncertainties caused by different choices of matrix elements. However, because the NNLO coefficient $r_{2,0}$ is explicitly $\mu_\Lambda$-dependent, the R-ratio shows an explicit, and sizable, $\mu_\Lambda$ dependence [16]. We observe that this large factorization scale dependence can be removed by taking the evolution of the matrix elements into consideration: because the anomalous dimensions of $\chi_{c0}$ and $\chi_{c2}$ differ [48], the two matrix elements differ at different $\mu_\Lambda$, which compensates the $\mu_\Lambda$ dependence of the hard part.
Using the evolution equation [9], we obtain the relation in Eq. (18); as required, the difference is an $O(\alpha_s^2)$ effect. Taking $\langle O_1(^3P_J)\rangle|_{\mu_{\Lambda 0}=1\,{\rm GeV}} = 0.107\ {\rm GeV}^5$ [49] as the initial value, we obtain $\langle O_1(^3P_0)\rangle|_{\mu_\Lambda=3\,{\rm GeV}} = 0.139\ {\rm GeV}^5$ and $\langle O_1(^3P_2)\rangle|_{\mu_\Lambda=3\,{\rm GeV}} = 0.124\ {\rm GeV}^5$. Though the differences are small, the net factorization scale dependence of $R_c$ is removed, as shown explicitly in Fig. 3: the flat curves of $R_c$ versus $\mu_\Lambda$ indicate that the $R_c$-ratio is independent of the choice of $\mu_\Lambda$. Fig. 3 also shows that, after applying the PMC, the predicted $R_c$ moves closer to the BESIII value, though a slight gap from the data remains.
According to Refs. [13,50,51], contributions from the color-octet (CO) components of charmonium should not be ignored. For example, it has been argued that the color-octet components shift the decay widths by $\Delta\Gamma^{\rm CO}_{\chi_{c0}} \simeq -0.3$ GeV and $\Delta\Gamma^{\rm CO}_{\chi_{c2}} \simeq -0.227$ GeV [13]. Taking these extra color-octet contributions into consideration, $R^{\rm PMC}_{c,\rm total}$ shifts from 0.246 to 0.299. Fig. 4 shows the $R_c$-ratio with and without the color-octet contributions, where the error bars correspond to $\Delta\alpha_s(M_\tau) = \pm 0.016$ and $\mu_r \in [1\,{\rm GeV}, 2m_c]$; three typical values of $\mu_r$ are adopted, and the PMC prediction is independent of those choices. These results show a better match with the experimental data, so the CO contributions should be included for a sound prediction.
As a final remark, the above analysis can be applied directly to the P-wave bottomonium decays into two photons, which could be measured by the future high-precision Belle II experiment. We present the NNLO $R_b$-ratio under the conventional and PMC scale-setting approaches in Table II. For the numerical calculation, we take the b-quark pole mass $m_b = 4.78$ GeV [45]. Since $\alpha_s(m_b) \sim 0.1$, a better pQCD convergence is observed for the bottomonium case, and the scale uncertainty is smaller: the net NNLO scale error is about 29% for $\mu_r \in [1\,{\rm GeV}, 2m_b]$.
In summary, we have studied the $R_c$-ratio up to NNLO accuracy. Under the conventional scale-setting approach, the renormalization scale uncertainty is large, about $\left({}^{+57\%}_{-99\%}\right)$ for $\mu_r \in [1\,{\rm GeV}, 2m_c]$. By applying the PMC, we obtain a more accurate pQCD prediction free of renormalization scale uncertainty, $R_c|_{\rm PMC} = 0.246 \pm 0.013$, where the error corresponds to $\Delta\alpha_s(M_\tau) = \pm 0.016$. This prediction agrees with the latest BESIII data within errors.
"Physics",
"Chemistry"
] |
Jet Substructure from Dark Sector Showers
We examine the robustness of collider phenomenology predictions for a dark sector scenario with QCD-like properties. Pair production of dark quarks at the LHC can result in a wide variety of signatures, depending on the details of the new physics model. A particularly challenging signal results when prompt production induces a parton shower that yields a high multiplicity of collimated dark hadrons with subsequent decays to Standard Model hadrons. The final states contain jets whose substructure encodes their non-QCD origin. This is a relatively subtle signature of strongly coupled beyond the Standard Model dynamics, and thus it is crucial that analyses incorporate systematic errors to account for the approximations made when modeling the signal. We estimate theoretical uncertainties for a canonical substructure observable designed to be sensitive to the gauge structure of the underlying object, the two-point energy correlator $e_2^{(\beta)}$, by computing envelopes between resummed analytic distributions and numerical results from Pythia. We explore the separability against the QCD background as the confinement scale, number of colors, number of flavors, and dark quark masses are varied. Additionally, we investigate the uncertainties inherent to modeling dark sector hadronization. Simple estimates are provided that quantify one's ability to distinguish these dark sector jets from the overwhelming QCD background. Such a search would benefit from theory advances to improve the predictions, and from the increase in statistics provided by the data to be collected at the high-luminosity LHC.
Introduction
The physics program at the Large Hadron Collider (LHC) has reached a very mature stage. Run II is now completed, and ATLAS and CMS each have $\sim 150$ fb$^{-1}$ of 13 TeV data to explore. This data has already taught us a variety of lessons regarding the Standard Model and beyond, but detection of new physics has thus far remained elusive. Given the strong theory motivations provided by, e.g., supersymmetry and/or WIMP dark matter, most signal regions have been developed to target perturbative extensions of the Standard Model, which yield relatively clean, easily interpretable observables. This is made sharp by the notion of Simplified Models [1-3], which typically introduce one or two new physics states whose dynamics and interactions can be fully captured via a few additional terms added to the Standard Model Lagrangian. However, not all Standard Model extensions have collider signatures that can be captured in the weakly-coupled Simplified Model framework. A good understanding of the novel signal regions associated with more out-of-the-box ideas is crucial to achieving full coverage when searching for new physics potentially being produced at the LHC.
Of particular relevance here is the idea that the dark matter could be a stable remnant of some new strong dynamics that resides in a hidden sector. It is then reasonable to assume the presence of some non-gravitational connection to the visible sector, such that the hidden sector was in thermal contact with the Standard Model at some point in the early Universe. This could result from a renormalizable interaction involving the Higgs, Neutrino, and/or Hypercharge Portals [29-31], or could be due to the exchange of some new mediator. Depending on the properties of the portal, it could be possible to access the hidden sector at the LHC. Furthermore, the dark strong dynamics could obfuscate the resultant signatures, as has been demonstrated concretely through many examples, e.g., lepton jets [32-37], emerging jets [38-40], semivisible jets [41-44], and soft bombs [6,14].
All of these examples share a common characteristic: a hard collision can generate a dark sector parton that subsequently undergoes a dark sector parton shower. This often yields a high multiplicity of soft final state particles, smearing out the kinematics of the underlying partons and making it difficult to distinguish the associated signal against large backgrounds. There is a further practical complication due to the fact that these signatures rely on the presence of dark strong dynamics: the theoretical predictions are not nearly as well understood as in the Simplified Model case. As a result, searches for this class of models are usually designed to be very inclusive, avoiding over-reliance on the details of the modeling. The resulting trade-off between signal significance and systematic error mitigation motivates the work presented here: our goal is to understand the systematic uncertainties associated with making predictions that rely on dark sector strong dynamics. An appreciation of which aspects of the observable can be reliably considered is crucial for the optimization of resulting search strategies.
Specifically, we focus on scenarios where the dark hadrons that result from a dark sector shower promptly decay back to Standard Model hadrons. Our goal is to explore the properties of the resulting jets' substructure, and to quantify the uncertainty inherent to making such predictions. Since substructure is sensitive to a variety of IR effects, such as the dark hadron mass spectrum and hadronization model, our work provides an observable-driven window into the systematic issues associated with making predictions for these strongly coupled dark sector scenarios.
As the use of jet substructure has become routine (see Refs. [45-53] for some reviews), many observables have been proposed to distinguish quarks from gluons or to tag boosted objects, and applications to dark sector showers have also been explored previously [43]. Detailed comparisons of parton- and hadron-level predictions for substructure observables have been performed in the context of the Standard Model, e.g., in the Les Houches 2017 report [54]. Of particular interest here are variables designed to be sensitive to the showering history of a jet, since our goal is to find ways to distinguish QCD jets from those that resulted from showering within a dark sector. We are also interested in taking advantage of advances in analytic calculations that rely on resummation techniques to capture the showering contribution to substructure. To this end, our benchmark observable will be the energy correlation function $e_2^{(\beta)}$ [55], where $\beta$ controls the sensitivity to wide-angle radiation; see Eq. (2.1) below for details. We choose to focus on $e_2^{(\beta)}$ since this family of observables is primarily sensitive to the gauge charge of the associated parton in the underlying hard process, which could be our only handle for uncovering dark shower signatures. (Footnote 1: A number of observables have been considered for quark/gluon discrimination that are expected to provide superior discrimination to $e_2^{(\beta)}$. These include intrinsically IRC-unsafe variables such as track multiplicity or $N_{95}$ [56], as well as more complicated IRC-safe observables that exploit correlations between multiple particles to approximate the behavior of these multiplicity variables [57-59]. The distributions of these observables are dominated by non-perturbative corrections, as discussed in more detail in Sec. 2.1. For QCD, this information can be extracted from suitably chosen control regions, while in the case of a new hidden sector our only recourse is to appeal to phenomenological models whose systematics are challenging to quantify. This significantly reduces the ability to extract meaningful limits using such observables, so we will not consider them in this paper.)

There is a potential concern when predicting the efficiency of jet-substructure-assisted searches. The discriminating power of nearly all substructure observables only becomes calculable if the large logarithms that can appear in perturbation theory are resummed to all orders. If this calculation is performed using a Monte Carlo generator such as Pythia, only the leading logarithms (LL, defined in Sec. 2) are correctly captured, resulting in large expected theory uncertainties, which cannot be quantified by running the generator alone. For QCD studies, such concerns are partially ameliorated by the fact that the parameters of generators are tuned to real data, allowing them to often match the real world better than their formal accuracy would suggest. When looking for physics beyond the Standard Model that we have not yet observed, we have no such recourse. To better address this state of affairs, we take advantage of theoretical technology developed to resum the soft and collinear QCD logarithms that contribute to $e_2^{(\beta)}$ at leading and next-to-leading logarithmic order, along with modern numerical implementations within Pythia. Sensibly enveloping across the spread of the associated predictions allows us to quantify the systematic error band that is the main result of this work.
These error bands can then be utilized to consistently include substructure information into LHC searches for dark sector physics.
Throughout this paper, we assume the dark sector includes $n_F$ families of dark quarks which bind into dark hadrons at energies below some dark confinement scale $\Lambda$, due to a non-Abelian dark $SU(N_C)$ gauge group. Dark quarks are produced with large transverse momentum $p_T \gg \Lambda$, such that they shower and hadronize, yielding jets of dark hadrons. We assume that these dark hadrons decay promptly back to Standard Model quarks, yielding QCD-like jets. We then explore the impact on the $e_2^{(\beta)}$ observable as we vary the dark sector parameters $\Lambda$, $n_F$, and $N_C$, and the effect of making the dark quarks massive. In addition, we provide an approximate characterization of the non-perturbative uncertainties associated with dark hadronization by exploring the impact of varying the phenomenological parameters of the Lund string model [66]. We then use our error bars to estimate the extent to which dark sector showers can be distinguished from QCD when including the impact of substructure.
The rest of this paper is organized as follows. In Sec. 2, we introduce the two-point energy correlation function, which will be used as our benchmark substructure variable. We then review how to calculate this observable to next-to-leading-logarithmic accuracy using traditional resummation techniques. Our enveloping procedure, which combines the analytic predictions with numerical results derived from Pythia, is then introduced, and provides a proxy for the systematic error associated with making a dark substructure prediction. In Sec. 3, we present the extent to which the substructure changes as a function of some of the dark sector parameters: the dark confinement scale $\Lambda$, the number of dark colors $N_C$, the number of dark flavors $n_F$, and the dark quark mass $m_q$. In Sec. 4, we explore the effect of varying the parameters that model dark sector hadronization. In Sec. 5, we estimate our ability to experimentally probe a dark sector jet against the QCD background. We present our conclusions in Sec. 6. In App. A, we detail the expressions used to derive the analytic contributions to our systematic error envelopes.

(Footnote 2: Automating parton showers beyond leading log and leading color is extremely challenging. Some progress towards formalizing the problem was made in Ref. [60], followed by a numerical approach to address aspects of subleading color in Ref. [61]. For recent progress in automating aspects of next-to-leading-logarithm accurate parton showers, see Refs. [62-64], with a recent candidate full proposal in Ref. [65].)

(Footnote 3: Decays to gluons are also possible in principle, but having them dominate the decay rate would require more involved model building.)
Substructure Observables with Error Envelopes
A large array of jet substructure observables and algorithms have been developed, and they are being combined in analyses in increasingly complicated ways. However, the majority of substructure techniques are designed to find evidence of hard processes buried within boosted hadronic events, and as such, most observables are optimized for the identification of distinct multi-prong structures within a jet. A dark sector has no guarantee of producing such structure. Instead, we are interested in observables that are sensitive to the structure of the color charge and gauge group of the radiation making up the parton shower. This problem is closely analogous to the problem of quark/gluon discrimination in QCD, and we may look to prior work in this context for guidance [55, 56, 59, 69-85]. Additionally, we would like to work with infrared and collinear (IRC) safe observables, so that they are perturbatively calculable. This is particularly important for a dark sector search since, unlike the situation for QCD, we have no data from which to extract any of the non-perturbative parameters required to make predictions; there is thus no way to estimate their uncertainties without resorting to ad hoc empirical models. These two considerations almost uniquely limit us to observables that characterize the angular spread of radiation within the jet. A representative choice is the two-point energy correlation function [55], defined as

$$e_2^{(\beta)} = \sum_{i<j \in J} z_i z_j\, \theta_{ij}^{\beta}, \qquad (2.1)$$

where $\beta$ is the angular dependence parameter that determines how sensitive the variable is to the angular distribution of the radiation. The jet algorithm determines the constituent particles of jet $J$ that are summed over in Eq. (2.1). In the context of a hadron collider like the LHC, it is most useful to define $z_i \equiv p_{T_i}/p_{T_J}$ and $\theta_{ij} \equiv R_{ij}/R_0$, where $p_{T_J}$ is the total $p_T$ of the jet, $R_{ij}$ is the Euclidean distance between the $i$-th and $j$-th partons in the $\eta$-$\phi$ plane, and $R_0$ is the jet radius. For brevity, we will usually drop the $(\beta)$ superscript below when making general statements, and will also refer to the two-point energy correlation function simply as the energy correlator when clear from context. Note that $e_2^{(\beta)}$ is equivalent to the $C_1^{(\beta)}$ variable introduced in Ref. [55] and widely used in experimental studies [86-88].
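As a concrete illustration of Eq. (2.1), the following is a minimal sketch of the observable in plain Python; the constituent format (pT, eta, phi) and the function name are our own illustrative choices, not taken from the paper's code:

```python
import math

def e2(constituents, beta=2.0, R0=1.0):
    """Two-point energy correlation e2^(beta) of Eq. (2.1).

    `constituents`: list of (pT, eta, phi) tuples for one jet;
    z_i = pT_i / pT_jet and theta_ij = DeltaR_ij / R0 as in the text.
    """
    pt_jet = sum(pt for pt, _, _ in constituents)
    total = 0.0
    for i, (pt_i, eta_i, phi_i) in enumerate(constituents):
        for pt_j, eta_j, phi_j in constituents[i + 1:]:
            dphi = abs(phi_i - phi_j)
            if dphi > math.pi:                     # wrap phi difference
                dphi = 2.0 * math.pi - dphi
            dr = math.hypot(eta_i - eta_j, dphi)   # distance in eta-phi plane
            total += (pt_i / pt_jet) * (pt_j / pt_jet) * (dr / R0) ** beta
    return total

# Toy two-constituent jet: e2 -> z1 * z2 * (DeltaR/R0)^beta
print(e2([(600.0, 0.05, 0.0), (400.0, -0.05, 0.1)], beta=2.0))
```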
To build intuition, consider a jet with two constituents: in the infrared and collinear limit, the jet mass satisfies $m_J^2 \simeq z_1 z_2\, \theta_{12}^2\, p_{T_J}^2 R_0^2$, so that $e_2^{(2)} \simeq m_J^2/(p_{T_J}^2 R_0^2)$; thus $e_2^{(\beta)}$ can be seen as a generalization of the jet mass that incorporates arbitrary angular dependence. It is also closely related to the family of jet angularities [89,90], without the need to define a jet axis.
Our essential idea is to calculate the distributions of interest analytically and numerically assuming various approximations, and then use these to determine an error bar such that it spans the range of predictions. First, we review the analytic calculation of the resummed substructure distributions at leading and next-to-leading log order, followed by a brief discussion of the numerical implementation using Pythia. Then, we explain how we combine the various approximations into an error envelope in the context of a QCD calculation. This will set the stage for Sec. 3, where we explore the range of predictions for the substructure distributions resulting from a dark sector shower.
Analytics Using Traditional Resummation Techniques
To understand the robustness of the $e_2$ distributions, it is useful to explore the range of predictions that result from analytic techniques for calculating the normalized differential cross section. These formulas were derived in Ref. [91], and we summarize the main steps of the calculations in App. A. The collinear limit of the leading-order $e_2$ distribution generates a collinear logarithm from the integral over the splitting angle $\theta$ and a soft logarithm from the integral over the momentum fraction $z$. Enforcing the kinematics of two-body momentum conservation with a delta function, we can write down the differential distribution for $e_2$, Eq. (2.2), by appealing to the definition in Eq. (2.1).

(Footnote 5: For an $e^+e^-$ collider, a more convenient choice would be $z_i \equiv E_i/E_J$ and $\theta_{ij} \equiv 2 p_i \cdot p_j / E_i E_j$, or the actual Euclidean angle between the $i$-th and $j$-th partons. In the strict collinear limit, all of these definitions collapse to be equivalent, and thus only differ by terms that are non-singular in the small-$e_2^{(\beta)}$ limit. We choose to normalize $\theta_{ij}$ by the jet radius $R_0$ to eliminate the leading dependence on $R_0$.)
In Eq. (2.2), $R_0$ is the jet radius and $p_i(z)$ is the appropriate parton splitting function for a quark-initiated or gluon-initiated jet, with $T_R = \frac{1}{2}$ the index of the quark (fundamental) representation. These splitting functions encode the divergences associated with a shower initiated by the emission of a soft gluon.
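The explicit splitting functions are elided in the extracted text; for reference, the standard leading-order forms, which we assume match the paper's conventions up to normalization, are:

$$% Hedged reconstruction: standard LO splitting functions; conventions may differ from Ref. [91].
p_q(z) = C_F\,\frac{1+(1-z)^2}{z}, \qquad p_g(z) = 2\,C_A\!\left[\frac{1-z}{z} + \frac{z(1-z)}{2}\right] + n_F\, T_R\left[z^2+(1-z)^2\right].$$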
In the limit $e_2 \ll 1$, we can simplify $z(1-z)(\theta/R_0)^\beta \to z(\theta/R_0)^\beta$ by assuming $z \ll 1$. It is then straightforward to evaluate Eq. (2.2). Here $C_q = C_F = \frac{N_C^2-1}{2N_C}$ and $C_g = C_A = N_C$ are the color factors associated with the jet, and $B_q = -\frac{3}{4}$ and $B_g = -\frac{11}{12} + \frac{n_F T_R}{3 C_A}$ encode the subleading terms in the splitting functions that arise from hard collinear emissions. Identifying the characteristic logarithm $L \equiv \ln(1/e_2)$, the cumulative distribution at leading order exhibits a characteristic double logarithm in the limit of small $e_2$. This shows that perturbation theory breaks down in the limit of small $e_2$, so we would like to resum this double logarithm to obtain a convergent prediction. The authors of Refs. [94,95] derived a concise expression, Eq. (2.6), for the next-to-leading logarithmic (NLL) resummation of the cumulative distribution for recursively IRC-safe observables such as $e_2$, written in terms of a "radiator" $R_i$ [Eq. (2.7)], with $R'_i \equiv dR_i/dL$ and $\kappa = z\theta p_{T_J}$, and where $R_{1,i}$ is the fixed-order (FO) correction at next-to-leading order, which allows one to match (in the Log-R scheme [96]) between the resummed and perturbative regimes, ensuring that the appropriate kinematic endpoint is respected. Here $G_{2,i}L^2$ and $G_{1,i}L$ are the logarithms appearing in the fixed-order expression (in the collinear limit) that must be subtracted to avoid double counting the resummed logarithms. Simplifying $z(1-z)$ to $z$ in Eq. (2.2) is justified by the identical structure of the two collinear limits, and is compensated by a suitable combinatoric factor, as discussed further in App. A.
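To make the double-log structure concrete, a minimal sketch of the leading-log cumulative distribution is given below, using the textbook fixed-coupling form $\Sigma_{\rm LL}(e_2) \approx \exp[-\frac{\alpha_s C_i}{\pi\beta}\ln^2(1/e_2)]$; this simple exponentiated double log is our own illustrative approximation (it also exhibits the effective coupling $\alpha_s/\beta$ noted in Sec. 2.3), not the paper's MLL+FO result:

```python
import math

def sigma_LL(e2, alpha_s=0.1, Ci=4.0 / 3.0, beta=2.0):
    """Illustrative fixed-coupling LL Sudakov for the e2 cumulative
    distribution: exp[-(alpha_s * Ci / (pi * beta)) * ln^2(1/e2)].
    Ci = CF = 4/3 for a quark jet, Ci = CA = 3 for a gluon jet."""
    L = math.log(1.0 / e2)
    return math.exp(-alpha_s * Ci / (math.pi * beta) * L**2)

# Smaller beta => larger effective coupling alpha_s/beta => stronger Sudakov suppression
for beta in (2.0, 1.0, 0.5):
    print("beta = %.1f  Sigma_LL(1e-3) = %.4f" % (beta, sigma_LL(1e-3, beta=beta)))
```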
In the context of quark/gluon discrimination, a number of observables have been proposed that seemingly satisfy our requirement of being perturbatively calculable while claiming to offer improved discrimination over the energy correlation function above [57-59]. This comes at a price: instead of individual emissions contributing linearly to the observable, each emission's weight depends on the entire shower history. This feature also increases the resulting sensitivity to non-perturbative corrections by reducing the parametric suppression of these effects, and until a more detailed understanding of these features is available, it is difficult to recommend such substructure variables in situations where these effects cannot be constrained by data. Note that even for the better-understood $e_2^{(\beta)}$, the $\beta$ dependence of quark/gluon discrimination has been measured, and it noticeably deviates from the perturbative predictions [97].
An analytic evaluation of $R_i$ is possible, although challenging; see, e.g., Ref. [98]. The calculation of the resulting efficiencies at NLL due to a cut on $e_2$ requires evaluating the gauge coupling $\alpha_s$ at two-loop order in the CMW scheme [99], so that the efficiencies still need to be computed numerically. Another issue is that $\alpha_s$ becomes non-perturbative as the integral is evaluated at low enough scales. To mitigate these complications, we follow the procedure outlined in Ref. [91]: the coupling is run only at one-loop order and is frozen at a "non-perturbative scale" $\mu_{\rm NP} = 7\Lambda$, where the factor of 7 is an arbitrary choice. This allows us to find a closed-form solution to Eq. (2.7) at the expense of limiting its logarithmic accuracy. We will call this approximate evaluation of Eq. (2.6) the "modified leading logarithmic" (MLL) resummed cumulative distribution with FO corrections. All analytic distributions presented are at MLL+FO accuracy (with the exception of Fig. 1).
Numerics From Pythia
Our analytic expressions have the benefit of transparency, in that we can precisely identify the approximations entering the calculations. However, they do not account for important corrections from, e.g., hadronization or finite quark masses, and they do not provide any way for us to assess the impact of dark sector hadronization on our prediction. To address these shortcomings, we compare our results for the $e_2$ observables to those of a Monte Carlo parton shower that models a new confining gauge group. Although all parton showers in common use are formally accurate only to leading log, they include various corrections aimed at modeling certain higher-order effects; e.g., see the Monte Carlo Event Generators review in Ref. [100]. It is worth emphasizing that all such corrections assume QCD, and as such should be revisited in the context of more general confining theories. Specifically, we simulate events using Pythia 8.240 [101]. We simulate $pp$ collisions at $\sqrt{s} = 14$ TeV including initial- and final-state radiation (without multiple parton interactions) for all our events. The signal is generated via a direct portal from $q\bar q$ pairs to dark sector quarks, and the evolution of the dark sector is implemented in Pythia's Hidden Valley module [10-12], including a dark parton shower, hadronization, and decay back to Standard Model states. Events are clustered into anti-$k_t$ jets [102] with radius $R_0 = 1.0$, and $e_2$ is computed for each jet using FastJet 3.3.2 [103], subject to a jet-level cut of $p_T > 1$ TeV.
We briefly comment on the implementation of the parton shower in Pythia's Hidden Valley module. The underlying physics model is the same as that used for the time-like QCD shower. Showering proceeds via the emission of dark gluons from both dark quarks and dark gluons. The dark quarks may be duplicated in up to eight flavors $n_F$, with identical masses and, by default, integer spin (we set the dark quark spin to 1/2 in this paper). Running of the dark gauge coupling is included at one loop for an arbitrary $SU(N_C)$ gauge group, assuming massless quarks. Although the functionality to include an arbitrary dark quark mass spectrum is available, we take the masses $m_q$ to be degenerate throughout this study. We do not include any states charged under both the Standard Model and dark sector symmetry groups, although such states could be considered to extend the range of phenomenological handles in the resulting signal.
A number of aspects of our analytic calculation make its perturbative accuracy greater than that of the Pythia parton shower. Dark gluon splitting into quark pairs is not currently implemented in Pythia; the $P_{q\leftarrow g}(z)$ splitting function is not singular in the soft limit, and therefore contributes beyond LL accuracy. A minimum allowed $p_T$ for emissions controls the termination of the shower at low scales; this threshold may be tuned to data in the case of QCD, but for a dark parton shower it is a parameter that should not be much larger than the confinement scale. Matrix element corrections ensuring the accuracy of parton splitting to one-loop order are included in the QCD parton shower of Pythia but, being model-dependent, not in the Hidden Valley module. Comparing the analytic results to the Pythia predictions will estimate the resulting uncertainties, which are either included in or (in the case of the $p_T$ cut) have no impact on our analytic results. The dark sector is assumed to confine, and hadronization is implemented via the Lund string model [66], which has associated parameters whose values are unknown a priori; we explore the consequences of this in Sec. 4 below. Hadronization proceeds exclusively to dark pions and dark rho mesons, which all decay back to the Standard Model using flat matrix elements (assuming no flavor symmetries leading to stable dark mesons; see, e.g., Refs. [42,44]). Table 1 enumerates the relevant parameters along with their default settings.
Error Envelopes
In this section, we describe the procedure used to compute the error envelopes presented in Sec. 3. To capture the "perturbative" theoretical uncertainty associated with these distributions, we combine a number of variations that probe the systematic uncertainties inherent to making dark shower predictions. First, to incorporate uncertainties in the showering step, we capture the range of parton level predictions by comparing the LL order and the MLL + FO order analytics (which we refer to as MLL in the figures). Next, we compare the MLL order analytics and the parton level numerics, i.e., turning off hadronization. Finally, we compare the parton level and the hadron level numerics to account for the effects of hadronization. For events originating from a dark sector shower, we also compare the dark hadron level and the visible hadron level numerics to capture the effects of decaying dark hadrons and their subsequent recombination into Standard Model hadrons. To construct our error bands, we sum the widths of these comparison sub-envelopes in quadrature to produce an averaged final envelope. The results of this procedure when applied to QCD are presented in Fig. 1. Note that for the later plots we show the central value of the envelope merely to guide the eye; this curve does not simply follow from our analytic results. Then in Sec. 4 below, we investigate the uncertainty due to hadronization modeling. The total error band that includes the perturbative and hadronization errors is then used as the input to our search sensitivity estimates for the LHC presented in Sec. 5.
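The quadrature combination just described can be summarized in a few lines; the sketch below, with hypothetical array names, assumes each sub-envelope is represented by its lower and upper edge evaluated on a common binning in $e_2$:

```python
import numpy as np

def combined_envelope(central, sub_envelopes):
    """Combine sub-envelopes, given as (lo, hi) pairs on a common e2 binning,
    by summing their half-widths in quadrature around a central curve."""
    widths = [0.5 * (hi - lo) for lo, hi in sub_envelopes]
    total = np.sqrt(np.sum(np.square(widths), axis=0))
    return central - total, central + total

# Toy example with three sub-envelopes on a 4-bin distribution.
c = np.array([1.0, 0.8, 0.5, 0.2])
subs = [(c - 0.05, c + 0.05), (c - 0.02, c + 0.08), (c - 0.10, c + 0.10)]
lo, hi = combined_envelope(c, subs)
print(lo, hi)
```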
We note that a common approach to calculating a theory uncertainty is to vary the factorization, resummation, and (when considering exclusive observables) fragmentation scale parameters by a factor of two away from their canonical choices. This is a way of estimating higher-order terms that have not been explicitly computed, by assuming they are dominated by their logarithmically enhanced pieces. The logarithms dominating our distributions are not due to a running effect, so uncertainties in the resummation procedure will not be captured by such an approach. Theoretical uncertainties for resummed calculations typically require more involved multi-scale variational schemes using effective field theory frameworks. The enveloping approach advocated here is designed to incorporate this uncertainty, while also accounting for unknown details of the hadronization and decay properties of the dark sector. Our estimates of perturbative errors are comparable to those of the effective field theory scale-variational approaches for QCD, where similar calculations have been done [106]. Depending on the precise treatment of the normalization when taking scale variations, it is possible to find significantly larger errors below the Sudakov peak (e.g., see Fig. 5 in Ref. [107]), where the interplay between constraints from the integrated cross section calculation and the breakdown of resummation convergence makes uncertainties particularly sensitive to the choice of scheme [108]. However, the resulting effect on signal yields is minimal, since such large uncertainties occur in a very rapidly falling part of the distribution.
Before showing the results from varying the dark sector parameters, we note that the analytic approximation for the radiator $R_i$ used in our calculations is not continuously differentiable; see Eqs. (A.21) to (A.23). This is a consequence of sharply cutting off the integrals at the non-perturbative scale $\mu_{\rm NP}$ introduced in Sec. 2.1, which leads to a kink in the second derivative of the radiator $R_i$. To avoid this issue, we follow Ref. [91] and replace this derivative with a discrete (finite-difference) approximation, where the choice $\delta = 1$ is an additional source of theoretical uncertainty that is negligible to single-logarithmic accuracy.

[Figure 2 caption: predictions derived using the MLL analytic calculation, along with the parton- and hadron-level numerical results from Pythia. Larger (smaller) angular dependence emphasizes the contribution from pairs of partons with larger (smaller) angular distance. The analytic calculations begin to break down for angular dependence values $\beta < 0.5$, reflected in the fact that the $\beta = 0.2$ curve does not terminate at the kinematic endpoint.]

Figure 2 shows the analytic and numeric $e_2$ distributions for QCD jets across various angular dependence values $\beta$. The agreement between the analytic and numeric distributions begins to degrade for low angular dependence $\beta < 0.5$; furthermore, the $\beta = 0.2$ analytic distribution does not terminate at the appropriate kinematic endpoint. We conclude that even though we are working in a region of parameter space where the resummation techniques should be a good approximation, the low angular dependence regime of $e_2$ is not well modeled. For this reason, we focus our analysis on the behavior of $e_2^{(\beta)}$ for $\beta = 0.5$ and $\beta = 2$ to explore the impact of varying $\beta$. From the definition of $e_2$ in Eq. (2.1), increasing $\beta$ gives greater weight to emissions at larger angular distances. Since emissions at large angle within a jet are preferentially softer, giving lower weight to large-angle emissions leads to $e_2$ distributions closer to their kinematic endpoint, behavior that is clearly reflected in Fig. 2. Simultaneously, the distribution of $e_2$ is dominated by emissions in singular regions of phase space, so lower values of $\beta$ provide more sensitivity to the structure of the collinear singularity of the partonic splitting functions. This comes at the cost of a loss of perturbative control: Sec. 2.1 makes clear that the effective coupling in the calculation of $e_2^{(\beta)}$ is $\alpha_s/\beta$, so that for $\beta \ll 1$ perturbative control of the $e_2$ distribution is lost throughout phase space.
Applying the Predictions
In addition to plotting the normalized $e_2$ distributions, we provide a few different ways of presenting the predictions. We show the cumulative cross section, derived by numerically integrating the differential distribution (see the bottom row of Figs. 3, 5, and 7):

$$\Sigma(x_{\rm cut}) = \int_0^{x_{\rm cut}} {\rm d}e_2\, \frac{1}{\sigma}\frac{{\rm d}\sigma}{{\rm d}e_2},$$

where the kinematic endpoint is $e_{2,{\rm max}} = \frac{1}{4} R_0^\beta$. To incorporate the error envelopes, we assume they are fully correlated; in practice, this simply means we compute the upper (lower) error envelope of the cumulative distribution by integrating the upper (lower) edge of the differential distribution. The choice of $x_{\rm cut}$ will be optimized below, when we discuss the discovery potential of dark substructure in Sec. 5.
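A minimal numerical sketch of this integration, assuming the differential distribution is stored as histogram bin centers and values (the array names are hypothetical):

```python
import numpy as np

def cumulative(e2_bins, dsigma, x_cut):
    """Sigma(x_cut): integrate the normalized differential distribution
    d(sigma)/d(e2) from 0 up to x_cut with a simple trapezoidal rule."""
    mask = e2_bins <= x_cut
    return np.trapz(dsigma[mask], e2_bins[mask])

# Fully correlated envelopes: integrate the upper/lower edges separately.
e2 = np.linspace(1e-4, 0.25, 500)
central = np.exp(-((np.log(e2) + 5.0) ** 2))   # toy differential distribution
central /= np.trapz(central, e2)               # normalize to unit area
lo_edge, hi_edge = 0.8 * central, 1.2 * central
print(cumulative(e2, central, 1e-2),
      cumulative(e2, lo_edge, 1e-2),
      cumulative(e2, hi_edge, 1e-2))
```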
We also provide quantitative insight into how different the signal and background distributions are, using the MLL analytic predictions directly; see Figs. 4, 6, and 8. The left and middle panels of these figures provide two different figures of merit, giving a quantitative sense of how well one could distinguish signal from background, here approximated by quark-initiated jets. Specifically, on the left we show ROC curves, i.e., the parametric curve tracing the background rejection $1-\epsilon_B$ as a function of the signal acceptance $\epsilon_S$ as a cut on $e_2$ is varied. The middle panels show the parametric curve for the discovery significance $\epsilon_S/\sqrt{\epsilon_B}$ as a function of the signal acceptance $\epsilon_S$, again from varying a cut on $e_2$. The right panels show the change in signal rate as a function of the dark sector parameter being varied, for a fixed benchmark background rejection, taken to be $1-\epsilon_B = 90\%$. As we explore in the next section, these various presentations of the predictions provide additional insight into the behavior of the $e_2$ observable across the dark sector parameter space.
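The ROC and significance curves described here follow directly from the two distributions; a small sketch, assuming binned and normalized signal and background $e_2$ distributions as inputs, and taking the cut to keep the region $e_2 < e_{2,\rm cut}$ (the direction of the cut is illustrative and should be flipped if the signal populates the other tail):

```python
import numpy as np

def roc_and_significance(e2_bins, sig_pdf, bkg_pdf):
    """Scan a sliding cut over the e2 binning and return the signal
    efficiency eps_S, background rejection 1 - eps_B, and eps_S/sqrt(eps_B)."""
    eps_s = np.array([np.trapz(sig_pdf[e2_bins <= c], e2_bins[e2_bins <= c])
                      for c in e2_bins])
    eps_b = np.array([np.trapz(bkg_pdf[e2_bins <= c], e2_bins[e2_bins <= c])
                      for c in e2_bins])
    signif = np.divide(eps_s, np.sqrt(eps_b),
                       out=np.zeros_like(eps_s), where=eps_b > 0)
    return eps_s, 1.0 - eps_b, signif
```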
Distinguishing Dark Substructure from QCD
Now that we have established a method to estimate the theoretical uncertainties inherent to calculating substructure distributions, we will apply this technology to explore the range of predictions one can expect from a dark sector including error bars. This demonstrates the behavior of the dark sector as a function of its parameters. In particular, we will highlight how the uncertainties depend on the parameters. While we incorporate the effects of hadronization in this section, we set the hadronization parameters to their default values. The results presented here will be combined with an estimate of hadronic uncertainties in Sec. 4, which are computed by varying the non-perturbative parameters. These are then used as the inputs to the estimates performed in Sec. 5, where we study to what extent it is possible to distinguish dark sector showers from QCD via substructure measurements. Note that we have made the simplifying assumption that the QCD background is entirely composed of quark jets in what follows, since the signal will dominate in the central region of the detector. A more realistic study should of course incorporate a more sophisticated modeling of the background. However, since our uncertainties are dominated by the signal modeling, a more careful accounting of the quark/gluon composition of the background should be a subdominant effect.
Λ Dependence
In this section, we explore the dependence on the dark sector confinement scale $\Lambda$. The plots shown in Fig. 3 compare the $e_2$ distribution for a dark-quark-initiated jet against the QCD-quark background, for a range of confinement scales $\Lambda$ and for two choices of $\beta$. As the confinement scale increases, the dark sector distribution shifts toward larger values of $e_2$: a larger confinement scale implies that the dark sector coupling is larger than the QCD coupling at the energy scale of the jet, so the peak of the differential distribution occurs at a larger value of $e_2$, or equivalently, the resummation condition $\alpha L^2 \sim 1$ becomes relevant at larger values of $e_2$. The distribution therefore peaks closer to the kinematic endpoint.
In the bottom row of Fig. 3, we provide the cumulative distribution $\Sigma(x_{\rm cut})$ for the various choices of $\Lambda$. For $\beta = 2$, the envelope saturates at $x_{\rm cut} = 10^{-3}$ for large values of $\Lambda$ and shifts toward $x_{\rm cut} = 10^{-4}$ as $\Lambda$ decreases; the range of this envelope is 0.22, insensitive to the size of $\Lambda$. Similarly, for $\beta = 0.5$, the envelope saturates at $x_{\rm cut} = 10^{-2}$; its range increases as $\Lambda$ decreases, from a minimum of 0.26 to a maximum of 0.40. As Fig. 4 shows, the discriminatory power of a dark sector signal against a QCD background increases as the dark sector's confinement scale $\Lambda$ increases. However, this increased discrimination power saturates for large confinement scales $\Lambda \gtrsim 50$ GeV. The saturation is caused by freezing the running coupling at the "non-perturbative scale" $\mu_{\rm NP} = 7\Lambda$, which we emphasize is a nonphysical prescription designed to obtain a closed-form solution to Eq. (2.7). Using the explicit dependence of $\mu_{\rm NP}$ on $\Lambda$, we can derive a naïve small-coupling expansion for the discriminator, which provides a reasonable estimate of the scaling until $\Lambda \sim 5$ GeV, where the approximation begins to fail. This can be traced back to the behavior of Eqs. (A.21) to (A.23), from which we infer that as the confinement scale increases, the non-perturbative effects become relevant at larger values of $e_2$.
$N_C$ Dependence
In this section, we explore how the substructure depends on the number of dark colors $N_C$. The plots shown in Fig. 5 compare the $e_2$ distribution for a QCD-quark-initiated jet against a dark-quark-initiated jet for various choices of the number of dark colors $N_C > 3$. As the number of dark colors increases, the $\beta$-function of the dark sector gauge coupling becomes more negative, so the scale evolution is faster for the dark sector than for the QCD background. This faster running shifts the dark sector distribution toward smaller values of $e_2$, since $\alpha < \alpha_s$ at the scale set by the jet $p_T$. In the bottom row of Fig. 5, we provide the cumulative distribution $\Sigma(x_{\rm cut})$ for the various choices of $N_C$. For $\beta = 2$, the envelope saturates at $x_{\rm cut} = 10^{-4}$, regardless of the value of $N_C$; its range is 0.22, insensitive to the size of $N_C$. Similarly, for $\beta = 0.5$, the envelope saturates at $x_{\rm cut} = 10^{-2}$; its range increases as $N_C$ decreases, from a minimum of 0.36 to a maximum of 0.40.
As Fig. 6 shows, the discriminatory power of a dark sector signal against the QCD background decreases as the number of dark colors $N_C$ increases. However, this decrease is rather marginal, and it saturates for $N_C \sim 10$. We can understand this behavior analytically by expanding the LL resummed cumulative distribution, Eq. (A.6), to leading order in $\alpha$: the resulting $N_C$ dependence makes clear that the discriminator quickly asymptotes as $N_C$ increases, explaining the qualitative behavior in the figures, i.e., that the sensitivity of the observables studied here to the number of dark colors is minimal.
$n_F$ Dependence
In this section, we explore how the substructure depends on the number of dark flavors $n_F$. The plots shown in Fig. 7 compare the $e_2$ distribution for a dark-quark-initiated jet against a QCD-quark-initiated jet for a range of dark flavors with $n_F > 5$; note that we take the number of flavors for QCD to be $n_F = 5$. As the number of dark flavors increases, the magnitude of the $\beta$-function of the dark sector coupling $\alpha$ decreases; in particular, the dark sector is no longer asymptotically free when $n_F > \frac{11 N_C}{4 T_R}$. This implies that the renormalization group evolution is slower for the dark sector than for the QCD background, which shifts the dark sector distribution toward larger values of $e_2$, since $\alpha > \alpha_s$ at the characteristic hard scale of the jet.
In the bottom row of Fig. 7, we provide the cumulative distribution $\Sigma(x_{\rm cut})$ for the various choices of $n_F$. For $\beta = 2$, the envelope saturates at $x_{\rm cut} = 5 \times 10^{-4}$ for large values of $n_F$ and shifts toward $x_{\rm cut} = 10^{-4}$ as $n_F$ decreases; the range of the envelope is 0.24, insensitive to the size of $n_F$. Similarly, for $\beta = 0.5$, the envelope saturates at $x_{\rm cut} = 10^{-2}$, regardless of the value of $n_F$; its range increases as $n_F$ increases, from a minimum of 0.34 to a maximum of 0.50. While we are limited in how many flavors we can give the dark sector if we want it to confine, the differential distribution shifts toward larger values of $e_2$ as $n_F$ increases.
As Fig. 8 shows, the ability to discriminate a dark sector signal against a QCD background increases with the number of dark flavors, and the effect grows rapidly, scaling as $n_F^{-1}$. The dark flavor dependence can be estimated by expanding the LL resummed cumulative distribution of Eq. (A.6) to leading order in the coupling, which yields the estimate of Eq. (3.3). While naively this implies that we should be able to find regions of parameter space that are very non-QCD-like, the framework breaks down for $n_F > \frac{11 N_C}{4 T_R}$, because the dark sector then does not confine, as mentioned above. Practically, Pythia limits the number of dark flavors to at most eight, so we are not able to numerically probe the discriminator beyond this point in parameter space. However, the trend agrees between the numeric and analytic calculations, and follows the analytic estimate of Eq. (3.3) to a good approximation.
$m_q$ Dependence
Finally, we explore the impact of varying $m_q$ on the $e_2$ distribution. Since the analytic calculations assume massless partons, we are not in a position to include the analytic contributions in our error envelopes. However, for IRC-safe observables such as $e_2^{(\beta)}$, the mass dependence of our distributions is suppressed by a power of $m_q/\Lambda$ when $m_q \ll \Lambda$, with a negligible effect on our results. This is not the case when the quark masses exceed the confinement scale, since $m_q$ then sets the scale at which the parton shower terminates. In that case, an accurate analytic treatment of finite quark masses is challenging, due to the presence of multiple overlapping logarithms of both $e_2$ and ratios of quark masses and energy scales. The resummation of differential distributions then becomes considerably more involved, so we content ourselves with providing the results of a numerical study, and do not estimate the error band for different choices of $m_q$.
With a degenerate spectrum, the impact of finite dark quark masses within Pythia is limited to stopping the parton shower from emitting at scales below $m_q$ (since the resulting partons would not be able to subsequently hadronize) and to treating the color strings as having massive endpoints during the evolution of the Lund string in the hadronization step. Since gluon splitting to quark pairs is not included, potential finite-mass effects due to radiation dead cones around additional massive quarks from $g \to q\bar q$ splitting play no role. Matrix element corrections in emission, which induce additional mass dependence in analogous QCD showers, are not included. When the quark masses are above the confinement scale, the hadrons are more akin to quarkonia like the $J/\psi$ or $\Upsilon$ than to light mesons like the $\pi$ or $\rho$. Hadronization still occurs, since individual dark quarks cannot decay and can only annihilate once they are bound into hadrons. While the properties of these states may be well approximated by perturbative methods, as long as the multiplicity of quarkonia produced is larger than a few, a parton shower is still expected to provide a good approximation of the final state.

[Figure 9 caption: comparison for different dark quark masses $m_q$, with the other dark sector parameters set to the defaults in Table 1 and the associated dark hadron masses taken to be $2m_q$. Only a numerical study using Pythia is presented; a cubic fit to the distributions guides the eye. Larger dark quark masses move the peak to higher values, due to the cutoff imposed on collinear divergences for emissions from massive quarks. Results are shown for two choices of the angular dependence: $\beta = 2$ [left] and $\beta = 0.5$ [right].]

The result is displayed in Fig. 9, where we compare the $e_2$ distribution for a quark-initiated QCD jet against the Pythia distributions for different choices of the dark quark mass $m_q$; we assume the dark quarks are degenerate and that the dark hadron masses are $2m_q$ for simplicity. The other dark sector parameters are set to the defaults given in Table 1. The peak of the distributions moves to higher values of $e_2$ as $m_q$ is increased. The impact on the distributions is not as dramatic as when we varied $\Lambda$ above (cf. Fig. 3): increasing the quark masses at fixed gauge coupling simply cuts out more of the IR region of the shower phase space, where the sector is becoming strongly coupled. While this affects the multiplicity of dark hadrons produced in a shower, their subsequent decay from a higher rest mass to nearly massless QCD hadrons obscures the impact of the specific mass scale set by $m_q$ on the observable distribution.
Quantifying Hadronization Uncertainties
The enveloping procedure includes variations among predictions that result from either an analytic or a numerical approach, capturing the dominant IR logs that result from showering. When considering sources of systematic uncertainty, it is critical to also investigate the irreducible error on predictions due to incalculable strong-coupling effects. Specifically, the numerical results rely on a phenomenological model of hadronization. In the case of Pythia, the hadronization step uses the Lund string model [66], which models the physics of confinement by iteratively connecting partons to each other with color strings, and breaking these strings by pair-producing quarks from the vacuum when energetically favorable, until an equilibrium configuration is achieved. This approach introduces incalculable parameters, which can be tuned to data in the case of real QCD, but which must simply be set by hand for the dark sector. It is therefore critical to our goals to include the uncertainty associated with these choices. As we show here, hadronization systematics are of the same size as the perturbative ones included in the error envelopes thus far; clearly, they should also be included in searches performed by the experimental collaborations.
The results of varying the hadronization parameters are given in Fig. 10, where all other dark sector model parameters are set to the benchmark values given in Table 1. We then explored the hadronization parameter space to find the choices that resulted in the smallest (largest) number of dark hadrons, which correspond to the parameter choices aLund = 0, bmqv2 = 2, and rFactqv = 0 (aLund = 2, bmqv2 = 0.2, and rFactqv = 2). The hadronization band in Fig. 10 is then computed by taking the envelope across the result of the default hadronization parameters and these two extreme choices. For reference, we also plot the perturbative prediction and provide the error envelope as computed above with default hadronization parameters, and we also show the combination of the two envelopes obtained by adding them in quadrature. The largest impact is that the peak of these distributions shifts; this is expected since the position of the turnover is not under robust theoretical control. We see that the variation from hadronization is of the same order as the perturbative uncertainty. 13 We will use the total envelope in the next section, where we estimate the impact of non-trivial error envelopes on a mock search for dark sector substructure.
Discovering Dark Substructure
Having quantified the perturbative and hadronization theory errors on the prediction for substructure that results from dark sector showering, we briefly turn to estimating the impact of including our error envelopes in a search. Our goal here is simply to estimate the discovery potential. Unsurprisingly, given existing limits, the subtle nature of the signature and the overwhelming size of the QCD background imply that additional handles are required to reduce the background by a factor of O(10^5) if there is to be any hope of seeing evidence for dark substructure signals. For example, in models where some of the dark hadrons are stable, a cut on missing energy could play this role. In this case, ignoring the effect of jet-to-jet fluctuations in the number of unstable mesons, the predictions made above are unchanged, except that the statistics are reduced due to the fact that some particles are missing. We expect the associated theory uncertainty to be a subleading effect. One important mitigating factor is that stringent limits on new physics contributions to QCD distributions already exist from ATLAS [110] and CMS [111]. Since these searches simply look for high p_T jets in the final state, the dark jets would fall in the signal region with essentially equal efficiency to QCD jets. Therefore, our first step in quantifying the discovery reach for models that yield substructure from dark showers is to interpret these bounds as a limit on the dark quark production cross section.
We assume the portal to the dark sector can be modeled by a contact interaction,

$$\mathcal{O}_{\rm CI} = \frac{1}{\Lambda_{\rm CI}^2}\,\big(\bar{q}\,\gamma^\mu q\big)\big(\bar{q}_D\,\gamma_\mu q_D\big)\,, \qquad \text{(5.1)}$$

where q is a Standard Model quark and q_D a dark quark. By hunting for deviations in the tails of jet distributions, ATLAS [110] and CMS [111] have derived comparable limits, Λ_CI ≳ 22 TeV. We emphasize that this limit is essentially unchanged for our model, since the searches do not make any cuts on substructure. We convert this limit on the new physics scale into a bound on the production cross section using an implementation of a B − L extension of the Standard Model [112, 113] publicly available in the FeynRules [114] model database. We take the Z' mass to be large so that the production process q q̄ → Z' → q̄_D q_D is well approximated by Eq. (5.1). Events are simulated using MadGraph5_aMC@NLO [115], taking the model parameters to correspond to the lower bound on Λ_CI. This allows us to compute the cross section for pp → q_D q̄_D, and we then simply interpret the result as the rate for dark quark production. We implement generator-level cuts on rapidity |η| < 2 and transverse jet momentum p_T > 1 TeV. Our dijet background is produced from all 2 → 2 QCD processes, applying the same cuts. This results in a signal cross section σ_S = 5 × 10^{-5} pb, which can be compared to the enormous QCD background σ_B = 13 pb. 14 These cross sections are used to compute the expected number of events for two choices of integrated luminosity: the final Run III data set of 300 fb^{-1} and the complete high-luminosity data set of 3000 fb^{-1}. These values should be interpreted as the number of events that survive a loose "pre-selection" for the search.
Next, we approximate the discovery significance, including the impact of both statistical and systematic uncertainties, using

$$\text{significance} = \frac{S}{\sqrt{S + B + (\delta_S\,S)^2 + (\delta_B\,B)^2}}\,, \qquad \text{(5.2)}$$

where S is the number of signal events, B is the number of background events, and δ_i are their respective systematic uncertainties. Given the already stringent limits on the production of the dark quarks, it is easy to check that using dark substructure alone will not provide enough discriminating power to beat down the QCD background. Therefore, we reframe the question in terms of a background reduction factor ε, which provides an estimate of what one must be able to achieve by incorporating other handles into the search, e.g. missing energy, resonances, and/or displaced objects. 15 To compute ε, we solve Eq. (5.2) using the substitution B → B/ε; larger values of ε correspond to improved discrimination. First, we estimate how large ε would need to be in order to see a 2σ excess of signal events without a cut on substructure, assuming no uncertainty on the signal production rate and assuming the cut has no impact on signal statistics; see the left panel of Fig. 11. This provides a baseline against which we can compare how much improvement can be obtained using substructure. Next, we include the substructure cut, using the models with varying Λ as a concrete example. We assume the theory error bands on the dark sector distributions are fully correlated, just as we did above when computing the cumulative distributions, e.g. Fig. 3. For the QCD background, there is a wealth of data that is used for tuning and calibration, and as such the systematic error bars can be controlled by leveraging a variety of inputs. For the results presented in Fig. 11, we use the background uncertainty δ_B = 0.1 as determined by a recent NNLO calculation [116]. We additionally assume that δ_B does not depend on the substructure cut. As a point of comparison, data-driven approaches currently yield δ_B ∼ 20% [110].
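As a rough numerical illustration of the scale of ε involved, the significance condition can be solved directly. The sketch below is ours, not the paper's analysis code; it uses the cross sections and δ_B quoted above, takes the significance in the form given in Eq. (5.2), and the 20% signal systematic in the last line is an invented placeholder:

import numpy as np
from scipy.optimize import brentq

def significance(S, B, dS, dB):
    # Eq. (5.2): S / sqrt(S + B + (dS*S)^2 + (dB*B)^2)
    return S / np.sqrt(S + B + (dS * S) ** 2 + (dB * B) ** 2)

def required_reduction(S, B, dS, dB, target=2.0):
    # Smallest background reduction factor eps with significance(S, B/eps) >= target.
    f = lambda log_eps: significance(S, B / 10 ** log_eps, dS, dB) - target
    return 10 ** brentq(f, 0.0, 12.0)  # search eps in [1, 1e12]

lumi = 300e3        # 300 fb^-1 expressed in pb^-1
S = 5e-5 * lumi     # signal events surviving pre-selection
B = 13.0 * lumi     # QCD dijet background events
print(required_reduction(S, B, dS=0.0, dB=0.1))  # no signal systematic: O(10^5)
print(required_reduction(S, B, dS=0.2, dB=0.1))  # with a placeholder 20% dS

The first call reproduces the O(10^5) scale quoted in the text for the required background rejection.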
In Fig. 11, we plot the background rejection factor ε required to achieve a 2σ exclusion as a function of the dark confinement scale Λ, obtained by optimizing a substructure cut for each choice of model parameters. In order to explore the impact of the error envelopes, we provide the result with δ_S = 0 in black and δ_S ≠ 0 in red, and we also provide the results for β = 2 and 0.5 to investigate varying the angular dependence parameter. We assume either 300 fb^{-1} or 3000 fb^{-1} of integrated luminosity, which allows us to explore the scaling as the data set size is increased. 16 Most importantly, we see that a cut on substructure improves one's ability to discover these models, even when the systematic error on the signal shape is included. In particular, taking β = 2 and Λ = 20 GeV, the relative change is ∆ = 0.9 (0.6) for no error (with error) for 300 fb^{-1}; the relative change for 3000 fb^{-1} is ∆ = 1.5 (1.4) for no error (with error). 17 This motivates future work quantifying the error envelopes for a wider variety of substructure distributions that could result from dark sector showers, so that cuts on these variables can be properly incorporated into searches. In particular, it is important to include such systematics when deriving limits on signal parameter space, since the non-trivial error bands can result in more realistic exclusion regions. Finally, we note that for the 300 fb^{-1} data set, the optimized value of the cut yields a signal region that is statistics dominated. When we increase the data set size to 3000 fb^{-1}, we find that the optimal signal region has comparable statistical and systematic errors. We conclude that this subtle signature of dark sector physics is an interesting target scenario for the physics program at the high luminosity LHC.

14 ... in the range 1.3 − 1.5 [116].
15 These changes to the model would obviously also impact the limits on signal production rates, i.e., the limit on Λ_CI in Eq. (5.1).
16 It is worth noting that the bound on the scale for the contact operator Λ_CI will also improve with more data, which is not being taken into account here.
Conclusions
In this paper, we explored the theory uncertainties associated with making predictions for a scenario where the presence of a new strongly coupled dark sector leaves its imprint on the substructure of QCD-like jets. We focused on the two-point energy correlation function, $e_2^{(\beta)}$. In particular, we quantified the error resulting from perturbative uncertainties associated with truncating to finite order in the logarithmic and gauge coupling expansions. We also explored the uncertainty due to incalculable non-perturbative hadronization effects. Varying the dark confinement scale Λ had the most pronounced impact on the shape of the resulting distributions. We showed $e_2^{(\beta)}$ to be relatively insensitive to the number of dark colors N_C but observed more striking variations when varying the number of dark flavors n_F. We also briefly explored the dependence on the dark quark mass, although we did not provide an error envelope for these distributions due to the technical limitations discussed above.
We then used these error estimates to quantify one's ability to distinguish dark sector jets from the QCD background. We assumed that current bounds on four-quark contact operators apply, which were used to set the production rate for the dark sector. Achieving sensitivity to this subtle signal requires introducing additional handles into the search strategy that could reduce the QCD background by a factor of O(10^5), assuming little impact on the signal efficiency. Depending on the model, one could implement a cut on missing energy, a requirement of one or more b-tagged jets, or identification of displaced vertices or resonances. These additional uncorrelated features could also impact the interpretation of the limit on the production cross section; a full exploration of the open parameter space for variations of the base model is an interesting topic for future work. 18 This signature may also provide an interesting target opportunity for model agnostic approaches to new physics searches that rely on machine learning, e.g. [117][118][119][120][121][122][123][124]. While such approaches could mitigate the impact of theory uncertainties on the discovery potential of searches using substructure, the importance of uncertainties in setting accurate limits or extracting model parameters in the case of discovery cannot be ignored. Regardless of these details, this study makes clear that a dedicated search that relies on subtle features in substructure will benefit from the full data set collected at the high luminosity LHC, thereby providing a compelling physics target for future experimental efforts.
Moving forward, we acknowledge the practical need for the generalization of the error envelopes presented here to additional substructure variables. It is important to note that properly accounting for the impact of theory errors for a different observable of interest would require a similar study to what we have presented above. In particular, comparable analytic calculations are necessary to characterize theory uncertainties. We do expect that for a class of mass-like observables, i.e., those that display Casimir scaling at LL [55], one would find conclusions broadly similar to the case of e 2 . However, there are cases, e.g. those briefly mentioned at the beginning of Sec. 2, with a sufficiently different structure, such that a dedicated study would be necessary to determine the size and scaling of the errors. In the case of uncertainties which can be reliably characterized via Monte Carlo alone, e.g. hadronization modeling, modern machine learning methods similar to those of Ref. [125] might prove helpful in reducing the effort involved. However, we emphasize that a proper analytic accounting of expected theory errors in a resummed calculation has no true substitute. The work presented here makes the case that a comprehensive characterization of how substructure observables can be most useful for LHC applications should be performed.
A Analytic Calculation
In this appendix, we review the analytic calculation of the e_2 distribution to NLL order. Our discussion closely follows those of Refs. [55, 91], which in turn are based on the framework developed in Refs. [94, 95, 126]. Our primary goal here is to provide some additional clarification on technical points that may be less familiar to readers not as versed in the details of QCD resummation. For a recent introduction to the general principles of final state resummation accessible to non-experts, see Ref. [127].
We begin with the collinear limit of the e_2 distribution, which is doubly divergent due to a collinear logarithm from the angular integral and a soft logarithm from the integral over the so-called splitting functions. These splitting functions p_i(z), which depend on the momentum fraction z, can be used to derive resummed distributions. The leading order (LO) contribution is due to a single emission. This can be simply modeled by integrating the splitting function against a delta function that enforces the 2-body momentum conservation as applied to Eq. (2.1). To this order, the differential distribution is

$$\frac{1}{\sigma}\frac{d\sigma}{d e_2} = \frac{\alpha_s}{\pi}\int_0^{R_0}\frac{d\theta}{\theta}\int_0^1 dz\; p_i(z)\,\delta\!\left(e_2 - z(1-z)\Big(\frac{\theta}{R_0}\Big)^{\beta}\right), \qquad \text{(A.1)}$$

where p_i(z) is the appropriate parton splitting function for a quark-initiated jet or a gluon-initiated jet, which are given by

$$p_q(z) = P_{g\leftarrow q}(z) = C_F\,\frac{1+(1-z)^2}{z}\,, \qquad p_g(z) = \tfrac{1}{2}\,P_{g\leftarrow g}(z) + n_F\,P_{q\leftarrow g}(z)\,,$$

with $P_{g\leftarrow g}(z) = 2C_A\big[\tfrac{z}{1-z} + \tfrac{1-z}{z} + z(1-z)\big]$ and $P_{q\leftarrow g}(z) = T_R\big[z^2 + (1-z)^2\big]$. For quark-initiated jets, only P_{g←q} is included, since the function P_{q←q} is not divergent in the soft limit and would effectively double count the jet core. Likewise, for gluon-initiated jets, the factor of 1/2 multiplying P_{g←g} accounts for a double counting that results from the two gluons emerging from a single gluon, while the factor of n_F multiplying P_{q←g} provides the proper counting statistics for the gluon to split into n_F different quark pairs. In the limit where e_2 ≪ 1, we can simplify z(1−z)(θ/R_0)^β ≈ z(θ/R_0)^β by assuming z ≪ 1. It is then straightforward to evaluate Eq. (A.1), which yields

$$\frac{1}{\sigma}\frac{d\sigma}{d e_2} \simeq \frac{2\,\alpha_s C_i}{\pi\,\beta\,e_2}\left(\ln\frac{1}{e_2} + B_i\right),$$

where $C_q = C_F = (N_C^2 - 1)/(2N_C)$ and $C_g = C_A = N_C$ are the color factors associated with the jet, and $B_q = -\tfrac{3}{4}$ and $B_g = -\tfrac{11}{12} + \tfrac{n_F T_R}{3C_A}$ encode the subleading terms in the splitting functions and arise from hard collinear emissions. At LO, the cumulative distribution exhibits a characteristic double logarithm in the limit of small e_2. Denoting the logarithm as $L \equiv \ln\tfrac{1}{e_2}$, one finds

$$\Sigma_{\rm LO}(e_2) = \frac{1}{\sigma}\int_0^{e_2} d e_2'\,\frac{d\sigma}{d e_2'} = 1 - \frac{1}{\sigma}\int_{e_2}^{e_{2,\max}} d e_2'\,\frac{d\sigma}{d e_2'} \simeq 1 - \frac{\alpha_s C_i}{\pi\beta}\left(L^2 + 2B_i L\right). \qquad \text{(A.4)}$$

Note that the first integral is divergent, since we have not accounted for virtual corrections. However, we can sidestep this issue by assuming that the probability to emit anywhere is finite. Instead of computing the missing O(α_s) corrections to the total rate, we instead invoke unitarity to write the integral in the second, finite form, which implicitly includes the virtual corrections. Due to the presence of the logarithm in Eq. (A.4), perturbative control over the differential distribution is lost for small values of e_2. Particles with different color charges are going to give qualitatively different behavior in precisely this limit, and so it is necessary to resum the resulting logarithms to all orders to explore how the distributions differ. To leading-log (LL) accuracy, one can consider the emission of n collinear partons within the jet as independent, with the scale of the (one-loop) coupling for each splitting m chosen at the relative transverse momentum scale κ_m = z_m θ_m p_{TJ}. Virtual corrections do not change the kinematics, so they will contribute to the distribution for any value of the observable, whereas real emissions will only contribute if the kinematic configuration is such that the emission angle is smaller than the jet radius. At LO, virtual corrections only yield a divergent correction to the tree-level value of e_2 = 0. Thus, to LL accuracy, the resummed cumulative distribution can be computed by simply summing over all emissions off the initial parton while treating them as uncorrelated. In the small z limit, and taking the second form of the integral in Eq. (A.4) to work with finite quantities, the resummed cumulative distribution is given by a sum over independent real emissions, with the virtual emissions entering with the same matrix element as real emissions by unitarity (modulo a sign difference) [94, 95]. The series is readily resummed into a single term, correct to double logarithmic accuracy:

$$\Sigma_{\rm LL}(e_2) = e^{-R_i(e_2)}\,.$$

The function R_i is called the radiator for the jet, and it captures the Sudakov double logarithms associated with the IR divergences that result from soft or collinear emissions from the hard parton. In the fixed coupling approximation, the radiator takes the form

$$R_i(e_2) \simeq \frac{\alpha_s C_i}{\pi\beta}\left(L^2 + 2B_i L\right),$$

so that expanding Σ_LL to leading order in the radiator recovers the LO behavior in Eq. (A.4). At NLL order a number of new effects appear: multiple emissions, the two-loop running coupling, and non-global logarithms that arise from out-of-jet emissions falling within the cone. The resummed cumulative distribution can be improved to single logarithmic accuracy by explicitly summing over uncorrelated emissions, 19 working in Laplace space with conjugate variable ν, where R_i here is the Laplace space version of the expression in Eq. (A.6). Logarithmic accuracy in ν tracks the logarithmic accuracy in e_2, since they are Laplace conjugates of each other. Therefore, to derive the NLL cumulative distribution, one must compute the radiator to single logarithmic accuracy in ν. Expanding about ν^{-1} = e_2 gives

$$\Sigma_{\rm NLL}(e_2) = N\,\frac{e^{-\gamma_E R_i'}}{\Gamma\!\left(1 + R_i'\right)}\,e^{-R_i(e_2)}\,, \qquad \text{(A.13)}$$

where $R_i' \equiv \partial R_i/\partial L$, N = 1 + O(α_s) is a matching coefficient that can be determined by comparing with the fixed-order cumulative distribution, γ_E is the Euler-Mascheroni constant, and the radiator R_i is given in Eq. (A.6).

19 The angular ordering condition comes from the fact that when inserting an eikonal emission factor $\sum_i \mathbf{T}_i\,(k_i\cdot\varepsilon)/(k_i\cdot q)$ into an existing matrix element M, the squared matrix element picks up a kinematic factor of $W_{ij} = \frac{k_i\cdot k_j}{(k_i\cdot q)(k_j\cdot q)}$ (A.9). Each such term can be rewritten as $W_{ij} = W_{ij}^{(i)} + W_{ij}^{(j)}$, and the benefit of this rewriting is that every such term satisfies an angular ordering property.

Note that improving predictability to NLL order requires matching the resummed calculation to the fixed-order distribution. To this end, we implement the Log-R matching scheme [96] by first considering the LO cumulative distribution, i.e., the properly integrated form of Eq. (A.1), whose O(α_s) coefficient defines R_{1,i}; for quark jets,

$$R_{1,q} = \frac{C_F}{\beta}\left[-4\,\mathrm{Li}_2\!\left(\frac{1+u}{2}\right) + 3u + \ln^2(1-u) - 2\ln(1+u)\ln(1-u) + \big(4\ln 2 - \ln(1+u)\big)\ln(1+u) - 3\tanh^{-1}u + \frac{\pi^2}{3} - 2\ln^2 2\right],$$

and $u \equiv \sqrt{1-e_2}$. Here u takes values between $u = \sqrt{1-e_{2,\max}} = \sqrt{1-\tfrac{1}{4}R_0^{\beta}}$ and 1. With the Log-R matching scheme, it is straightforward to match the resummed and fixed-order results, where $G_{2,i}L^2$ and $G_{1,i}L$ are the logarithms appearing in the fixed-order expression, which must be subtracted from R_{1,i} to avoid double counting the resummed logarithms. From Eq. (A.4), these logarithms are explicitly $G_{2,i} = -\frac{\alpha_s C_i}{\pi\beta}$ and $G_{1,i} = -\frac{2\,\alpha_s C_i B_i}{\pi\beta}$. Using this analytic form in Eq. (A.16) requires evaluating the radiator R_i, which is given in Eq. (A.6). An analytic evaluation of R_i is possible, although challenging, e.g. see Ref. [98]. The calculation of the resulting efficiencies at NLL due to a cut on e_2 requires evaluating the gauge coupling α_s at two-loop order using the CMW scheme [99], such that efficiencies still need to be computed numerically. Another issue is that α_s becomes non-perturbative as the integral is evaluated at low enough scales. Following the procedure in Ref. [91], the coupling is therefore only run at one-loop order and is frozen at the non-perturbative scale µ_NP ≡ 7Λ. These choices result in a closed-form solution for R_i while limiting its logarithmic accuracy, so Eq. (A.16) provides a modified leading logarithmic (MLL) resummed cumulative distribution with FO corrections. All analytic distributions presented above are to LL or MLL+FO accuracy, but strictly not accurate to full NLL order.

The prescription of freezing the coupling at the non-perturbative scale µ_NP ≡ 7Λ leads to an explicit form for the radiator, where $\bar\mu \equiv \mu_{\rm NP}/(p_T R_0)$ is the relevant scale associated with the non-perturbative transition. Finally, we write down the explicit expressions for the radiator functions that are used here. Their form depends on the choice of the angular dependence β, with one expression valid for β > 1 and another for β < 1.
Detection of Routing Misbehavior in Manets with 2ack Scheme
The routing misbehavior in MANETs (Mobile Ad Hoc Networks) is considered in this paper. Routing protocols for MANETs [1] are commonly designed based on the assumption that all participating nodes are fully cooperative. However, node misbehaviors may take place, due to the open structure and scarcely available battery-based energy. One such routing misbehavior is that some nodes will take part in the route discovery and maintenance processes but refuse to forward data packets. In this paper, we propose the 2ACK [2] scheme that serves as an add-on technique for routing schemes to detect routing misbehavior and to mitigate its effect. The basic idea of the 2ACK scheme is to send two-hop acknowledgment packets in the opposite direction of the routing path. To reduce extra routing overhead, only a fraction of the received data packets are acknowledged in the 2ACK scheme.
A. MOBILE ADHOC NETWORK
Mobile ad hoc networks (MANETs) are self-configuring and self-organizing multi-hop wireless networks in which the network structure changes dynamically. In a MANET, nodes (hosts) communicate with each other via wireless links either directly or relying on other nodes as routers [3]. The nodes in the network not only act as hosts but also as routers that route data to/from other nodes in the network. The operation of MANETs does not depend on preexisting infrastructure or base stations. Network nodes in MANETs can move freely and randomly.
C. MANET ROUTING
The goal of MANET routing is to find and maintain routes over a dynamic topology, with possibly uni-directional links, using minimum resources. The use of conventional routing protocols in a dynamic network is not practical, because they place a heavy burden on mobile computers and their convergence characteristics do not suit the needs of dynamic networks well enough [5]. For example, any routing scheme in a dynamic environment such as an ad hoc network must consider that the topology of the network can change while the packet is being routed and that the quality of wireless links is highly variable. In wired networks the structure is mostly static, which is why link failures are infrequent; routes in a MANET must therefore be recalculated much more frequently in order to achieve the same response level as wired networks. Routing schemes in MANETs are classified into four major groups, namely proactive routing, flooding, reactive routing, and hybrid routing [6].
D. MISBEHAVIOUR OF NODES IN MANET:
Ad hoc networks increase total network throughput by using all available nodes for forwarding and routing. Therefore, the more nodes that take part in packet routing, the greater the overall bandwidth, the shorter the routing paths, and the smaller the possibility of a network partition. However, a node may misbehave by agreeing to forward packets and then failing to do so, because it is selfish, overloaded, broken, or malicious [7].

An overloaded node lacks the buffer space, CPU cycles, or available network bandwidth to forward packets. A selfish node is unwilling to spend CPU cycles, battery life, or available network bandwidth to forward packets not of direct interest to it, even though it expects others to forward packets on its behalf. A malicious node creates a denial of service (DoS) [7] attack by dropping packets. A broken node might have a software problem which prevents it from forwarding packets.
A. THE 2ACK SCHEME
The main idea of the 2ACK scheme is to send two-hop acknowledgment packets in the opposite direction of the routing path. In order to reduce additional routing overhead, only a fraction of the received data packets are acknowledged in the 2ACK scheme. The scheme thus detects the misbehaving nodes, eliminates them, and chooses another path for transmitting the data. The watchdog detection mechanism has a very low overhead. Unfortunately, the watchdog technique suffers from several problems such as ambiguous collisions, receiver collisions, and limited transmission power [8]. The main issue is that the event of successful packet reception can only be accurately determined at the receiver of the next-hop link, but the watchdog technique only monitors the transmission from the sender of the next-hop link. Noting that a misbehaving node can be either the sender or the receiver of the next-hop link, we focus on the problem of detecting misbehaving links instead of misbehaving nodes. In the next-hop link, a misbehaving sender or a misbehaving receiver has a similar adverse effect on the data packet [8]: it will not be forwarded further. The result is that this link will be tagged. The 2ACK scheme significantly simplifies the detection mechanism.
B. DETAILS OF THE 2ACK SCHEME
The 2ACK scheme is a network-layer technique to detect misbehaving links and to mitigate their effects. It can be implemented as an add-on to existing routing protocols for MANETs, such as DSR. The 2ACK scheme detects misbehavior through the use of a new type of acknowledgment packet, termed 2ACK. A 2ACK packet is assigned a fixed route of two hops in the opposite direction of the data traffic route. Suppose that N1, N2, N3 and N4 are four consecutive nodes (a tetra) along a route [9]. The route from a source node, S, to a destination node, D, is generated in the Route Discovery phase of the DSR protocol. When N1 sends a data packet to N2 and N2 forwards it to N3 and so on, it is unclear to N1 whether N3 or N4 receives the data packet successfully or not. Such an ambiguity exists even when there are no misbehaving nodes. The problem becomes much more severe in open MANETs with potentially misbehaving nodes.

The 2ACK scheme requires an explicit acknowledgment to be sent by N3 and N4 to notify N1 of the successful reception of a data packet: when node N3 receives the data packet successfully, it sends out a 2ACK packet over two hops to N1 (i.e., in the opposite direction of the routing path), carrying the ID of the corresponding data packet. The tetra N1, N2, N3, N4 is derived from the route of the original data traffic.

Such a tetra is used by N1 to monitor the links N2, N3, N4. For convenience of presentation, we term N1 in the tetra N1, N2, N3, N4 the 2ACK packet receiver or the observing node, and N4 the 2ACK packet sender. Such a 2ACK transmission takes place for every tetra along the route. Therefore, only the first router from the source will not serve as a 2ACK packet sender, and the last router just before the destination and the destination itself will not serve as 2ACK receivers.
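In implementation terms, the observing node's side of the scheme is essentially counter bookkeeping: it registers a fraction R_ack of the packets it forwards, expects a matching 2ACK within a timeout, and declares the link misbehaving when the miss ratio exceeds a threshold. The Python sketch below is our own illustration of this bookkeeping, modeled on the scheme's description rather than taken from an actual 2ACK implementation:

import random

class TwoAckObserver:
    """Bookkeeping kept by the observing node N1 for the link it monitors."""

    def __init__(self, r_ack=0.2, miss_threshold=0.3, timeout=2.0):
        self.r_ack = r_ack                # fraction of data packets to acknowledge
        self.miss_threshold = miss_threshold
        self.timeout = timeout            # seconds to wait for a 2ACK
        self.pending = {}                 # packet id -> time it was forwarded
        self.observed = 0                 # packets selected for acknowledgment
        self.missed = 0                   # packets whose 2ACK never arrived in time

    def on_forward(self, pkt_id, now):
        # Called when N1 forwards a data packet; only a fraction is tracked.
        if random.random() < self.r_ack:
            self.pending[pkt_id] = now
            self.observed += 1

    def on_2ack(self, pkt_id, now):
        # Called when a 2ACK for pkt_id arrives back at N1.
        sent_at = self.pending.pop(pkt_id, None)
        if sent_at is not None and now - sent_at > self.timeout:
            self.missed += 1              # arrived, but too late: counts as a miss

    def expire(self, now):
        # Called periodically; times out 2ACKs that never arrived.
        for pkt_id, sent_at in list(self.pending.items()):
            if now - sent_at > self.timeout:
                del self.pending[pkt_id]
                self.missed += 1

    def link_misbehaves(self):
        return self.observed > 0 and self.missed / self.observed > self.miss_threshold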
III. APPLICATION
Ad hoc networks are suited for use in situations where an infrastructure is unavailable or where deploying one is not cost effective.

A mobile ad hoc network can also be used to provide crisis management services, such as in disaster recovery, where the entire communication infrastructure is destroyed and restoring communication quickly is crucial. By using a mobile ad hoc network, an infrastructure could be set up in hours instead of weeks, as is required in the case of wired-line communication. The well-known IEEE 802.11 or Wi-Fi protocol also supports an ad hoc network system in the absence of a wireless access point [9]. Another application example of a mobile ad hoc network is Bluetooth, which is designed to support a personal area network by eliminating the need for wires between various devices, such as printers and personal digital assistants [10].
IV. ADVANTAGES
As compared to the watchdog, the 2ACK scheme has the following advantages:

1) Flexibility [9]: one advantage of the 2ACK scheme is its flexibility to control overhead through the use of the R_ack parameter.

2) Reliable data transmission: this deals with the reliable transfer of a file from source to destination. The file needs to be stored at the source for a certain amount of time even after it has been transmitted. This helps to resend the file if it gets lost during transmission from source to destination.

3) Reliable route discovery [10]: reliable route discovery deals with discovering a multi-hop route for wireless transmission. Routing in a wireless ad hoc network is complex; it depends on many factors, including finding the routing path, selection of routers, topology, and protocol.

4) Limited overhearing range [10]: a well-behaved N3 may use low transmission power to send data toward N4. Due to N1's limited overhearing range, it will not overhear the transmission successfully and will thus infer that N2 is misbehaving, causing a false alarm. This problem occurs due to the potential asymmetry between the communication links. The 2ACK scheme is not affected by the limited overhearing range problem.

5) Limited transmission power: a misbehaving N2 may maneuver its transmission power such that N1 can overhear its transmission but N4 cannot. This problem matches the receiver collisions problem. It becomes a threat only when the distance between N1 and N2 is less than that between N2 and N3, and so on. The 2ACK scheme does not suffer from the limited transmission power problem.

V. CONCLUSION

The proposed system is a simulation of the algorithm that detects misbehaving links in mobile ad hoc networks. The 2ACK scheme identifies misbehavior in routing by using a new acknowledgment packet, called the 2ACK packet. A 2ACK packet is assigned a fixed route of three hops (four nodes N1, N2, N3, N4) in the opposite direction of the data traffic route. The system implements the 2ACK scheme, which helps detect misbehavior through a three-hop acknowledgement. The 2ACK scheme for detecting routing misbehavior is considered a network-layer technique for mitigating the effects of routing misbehavior.
In Figure 1, node A communicates directly with node C, node D and node B. If A wants to communicate with node E, node C must work as an intermediate node for the communication between them. That's why the communication between nodes A and E is multi-hop.
Figure 1: A mobile ad hoc network

B. CHARACTERISTICS OF MANETS:
- Dynamic topology: links are formed and broken with mobility.
- Possibly uni-directional links [4].
- Constrained resources, such as battery power and wireless transmitter range.
Figure 2: Representation of dynamic topology
Figure 3: Scenario for packet dropping and misrouting
Figure 4: The 2ACK scheme. Figure 4 illustrates the operation of the 2ACK scheme.
Molecular Ribbons via Diels-Alder Cycloadditions: Synthesis of Models for Solubilized Polyacenes and Polyacene Polyquinones
Syntheses of linearly fused ribbons of carbocyclic six-membered rings are accomplished by Diels-Alder cycloadditions of a diene (2,3-diheptylidene-1,2,3,4-tetrahydronaphthalene) and a bis-diene (2,3,6,7-tetraheptylidene-1,2,3,4,5,6,7,8-octahydroanthracene) to a bis-dienophile (1,4,5,8-anthradiquinone) and a dienophile (1,4-anthraquinone). The Diels-Alder adducts were dehydrogenated to several more highly unsaturated molecular ribbons.

The electronic structure of polyacene (1), a hypothetical polymer consisting of linearly fused unsaturated six-membered rings, has been of much theoretical interest due to its extended conjugation. For example, Kivelson and Chapman have proposed that this class of materials has the potential to exhibit high temperature superconductivity and ferromagnetism. 1 However, polyacene should be highly insoluble, complicating its purification and handling. The synthesis of a soluble polyacene derivative would be a highly desirable alternative, since such a polymer might be castable as a thin film.

A synthesis of a solubilized polyacene (2) is proposed to be accomplished via reductive deoxygenation of polyacene polyquinone 3, which could be obtained by dehydrogenation of polymer 4. Polymer 4 could, in turn, be prepared by double Diels-Alder cycloaddition of bis-diene 5 to diquinone 6. 2 As a model study for this proposed synthesis, a series of molecular ribbons have been synthesized to qualitatively ascertain the solubilizing ability of hexyl groups in systems related to polyacene. 3 Double Diels-Alder cycloaddition of diene 8 4 to diquinone 6 4 (in the presence of the antioxidant BHT to inhibit radical polymerization of the diene) gave adduct 9 in 73% yield, surprisingly as a single stereoisomer. This adduct was assigned a structure consistent with the endo rule for Diels-Alder cycloadditions and with the stereochemistry unambiguously determined for a related adduct. 5 Furthermore, we believe that it must have resulted from addition of the two diene molecules to opposite sides of the diquinone. Presumably, cycloaddition of diene 8 to diquinone 6 initially gives intermediate monoadduct 10, although this was not observed. The electron-deficient aromatic moiety in 10 may interact favorably through space with the electron-rich one, thus imparting a preferred U-shaped conformation to the core ribbon structure of the molecule. Addition of diene 8 to monoadduct 10 would then occur preferentially, if not exclusively, to the less hindered face of 10, resulting in the overwhelmingly predominant formation of the diastereomer shown for 9.
Aromatization of diquinone 15 with DDQ in benzene at room temperature gave anthracene-containing diquinone 16, while treatment of 15 with DDQ in benzene at reflux gave a 68:32 mixture of naphthacene 17 and anthracene 16, from which a 12% yield of 17 could be obtained.An attempt to produce undecacene diquinone 12 from diquinone 15 by dehydrogenation using DDQ under forcing conditions was not successful.
A molecular ribbon containing nineteen linearly fused carbocyclic six-membered rings was synthesized by first preparing diene 18, the presumed intermediate in the formation of double adduct 14, in 39% yield via 1:1 Diels-Alder cycloaddition of bis-diene 13 to 1,4-anthraquinone.
Diene 18 then underwent double cycloaddition to diquinone 6 to give a 90% yield of the desired molecular ribbon, adduct 19, as a mixture of diastereomers.This mixture could be smoothly oxidized to tetraquinone 20, which was also characterized as a diastereomeric mixture.
Conclusions
The Diels-Alder cycloadditions used to synthesize these ribbons of carbocyclic six-membered rings are high yielding, and partial dehydrogenation of the adducts without aromatization is quite facile. However, aromatization of diquinone 11 to nonacene diquinone 7 is sluggish and low-yielding, and undecacene diquinone 12 could not be prepared from diquinone 15 under any of the conditions tried. On the basis of these model studies it appears that the number and size of the alkyl groups currently being employed are not adequate for solubilizing polyacene-polyquinone polymer 3.
Exploring substrate/ionomer interaction under oxidizing and reducing environments
Local gas transport limitation attributed to the ionomer thin-film in the catalyst layer is a major deterrent to widespread commercialization of polymer-electrolyte fuel cells. So far, the functionality and limitations of these thin-films have been assumed identical in the anode and cathode. In this study, Nafion thin-films on a platinum (Pt) support were exposed to H 2 and air as model schemes, mimicking anode and cathode catalyst layers. Findings indicate decreased swelling, increased densification of the ionomer matrix, and increased humidity-induced aging rates in a reducing environment, compared to oxidizing and inert environments. The observed phenomena could be related to the underlying Pt-gas interaction dictating Pt-ionomer behavior. The presented results could have significant implications for the disparate behavior of the ionomer thin-film in anode and cathode catalyst layers.
Introduction
As polymer-electrolyte fuel cells (PEFCs) gain traction in the energy-device landscape, they face a major hurdle from significant mass-transport losses associated with the ionomer/catalyst interface [1], [2]. Sources of mass-transport losses include: confinement-driven gas transport losses in the ionomer thin-film coating carbon-supported platinum, interfacial resistances caused by structural changes at the local ionomer-platinum boundary, and partial electrochemical deactivation of platinum surfaces [3]- [6]. The latter can impact overall kinetics on platinum (Pt) surfaces [7], [8]; however, such effects on ionomer mass-transport and the interplay with reducing atmospheres are unknown. As a result, an explicit understanding of losses at the ionomer/Pt interface is required for optimal electrode-ionomer design and accelerating market penetration of PEFCs.
Ionomer thin-films cast onto a Pt surface can serve as model systems providing a focused glimpse into the catalyst layer. Although bulk, continuous polycrystalline Pt does not fully capture the Pt nanoparticle phenomena present in real catalyst layers, it can still elucidate surface-specific interactions that impact ionomer properties and morphology [9], [10]. While the impact of the Pt substrate on ionomer performance has been shown [8], [11], efforts to clarify the source of this impact have been contradictory, especially in elucidating the role of water on oxidized and unoxidized Pt surfaces [12], [13]. Additionally, the extent of Pt surface influence on the ionomer during exposure to oxidative/reductive environments remains unexplored. In this study, water-vapor-sorption dynamics of dispersion-cast Nafion thin-films under reducing (H 2), oxidizing (Air), and inert (Ar, N 2) environments are investigated in order to understand the Pt/ionomer interaction in anode and cathode catalyst layers.
Thin-film Preparation
Nafion dispersions (5 wt%, 1100 g/mol SO 3 equivalent weight, Sigma Aldrich) were diluted in isopropanol and spin cast onto Pt-coated Si and Si/SiO 2 wafers to form ~50 nm films. Pt substrates were prepared via e-beam evaporation of a 5 nm Ti adhesion layer followed by 60 nm of Pt. Pt substrates were cleaned with benchtop Ar plasma for 6 minutes prior to casting. Thin-films were annealed at 150°C under vacuum for 1 hr before measurement.
Water-Uptake Measurement
Thickness change of the Nafion films was monitored using in-situ spectroscopic ellipsometry (J.A. Woollam) as detailed in Ref [14]. Measurements shown are the average of at least two separate samples, measured <15 minutes after annealing. To create a consistent water history, all measurements were preceded by an hour of exposure to dry (0%) and saturated (96%) relative humidity (RH) (see Fig 1a for the hydration protocol). The humidity-dependent thickness (L(t,RH)) was an average over the last 10 min at the set humidity. The % change from the dry thickness (L_o) is given by:

$$\Delta L\;(\%) = \frac{L(t,\mathrm{RH}) - L_o}{L_o} \times 100 \qquad (1)$$
Grazing Incidence Small Angle Scattering (GISAXS) Measurements
Pt-coated Nafion films were placed into an in-house built environmental chamber with X-ray transparent Kapton windows as in Ref [6]. The sample was equilibrated in dry H 2 and N 2 gas at room temperature and GISAXS patterns were collected after multiple purges for 5 to 10 minutes in each gas, at varying incidence angles (α i ).
Mechanical-Property Measurement
100 nm Nafion films were prepared on Pt-coated thin Si cantilever wafers (105 μm thickness, approximately 0.5 cm × 4 cm). The sample was clamped in an environmental cell with humidified gas feeds. Constrained swelling due to the substrate results in a compressive force, which bends the Si cantilever. Using a laser array reflected off the backside of the sample, the change in curvature of the cantilever was measured and related to stress-thickness via Stoney's equation; see Ref [15].
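For reference, a standard form of Stoney's relation used in such cantilever experiments is sketched below; the paper itself only points to Ref [15], so the notation here (substrate modulus E_s, Poisson ratio ν_s, and thickness t_s; film stress σ_f and thickness t_f; curvature change Δκ) is our assumption of the usual convention:

$$\sigma_f\, t_f \;=\; \frac{E_s\, t_s^{2}}{6\,(1-\nu_s)}\;\Delta\kappa$$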
Humidity-induced stress-strain curves were generated by combining stress and strain (from ellipsometry, see Equation 1) under the same humidity conditions, and the deformation energy density was calculated by integrating the area under the curve. The reversibility and persisting impact of the gaseous environment on ionomer swelling were explored using humidity cycling, alternating inert and reducing gas exposure. In-situ ionomer thickness change on Pt was monitored over three hydration cycles: first, a single step of dry to 96% RH gas exposure (Cycle 0, gas 1); second, humidity was stepped down to 0% RH prior to hydration cycling (Cycle 1, gas 1); finally, gas 1 was switched and a stepped RH was applied (Cycle 2, gas 2). Here, the dry reference thickness was set to the thickness from Cycle 0.

Figure caption (fragment): ... conditions at a rate that is between that of H 2 - and Ar-only environments, confirming the impact of the H 2 environment. Thickness change in (a) dry and (b) saturated (96% RH), relative to the dry and saturated thickness in Cycle 0 exposed to gas 1, respectively.
Results and Discussion
The findings in Fig. 1 and 2 are consequences of changes at the ionomer/Pt interface induced by the gas/Pt interaction. Surface oxidation of Pt metal can occur via electrochemical and thermochemical pathways [17]. On a thermochemically oxidized Pt surface, exposure to an oxidative gaseous environment like air will enlarge oxidized metal islands on Pt, while exposure to a reducing environment like H 2 can reduce the unstable passivated surface even under ambient conditions [17]- [20]. Pt substrates in this study are likely to exist with some surface oxidation, as they are stored under ambient conditions. This oxide surface continues to grow with continued exposure to an oxidizing environment, or is reduced and saturated with dissociated atomic hydrogen during H 2 exposure; a phenomenon that has been reported experimentally and computationally [10], [21]- [23]. As a result, during exposure to air and H 2, the Pt interface can exist at varied states of oxidation and reduction, resulting in sample-to-sample variability. Nonetheless, adsorbed hydrogen reduces the solid-surface free energy [21], resulting in a more hydrophilic but nonpolar Pt/H interface compared with that of oxidized Pt. This phenomenon was verified by using a bare Pt-coated crystal in a quartz-crystal microbalance, which exhibited significant adsorption of H 2 on the Pt surface when dry, and greater absorption of water when saturated due to greater affinity for water at the Pt/H interface (data not shown). The Pt/H interface lacks strong electrostatic interactions, resulting in possible ionomer restructuring to orient hydrophilic sidechains towards the Pt/H interface, where water molecules are likely to gather, thereby creating a dense region of hydrophobic ionomer away from the interface. In such a scenario, the bulk of the ionomer behaves like a higher equivalent-weight ionomer with lower water uptake. On the other hand, negatively charged oxygen atoms on an oxidized Pt surface, while comparatively less hydrophilic, induce a strong polar dipole and enhance electrostatic interactions between hydronium ions and sulfonic-acid moieties. A similar depression in water-vapor uptake in thin-films on Si/SiO 2 support under H 2 also points towards the impact of oxidized surfaces. Under ambient conditions, growth of a native oxide layer of 1 to 2 nm is expected on a Si substrate. Continued layer-by-layer growth of SiO 2, however, requires the presence of both water and oxygen [24], [25]. Although reduction of the oxide layer is not occurring under an H 2 environment on the Si/SiO 2 support, oxide formation is actively being facilitated under humidified air. These interactions enhance the overall effective water uptake within the ionomer on the oxidized surface, which is consistent with predictions from molecular-dynamics simulations [26]. Figure 3 schematically portrays the balancing impacts of polarity and hydrophilicity in reducing and oxidizing environments. The above hypothesis is supported by morphological changes tracked by GISAXS and the mechanical response of Nafion thin-films on Pt exposed to H 2 and N 2 gases. When the α_i of the x-ray beam is below the critical angle of the polymer film, α_c,film, total external reflection occurs with surface-sensitive scattering [27], whereas above α_c,film the x-ray beam penetrates through the entire film and scattering from the paracrystalline Pt surface is observed. As shown in Figure 4a, the paracrystalline peak is present at α_i = 0.16.

Despite being the least understood component, the gas/ionomer/Pt interface in the catalyst layer bears the utmost duty for PEFC performance. Thus, there is a need for greater understanding of the pairwise interactions between gas/ionomer, ionomer/Pt, and gas/Pt interfaces to reduce critical transport losses and improve electrode design. To that effect, this study focused on how the gas/Pt interaction impacts the Pt surface and ionomer thin-film morphology and properties.

Unexpectedly, reduced swelling, increased densification, decreased deformation energy density, and a continual reduction in effective water uptake in the ionomer during cycling were observed under H 2 relative to oxidizing or inert environments. These observations demonstrate the coupled impact of gas/substrate and ionomer/substrate interactions on the ionomer thin-film's behavior and ultimately its transport properties [28]. Therefore, there is a need for increased electrode-specific investigations and separate ionomer design for anode and cathode catalyst layers. The impact of electronic potential going from oxidation to reduction potentials can also affect the surface-state identity and ionomer thin-film morphology, which is a focus of current research. Furthermore, the existence of a water-rich phase at the Pt/ionomer interface in a reducing environment can impact surface conductivity significantly, which may not occur in an oxidizing environment. The findings herein also indicate heightened vulnerability to delamination of ultra-thin ionomer films in the anode due to increased water-layer thickness and reduced deformation energy density.
A-Evac: The Evacuation Simulator for Stochastic Environment
We introduce an open-source software for fire risk assessment named Aamks. This article focuses on a component of Aamks—an evacuation simulator named a-evac. A-evac models evacuation of humans in the fire environment produced by a zone fire model simulator. In the article, we discuss the probabilistic evacuation approach, automatic planning of exit routes, the interactions amongst the moving evacuees and the impact of smoke on humans. The results consist of risk values based on fractional effective dose and are presented in the form of various probability distributions and evacuation animations. The intended scope of Aamks is buildings, e.g., offices, malls, factories rather than stadiums or streets. We present the need for such software based on the current state of research and existing engineering tools in the probabilistic risk assessment domain. Then we describe a-evac and its details: geometry, path-finding, local movement, interaction with fire, and visualization. Given the above scope, the article contributes to the domain of probabilistic risk assessment by proposing: (a) stochastic approach to evacuation, (b) velocity-based model for evacuation, (c) evacuation software that interacts with fire conditions and zone fire models.
There are scientific methods and models of the fire and the emergency scene, and there are computer implementations, but the complexity of the domain impedes the more widespread use of these tools.
Currently, the most typical approach for assessing the safety of a building is a precise choice of the input parameters for a small number of lengthy, detailed simulations. This procedure is managed by a practitioner, based on their experience. However, based on the literature on heuristics and biases [31, 22], we doubt that human judgement surpasses statistical calculations. The alternative is to let the computer randomly choose the parameters and run thousands of simulations. The resulting collection allows us, after further processing, to judge the safety of the building.
Aamks, the multisimulations platform
We created Aamks, a platform for running simulations of fires and then running the evacuation simulations, but thousands of them for a single project. This is the Monte-Carlo approach. We use CFAST, which is a rough but fast fire simulator. This allows us to explore the space of possible scenarios and assess their probability. The second component of risk, the consequences, is taken from an evacuation simulator capable of modeling evacuation in the fire environment. We use a-evac as the evacuation simulator, which we have built from scratch. The multisimulation is a handy name for what we are doing. Aamks tries to assess the risk of failure of human evacuation from a building under fire. We applied the methodology proposed in [16, 8, 1]: stochastic simulations based on the Simple Monte-Carlo approach [7]. Our primary goal was to develop an easy to use engineering tool rather than a scientific tool, which resulted in: an AutoCAD plugin for creating geometries, a web based interface, predefined setups of materials and distributions of various aspects of building features, etc. The workflow is as follows: the user draws a building layout or imports an existing one. Next, the user defines a few parameters including the type of the building, the safety systems in the building, etc. Finally, they launch a defined number of stochastic simulations. As a result they obtain the distributions of the safety parameters, namely: available safe egress time (ASET), required safe egress time (RSET), fractional effective dose (FED), hot layer height and temperature, and F-N curves, as well as the event tree and risk matrix.
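The multisimulation loop can be pictured with the following Python sketch. It only illustrates the workflow described above; the function names and the sampled distributions are invented for this illustration and do not correspond to Aamks's actual code:

import random
import statistics

def sample_parameters():
    # Hypothetical scenario sampler: draws one scenario from input distributions.
    return {
        "fire_hrr_peak": random.lognormvariate(13.0, 0.5),  # peak heat release rate [W]
        "door_open": random.random() < 0.5,
        "occupants": random.randint(5, 200),
    }

def run_cfast(params):
    # Placeholder for a CFAST zone-model run; returns a dummy fire environment.
    return {"aset": random.uniform(60, 600)}  # seconds

def run_aevac(params, env):
    # Placeholder for an a-evac run; returns a dummy required egress time.
    return {"rset": random.uniform(30, 500)}  # seconds

outcomes = []
for _ in range(10_000):  # thousands of simulations per project
    params = sample_parameters()
    env = run_cfast(params)
    res = run_aevac(params, env)
    outcomes.append((env["aset"], res["rset"]))

# Post-processing: the probability that evacuation fails (RSET exceeds ASET).
p_fail = statistics.mean(1.0 if rset > aset else 0.0 for aset, rset in outcomes)
print(f"P(RSET > ASET) = {p_fail:.3f}")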
Fortran is a popular language for coding simulations of physical systems; CFAST and FDS+Evac are coded in fortran. Since we are not creating a fire simulator, we code Aamks in python, which is more comfortable due to its extremely rich collection of libraries. We decided that borrowing Evac from FDS and integrating it with Aamks would be harder for us than coding our own evacuation simulator, hence a-evac was born. There is also a higher chance of attracting new python developers than fortran developers to our project.
Aamks consists of the following modules:

- a-geom: geometry processing: AutoCAD plugin, importing geometry, extracting the topology of the building, navigating in the building, etc.
- a-evac: directing evacuees across the building and altering their states
- a-fire: CFAST and FDS binaries and processing of their outputs
- a-gui: web application for user's input and for the results visualisation
- a-montecarlo: stochastic producer of thousands of input files for CFAST and a-evac
- a-results: post-processing the results and creating the content for reports
- a-manager: managing computations on the grid/cluster of computers
- a-installer

3 A-evac, the evacuation simulator

In the following subsections we describe the internals of a-evac, sometimes with the necessary Aamks context.
Geometry of the environment
The Aamks workflow starts with a 3D geometry where fires and evacuations will be simulated. We need to represent the building, which contains one or more floors. Each floor can consist of compartments and openings in them, named respectively COMPAS and VENTS in CFAST. Our considerations are narrowed to rectangular geometries. There are two basic ways of representing architectural geometries: a) cuboids can define the insides of the rooms (type-a-geometry) or b) cuboids can define the walls / obstacles (type-b-geometry). CFAST uses the type-a-geometry. We create CFAST geometries from input files of the following format (there are more entities than presented here; an illustration is given below). All the entities in the example belong to the same FLOOR 1. The triplets are (x0, y0, z0) and (x1, y1, z1), encoding the beginning and the end of each entity in 3D space. In practice we obtain these input files from AutoCAD, thanks to our plugin which extracts data from the AutoCAD drawing. There is also an Inkscape svg importer, useful, but without some features. Adding basic support for other graphics tools is not much work.
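The original example input is not reproduced in this text; the lines below are our own hypothetical illustration of such a file, assuming each entity carries a name followed by its two corner triplets (Aamks's exact keywords and layout may differ):

FLOOR 1
COMPA room_1 (0.0, 0.0, 0.0) (5.0, 4.0, 3.0)
COMPA room_2 (5.0, 0.0, 0.0) (9.0, 4.0, 3.0)
VENT door_1_2 (4.9, 1.5, 0.0) (5.1, 2.5, 2.0)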
In later sections we will introduce the problem of guiding evacuees throughout the building. Those modules require the type-b-geometry. We convert from a type-a-geometry to a type-b-geometry by duplicating the geometry, translating the geometry and applying some logical operations. Figure 2 shows the idea.
Fig. 2 The conversion from type-a-geometry to type-b-geometry

There are three aspects of movement when it comes to evacuation modeling [9]: (a) path-finding, for the rough route out of the building; (b) local movement, i.e. the evacuees' interactions with other evacuees, with obstacles and with the environment; and (c) locomotion, for the "internal" movement of the agent (e.g. body sway). A-evac models only (a) and (b).
Path-finding (roadmap)
The simulated evacuees need to be guided out of the building. The type-b-geometry provides the input for path-finding. Each of the cuboids in the type-b-geometry, representing obstacles, is defined by its coordinates. These coordinates represent the corners of the shapes. Since we model each floor of a building separately, we flatten the 3D geometry into 2D and represent obstacles as rectangles. Therefore the type-b-geometry in the path-finding module is represented as a set of 4-tuples of coordinates (x0, y0), (x1, y1), (x2, y2), (x3, y3).

The set of 4-tuple elements is then flattened to a set of coordinates, the bag-of-coordinates. Because the majority of the obstacles share coordinates, we remove duplicates from the set (for the sake of performance). This bag-of-coordinates is then the input for triangulation. We apply Delaunay triangulation [10], which represents space as a set of triangles. Figure 3 depicts the idea of triangulation. The triangles are used as navigation meshes for the agents. The navigation meshes define which areas of an environment are traversable by agents.

After the triangulation of the bag-of-coordinates, some of the triangles are located inside the obstacles; those (by definition) are not traversable, so we remove them. What is left is a traversable-friendly space.
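A minimal sketch of the triangulate-and-filter step, assuming scipy's Delaunay implementation and a centroid-in-rectangle test; the toy geometry (a 4 m × 4 m floor with one rectangular obstacle) is invented for illustration:

import numpy as np
from scipy.spatial import Delaunay

# bag-of-coordinates: floor corners plus the corners of one obstacle
points = np.array([
    [0, 0], [4, 0], [4, 4], [0, 4],     # floor outline
    [1, 1], [2, 1], [2, 3], [1, 3],     # obstacle rectangle
], dtype=float)
obstacles = [((1.0, 1.0), (2.0, 3.0))]  # ((xmin, ymin), (xmax, ymax))

tri = Delaunay(points)

def inside_any_obstacle(simplex):
    # A triangle is discarded when its centroid falls inside an obstacle.
    cx, cy = points[simplex].mean(axis=0)
    return any(xmin <= cx <= xmax and ymin <= cy <= ymax
               for (xmin, ymin), (xmax, ymax) in obstacles)

walkable = [s for s in tri.simplices if not inside_any_obstacle(s)]
print(len(tri.simplices), "triangles,", len(walkable), "walkable")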
We then create the graph of spatial transitions for the agents, based on the adjacency of the triangles obtained from the triangulation. A spatial transition means that an agent can move from one triangle to another.

An agent on an edge of a triangle can always reach the other two edges. For triangles which share edges, this allows an agent to travel from one triangle to another.

The pairs of all neighbouring edges are collected. We use the python networkx module [4], which creates a graph made of the above pairs. For further processing we add the agents' positions to the graph, by pairing them with the neighbouring edges.

The graph represents all possible routes from any node to any other node in the graph. We can query the graph for the route from the current agent's position to the closest exit. It means that the agent will walk through the consecutive nodes and will finally reach the exit door. We instruct networkx that we need the shortest distances in our routes (the default is the fewest hops on the graph) and we obtain the set of edges the agent should traverse in order to reach the exit. Figure 4 depicts the set of edges returned by the graph for an example query.
Fig. 4 The roadmap defined by the graph for an example query. The red line crosses the centers of the edges that an agent needs to travel to reach the exit.
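A minimal sketch of such a query, assuming networkx with Euclidean edge weights; the node names and coordinates are invented for illustration:

import math
import networkx as nx

# nodes are midpoints of shared triangle edges, plus the agent and the exit
coords = {
    "agent": (0.5, 0.5), "e1": (1.5, 1.0), "e2": (2.5, 2.0),
    "e3": (3.0, 3.5), "exit": (4.0, 4.0),
}
G = nx.Graph()
for a, b in [("agent", "e1"), ("e1", "e2"), ("e2", "e3"),
             ("e3", "exit"), ("e1", "e3")]:
    (x0, y0), (x1, y1) = coords[a], coords[b]
    G.add_edge(a, b, weight=math.hypot(x1 - x0, y1 - y0))

# weight="weight" requests the shortest distance, not the fewest hops
route = nx.shortest_path(G, "agent", "exit", weight="weight")
print(route)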
The set of edges returned by the graph cannot be used directly for pathfinding.Neither the vertices of the edges nor the centers of them, do define the optimal path that would be naturally chosen by evacuees during real evacuation.Therefore an extra algorithm should be used to smooth the path.For this purpose we apply funnel algorithm defined in [6].The funnel is a simple algorithm finding straight lines along the edges.
The input for the funnel consists of a set of ordered edges (named portals) from the agent's origin to the destination. The funnel always consists of three entities: the origin (apex) and the two vectors from the apex to vertices on the edges, the left leg and the right leg.
The apex is first set to the origin of the agent and the legs are set to the vertices of the first edge. We advance the left and right legs to the consecutive edges in the set and observe the angle between the legs. When the angle gets smaller, we accept the new vertex for the leg; otherwise the leg stays at the given vertex. After some iterations one of the legs crosses the other leg, defining the new position of the apex. The apex is moved and we restart the procedure.
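A Python port of the well-known "simple stupid funnel" string-pulling variant of this procedure is sketched below; it is a minimal illustration of the idea, not Aamks' exact implementation (which additionally keeps a distance from corners):

```python
def triarea2(a, b, c):
    """Twice the signed area of triangle (a, b, c); the sign encodes
    on which side of the segment a-b the point c lies."""
    return (c[0] - a[0]) * (b[1] - a[1]) - (b[0] - a[0]) * (c[1] - a[1])

def string_pull(portals):
    """portals: list of (left, right) point pairs ordered from the agent's
    origin to the destination; the first and last portals are degenerate
    (left == right). Returns the smoothed path as a list of points."""
    apex = left = portals[0][0]
    right = portals[0][1]
    apex_i = left_i = right_i = 0
    path = [apex]
    i = 1
    while i < len(portals):
        pl, pr = portals[i]
        # Try to tighten the right leg of the funnel.
        if triarea2(apex, right, pr) <= 0.0:
            if apex == right or triarea2(apex, left, pr) > 0.0:
                right, right_i = pr, i
            else:
                # The right leg crossed the left leg: the left vertex
                # becomes the new apex and the scan restarts from there.
                path.append(left)
                apex, apex_i = left, left_i
                left = right = apex
                left_i = right_i = apex_i
                i = apex_i + 1
                continue
        # Try to tighten the left leg of the funnel.
        if triarea2(apex, left, pl) >= 0.0:
            if apex == left or triarea2(apex, right, pl) < 0.0:
                left, left_i = pl, i
            else:
                path.append(right)
                apex, apex_i = right, right_i
                left = right = apex
                left_i = right_i = apex_i
                i = apex_i + 1
                continue
        i += 1
    path.append(portals[-1][0])
    return path
```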
As a result the path is smoothed and defined only by the points where changes of the velocity vector are needed. Moreover, we use an improved version of the funnel algorithm that allows for defining points that keep a distance from the corners, reflecting the size of the evacuee. This allows for modeling impaired evacuees in wheelchairs or hospital beds. Figure 6 depicts a path smoothed by the funnel algorithm.
Fig. 6 The roadmap from the starting point to the exit, smoothed by the funnel algorithm.
Local movement
Local movement focuses on the interaction with (a) other agents, (b) static obstacles (walls) and (c) environmental conditions. A-evac handles (a) and (b) via RVO2, an implementation of the Optimal Reciprocal Collision Avoidance (ORCA) algorithm proposed in [3,32]. Later in this section we describe how we pick the local targets, which is an aspect of (b). Aspect (c) basically alters the agent's state, such as his speed.
RVO2 aims at avoiding the velocity obstacle [11]. The velocity obstacle is the set of all velocities of an agent that will result in a collision with another agent or an obstacle; any other velocity is collision-avoiding. RVO2 aims at ensuring that none of the agents collides with other agents within time τ.
The overall approach is as follows: each of the agents is aware of the other agents' parameters: their positions, velocities and radii (the agent's observable universe). Besides, the agents have their private parameters: a maximum speed and a preferred velocity, which they can auto-adjust provided no other agent or obstacle is on a collision course. With each loop iteration, each agent responds to what he finds in his surroundings, i.e. his own and other agents' radii, positions and velocities. The agent updates his velocity if it lies within the velocity obstacle of another agent. For each pair of colliding agents the set of collision-avoiding velocities is calculated. RVO2 finds the smallest change required to avert the collision within time τ, and that is how an agent gets his new velocity. The agent alters up to half of his velocity, while the other colliding agent is required to take care of his half. Figure 8 a-b depicts the idea of velocity collision avoidance.
The algorithm remains the same for avoiding static obstacles. However, the value of τ is smaller with respect to obstacles, as agents should be more 'brave' about moving towards an obstacle if this is necessary to avoid other agents.
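A hedged sketch of such a loop with the Python-RVO2 bindings is shown below; the parameter values are illustrative, not Aamks' configuration:

```python
# A minimal ORCA loop with the Python-RVO2 bindings; values illustrative.
import rvo2

# Arguments: time step, neighborDist, maxNeighbors, timeHorizon (tau for
# agents), timeHorizonObst (the smaller tau for obstacles), radius, maxSpeed.
sim = rvo2.PyRVOSimulator(0.05, 1.5, 5, 2.0, 0.5, 0.3, 1.35)

a = sim.addAgent((0.0, 0.0))
b = sim.addAgent((5.0, 0.0))

# Obstacle vertices are listed in counter-clockwise order.
sim.addObstacle([(2.0, 1.0), (3.0, 1.0), (3.0, 2.0), (2.0, 2.0)])
sim.processObstacles()

for _ in range(100):
    # Preferred velocities point along the roadmap toward the local target.
    sim.setAgentPrefVelocity(a, (1.0, 0.0))
    sim.setAgentPrefVelocity(b, (-1.0, 0.0))
    sim.doStep()   # ORCA computes the collision-avoiding velocities
```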
It turned out problematic how to pick the local target from the roadmap. Local targets need to be updated (usually advanced, but not always) near the points defined by the funnel algorithm during the path-finding phase (the disks in Figure 7), after they become visible to the agent. However, the disks can be crowded and agents can be driven away from their correct courses by other agents. We carefully inspected all possible states that agents can find themselves in. In order to have clearer insight into and control over the agents inside the disks, we use a finite state machine instead of a plain algorithm block in our code. The state of the agent is defined by 4 binary features: (a) is the agent inside the disk? (b) is the target the agent is walking to the same as the target he is looking at? (c) can the agent see what he is looking at (or are there obstacles in between)? (d) has the agent reached the final node?
Within each iteration of the main loop we check the states of the agents. The states can be changed by the agents themselves, e.g. an agent has crossed the border of the disk, or by our commands, e.g. an agent is ordered to walk to another target. Consider these circumstances: the agent has managed to see his next target and now walks towards it; he is in state S1. But then he loses eye contact with this new target and finds himself in state S2. The program logic reacts to such a state by transiting to state S3: start looking at the previous target and walk towards this previous target. Based on what happens next, we can order a transition to another state or just wait for the agent to change the state himself. By careful examination of all possible circumstances we make sure that our states and their transitions can handle all possible scenarios.
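A hedged sketch of this bookkeeping follows; the state encoding mirrors the four binary features above, while the only transition rule shown is the S1/S2/S3 example from the text, and all names are ours:

```python
# The 4 binary features span 16 possible agent states; names illustrative.
from dataclasses import dataclass

@dataclass
class AgentState:
    inside_disk: bool      # (a) is the agent inside a disk?
    same_target: bool      # (b) walking target == looked-at target?
    target_visible: bool   # (c) is the looked-at target in line of sight?
    at_final_node: bool    # (d) has the agent reached the final node?

def next_targets(state, current_target, previous_target):
    """Return (walk_target, look_target) for the next iteration."""
    if state.same_target and not state.target_visible:
        # S2 -> S3: eye contact with the new target was lost, so fall
        # back to the previous target for both walking and looking.
        return previous_target, previous_target
    # Otherwise keep the current assignment (other transitions omitted).
    return current_target, current_target
```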
In Figure 8c we show how agents pass through a HOLE. Due to our concept of the disks (where the searching for new targets takes place) and due to the internals of RVO2, we gain the desired effect of agents not crossing the very center of the disk. Instead, the agents can walk in parallel and advance to the next target, which looks natural and does not create an unnecessary queue of agents eager to cross the very center of the HOLE.
Evacuation under fire and smoke
Each a-evac simulation is preceded by a simulation of the fire. We have only tested a-evac with CFAST [24,21]. CFAST writes its output to CSV files.
We need to query these CFAST results frequently, therefore we transform and store them in a fast in-memory relational database. For each frame of time we repeatedly ask the same questions: (a) given the agent's coordinates, which room is he in? (b) what are the current conditions in this room?
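A hedged sketch of this caching with an in-memory SQLite database is given below; the CSV column and table names are illustrative, not CFAST's actual headers:

```python
import csv
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE conditions (t REAL, room TEXT, od REAL, temp REAL)")

with open("cfast_output.csv") as f:          # hypothetical CFAST result file
    for row in csv.DictReader(f):
        db.execute("INSERT INTO conditions VALUES (?, ?, ?, ?)",
                   (float(row["Time"]), row["Room"],
                    float(row["OD"]), float(row["Temp"])))

def room_conditions(room, t):
    """Question (b): the latest known conditions in `room` at time `t`."""
    return db.execute(
        "SELECT od, temp FROM conditions "
        "WHERE room = ? AND t <= ? ORDER BY t DESC LIMIT 1",
        (room, t)).fetchone()
```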
When it comes to (b), the environmental effects on the agent can be: (b.1) limited visibility (eyes), (b.2) poisonous gases (nose) and (b.3) temperature in the room (body). Both (b.1) and (b.2) are read at the default (but configurable) height of 1.8 m. There are always two zones in CFAST, separated at a known height, so we need to read the conditions from the correct zone, based on which zone our 1.8 m belongs to.
The value of visibility (OD, optical density) affects the agent's speed. We use the relation proposed in [12], following FDS+Evac [23]:

$$v_n(K_s) = \max\left(v_{n,min},\; v_n^{pref}\left(1 + \frac{\beta}{\alpha} K_s\right)\right) \quad (1)$$

where $K_s$ is the extinction coefficient ($[K_s] = \mathrm{m}^{-1}$) calculated as $OD/\log_{10} e$ according to [19,20], $v_{n,min}$ is the minimum speed of the agent $A_n$ and equals $0.1 \cdot v_n^{pref}$ (the agent's preferred velocity), and $\alpha$, $\beta$ are the coefficients defined in [12].
Setting a minimal value of the speed means that the agent does not stop in thick smoke. He continues moving until the incapacitating value of the Fractional Effective Dose (FED) is exceeded, which is fatal to the agent. FED is calculated from the CFAST-provided amounts of the following species in the agent's environment: carbon monoxide (CO), hydrogen cyanide (HCN), hydrogen chloride (HCl), carbon dioxide (CO₂) and oxygen (O₂) by the equation [26,23]:

$$FED_{total} = \left(FED_{CO} + FED_{HCN} + FED_{HCl}\right) \cdot HV_{CO_2} + FED_{O_2}$$

where $HV_{CO_2}$ is the hyperventilation factor induced by the concentration of CO₂. The formulas for the individual terms of the above equation follow [26,23] and [17]; the FED inputs are given in ppm and the time t in minutes, and C stands for the concentration of a species in %. In contrast to the model applied in Evac, CFAST does not allow for a correction of the effect of nitrogen dioxide ($C_{CN} = C_{HCN} - C_{NO_2}$), therefore this effect is not included in the calculations.
Based on [28,17]:

$$FED_{O_2} = \int_0^t \frac{dt}{60 \cdot \exp\left(8.13 - 0.54\,(20.9 - C_{O_2}(t))\right)} \quad (6)$$

There are few quantitative data from controlled experiments concerning the sublethal effects of smoke on people. In [28,5,26,14,29] sublethal effects in the form of incapacitation (IC₅₀), escape ability (EC₅₀), lingering health problems and minor effects were reported. Incapacitation was inferred from lethality data to occur at about one-third to one-half of the doses required for lethality; the mean value of the ratios of IC₅₀ to LC₅₀ was 0.50, with a standard deviation of 0.21. In [14] a scale for effects based on FED was introduced, with three proposed ranges: 1 FED indicating lethality, 0.3 FED indicating incapacitation and 0.01 FED indicating that no significant sublethal effects should occur. Based on these data we propose a scale for the sublethal effects of smoke on evacuees, as presented in Table 1.
$FED_{total}$ affects the agent's movement in the smoke. For $FED_{total} > 0.3$, the smoke inhalation leads to sublethal effects [5]: the agent is not able to find safety from the fire and just stays where he is. For $FED_{total} > 1$ we model lethal effects. We later use these effects in the final risk assessment.
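A hedged sketch of the per-step bookkeeping implied by formula 1 and the FED thresholds follows; fed_step() is a hypothetical stand-in for the species integrals of [26,23], the alpha/beta defaults are the coefficients we believe FDS+Evac uses, and the agent attributes are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    v_pref: float
    fed: float = 0.0
    alive: bool = True
    incapacitated: bool = False

def speed_in_smoke(v_pref, ks, alpha=0.706, beta=-0.057):
    """Formula 1: speed limited by the extinction coefficient Ks [1/m];
    the minimum speed is 0.1 * v_pref, so the agent never stops."""
    return max(0.1 * v_pref, v_pref * (1.0 + beta / alpha * ks))

def fed_step(conditions, dt_minutes):
    """Hypothetical stand-in for the per-species dose integrals."""
    return conditions["fed_rate"] * dt_minutes

def update_agent(agent, conditions, dt_minutes):
    agent.fed += fed_step(conditions, dt_minutes)
    if agent.fed > 1.0:
        agent.alive = False           # lethal effect
    elif agent.fed > 0.3:
        agent.incapacitated = True    # sublethal: the agent stays in place
```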
Table 1 summarizes the FED effects on human health, which is our original proposition for the evaluation of the sublethal effects of smoke. These ranges are incorporated in Aamks. The table is based on the following works: [28,5,26,14,29].
Probabilistic evacuation modeling
This section presents the internals of our probabilistic evacuation model, which we find distinctive among the available, similar software. Table 2 presents the distributions of the input parameters used in Aamks. Each of the thousands of simulations in a single project is initialized with a random input setup drawn from these distributions. Aamks has a library of default parameter values for the important building categories (schools, offices, malls etc.). Users should find it convenient to have all the distributions in a library, but they may choose to alter these values.
Most of the data in Table 2 come from the standards and from other models, mostly FDS+Evac. Below are some comments on Table 2.
Aamks pays much attention to the pre-evacuation time [9], which models how people lag before evacuating after the alarm has sounded. Positions 7 and 8 are separate, because the behaviour of humans in the room of fire origin is distinct. We compile two regulations, C/VM2 Verification Method: Framework for Fire Safety Design [30] and British Standard PD 7974-6:2004 [2], in order to get the most realistic, probability-based pre-evacuation times in the room of fire origin and in the rest of the rooms.
The horizontal/vertical speed (the unimpeded walking speed of an agent) is based on [18,13,25,15]. Speed in smoke is modeled by formula 1. The randomness of the simulations comes from the random number generator's seed. We save the seed for each simulation so that we can repeat the very same simulation, which is useful for debugging and visualisation.
We register all the random input setups and the corresponding results in the database. We expect to research the relationships in these data at some point, with data mining or sensitivity analysis.
The final result of Aamks is the compilation of multiple simulations into a set of distributions, i.e. F-N curves. The F-N curves were created as in [12]. Figure 9 depicts exemplary results.

Fig. 9 The results of evacuation modeling as F-N curves.
Visualization
In Aamks we use a 2D visualization for spotting potential users' faults in their CAD work (e.g. rooms with no doors, Figure 10), for the final results, and for our internal development needs. We use a web-based technology which allows for displaying both static images and animations of the evacuees. We also have a web-based 3D visualization made with WebGL (Three.js). This subsystem displays realistic animations of humans during their evacuation under fire and smoke (Figure 11). Below we evaluate the quality of a-evac as described in [9,27], as well as its computational performance.
Verification of a-evac
Verification and validation deal with how close the results of the simulations are to reality. We took care to be compliant with the general development recommendations [9] by: (1) obeying good programming practices, (2) verifying intermediate simulation outputs, (3) comparing simulation outputs against analytical results, and (4) creating debugging animations.
There are three types of errors that can be generated by our software: (a) the error in deterministic modeling of a single scenario, (b) the error of the Monte Carlo approximation, (c) the statistical error (disturbance).
For the first type of error we applied the methods proposed in [27]. The proposed tests are organized into five core components: (1) pre-evacuation time, (2) movement and navigation, (3) exit usage, (4) route availability, and (5) flow conditions/constraints. For each category there are detailed tests for the geometry, the scenario and the expected results. The results are in Table 3. RVO2, the core library of a-evac which drives the local movement, was also evaluated in [33]. The conclusion is that RVO2 is of a quality comparable with the lattice gas and social force models. The social force model is commonly used in a number of evacuation software packages.
The above is the evaluation of a single, deterministic simulation. However, the final result is the compilation of a whole collection of such single simulations; this is how we get the big picture of the safety of the inquired building. The picture is meant to present risk. The probability is calculated as the share of simulations that resulted in fatalities in the total number of simulations.
The accuracy of this evaluation depends on the method applied, i.e. stochastic simulations. The error is inversely proportional to the square root of the number of simulations. Namely, for the discrete Bernoulli probability distribution, used for example for the evaluation of the probability of a scenario with fatalities, the error is calculated as follows:

$$\varepsilon = \sqrt{\frac{p_n\,(1 - p_n)}{n}}$$

where $p_n$ is the probability of fatalities, obtained as the ratio of the number of simulations that resulted in fatalities to the total number of simulations, and $n$ is the number of simulations.
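As a minimal illustration (our naming, not Aamks' code):

```python
import math

def fatality_probability(results):
    """results: list of booleans, True if a simulation ended with fatalities.
    Returns (p_n, standard error) of the Bernoulli estimate."""
    n = len(results)
    p = sum(results) / n
    return p, math.sqrt(p * (1 - p) / n)

# E.g. 37 fatal scenarios out of 1000 simulations:
p, err = fatality_probability([True] * 37 + [False] * 963)
```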
We are aware of the third type of error, which may be generated by the application itself. The input for Aamks is a set of various probability distributions, which may occasionally generate an unrealistic scenario, for example an evacuee who moves very slowly in corridors and very fast on stairs. In most cases these errors are related to the other parts of Aamks, i.e. the probabilistic fire modeling.
However, the fire environment impacts the evacuation. This error can be evaluated by comparing the data generated by Aamks with real statistics. So far we have no efficient way to tackle this problem. We consider evaluating this error by launching simulations for the building stock and checking whether we reconstruct historical data. This method is very laborious and not justified at the moment, because our application still lacks some models, e.g. fire service intervention, which has a significant impact on fire.
Performance of the Model
The main loop of Aamks processes all agents in each time iteration. Table 4 summarizes how costly the specific calculations for a single agent within a single time iteration are. The tests were performed on a computer with an Intel Core i5-2500K CPU at 3.30 GHz and 8 GB of RAM. The total time of a single step of the simulation for one agent is 2 × 10⁻⁴ s and it grows linearly with the number of agents. The speed and FED calculations are the most costly, because they both make database queries against the fire conditions in the compartment. The time step of an a-evac iteration is 0.05 s. There is no significant change in the fire conditions within this time frame, therefore, as a performance optimization, we update the speed and FED only every 20th step of the simulation.
Discussion
Vertical evacuation is troublesome and not implemented. There is RVO2 3D, but it is meant for aviation, where agents can pass above each other, which is clearly not what we need. Besides, we think things actually look better in 2D. We like the idea that vertical evacuation can still be considered 2D, just rotated, and we plan to move in this direction.
A-evac does not model social or group behaviours. However, it is difficult to evaluate how much the lack of such functionality impacts the resulting probability distributions.
In the workflow we run a CFAST simulation first. Then the a-evac simulation runs on top of the CFAST results. This sequential procedure has its drawbacks, e.g. we do not know in advance how long the CFAST simulation should last to produce enough data for a-evac, so we run "too much" CFAST to be safe. Also, evacuees cannot trigger any events, such as opening a door. We have considered a closer a-evac-CFAST integration.
There seems to be lots of room for improvement in Aamks. We work with practitioners and know the reality of fire engineering. We know the limitations of our current implementations and most of them can be addressed; there are models and approaches that we can implement, and the major obstacle is the limits of our team's resources. Therefore we invite everyone interested to join our project at http://github.com/aamks.
Conclusion
Aamks has been actively developed since 2016 and we are truly engaged in making it better. The software, though not yet ready for end users, has already served as support for commercial projects, and fire engineers and scientists regard Aamks as having potential. The stochasticity-based workflow of Aamks is not a new concept; there are opinions in the community that this approach is how fire engineering should be done. Since no widely used implementation has been created so far, this is an additional motivation that drives our project.
Fig. 1 The concept of a HOLE: (a) the room in reality, (b) the room representation in CFAST: two rectangles with separate calculations, but open to each other via the HOLE.
Fig. 5 The idea of the funnel algorithm: (a) starting point, (b) advancing legs.
Fig. 7 The roadmap and local movement.
Fig. 8 RVO2 at work resolving collisions: (a) agents on direct collision courses, (b) their calculated collision-avoiding courses, (c) three agents crossing a HOLE in parallel.
Table 1 FED effects on human health in Aamks.
Table 2 Parameters of the distributions for the exemplary scenario.
Table 3 The results of the Aamks tests.
Table 4 The costs of a single loop iteration per agent.
"Computer Science"
] |
Temperature Profiles From Two Close Lidars and a Satellite to Infer the Structure of a Dominant Gravity Wave
Gravity waves (GW) are a crucial coupling mechanism for the exchange of energy and momentum flux (MF) between the lower, middle, and upper layers of the atmosphere. Among the remote instruments used to study them, there has been a continuous increase in recent years in the installation and use of lidars (light detection and ranging) all over the globe. Two of them, which operate only at night, are located in Río Gallegos (−69.3°W, −51.6°S) and Río Grande (−67.8°W, −53.8°S), near the austral tip of South America. This is a well-known GW hot spot from late autumn to early spring. Neither the source of this intense activity nor the extent of its effects has yet been fully elucidated. In recent years, different methods that combine diverse retrieval techniques have been presented in order to describe the three-dimensional (3-D) structure of observed GW, their propagation direction, their energy, and the MF that they carry. Assuming the presence of a dominant GW in the covered region, we develop here a technique that uses the temperature profiles from two simultaneously working, close lidars to infer the vertical wavelength, the ground-based frequency, and the horizontal wavelength along the direction joining both instruments. If, in addition, within the time and spatial frame of both lidars there is also a retrieval from a satellite like SABER (Sounding of the Atmosphere using Broadband Emission Radiometry), then we show that it is possible to infer the second horizontal wavelength as well and therefore reproduce the full 3-D GW structure. Our method is verified with an example that includes tests corroborating that both lidars and the satellite are sampling the same GW. The improvement of the Río Gallegos lidar performance could lead in the future to the observation of a wealth of cases during the GW high season. Between 8 and 14 hr (depending on the month) of continuous nighttime data could be obtained in the stratosphere and mesosphere in simultaneous soundings from both ground-based lidars.
Introduction
Gravity waves (GWs) have significant global effects from the lower to the upper atmosphere (e.g., Fritts & Alexander, 2003; Gill, 1982). They are mainly generated in the troposphere or stratosphere and may increase in amplitude under vertical propagation in certain conditions. These waves may transfer significant amounts of energy and momentum flux (MF) to the background if filtering or dissipation occurs while they propagate. This may result in strong forcing of the dynamics and thermal structure, mainly in the middle atmosphere. Some works have shown that GW may even penetrate and influence the thermosphere and ionosphere (e.g., Alexander et al., 2015; Park et al., 2014).
Although the austral tip of South America may be the most intense GW hot spot on the globe from austral late autumn to early spring (e.g., Ern et al., 2004; Hoffmann et al., 2013, 2016), several studies using lidars (light detection and ranging), aircraft, radars, or balloons have focused on the Northern Hemisphere (NH). However, in recent years there has been a growing awareness of the relevance of this GW hot spot close to the southern pole (e.g., Chu et al., 2018; Fritts et al., 2016; Kaifler et al., 2015; Llamedo et al., 2019; Zhao et al., 2017). The importance of improving the knowledge of this region is highlighted by the fact that comparisons of stratospheric GW MF obtained from general circulation models (GCMs) and satellite data reveal some notable discrepancies. Although some features are well reproduced, large deviations are still present (e.g., de la Cámara et al., 2016; Geller et al., 2013). For example, several GCMs produce simulations of the Southern Hemisphere (SH) polar stratosphere that lead to significant underestimations of the temperatures and of the drag on the winds (e.g., Butchart et al., 2011; Wright & Hindley, 2018). Numerical solutions are not able to resolve the full spectrum of waves. Parameterizations of the smallest-scale GW are then introduced, but they are usually too coarse and may be a major cause of the biases in the polar SH stratospheric dynamics and thermal structure simulations (McLandress et al., 2012). These shortcomings highlight the need for observational information on GW sources, evolution, and general behavior in this zone. The possible but still uncertain causes of the intense GW activity are usually attributed to orography (Southern Andes, Antarctic Peninsula, or small oceanic islands), nonorographic waves from winter storm tracks over the Southern oceans or from spontaneous adjustment, jet instability around the edge of the stratospheric vortex, or secondary waves stemming from primary breaking ones from any source (e.g., Alexander & Grimsdell, 2013; Hindley et al., 2015; Sato et al., 2009).
The DIAL (Differential Absorption Lidar) instrument, belonging to the Centro de Investigaciones en Láseres y Aplicaciones (CEILAP), was installed in 2005 at the Observatorio Atmosférico de la Patagonia Austral (OAPA) in Río Gallegos (51.6°S, 69.3°W), mainly for ozone studies. This lidar was the southernmost north of Antarctica until November 2017, when CORAL (Compact Rayleigh Autonomous Lidar) started working in Río Grande (53.8°S, 67.8°W). Río Gallegos is nearly 300 km to the east of the Andes and 70 km to the north of the Strait of Magellan, whereas Río Grande is further southeast, on the Atlantic coast of the Tierra del Fuego island (Figure 1). Both are in an excellent position for the observation of the GW hot spot and are separated by 265.6 km (zonal and meridional distances of 100.8 and 245.7 km).
It is not possible to determine the MF of a dominant GW and its 3-D structure from the temperature retrieval of one lidar. However, it may be feasible to obtain additional information with a second simultaneous and close lidar, through the phase shift between the GW-induced perturbations in both soundings and the knowledge of the spatial separation of both instruments. If in addition there is another close temperature profile, for example provided by a satellite, then it may be possible to reveal the full 3-D GW structure, including the net MF calculation. Satellite measurements like those from SABER (Sounding of the Atmosphere using Broadband Emission Radiometry), GPS radio occultation, or HIRDLS (High Resolution Dynamics Limb Sounder) provide no directional information on the horizontal components of the MF vector; only the absolute value of each one can be found with three nearby profiles. However, an inspection of the horizontal components of the equation of motion for the atmosphere shows that it is the net MFs that affect the wind and temperature structure (e.g., Geller et al., 2013). Wright et al. (2016) also remarked on the importance of obtaining the net rather than the absolute value. In brief, we suggest using a sequence of vertical temperature profile pairs over a time interval plus a static retrieval. As far as we know, there have been no previous similar studies. Only frozen GW reconstructions have usually been obtained, from a combination of instantaneous satellite temperature profiles that are close in space and time (e.g., Alexander et al., 2018; Ern et al., 2004, 2017; Schmidt et al., 2016), as the evolution could not be monitored in those cases.
In the present study we employ a succession of two close and simultaneous lidar temperature measurements over time and height and a third, instantaneous retrieval from a satellite within the same spatial and time frame in order to infer the ground-based frequency and the three Cartesian wavelengths of a dominant GW in the studied zone. This also allows the determination of the three phase velocity components and the net GW MF. Section 2 explains the analysis procedure employed here and the general characteristics of the data from both lidars and from the SABER instrument onboard the TIMED (Thermosphere Ionosphere Mesosphere Energetics Dynamics) satellite, which we use in an application example in section 3. Section 4 summarizes the constraints of the method and the main results of our case study.
Data and Method
The DIAL lidar at Río Gallegos has four Newtonian telescopes of 0.5 m diameter. Four Rayleigh and two Raman digital channels record the backscattered photons emitted by the third harmonic of the Nd-YAG laser at 355 nm (130 mJ maximum energy) with a 30 Hz repetition rate. For details on the DIAL working characteristics, see Llamedo et al. (2019). Above 30 km altitude, the temperature T is obtained using the Rayleigh scattering technique, whereas below that altitude the method is affected by aerosol scattering and ozone absorption. The spatial/temporal resolution of the photon counting system is 15 m/1 min, but an integration of at least 900 m/30 min is needed to improve the signal-to-noise ratio (SNR). A careful analysis revealed that a significant fraction of the oscillations above 40 km height with a 30 min integration time may be caused by noise. Moreover, very large negative temperature gradients are unlikely to persist, as they are convectively unstable. The DIAL signal power is about 4 W.
The CORAL lidar measures atmospheric backscatter profiles from 22 to 90 km altitude, but only values above 30 km should be used due to the effect of aerosols. Its Nd:YAG laser generates 12 W at 532 nm with a 100 Hz pulse repetition rate. The telescope comprises a 630 mm diameter f/2.45 parabolic mirror. At the top altitude, the T derivation procedure is seeded with the SABER temperature. Retrievals with a temporal resolution of up to 10 min and a vertical resolution of up to 0.3 km may be provided with reasonable SNR values. For a description of the CORAL characteristics see Kaifler et al. (2017).
With the two lidars we may obtain a two-dimensional (2-D) scenario over several hours. In order to fully resolve the 3-D structure of a GW observed by both lidars, an additional profile must be provided. SABER soundings (Mlynczak, 1997), if present within the time and space frame, are a good option, as they measure T roughly between 20 and 100 km height. Although this retrieval is instantaneous, it helps to resolve the 3-D GW structure over the whole observational period of both lidars (assuming the wave persists and does not suffer a substantial modification during all that time). Here we use kinetic temperatures from the Version 2.0 datasets. A first step requires the separation, in all the T profiles, of the GW from the background, including planetary waves (PWs) if present. In general, special care has to be taken to avoid spurious amplifications of GW near the altitudes of sharp changes (tropopause or stratopause) when using a digital filter to isolate these waves (e.g., de la Torre et al., 2006). Due to the limited vertical range of reliable lidar data at Río Gallegos (30-40 km), we do not face that problem. We follow Ehard et al. (2015) and Rapp et al. (2018) in that at middle and high latitudes a digital filter cutoff at 15 km of vertical distance separates GW from PW. As we have a 10 km vertical interval of data, a band pass between 2 and 10 km respects the Nyquist condition (a vertical resolution of 1 km and a sampling of 100 m are used for consistency in both lidars) and eliminates the background and PW. We used a Savitzky-Golay filter (Orfanidis, 1996). Regarding the time evolution of both lidars, we used 30 min intervals for consistency. The same filter was used below for the SABER profile.
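A hedged sketch of this band-pass separation with a Savitzky-Golay filter is given below; the window lengths merely illustrate the 2-10 km band at the 100 m sampling used here and are not the exact filter settings of the study:

```python
# Band-pass T' between ~2 and ~10 km vertical wavelengths (illustrative).
import numpy as np
from scipy.signal import savgol_filter

def gw_perturbation(T, dz_m=100):
    """T: temperature profile sampled every dz_m meters."""
    # Remove the background and PW: smooth over ~10 km and subtract.
    background = savgol_filter(T, window_length=10_000 // dz_m + 1, polyorder=2)
    Tp = T - background
    # Suppress scales below ~2 km by smoothing the perturbation.
    return savgol_filter(Tp, window_length=2_000 // dz_m + 1, polyorder=2)
```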
We assume that a dominant monochromatic GW is present at both lidars during the whole sounding period, or at least a significant fraction of it; in the latter case the differing portion should be identified from the data. The perturbed temperature T′_{A,R}, at Río Gallegos (A) and Río Grande (R) respectively, may then be represented by

$$T'_{A,R}(x, y, z, t) = T_o \cos\left(kx + ly + mz - \omega t + \varphi_0\right) \quad (1)$$

where x, y, z, and t represent the zonal, meridional, and vertical coordinates and time; k, l, m, and ω are the corresponding wave numbers and the frequency as seen from the ground; T_o is the GW amplitude (we consider it to be constant due to the limited height range); and φ_0 is a fixed value, whereas the expression within parentheses is the wave phase. If for both lidars we represent, as usual, T′ against z and t, we should then get similar ω and m if both places are observing the same dominant wave and are subject to similar mesoscale conditions. If so, we have obtained the ground-based frequency and the vertical wavelength.
Between the two lidars, at a fixed time and height, the phase difference d_AR is given by

$$d_{AR} = k\,(x_A - x_R) + l\,(y_A - y_R) \quad (2)$$

If we put the origin at Río Grande and rotate the horizontal Cartesian coordinate system so that the y* axis coincides with the direction to Río Gallegos (see Figure 1), then we may rewrite this as

$$d_{AR} = l^{*}\, y^{*}_{A} \quad (3)$$

Equation 3 shows that once the phase difference is found, as the horizontal separation between both lidars is known, it is possible to obtain the component of the horizontal wave vector along the direction that joins both places. To reconstruct the full 3-D GW structure, only one horizontal wavelength is missing. This information may be provided by an additional profile. To optimize its added value, its location in the horizontal plane should not lie along the y* axis (the new information would be redundant) but rather be separated from it.
In addition, to ensure that it may be observing the same dominant GW, we restrict its separation from either of both places to less than 2.5° in latitude and 4° in longitude (the angle difference keeps the maximum possible zonal and meridional separations equal). A minimum distance, for example 50 km, should be set to avoid uncertainties larger than the possibly small phase difference in a too-close comparison. It will be shown below that the SABER horizontal excursion for measurements between 30 and 40 km height is small compared to the horizontal wavelength found in our example, so we essentially consider it a vertical profile. The second horizontal equation, between SABER (S) and Río Gallegos, then is

$$d_{SA} = k^{*}\,(x^{*}_{S} - x^{*}_{A}) + l^{*}\,(y^{*}_{S} - y^{*}_{A}) \quad (4)$$

where l* was already found from Equation 3 and the positions are known, so k* can be calculated.
The determination of the GW phase differences between both lidars as a function of height at consecutive 30 min intervals has been performed by wavelet coherence (Torrence & Compo, 1998). We first assumed that d_AR lay between −π and π. However, we then also contemplated the three other possible aliased phase differences between −4π and 4π. Physically this means that we also contemplate waves with smaller wavelengths and/or propagating in the direction opposite to the initial one. Aliasing is not expected in the vertical direction or in time, as it would imply wavelengths smaller than 2 km and ground-based periods of less than 1 hr. We will then obtain four possible solutions. Once we derive the missing horizontal wavelength for each of the four aliased cases, the "true" value will be selected as the one that best suits the GW dispersion relation. It is given in terms of the intrinsic frequency ω̂ by

$$\hat{\omega}^2 = \frac{N^2 k_h^2 + f^2\left(m^2 + \frac{1}{4H^2}\right)}{k_h^2 + m^2 + \frac{1}{4H^2}} \quad (5)$$

where H is the scale height, f is the Coriolis parameter, $k_h^2 = k^2 + l^2 = k^{*2} + l^{*2}$ is the squared total horizontal wave number and N is the Brunt-Väisälä frequency, which will be obtained from ERA Interim reanalysis T profiles adequately interpolated in time and space but may also be obtained from any lidar (Chu et al., 2018). The reanalysis horizontal components of the wind were used in order to calculate the intrinsic frequency on the left-hand side (ω̂ = ω − kU − lV, with U and V the zonal and meridional projections of the air velocity). A similar procedure to discard spurious aliased cases was already used by Alexander et al. (2018).
Fortunately, the part of the year with the longest nights coincides with the months of the most intense GW activity. From March to October 2018 there were 17 coincident measurement periods of both lidars, ranging from 4 to 12 hr of simultaneous sounding. The relatively low number of concurrent observations compared to the total number of nights is due to cloudy conditions or to operational problems at either place. To make both data sets comparable, the information from Río Grande was restricted to the 30-40 km height interval, all the profiles were provided every 30 min, and the vertical resolution was set to 1 km. The T retrievals were initially used from 27 to 42 km height, but after retaining the GW they were restricted to 30-40 km. This procedure was done only to attenuate any artificial discontinuity at the beginning or end of the data set due to the implicit assumption of the digital filtering procedure that the data are cyclic, which may introduce spurious temperature fluctuations (Ehard et al., 2015). Although all the profiles had already undergone quality control verification, we tested them against anomalous temperature values (below 160 K or above 320 K).
After the 17 matrix pairs of T′ against height (every 100 m in the 30-40 km range) and time (every 30 min along the coincident observational period) were obtained, different procedures were applied to ensure that both lidars detect the same GW adequately. A minimum of 10 consecutive measurement times, with up to one missing measurement (to be interpolated), was required. By visual inspection we kept only the cases that exhibited wavefront-like features and eliminated cases with clearly identifiable noisy patterns at either lidar site, whereby eight cases out of 17 passed this selection. In every matrix pair we searched, at every fixed time, for a dominant vertical mode and found its phase difference between both places by wavelet coherence (Torrence & Compo, 1998). If d_AR had an abrupt change at any height (>0.2 rad in 100 m), then the pair was discarded, as this may mean that different phenomena were observed at the two places, that diffusion, absorption or some instability occurred, or simply that the main observed effects cannot be explained in terms of a significant GW or that its properties stayed beyond the observational window of the lidars (we recall that no instrument can capture the whole spectrum of waves). With these requirements only four nights from March to June remained suitable for further analysis.
For these remaining cases we applied 2-D wavelet analysis to each T′(t, z) matrix (Wang & Lu, 2010; Kaifler et al., 2017). The spectral power (SP) as a function of height, time, vertical wavelength, and period was obtained for each lidar and event, whereby the largest value indicated the dominant mode and its location. In order to ensure that the main GW seen at each site was similar to the other one, we required that they both be represented by only slightly differing elements of the 2-D wavelet basis: the angle and magnitude that define the dominant mode for each lidar sounding should deviate by less than π/10 and by less than 10 units (as the vertical spacing is 100 m, this represents, e.g., 1 km in the z direction). After this evaluation only two cases were found to meet these criteria. However, when requiring that both sides of Equation 5 should differ by less than 10% when evaluated, only one event remained. Larger deviations could mean that we are not observing a GW, or that it may be undergoing nonlinear behavior at either place. We show in the appendix some characteristics of the case that missed our last test and give some remarks on its failure.
Application Example
The case to be analyzed is 1 June 2018 21:50 to 2 June 2018 10:08 (all times in UTC). In Figure 2 we show SP for both lidars. The outcome of the 2-D Morlet wavelet is determined in terms of two parameters: the angle θ, which defines the direction of the mode in t-z space, and the scale s, which is the wavelength along the angle direction. SP was initially obtained as a function of z, t, θ, and s, whereby the summation over the former two variables yielded SP as a function of the latter two. Differences in angle and scale for the dominant mode in both lidars were, respectively, less than π/20 and 0.1 km, and their values were π/2 and 5 km. Notice that θ = π/2 implies stationary wavefronts, as they are represented by horizontal lines in the t, z plane. In Figure 3 we show the location of the polar vortex. To obtain it at different heights, we used ERA Interim data at the 475, 600, and 700 K isentropic levels to find the largest potential vorticity gradient weighted by the horizontal wind speed (Nash et al., 1996). It can be seen that both lidars are outside and far away from the vortex, so direct effects on both soundings or the presence of nonstationary GW induced by geostrophic adjustment are unlikely. It should be mentioned that on some occasions the edge may reach or even surpass one or both sites. In addition, to analyze whether the whole studied region exhibits similar mesoscale features, we show in Figure 4 the horizontal velocity at the 600, 100, and 10 mb levels. Although the dominant wind direction changes with height, homogeneous conditions can be observed at every single altitude shown. We therefore expect uniform background characteristics in the whole observed zone. In addition, in the lower left part of the 600 mb panel (a level close to the height of the local Andes mountains), adequate conditions for the generation of mountain waves are observed. The prevailing wind finds a meridional obstacle, thus probably generating GW (Baines, 1995). Moreover, the other panels show that, at least at those two other heights, critical levels for stationary mountain waves are unlikely (no zero-wind regions).
In Figure 5 we show the T′ representation against time and height for both lidars. Notice some general similarities regarding the nearly stationary wavefronts. The best-fit wavefront maxima are also shown (i.e., the optimal phase for a 5 km vertical wavelength was searched for in each case). In Figure 6 we show the SABER profile. We used the same band-pass filter between 2 and 10 km as with the lidar data. Notice a clear, nearly 5 km vertical periodicity. The sounding was located about 334 km northeast of Río Gallegos, so the 50 km minimum distance requirement was fulfilled. Equation 4 was used to calculate the missing component k*. The total horizontal wavelength that was found is 155 km and the deviation from the east direction was 2.2° anticlockwise. The SABER horizontal displacement for measurements from 30 to 40 km height is 0.18° northward and 0.17° eastward, which is 23.5 km at an angle 58.7° anticlockwise from the east direction. If this displacement is projected on the horizontal wave vector direction, it is equal to 13 km, which is small compared to the 155 km total horizontal wavelength. Therefore, in this case the satellite profile can be considered vertical. In order to quantify the possible distortion of the vertical wavelength observed in the slanted SABER profile, we use the formula derived by de la Torre et al. According to the elevation angle of the sounding (23.1°) and the orientation angle of the wave vector when it is projected on the vertical plane defined by the retrieval (86.7°), the possible error is about 12% (both angles are defined with respect to the ground). The GW intrinsic period was also calculated with our solution and the aid of the horizontal velocities provided by the ERA Interim reanalysis, and was about 2.2 hr (19.6 m/s is the average speed parallel to the horizontal wave vector between the mountain tops around 600 mb, where presumably the waves are generated, and the maximum altitude of our study, at approximately 3 mb). The aspect ratio (the ratio of vertical to horizontal scales) and intrinsic period of this wave belong to the hydrostatic nonrotating regime, close to the border with the nonhydrostatic spectral sector (Gill, 1982).
Notice that changing the choice of the initial pair (one profile from the lidars and one from SABER) leads from Equations 3 and 4 to a different 2 × 2 linear equation system to be solved, but both are equivalent, which means that they lead to the same solution. We now evaluate the impact of uncertainties in the specified parameters of either of both equation sets, which are represented by a 2 × 2 matrix M (four distances) and a column vector b (two phase differences). The equation set is then represented by

$$M\mathbf{s} = \mathbf{b}$$

and to evaluate whether in our procedure small changes in the known parameters can produce large changes in the solution s, we must calculate the condition number K(M). Then (e.g., Alexander & de la Torre, 2010)

$$\epsilon_s \leq K(M)\,(\epsilon_M + \epsilon_b)$$

where ε_s, ε_M, and ε_b refer to the relative errors of s, M, and b, respectively. K = 1.47 for either of both equivalent equation sets in our example (1 is the optimal value in any case). Distances can be given with high precision, so their uncertainty can be estimated to be around 1% (ε_M = 0.01). Phase differences are extracted from the comparison of the same mode in the profiles at the two different places at the same time, and we evaluate their uncertainty to be around 10% (ε_b = 0.1). Then, variations in the solutions would be around 15%. If the three profiles became nearly collinear, then the equation set would tend to an ill-conditioned one, and the propagation of the precision errors to the solution would have dramatic effects through the increase of K(M). In general, for nonstationary waves it is possible to calculate the horizontal phase speed c_h and the components of the phase velocity c_{x,y,z} as seen from the ground:

$$c_h = \frac{\omega}{k_h}, \qquad c_x = \frac{\omega}{k}, \quad c_y = \frac{\omega}{l}, \quad c_z = \frac{\omega}{m}$$

The GW-associated specific potential energy can also be calculated over at least one wavelength as

$$E_p = \frac{1}{2}\,\frac{g^2}{N^2}\,\overline{\left(\frac{T'}{T}\right)^2}$$

whereas the specific horizontal MF components may be obtained from an expression valid in the midfrequency range (e.g., Ern et al., 2004):

$$(F_x, F_y) = \frac{1}{2}\,\frac{g^2}{N^2}\left(\frac{T_o}{T}\right)^2\left(\frac{\lambda_z}{\lambda_x}, \frac{\lambda_z}{\lambda_y}\right)$$

where g is gravity, λ refers to the wavelengths, N can be obtained from any lidar profile, and finally T_o and T are the GW temperature amplitude and the background atmospheric temperature (which can be obtained from either lidar), respectively. We obtain E_p = 44.6 J/kg, F_x = −1.45 J/kg, and F_y = −0.06 J/kg. The average density in the studied height interval is 1.19 × 10⁻² kg/m³, and both components of the MF then become −0.018 and −0.001 Pa, respectively.
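A small numeric check of this error propagation can be written as follows; the SABER separations and phase differences used here are illustrative placeholders, not the measured values (only the 265.6 km lidar separation is taken from the text):

```python
import numpy as np

M = np.array([[0.0, 265.6],      # Eq. 3: only the y* separation enters
              [230.0, 120.0]])   # Eq. 4: hypothetical SABER separations [km]
b = np.array([1.2, 0.7])         # hypothetical phase differences [rad]

s = np.linalg.solve(M, b)        # the wave numbers (k*, l*) [rad/km]
K = np.linalg.cond(M)            # condition number K(M)
eps_s = K * (0.01 + 0.10)        # relative error bound from eps_M and eps_b
```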
Conclusions
Through one illustrative example, we have shown that in a zone with high GW activity it is possible to reconstruct the 3-D structure of a dominant wave observed by two close simultaneous lidar soundings and an additional vertical temperature profile. The described method may help to overcome a present difficulty, namely the determination of the three signed components of the wave vector. This is an essential quantity for determining the directional MF, which is strongly related to atmospheric model parameterizations of GW drag. The latter element is an Achilles heel that affects the simulation of the zonal mean wind and temperature structure at middle and high latitudes. In particular, orographic waves like those generated in the hot spot studied here are currently considered to make significant contributions to the vertical transport of GW MF (e.g., McLandress et al., 2012).
Diverse tests verified whether all instruments were likely observing the same GW and no important side effects contaminated the analysis. Only one out of 17 cases provided concrete results. This should be attributed to the fact that in several cases the diverse instruments were observing different phenomena, or the GW were too weak or affected by other effects, or underwent diffusion, absorption, instability, or nonlinear behavior. It should be noted that in some previous studies it was assumed that much longer latitude, longitude, and time intervals contained the same GW, with no additional verification of coherency. McDonald (2012) evaluated, as a function of horizontal separation and time difference, the percentage of GPS radio occultation paired profiles that may be expected to contain the same GW. For example, his estimate was that approximately 30% of the pairs at 50-60°S in the SH separated by less than 250 km and by less than 15 min were seeing the same GW. It is clear that an improved version of the Río Gallegos lidar, and eventually the 24 hr operation of both instruments, would lead to a substantial increase in the number of cases meeting the required conditions.
The assumptions of the method should be recalled in order to constrain its validity: one dominant GW is observed by both lidars and the additional sounding, the GW and the background are adequately separated by the digital filter, and there are no aliasing effects in the horizontal plane other than those considered.
Appendix A
In Figure A1 we may see the spectral power for both lidars as a function of θ and s for the case that failed to meet the final test to be considered a possible standing GW. In Figure A2 we see the corresponding temperature perturbation against time and height for both lidars. The time frame of the coincident data for both sources is from 7 June 2018 01:04 (UTC) to 08:30 on the same day. It may be seen that in Río Grande an upper stationary front of maxima is clearly defined for only approximately the last third of the total time. When comparing with Figure 5, it should be considered that the different aspect ratio is due to the fact that both cases do not have the same time extension.
"Environmental Science",
"Physics"
] |
Dynamic Multi-Graph Convolution-Based Channel-Weighted Transformer Feature Fusion Network for Epileptic Seizure Prediction
Electroencephalogram (EEG) based seizure prediction plays an important role in closed-loop neuromodulation systems. However, most existing seizure prediction methods based on graph convolution networks only focused on constructing a static graph, ignoring multi-domain dynamic changes in the deep graph structure. Moreover, existing feature fusion strategies generally concatenated coarse-grained epileptic EEG features directly, leading to suboptimal seizure prediction performance. To address these issues, we propose a novel multi-branch dynamic multi-graph convolution based channel-weighted transformer feature fusion network (MB-dMGC-CWTFFNet) for patient-specific seizure prediction with superior performance. Specifically, a multi-branch (MB) feature extractor is first applied to jointly capture the temporal, spatial and spectral representations from the epileptic EEG. Then, we design a point-wise dynamic multi-graph convolution network (dMGCN) to dynamically learn deep graph structures, which can effectively extract high-level features from the multi-domain graph. Finally, by integrating local and global channel-weighted strategies with the multi-head self-attention mechanism, a channel-weighted transformer feature fusion network (CWTFFNet) is adopted to efficiently fuse the multi-domain graph features. The proposed MB-dMGC-CWTFFNet is evaluated on the public CHB-MIT EEG dataset and a private intracranial sEEG dataset, and the experimental results demonstrate that our proposed method achieves outstanding prediction performance compared with the state-of-the-art methods, providing an effective tool for patient-specific seizure warning. Our code will be available at: https://github.com/Rockingsnow/MB-dMGC-CWTFFNet.
I. INTRODUCTION
Epilepsy is one of the most common brain diseases of the nervous system, producing recurrent seizures and threatening patients' lives [1]. Currently, more than 50 million people worldwide suffer from epilepsy, and approximately 30% of patients deteriorate into refractory epilepsy despite both drug and surgical treatment [2]. Fortunately, seizure prediction based on electroencephalography (EEG) provides an additional solution for these refractory epilepsy patients, as it can give early warning for advanced neuromodulation treatments [3], so as to suppress seizures effectively. Previous studies divided the long-term recorded epileptic EEG signals into four neurophysiological periods: inter-ictal, pre-ictal, ictal and post-ictal [4], [5]. Therefore, the core problem in epileptic seizure prediction is how to accurately distinguish the pre-ictal period from the inter-ictal period, enabling intelligent warning before seizure onset for patients and clinicians [6].
For automatic EEG seizure prediction, the primary challenge is to extract discriminative EEG features of the epileptic activity. Due to the high temporal resolution of EEG, the long short-term memory (LSTM) network [7], [8] was introduced into seizure prediction models to capture the temporal information of the epileptic EEG. In addition, to exploit the spectral representation of epileptic rhythms, the wavelet transform [9] and the short-time Fourier transform [10] were combined with convolutional neural networks (CNNs), which can learn quantitative time-frequency characteristics to facilitate the classification of inter-ictal and pre-ictal periods. Moreover, Ahmet et al. [4] proposed a 3D-CNN seizure prediction framework to evaluate the spatio-temporal correlation across multi-channel EEG time series. Zhang et al. [11] designed a common spatial pattern filter to extract distinguishing spatial features from epileptic EEG, which were further fed into a shallow CNN to discriminate between pre-ictal and inter-ictal states. However, these methods only obtained coarse-grained EEG features in single or multiple domains in a fixed mode, without taking full advantage of the patient-specific temporal, spectral and spatial signatures simultaneously, which may lead to the loss of essential information about the epileptic activity. Thus, a multi-branch feature extractor is needed to capture multi-level fine-grained representations of the epileptic EEG in multiple domains.
Another existing issue is that CNN frameworks in the seizure prediction task can only learn low-dimensional spatial correlations among EEG channels, due to their regular convolution operations and local receptive fields [12]. It is difficult for them to track the complex non-Euclidean structure of epileptic seizures [13]. To deal with this problem, graph convolutional networks (GCNs) have been investigated in recent studies in the seizure prediction field [14], [15]. The common procedure in a GCN is to define a prior adjacency matrix for constructing the graph structure among channels, which helps to convert the epileptic EEG signals into a graph representation with graph nodes and edges [16]. For example, Wang et al. [17] employed the phase locking value (PLV) of EEG data to construct the adjacency matrix of graph edges; a sketch of this construction is given below. The differential entropy (DE) was applied in the inference of the spatial coupling in the network topology to calculate the temporal correlations of EEG and yield the graph nodes [18]. Unfortunately, these GCN methods based on information theory depended on handcrafted features to generate EEG graphs, neglecting the dynamic changes in patient-specific graph construction. Lian et al. [14] developed a joint graph structure and representation learning network (JGRN) to predict seizures, where the graph structures can be jointly optimized with patient-specific connection weights of temporal channels. A similar study proposed a subject-independent seizure predictor using geometric deep learning, realizing seizure prediction from LSTM EEG graph synthesis [15]. It is notable that most of these models ignored the spatial position relationship among EEG channels, and only focused on single, shallow, static EEG graph construction without spatial position guidance, which cannot fully represent the dynamic changes of individualized channel connectivity in multiple domains. Therefore, a novel GCN is highly desirable to jointly characterize high-level multi-domain features and map patient-specific dynamic EEG graph representations.
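The PLV-based adjacency construction of [17] can be sketched as follows; the implementation details (Hilbert-transform phases, dense loop) are our illustrative choices, not the authors' code:

```python
import numpy as np
from scipy.signal import hilbert

def plv_adjacency(eeg):
    """eeg: (C, S) array of band-passed EEG channels.
    Returns a (C, C) phase-locking-value adjacency matrix."""
    phase = np.angle(hilbert(eeg, axis=1))   # instantaneous phase per channel
    C = eeg.shape[0]
    A = np.ones((C, C))
    for i in range(C):
        for j in range(i + 1, C):
            # PLV: magnitude of the mean phase-difference phasor.
            plv = np.abs(np.mean(np.exp(1j * (phase[i] - phase[j]))))
            A[i, j] = A[j, i] = plv
    return A
```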
Additionally, in order to integrate comprehensive feature information for precise seizure prediction, several feature fusion strategies have been designed to fuse EEG features from different scales and domains. For example, Li et al. [19] adopted a temporal-spectral squeeze-and-excitation scheme to fuse the hierarchical multi-domain representations of epileptic EEG, which reduced the information redundancy of high-dimensional features. Gao et al. [20] combined the attention mechanism with dilated convolution to aggregate spatio-temporal multi-scale features, providing a promising solution for EEG-based seizure prediction. Although these feature fusion methods obtain a comprehensive feature, they only considered the general fusion of low-level features in Euclidean space [21]. High-level EEG graph node features, embedded in non-Euclidean graph structures, urgently need a specific fusion approach to enable robust seizure prediction.
The main motivation of our study is to break through the limitations of existing prediction methods, including coarse-grained EEG features in a single domain, shallow static EEG graph construction without spatial position guidance, and difficulties in high-level graph feature fusion. Thus, we propose a novel multi-branch dynamic multi-graph convolution based channel-weighted transformer feature fusion network (MB-dMGC-CWTFFNet) for patient-specific seizure prediction. First, a multi-branch (MB) feature extractor is used to capture multi-level fine-grained features from epileptic EEG in multiple domains. Second, in order to extract multi-domain graph features, a point-wise dynamic multi-graph convolution network (dMGCN) is constructed to adaptively learn a three-view dynamic graph structure with spatial position guidance. Finally, we investigate a channel-weighted transformer feature fusion network (CWTFFNet) to efficiently fuse the multi-domain graph features, which introduces the channel-weighted self-attention mechanism to map discriminative fused representations for seizure prediction. The proposed MB-dMGC-CWTFFNet is evaluated on two epileptic datasets, i.e., the CHB-MIT EEG dataset and our Xuanwu intracranial stereo-electroencephalography (sEEG) dataset, and achieves promising performance compared with the state-of-the-art methods, which validates its outstanding capability in the seizure prediction task.
In general, the main contributions of our study are summarized as follows:

1) A novel MB-dMGC-CWTFFNet is proposed to predict seizures for the individual epilepsy patient, which can efficiently fuse multi-domain graph features, yielding the highest prediction performance on both the CHB-MIT and Xuanwu datasets.
2) We design an MB feature extractor, comprising three parallel sub-branches in the temporal, spatial and frequency domains, to jointly capture multi-level fine-grained features, which offsets the inadequate representation of coarse-grained EEG features in traditional feature extractors.
3) A dMGCN is constructed from a point-wise dynamic graph neural network, which learns the dynamic changes of three-view graph structures with spatial position guidance and extracts deep multi-domain graph features, thereby overcoming the insufficient expression of spatial connectivity in shallow static EEG graphs.
4) A CWTFFNet is developed by introducing both local and global channel-weighted self-attention into the transformer network. The local graph edge weights are complementary to the global channel position information, enabling efficient fusion of high-level graph features compared with current feature fusion strategies.
II. METHODOLOGY
The seizure prediction framework of our proposed MB-dMGC-CWTFFNet is displayed in Fig. 1. The overall architecture mainly consists of the multi-branch feature extractor, the point-wise dynamic multi-graph convolution network and the channel-weighted transformer feature fusion network, summarized as follows: 1) The MB feature extractor is designed to extract multi-domain temporal-spatial-spectral features from EEG signals. 2) The dMGCN is employed to transform the temporal-spatial-spectral features into high-level graph representations from the temporal, spatial and spectral views. 3) The CWTFFNet is adopted to obtain the fused feature maps, and fully connected layers are utilized to generate the final recognition results. The well-trained MB-dMGC-CWTFFNet is then transformed into a practical seizure warning system by a post-processing strategy. Details of each step are given in the following subsections.
A. Multi-Branch Feature Extractor
The epileptic EEG signals are defined as $E = \{(x_i, y_i) \mid i = 1, 2, \ldots, N\}$, where $x_i \in \mathbb{R}^{C \times S}$ represents the $i$-th EEG trial with $C$ channels and $S$ sampling points, $N$ is the total number of EEG trials, and $y_i$ is the binarized label (pre-ictal or inter-ictal state) corresponding to $x_i$.
Considering the individualized differences of epileptic activities in the time, frequency and spatial domains, we first construct the MB feature extractor to capture temporal-spatial-spectral representations from epileptic EEG signals. As shown in Fig. 1, the MB feature extractor includes three sub-branches: the multi-scale temporal-conv branch, the multi-band spectral-conv branch and the multi-channel spatial-encoding branch.
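For illustration, the following is a minimal PyTorch sketch of the parallel-convolution idea behind the temporal sub-branch: several temporal convolutions with different kernel sizes, each followed by batch normalization and ELU, whose outputs are concatenated. The kernel sizes, channel counts and class name are illustrative assumptions, not the configuration used in this work.

```python
import torch
import torch.nn as nn

class MultiScaleTemporalConv(nn.Module):
    """Parallel temporal convolutions over EEG trials (C channels, S samples)."""
    def __init__(self, kernel_sizes=(11, 25, 51), out_ch=8):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                # temporal-only kernel (1, k): each EEG channel is convolved in time
                nn.Conv2d(1, out_ch, kernel_size=(1, k), padding=(0, k // 2)),
                nn.BatchNorm2d(out_ch),   # accelerates training and convergence
                nn.ELU(),
            )
            for k in kernel_sizes
        ])

    def forward(self, x):                  # x: (batch, C, S)
        x = x.unsqueeze(1)                 # -> (batch, 1, C, S)
        feats = [branch(x) for branch in self.branches]
        return torch.cat(feats, dim=1)     # concatenated multi-scale temporal features

# Example: a batch of 4 trials, 18 channels, 1280 samples (5 s at 256 Hz)
f_t = MultiScaleTemporalConv()(torch.randn(4, 18, 1280))
print(f_t.shape)  # torch.Size([4, 24, 18, 1280])
```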
1) Multi-Scale Temporal-Conv Branch: Epileptic seizure recordings involve critical electrophysiological fluctuations from the inter-ictal period to the pre-ictal period [22]. In order to capture comprehensive temporal information of EEG at its high time resolution, a multi-scale temporal-conv branch is first designed with $n$ parallel temporal convolution (TConv) layers. Thus, we obtain multi-scale temporal features of different sizes from TConv-1 to TConv-$n$, denoted as $F_T^{(k)} \in \mathbb{R}^{C \times T_k}$ for $k = 1, \ldots, n$, where $T_k$ is the output scale of the temporal feature from the $k$-th TConv. Batch normalization and the exponential linear unit (ELU) are applied in each TConv of the multi-scale temporal-conv branch to accelerate the training and convergence of the proposed model. These multi-scale temporal features are then concatenated to generate the overall feature map $F_T \in \mathbb{R}^{C \times D_T}$ in the time domain.

2) Multi-Band Spectral-Conv Branch: Previous studies have proven that epileptic activities may occur at different frequencies for different epilepsy patients [23]. Thus, according to the five clinical frequency sub-bands, namely the δ band (0-4 Hz), θ band (4-8 Hz), α band (8-13 Hz), β band (13-30 Hz) and γ band (30-50 Hz) [24], the multi-band spectral-conv branch is built from hierarchical wavelet convolutions (WaveConv) based on the Daubechies order-4 (Db4) wavelet [25]. Wavelet decomposition is well suited to the EEG trials owing to its high correlation coefficients with the epileptic signal [26], and it yields the wavelet spectral features in the five sub-bands. The hierarchical WaveConv layers perform successive spectral analysis by means of an $L$-level iteration, where $L = \lfloor \log_2(f_s) \rfloor - 3$ is determined by the EEG sampling rate $f_s$, and $\lfloor \cdot \rfloor$ denotes the rounding-down operation [27]. The frequency boundaries of the $l$-th WaveConv are then $(0, f_s/2^{l+1})$ and $(f_s/2^{l+1}, f_s/2^{l})$, respectively, where $l = 1, 2, \ldots, L$.
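As a concrete illustration of this hierarchical Db4 decomposition, the sketch below uses the PyWavelets library to split a 256 Hz EEG channel into $L = \lfloor\log_2(256)\rfloor - 3 = 5$ levels; the resulting approximation and the lower detail bands roughly align with the five clinical rhythms. This is an assumption-laden stand-in for the fixed-weight WaveConv operators, not the authors' implementation.

```python
import numpy as np
import pywt  # PyWavelets

fs = 256                                  # EEG sampling rate (Hz)
L = int(np.floor(np.log2(fs))) - 3        # number of decomposition levels -> 5

x = np.random.randn(fs * 5)               # one channel of a 5-second EEG clip

# Db4 multilevel decomposition returns [cA_L, cD_L, cD_{L-1}, ..., cD_1]
coeffs = pywt.wavedec(x, 'db4', level=L)

# Approximation spans (0, fs/2**(L+1)); detail level l spans (fs/2**(l+1), fs/2**l).
# With fs = 256 the lowest five outputs approximate delta/theta/alpha/beta/gamma;
# the top (64-128 Hz) detail band can be discarded.
bands = [(0.0, fs / 2**(L + 1))] + [(fs / 2**(l + 1), fs / 2**l)
                                    for l in range(L, 0, -1)]
for (lo, hi), c in zip(bands, coeffs):
    print(f"{lo:6.1f}-{hi:6.1f} Hz : {len(c)} coefficients")
```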
After inputting the EEG trial $x_i \in \mathbb{R}^{C \times S}$ into the multi-band spectral-conv branch, we obtain the multi-band wavelet spectral features corresponding to the five standard physiological frequency sub-bands, where $H = S/2^{l}$ is the output dimension of the wavelet spectral features generated by the $l$-th WaveConv. Because this processing closely parallels time-frequency analysis with the discrete wavelet transform, the WaveConv operators have no learnable parameters during feature extraction; their weights are fixed and given by the Db4 wavelet filter. The five-band wavelet spectral features are then concatenated into the integral spectral feature map $F_R$ in the frequency domain.

3) Multi-Channel Spatial-Encoding Branch: Apart from the multi-scale temporal-conv branch and the multi-band spectral-conv branch, we also propose a multi-channel spatial-encoding branch to excavate representations of channel mapping in the spatial domain. Specifically, the multi-channel EEG trials are transposed into channel-wise slices, which are fed into the channel position encoder and the spatial feature encoder to perform channel correlation construction and spatial feature extraction, respectively. For the channel position encoder, a distance set $U$ is first established as $U = \{u_{ij} \mid i, j \in [1, C],\, i \neq j\}$, where $u_{ij}$ is the Euclidean distance between the $i$-th and $j$-th channels, $C$ is the total number of channels, and the $u_{ij}$ are obtained from the international standard electrode system [28]. The initialized channel adjacency matrix $A \in \mathbb{R}^{C \times C}$ is then generated by a position embedding method in which each element $a_{ij}$ (the $i$-th row, $j$-th column of $A$) is computed from the distances $u_{ij}$ through a mean operation $M(\cdot)$. The channel adjacency matrix $A$ therefore contains the global position information of the multi-channel relationship, and it will be used to construct the dynamic graphs in the following point-wise dMGCN. Additionally, we adopt a spatial feature encoder based on channel-wise spatial convolution [29], [30] to extract the multi-channel spatial characteristics, which are ultimately concatenated and reshaped as $F_S \in \mathbb{R}^{C \times D_S}$, where $D_S$ is the output dimension of the integrated spatial feature map.
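The sketch below shows one plausible way to build such a distance-based initial adjacency matrix from 3-D electrode coordinates. The Gaussian-kernel normalization is our illustrative assumption, since the exact position-embedding formula is not reproduced here; the function name and placeholder coordinates are likewise hypothetical.

```python
import numpy as np

def initial_adjacency(coords: np.ndarray) -> np.ndarray:
    """coords: (C, 3) electrode positions from the standard montage (assumed given).
    Returns a (C, C) adjacency in which closer channels receive larger weights."""
    diff = coords[:, None, :] - coords[None, :, :]
    u = np.linalg.norm(diff, axis=-1)            # pairwise Euclidean distances u_ij
    sigma = u[u > 0].mean()                      # mean operation M(.) as the scale
    A = np.exp(-(u ** 2) / (2 * sigma ** 2))     # assumed Gaussian position embedding
    np.fill_diagonal(A, 0.0)                     # no self-distance edges
    return A

coords = np.random.rand(18, 3)                   # placeholder for real montage coords
A = initial_adjacency(coords)
print(A.shape)                                   # (18, 18)
```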
B. Point-Wise Dynamic Multi-Graph Convolution Network
To further learn the deep dynamic connectivity of different brain regions for an individual epilepsy patient, in this subsection a novel point-wise dMGCN is proposed to extract multi-domain graph features. Three synchronized dynamic graph convolution networks are involved, one each for the temporal, spatial and spectral views; together they constitute the point-wise dMGCN in Fig. 2, and they explore the deep channel relationships of the temporal feature map $F_T$, the spatial feature map $F_S$ and the wavelet spectral feature map $F_R$, respectively. For each graph convolution view, the initialized adjacency matrix $A \in \mathbb{R}^{C \times C}$, depicting the original distance between any two channels, has been calculated by the channel position encoder of the MB feature extractor. To guide the dynamic evolution of the channel relationships in the three views, a self-gating strategy is applied to the initialized adjacency matrix:
$$\tilde{A}_T = \delta\big(W_{11}\,\sigma(W_{12}\tilde{A}_1)\big), \quad \tilde{A}_S = \delta\big(W_{21}\,\sigma(W_{22}\tilde{A}_2)\big), \quad \tilde{A}_R = \delta\big(W_{31}\,\sigma(W_{32}\tilde{A}_3)\big),$$
where $\tilde{A}_1, \tilde{A}_2, \tilde{A}_3 \in \mathbb{R}^{(C \times C) \times 1}$ are reshaped from $A \in \mathbb{R}^{C \times C}$, $W_{12}, W_{22}, W_{32} \in \mathbb{R}^{((C \times C)/r) \times (C \times C)}$ and $W_{11}, W_{21}, W_{31} \in \mathbb{R}^{(C \times C) \times ((C \times C)/r)}$ are the weight matrices of fully connected layers, $r$ is the reduction ratio, and $\delta(\cdot)$ and $\sigma(\cdot)$ are the ELU activation function and the rectified linear unit (ReLU), respectively. The three dynamic adjacency matrices $A_T, A_S, A_R \in \mathbb{R}^{C \times C}$ corresponding to the temporal, spatial and spectral graph convolution nets are then acquired by reshaping $\tilde{A}_T, \tilde{A}_S, \tilde{A}_R \in \mathbb{R}^{(C \times C) \times 1}$ back into $\mathbb{R}^{C \times C}$. After constructing the dynamic connectivity of epileptic activities from the three views, dynamic graph convolution is performed on the temporal feature map $F_T \in \mathbb{R}^{C \times D_T}$, the spatial feature map $F_S \in \mathbb{R}^{C \times D_S}$ and the wavelet spectral feature map $F_R \in \mathbb{R}^{C \times D_R}$:
$$G_v = \hat{D}_v^{-\frac{1}{2}} A_v \hat{D}_v^{-\frac{1}{2}} F_v\,\Theta_{v1}\Theta_{v2}, \qquad v \in \{T, S, R\},$$
where $G_T, G_S, G_R$ are the dynamic graph features corresponding to $F_T, F_S, F_R$ with the hidden non-Euclidean topology of epileptic activities, $\hat{D}_T, \hat{D}_S, \hat{D}_R$ are the degree matrices corresponding to $A_T, A_S, A_R$, respectively, and $\Theta_{11}, \Theta_{12} \in \mathbb{R}^{D_T \times D_T}$, $\Theta_{21}, \Theta_{22} \in \mathbb{R}^{D_S \times D_S}$ and $\Theta_{31}, \Theta_{32} \in \mathbb{R}^{D_R \times D_R}$ represent the weight matrices of the convolution kernels in the point-wise convolution unit [31]. We thereby obtain the dynamic multi-domain graph features $G_T, G_S, G_R$ with their corresponding dynamic adjacency matrices $A_T, A_S, A_R$, which are fed into the CWTFFNet for the final feature fusion described in the next subsection.
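A minimal PyTorch sketch of one view of such a self-gated dynamic graph convolution is given below. The exact composition of the gate, the degree normalization applied to a possibly signed adjacency, and the use of a single point-wise weight matrix are simplifying assumptions on our part.

```python
import torch
import torch.nn as nn

class DynamicGraphConv(nn.Module):
    """One view of a self-gated dynamic graph convolution over C channels (sketch)."""
    def __init__(self, C: int, D: int, r: int = 4):
        super().__init__()
        n = C * C
        self.gate = nn.Sequential(            # self-gating over the flattened adjacency
            nn.Linear(n, n // r), nn.ReLU(),  # W_v2 with reduction ratio r
            nn.Linear(n // r, n), nn.ELU(),   # W_v1 back to C*C
        )
        self.theta = nn.Linear(D, D)          # point-wise convolution kernel

    def forward(self, F: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
        C = A.shape[0]
        A_dyn = self.gate(A.reshape(-1)).reshape(C, C)        # dynamic adjacency
        deg = A_dyn.abs().sum(dim=1).clamp(min=1e-6)          # degree (abs: ELU can be < 0)
        d_inv_sqrt = deg.rsqrt()
        A_norm = d_inv_sqrt[:, None] * A_dyn * d_inv_sqrt[None, :]  # D^-1/2 A D^-1/2
        return A_norm @ self.theta(F)         # graph features G = A_norm F Theta

gcn = DynamicGraphConv(C=18, D=64)
G = gcn(torch.randn(18, 64), torch.rand(18, 18))
print(G.shape)  # torch.Size([18, 64])
```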
C. Channel-Weighted Transformer Feature Fusion Network
To fuse the high-level graph features $G_T, G_S, G_R$, the CWTFFNet is proposed by combining the dynamic adjacency matrices $A_T, A_S, A_R$ with a multi-head self-attention mechanism. As shown in Fig. 3, the CWTFFNet comprises a local channel-weighted multi-head self-attention (Local CW-MHSA), a global channel-weighted feature fusion block (Global CW-FFB) and a multi-layer perceptron (MLP).
The Local CW-MHSA consists of three heads: the $A_T$-weighted self-attention unit (SAU), the $A_S$-weighted SAU and the $A_R$-weighted SAU. For each self-attention head, three weight matrices $W_Q$, $W_K$ and $W_V$ are introduced to encode the input graph features, where $d_K$ and $d_V$ are the key and value dimension hyperparameters. The query $Q$, the key $K$ and the value $V$ are calculated as
$$Q = G W_Q, \qquad K = G W_K, \qquad V = G W_V,$$
where $G \in \{G_T, G_S, G_R\}$ indicates the three kinds of graph features. We introduce the local channel-weighted strategy by applying the dynamic adjacency matrices within the multi-head self-attention mechanism. The local features $Z_{local} \in \mathbb{R}^{C \times d_K}$ from the Local CW-MHSA are then obtained by concatenating the outputs $Z_T$, $Z_S$ and $Z_R$ of the three self-attention heads with the concatenation function $\mathrm{Concat}(\cdot)$. To further capture global information, the Global CW-FFB and the MLP are applied to the local features $Z_{local}$. In the Global CW-FFB, $A$ is the initialized adjacency matrix from the channel position encoder, $GAP(\cdot)$ represents global average pooling, $LN(\cdot)$ denotes layer normalization, $ReLU(\cdot)$ is the rectified linear unit, $F^1_{fc}$ and $F^2_{fc}$ denote the fully connected layers, and $FM(\cdot)$ is a feedforward module comprising two feedforward layers and an ELU activation. Through the constructed CWTFFNet we acquire the fused features $Z_{fused} \in \mathbb{R}^{C \times d_K}$. Finally, two fully connected layers decode the fused features: they are flattened into a one-dimensional tensor and passed through the fully connected layers, the classification probabilities of the pre-ictal and inter-ictal states are estimated by the Softmax function, and the index of the maximum probability gives the final seizure prediction result. Moreover, a cross-entropy loss is employed for patient-specific training of the proposed MB-dMGC-CWTFFNet; the cross-entropy loss $L_{CE}$ between the prediction result and the label is minimized as
$$L_{CE} = -\frac{1}{N}\sum_{i=1}^{N}\sum_{j=1}^{M} \phi\big(y_i = l_j\big)\log p_i + \lambda\,\lVert\theta\rVert,$$
where $p_i$ is the conditional probability of the $i$-th EEG trial output by the proposed MB-dMGC-CWTFFNet, $l_j$ is a class from the label set, $\phi(\cdot)$ is the indicator function, $N$ is the total number of samples, and $M = 2$ is the number of classes. The term $\lambda\lVert\theta\rVert$ is the trade-off regularization term, which alleviates overfitting during model training, where $\lambda$ is the regularization parameter and $\theta$ denotes the updatable parameters of the model. As a result, a personalized well-trained MB-dMGC-CWTFFNet model is generated, which performs individual seizure prediction via the following post-processing [32].
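The following sketch illustrates one way an adjacency matrix can re-weight a self-attention head. Treating the channel weighting as an element-wise product on the attention map is our assumption about the "CW" operation, and the class and parameter names are our own.

```python
import torch
import torch.nn as nn

class ChannelWeightedSAU(nn.Module):
    """Self-attention unit whose attention map is re-weighted by a (C, C) adjacency."""
    def __init__(self, d_in: int, d_k: int):
        super().__init__()
        self.W_q = nn.Linear(d_in, d_k, bias=False)
        self.W_k = nn.Linear(d_in, d_k, bias=False)
        self.W_v = nn.Linear(d_in, d_k, bias=False)

    def forward(self, G: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
        Q, K, V = self.W_q(G), self.W_k(G), self.W_v(G)       # each (C, d_k)
        attn = torch.softmax(Q @ K.T / K.shape[-1] ** 0.5, dim=-1)
        attn = attn * A                                        # local channel weighting
        return attn @ V                                        # (C, d_k)

sau = ChannelWeightedSAU(d_in=64, d_k=32)
Z = sau(torch.randn(18, 64), torch.rand(18, 18))               # one head, e.g. A_T-weighted
print(Z.shape)  # torch.Size([18, 32])
```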
D. Post-Processing Strategy
Eventually, the well-trained MB-dMGC-CWTFFNet is transformed into a practical seizure warning system by a post-processing strategy [33]. Specifically, after inputting consecutive EEG signals into the well-trained MB-dMGC-CWTFFNet, the probability series $P(i)$ of the pre-ictal class for the $i$-th epoch is generated. We then apply a moving average filter to $P(i)$ to reduce oscillation and obtain the smoothed probability series $P_s(i)$ over time [32]. The lengths of the moving average filter are set to 15 s and 25 s for the CHB-MIT and Xuanwu datasets respectively, as discussed with the experimental results in Section IV-B.
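A minimal NumPy sketch of this post-processing is shown below: the probability series is smoothed with a moving-average filter, and a warning is raised wherever the smoothed probability exceeds a threshold ω (Section IV-B). The function name is our own, and whether the filter length is counted in epochs or seconds depends on the epoch step, which we leave as an assumption here.

```python
import numpy as np

def smooth_and_alarm(p, filter_len=15, omega=0.6):
    """p: per-epoch pre-ictal probabilities. Returns (smoothed series, alarm mask)."""
    kernel = np.ones(filter_len) / filter_len        # moving-average filter
    p_s = np.convolve(p, kernel, mode='same')        # smoothed probability P_s(i)
    return p_s, p_s > omega                          # warn wherever P_s exceeds omega

# Synthetic probability trace that drifts upward toward a seizure
p = np.clip(np.random.rand(600) * 0.5 + np.linspace(0, 0.5, 600), 0, 1)
p_s, alarms = smooth_and_alarm(p, filter_len=15, omega=0.6)
print(int(alarms.sum()), "epochs above threshold")
```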
A. Dataset Description
The performance of the proposed MB-dMGC-CWTFFNet is evaluated on two epileptic datasets, described as follows: 1) CHB-MIT Scalp EEG Dataset [34]: The CHB-MIT dataset contains scalp EEG signals from 23 patients, recorded with 18 common electrodes and sampled at 256 Hz at the Children's Hospital Boston. In this study, patients with at least two seizures and at least three hours of inter-ictal recordings were selected for patient-specific model evaluation of seizure prediction [4]. In addition, the neural recordings within two hours after a seizure are removed to exclude the effect of the post-ictal period [32]. Specifically, if several seizures cluster within two hours, only the prediction of the first seizure is considered an effective evaluation, because a successful warning depends on whether the model can predict the leading seizure [35].
2) Xuanwu Intracranial sEEG Dataset: The Xuanwu dataset was collected by the Xuanwu Hospital of Capital Medical University, Beijing, China, and consists of sEEG recordings from intracranial depth electrodes in 5 focal epilepsy patients, sampled at 256 Hz with 15 channels. As shown in Table I, there are 16 seizures in total, and the sEEG recording duration for these patients is 42 hours. The labels of the inter-ictal, pre-ictal and ictal states were marked by professional clinicians. This study was approved by the Ethics Committee of Xuanwu Hospital, Capital Medical University (LYS2018041) in Beijing and complied with the ethical standards of the Declaration of Helsinki. Informed consent was obtained from all patients.
B. Experimental Settings and Evaluation Metrics
In this study, based on recent research [4], [11], the EEG signals from the CHB-MIT and Xuanwu datasets are both cropped into 5-second clips before being fed into the proposed MB-dMGC-CWTFFNet. Additionally, the pre-ictal period has commonly been defined as the 15 minutes before seizure onset in the latest methods [4], [32]. Thus, we adopt the identical setting for the pre-ictal period, and the inter-ictal period is defined as at least 2 hours before seizure onset and after seizure ending [10].
To conduct a comprehensive performance evaluation of the proposed MB-dMGC-CWTFFNet, patient-specific leave-one-out cross-validation (LOOCV) [20] is employed in this study. Since the inter-ictal period is much longer than the pre-ictal period, in the model training stage the inter-ictal clips are randomly down-sampled to the same number as the pre-ictal clips [8]. Then, assuming there are $N_i$ seizures in total for the $i$-th patient, in each leave-one-out loop $N_i - 1$ seizures are utilized for training while the remaining one is used for testing; during the training stage, cross-validation is used to divide the training set and validation set. This is repeated for $N_i$ loops until the proposed model completes the prediction evaluation of all seizures for the $i$-th patient. Since the number of seizures and the recording duration differ for each patient in the two datasets, for each leave-one-out loop the number of training samples varies from 3028 to 18241, the validation data size from 572 to 3543, the testing data size from 3131 to 7508, and the total data size from 6731 to 29292. The proposed MB-dMGC-CWTFFNet is evaluated via four metrics: area under the curve (AUC), sensitivity ($S_n$), false prediction rate (FPR/h) and the p-value. AUC mainly reflects the classification performance for the inter-ictal and pre-ictal states. $S_n$ denotes the ratio of successfully predicted seizures to the total number of seizures. FPR/h indicates the number of false alarms per hour, and the p-value represents the significance of the improvement over chance level, which is used to statistically evaluate whether the seizure warning system is better than a random predictor [4].
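To make the evaluation concrete, the sketch below computes AUC with scikit-learn and derives the event-level $S_n$ and FPR/h from alarm outcomes. The aggregation details (how per-seizure flags and false-alarm counts are obtained) are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def event_metrics(predicted_flags, n_seizures, false_alarms, interictal_hours):
    """S_n: predicted seizures / total seizures; FPR/h: false alarms per inter-ictal hour."""
    s_n = sum(predicted_flags) / n_seizures
    fpr_h = false_alarms / interictal_hours
    return s_n, fpr_h

# Clip-level AUC on synthetic labels/scores (inter-ictal = 0, pre-ictal = 1)
y_true = np.random.randint(0, 2, 500)
y_score = np.clip(y_true * 0.6 + np.random.rand(500) * 0.4, 0, 1)
print("AUC    :", round(roc_auc_score(y_true, y_score), 3))
print("Sn,FPR :", event_metrics([1, 1, 0, 1], n_seizures=4,
                                false_alarms=2, interictal_hours=30))
```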
C. Overall Performance
In order to illustrate the patient-specific prediction efficiency of the proposed MB-dMGC-CWTFFNet, we compare our proposed model with the following state-of-the-art methods at the same chance level, all tested on the two datasets.
1) DCNN-Bi-LSTM [8]: This is a typical deep learning method combining a deep convolutional network with a bidirectional long short-term memory, extracting the spatial and temporal features of epileptic EEG signals respectively for seizure prediction. 2) CE-stSENet [19]: This method used a temporal-spectral squeeze-and-excitation network to capture hierarchical multi-domain representations, introducing the attention mechanism into the epileptic seizure detection task and improving recognition performance. 3) TS-MS-DCNN [20]: This advanced model encoded multi-scale EEG features by designing temporal and spatial multi-scale stages, and constructed a dilated convolution block to further expand the receptive field, achieving EEG-based seizure prediction.
TABLE II: THE PATIENT-SPECIFIC OVERALL COMPARISON OF PERFORMANCE ON THE CHB-MIT DATASET
TABLE III: THE OVERALL COMPARISON OF PERFORMANCE ON THE XUANWU DATASET
The experimental results of the patient-specific comparison on the public CHB-MIT and our Xuanwu datasets are listed in Table II and Table III, respectively. From Table II, we can observe that DCNN-Bi-LSTM, CE-stSENet and TS-MS-DCNN attain average AUCs of 0.865, 0.857 and 0.890 respectively on the CHB-MIT dataset, while our proposed MB-dMGC-CWTFFNet reaches the highest average AUC of 0.935. Especially for Patients 1, 8, 13 and 23, the AUCs are all greater than 0.985, indicating the excellent performance of our method in distinguishing between the inter-ictal and pre-ictal states. In the seizure prediction scenario, the average sensitivity of our proposed model also achieves an ideal 97.8%, outperforming the three baseline methods by 7.1%, 11.8% and 6.3%, respectively. This distinct advancement in $S_n$ demonstrates that our proposed model, when transformed into the seizure warning system, performs more successful seizure warning for an individual patient. In addition, our proposed model yields the lowest average FPR/h of 0.059, at least a 58.7% improvement over the other methods. Meanwhile, the p-value of our seizure warning system is less than or equal to 0.001 for all patients, implying that the improvement-over-chance of our seizure predictor is statistically significant at the 99.9% confidence level. This indicates that our proposed MB-dMGC-CWTFFNet has a significant patient-specific capability for epileptic seizure prediction.
Additionally, to further validate the effectiveness of our proposed method, Table III lists the prediction results for five focal epilepsy patients on the Xuanwu dataset. It is obvious that our proposed model achieves excellent performance on AUC and $S_n$, with averages of 0.984 and 100% respectively, which are at least 5.1% and 10.0% higher than those of the state-of-the-art models. The average FPR/h of our method is 0.079, lower than that of the other methods. These encouraging experimental results demonstrate the remarkable performance ($p < 0.05$) of our MB-dMGC-CWTFFNet framework in the intracranial seizure prediction task, which makes it possible to predict seizures with implanted intracranial depth electrodes and enables more convenient treatment for refractory epilepsy patients [36].
A. Ablation Studies
To assess the contribution of each component of our proposed MB-dMGC-CWTFFNet, ablation studies are conducted on both the CHB-MIT and Xuanwu datasets. In this subsection, we discuss the efficacy of each innovation by comparing the proposed method with and without that component, which helps justify its positive influence. The overall experimental results of the ablation studies are presented in Table IV, and the impacts of the MB feature extractor, the point-wise dMGCN and the CWTFFNet are demonstrated respectively as follows:
TABLE IV ABLATION STUDIES ON TWO DATASETS
1) Impact of MB Feature Extractor: In order to give a comprehensive assessment of the proposed MB feature extractor, we compare our MB-dMGC-CWTFFNet with three simplified sub-models: a) the model without temporal-conv; b) the model without spatial-encoding; c) the model without spectral-conv. From Table IV and Fig. 4, when using the temporal-conv to extract multi-scale temporal features on the two datasets, the AUC of our MB-dMGC-CWTFFNet increases by 2.6% and 3.1% respectively compared with the model without temporal-conv. The $S_n$ also improves by 3.0% on the CHB-MIT dataset and 15.0% on our Xuanwu dataset. Additionally, the FPR/h declines by 79.9% and 45.9% on the two datasets respectively, which illustrates the utility of the temporal-conv in capturing multi-scale temporal evolution and indicates the effectiveness of the MB feature extractor in extracting fine-grained temporal features.
Meanwhile, the spatial-encoding also plays an important role in the extraction of multi-channel spatial features. For instance, compared with the model without spatial-encoding, the AUCs of the proposed model increase from 0.907 and 0.948 to 0.935 and 0.984 on the two datasets respectively, and the $S_n$ improves from 94.8% and 85% to 97.8% and 100%. The FPR/h decreases from 0.337 and 0.243 to 0.059 and 0.079. This verifies that the spatial-encoding branch enables exact spatial expression of cortical multi-channel representations, contributing to seizure prediction with distinct improvements in the performance metrics.
In addition to the above two branches of the MB feature extractor, the effect of the spectral-conv is further discussed. From Table IV, we can find that the proposed method with spectral-conv shows better performance. For the CHB-MIT dataset, its AUC and $S_n$ are 4.8% and 4.4% higher than those of the model without spectral-conv, and the FPR/h declines by 88.5%. Similarly, when using the spectral-conv on the Xuanwu dataset, our MB-dMGC-CWTFFNet achieves improvements of 2.3% in AUC and 5.0% in $S_n$ over the model without spectral-conv, whose FPR/h is reduced by 37.8% accordingly. These evaluation results also prove that the designed spectral-conv can extract comprehensive spectral characteristics in the five clinical physiological rhythms, which facilitates the construction of the patient-specific MB feature extractor in combination with the temporal-conv and spatial-encoding branches.
Moreover, to intuitively validate the superiority of the proposed MB feature extractor, t-SNE is applied to visualize the temporal-spatial-spectral features extracted by the models with and without the MB feature extractor. The t-SNE visualization in 2D embedding space of inter-ictal and pre-ictal features on the two datasets is shown in Fig. 5. We can see that the binary-class feature distributions learned by the MB feature extractor present better discrimination than those of the model without the MB feature extractor on both the CHB-MIT and Xuanwu datasets. Especially for the models without spatial-encoding, some inter-ictal and pre-ictal features are confused together. In contrast, the proposed model using the MB feature extractor obtains more discriminative features, embodied in visible inter-class distance and dense intra-class distributions on both datasets. These phenomena also show that the best seizure prediction performance is produced by combining the temporal-conv, spatial-encoding and spectral-conv branches simultaneously, fully illustrating the value of the MB feature extractor in jointly extracting multi-level fine-grained features.
2) Influence of Point-Wise dMGCN: To judge the contribution of the proposed point-wise dMGCN, we perform an ablation experiment investigating its influence on patient-specific seizure prediction. Fig. 6 shows the comparison of AUC between the proposed models with and without the point-wise dMGCN for each patient. For the CHB-MIT dataset, the average AUC of our MB-dMGC-CWTFFNet is 7.3% higher than that of the model without the point-wise dMGCN. In particular, a maximum AUC increase of 0.18 (about a 22.5% improvement) occurs for Patient 13. For the Xuanwu dataset, the AUC of our model increases by about 7.5% compared with the model without the point-wise dMGCN. Moreover, from Table IV, after employing the point-wise dMGCN, the $S_n$ of our model outperforms that of the ablation model by 8.1% and 16.7% on the two datasets respectively, and the FPR/h decreases from 0.326 to 0.059 on CHB-MIT and from 0.651 to 0.079 on Xuanwu. These enhancements provide substantial evidence that the point-wise dMGCN can better learn the three-view graph structures with spatial position guidance and extract deep multi-domain graph features, which promotes the overall seizure prediction and warning performance.
3) Efficacy of CWTFFNet: In order to fuse the dynamic multi-domain graph features, the CWTFFNet is adopted to integrate the local and global representations based on the channel-weighted self-attention mechanism. We therefore compare the efficacy of our MB-dMGC-CWTFFNet against the model without CWTFFNet. The results of the ablation experiment on the two datasets are presented in Fig. 7. It can be noted that the CWTFFNet increases the average AUCs from 0.880 and 0.936 to 0.935 and 0.984 on the two datasets, respectively, and the standard deviations are 0.013 and 0.032 lower than those of the model without CWTFFNet, indicating the better generalization performance of our proposed CWTFFNet across multiple patients. Especially for Patient 7 from CHB-MIT and Patient 3 from Xuanwu, the AUCs achieve greater improvements of 21.0% and 13.7%, respectively. In addition, compared with the model without CWTFFNet in Table IV, the proposed model gains a higher $S_n$ by 7.5% on CHB-MIT and 13.3% on Xuanwu, and the FPR/h declines by 0.135 and 0.227 after utilizing CWTFFNet on the two datasets, which proves the advantage of incorporating the proposed CWTFFNet. To further evaluate how the proposed CWTFFNet contributes to seizure prediction performance, we compare the prediction time with and without CWTFFNet on CHB-MIT and Xuanwu respectively; the results are shown in Fig. 8. The proposed models with and without CWTFFNet both successfully implement seizure prediction in the pre-ictal periods. However, for the identical seizure from the CHB-MIT dataset, the model without CWTFFNet achieves seizure prediction only 5 minutes prior to seizure onset, while our proposed model using CWTFFNet obtains a 9-minute advance in prediction time. For the Xuanwu dataset, the prediction time is improved from 13 minutes to 15 minutes before seizure onset. These improvements in seizure prediction time strongly embody the innovative contribution of CWTFFNet. Specifically, the channel-weighted transformer in our CWTFFNet reinforces the learning of the model through the multi-channel-weighted self-attention mechanism. Accordingly, the fused features, containing local multi-graph structures and global channel information, are more conducive to seizure prediction.
B. Parameter Analysis of Post-Processing
In this subsection, we analyze the influence of two post-processing hyperparameters, the filter length and the threshold $\omega$, on the seizure warning system derived from our MB-dMGC-CWTFFNet. In the post-processing, the moving average filter smooths the probability outputs of our model by filtering outliers, resulting in practical seizure prediction results. Hence, the filter length of the moving average filter is varied from 5 to 60 in steps of 5 (unit: seconds), and the corresponding variations of $S_n$ and FPR/h on the two datasets are shown in Fig. 9(a). Interestingly, on both datasets a larger filter length leads to unsatisfactory $S_n$, while a smaller filter length causes poor FPR/h. The main reason is that a large filter length may over-smooth the prediction results, so that some short-duration warnings are probably missed, whereas a small filter length may retain more predicted outliers, which greatly increases the probability of false alarms [32]. Consequently, to maintain the trade-off between $S_n$ and FPR/h, the filter lengths are configured to 15 s for the CHB-MIT dataset and 25 s for the Xuanwu dataset.
TABLE V EXPERIMENTAL SETTINGS AND PERFORMANCE COMPARISON OF THE STATE-OF-THE-ART METHODS ON CHB-MIT
Since the seizure warning depends on whether the predicted probability exceeds the threshold $\omega$, we analyze this hyperparameter to evaluate the model's sensitivity to it. The threshold $\omega$ is varied in the range from 0.1 to 0.9 with a stride of 0.1, and the performance trade-off between $S_n$ and FPR/h is displayed in Fig. 9(b). As can be observed, the variation trends of the two evaluation metrics with the threshold $\omega$ are similar to those with the filter length. The best trade-off between $S_n$ and FPR/h appears at a threshold of 0.6 on both datasets. Thus, to achieve optimal seizure prediction after post-processing, we set $\omega$ to 0.6 as the final fixed threshold for the CHB-MIT and Xuanwu datasets, which is consistent with existing studies [9], [32].
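Such a trade-off analysis can be scripted as a simple grid sweep over the two post-processing hyperparameters, reusing the smoothing-and-thresholding logic sketched in Section II-D. The evaluation callback and its toy implementation below are placeholders for the real event-level scoring.

```python
import numpy as np

def sweep_postprocessing(p, evaluate):
    """Grid-search filter length (5-60 s, step 5) and threshold (0.1-0.9, step 0.1).
    `evaluate` is assumed to map an alarm mask to (S_n, FPR_per_hour)."""
    results = {}
    for flen in range(5, 65, 5):
        kernel = np.ones(flen) / flen
        p_s = np.convolve(p, kernel, mode='same')
        for omega in np.arange(0.1, 1.0, 0.1):
            results[(flen, round(float(omega), 1))] = evaluate(p_s > omega)
    return results

# Dummy evaluator: more alarms -> higher S_n but also worse FPR/h
p = np.random.rand(3600)
res = sweep_postprocessing(p, lambda a: (min(1.0, a.mean() * 4), a.mean() * 10))
print(res[(15, 0.6)])
```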
C. Performance Comparison with the State-of-the-Art Methods
The performance comparison of state-of-the-art seizure prediction methods on the CHB-MIT dataset is summarized in Table V. In order to discuss the advantages of our proposed model, we conduct an objective comparative analysis among these methods. For example, Truong et al. [10] and Yang et al. [35] both employed the short-time Fourier transform (STFT) in CNN-based seizure prediction frameworks, which were tested on 13 patients and achieved sensitivities of 81.2% and 89.25% respectively, lower than our MB-dMGC-CWTFFNet. This is mainly because our proposed MB feature extractor can extract multi-level fine-grained features, in contrast to traditional time-frequency feature extraction methods. Compared with two deep learning methods using spectral power [4] and the common spatial pattern (CSP) [11] respectively, our proposed method applies the point-wise dMGCN to learn three-view graph structures and capture deep multi-domain graph features, so it yields 10.8% and 5.81% higher $S_n$ and 0.127 and 0.061 lower FPR/h. Unlike the study [20] that fused multi-scale temporal-spatial features with an attention-mechanism-based dilated CNN, our MB-dMGC-CWTFFNet introduces the local and global channel-weighted strategy into the multi-head self-attention units, which benefits efficient feature fusion for complex graph structures and outperforms TS-MS-DCNN by 4.51% in $S_n$. Although some advanced methods [9], [15] attained suboptimal performance in seizure prediction, their validation scheme using 10-fold CV shuffled the original EEG signals and destroyed the continuity of epileptic activity over time, which is less suited to real-time seizure warning than our adopted LOOCV scheme. Additionally, compared with the model proposed by Liang et al. [37], our MB-dMGC-CWTFFNet achieves 9.01% higher $S_n$ and 0.123 lower FPR/h. Because our proposed method considers multi-domain variable information and constructs the multi-graph framework, it offsets the lack of partial-domain information from the feature alignment strategy in SSDA-SPM. In summary, compared with most existing studies, the main differences and advantages of our MB-dMGC-CWTFFNet are that it can dynamically learn changes in multi-graph topologies with spatial position guidance, and that it efficiently fuses multi-domain graph features by using the channel-weighted multi-head self-attention mechanism.
D. Limitations and Future Directions
Although our proposed prediction framework achieves satisfactory seizure prediction performance, two limitations remain in our study. First, our MB-dMGC-CWTFFNet realizes end-to-end seizure warning without complicated EEG pre-processing, but artifacts in epileptiform discharges and potentially bad channels may interfere with the predictor and cause false positives in practical warning scenarios. Therefore, we will devote future work to exploring adaptive channel selection [38], [39] and unsupervised artifact removal algorithms [40] to further eliminate the redundant information in raw epileptic signals. Second, our proposed method conducts patient-specific seizure prediction by training on the same patient's data, and it is difficult to fine-tune the model across patients. Thus, in future work we will combine domain-adversarial transfer learning strategies [41], [42] with our seizure prediction framework, aiming to handle the distribution drift between the target and source domains and to enable cross-patient seizure prediction.
V. CONCLUSION
In this study, we propose a novel EEG-based MB-dMGC-CWTFFNet framework for patient-specific seizure prediction. The MB feature extractor is adopted to effectively capture multi-level fine-grained representations in multiple domains. The designed point-wise dMGCN is further employed to dynamically learn deep graph structures with spatial position guidance, which contributes to extracting multi-domain graph features from the temporal, spatial and spectral views. Finally, the CWTFFNet utilizes the local and global channel-weighted strategy to facilitate efficient fusion of the high-level graph features. Furthermore, we conduct comparative experiments on two epileptic datasets, and the results show that our proposed MB-dMGC-CWTFFNet obtains better evaluation metrics, with AUC, $S_n$ and FPR/h reaching 0.935 and 0.984, 97.8% and 100.0%, and 0.059 and 0.079 on the CHB-MIT and Xuanwu datasets respectively, outperforming the state-of-the-art methods. These findings prove the outstanding performance of our proposed MB-dMGC-CWTFFNet in patient-specific seizure prediction and indicate its potential application in the neurostimulation treatment of refractory epilepsy patients.
Fig. 4. Performance comparison of AUC between the models with and without MB feature extractor on CHB-MIT and Xuanwu datasets.
Fig. 5. The t-SNE visualization in 2D embedding space of inter-ictal and pre-ictal features, comparing the models with and without MB feature extractor on CHB-MIT and Xuanwu datasets.
Fig. 6. Performance comparison of AUC between the proposed models with and without point-wise dMGCN on (a) CHB-MIT dataset and (b) Xuanwu dataset.
Fig. 7. Performance comparison of AUC between the proposed model with and without CWTFFNet on (a) CHB-MIT dataset and (b) Xuanwu dataset.
Fig. 8. Performance comparison of the seizure prediction time with and without CWTFFNet from (a) CHB-MIT and (b) Xuanwu dataset.
Fig. 9. Performance comparison of $S_n$ and FPR/h with different post-processing parameters: (a) filter length; (b) threshold.
"Medicine",
"Computer Science",
"Engineering"
] |
Plant factory technology lights up urban horticulture in the post-coronavirus world
© The Author(s) 2022. Published by Oxford University Press on behalf of Nanjing Agricultural University. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited. Horticulture Research, 2022, 9: uhac018
Dear Editor,
The pandemic of novel coronavirus disease 2019 (COVID-19) has highlighted the critical importance of ensuring a consistent supply of horticultural products (e.g. vegetables and fruits) [1]. Worldwide quarantine and social distancing led to transportation disruptions, labor shortages, and limited access to local markets, all of which had a significant impact on the production, post-harvest processing, distribution, and consumption of horticultural products in urban areas. Moreover, the traditional agricultural approach is currently facing the unprecedented challenge of feeding an expanding population, as approximately 6.7 billion people are expected to live in urban areas by 2050. Rapid urbanization brings great challenges to horticultural production: gradual shrinking of arable land, declining numbers of agricultural practitioners, reduced availability of irrigation water for farming, increased costs of food transportation, and exacerbation of environmental deterioration. Thus, the supply of horticultural products to urban areas will depend critically on whether such farming systems can enable steady and effective production, a stable and balanced supply, shortened distribution chains, and consistent availability and accessibility of products without compromising safety concerns. In this regard, plant factories with artificial light (PFALs) represent an innovative and promising production system that has shown great potential for stable, effective production of horticultural products both during and after the COVID-19 crisis.
PFALs are multi-layer indoor farming systems that employ green and sustainable crop cultivation techniques, including vertical cultivation, optimized lighting recipes, energy-conserving technology, and intelligent control systems to enable horticultural production regardless of climatic and geographic conditions (Fig. 1a) [2,3]. The global indoor farming market reached 32.3 billion USD by 2020, with over 500 PFALs being operated commercially in China, Japan, Singapore, the US, and the UK. Simultaneously, there has been intensive research on increasing the productivity and sustainability of PFALs and on producing high-quality horticultural products in PFALs in recent years.
Vertical cultivation technology significantly increases horticultural productivity through the development of new lightweight materials for structural bracing frames and high-rise modular assembly layers and the application of operating technologies (e.g. operating machinery, auxiliary robots, and automation equipment) [3]. In addition, the enhanced light efficacy and optimization of coupled environmental factors significantly promote plant growth and photosynthesis in PFALs. Numerous investigations have focused on developing "lighting recipes" to optimize lighting conditions (e.g. intensity, spectrum, and photoperiod) for a variety of purposes, including high yields, quality, and energy efficiency, depending on the plant species and the stage of growth and development. Several studies have aimed to apply advanced lighting systems to maximize plant light interception and provide uniform illumination across all leaf surfaces while simultaneously co-optimizing other environmental factors [3,4].
Improving energy use efficiency to reduce production costs and maximize economic benefits is crucial for the environmentally sustainable development of PFALs. For instance, establishing energetic fluxes and applying a flexible yield-energy model could help researchers understand the energy balance and optimize the control strategy for weather conditions in order to minimize energy consumption [5]. Moreover, it will be important to reduce electricity consumption for lighting and air conditioning by implementing multi-factor refined management, generating electrical energy from clean energy sources (e.g. solar, wind, biomass, and geothermal energy), and storing excess electricity in stationary batteries for time-adjustable usage with higher efficiency.

Figure 1. A, A typical PFAL is composed of several systems, including an external building envelope, a cultivation system, an environmental control system (temperature, humidity, and CO2), an air ventilation and purification system, artificial lighting, a fertigation system, an intelligent control system, and accessory machines. PFALs demonstrate the following advantages: i) vertical cultivation technology that increases the productivity of horticultural crops, with an annual lettuce production per m2 of 120 kg, more than 40 times that of the open field [2]; ii) high-yield and high-quality products that are free from contamination and contain high concentrations of phytochemicals; iii) closed cultivation systems that maximize the use of internal resources (e.g. water use efficiency >96%, fertilizer use efficiency >80%) [2]; iv) high ability to control the microenvironment around the crops, enabling easy monitoring of plant development; v) continuous production throughout the year; vi) no geographical restrictions, allowing for shorter transportation distances and a smaller carbon footprint during transportation; vii) higher levels of mechanization with reduced labor costs; and viii) a stable food supply regardless of the climate and pandemics. B, In the post-coronavirus era, PFALs will serve a variety of purposes in urban agriculture. PFALs have the potential to contribute significantly to the supply of urban horticultural products, the production of high-quality food, and the alternative manufacture of plant-based biopharmaceuticals. They could be associated with urban activities, enriching a plentiful "vegetable basket" and offering individuals a healthy diet.
Rapid development of Internet of Things (IoT) devices, including optical, mechanical, and electrochemical IoT sensors, communication technologies, methods for massive Internet connectivity, and data storage and processing units, has paved the way for IoT applications in PFALs. In addition, artificial intelligence-based data analytics is a powerful tool for automation and decision-making in agriculture, and IoT-enabled accessory machinery can help increase crop productivity and reduce human resource requirements [6]. Future intelligent control systems for PFALs are expected to incorporate multiple platforms with functions such as order management, seed plot scheduling, production management, environmental control, energy consumption measurement, intelligent decision support, and market forecasting, thus enabling the complete control of unmanned PFALs to achieve cost-effectiveness, conserve resources and energy, and reduce labor requirements.
Cultivating nutritious and healthy horticultural products will be an important goal for agricultural development in the future, especially in the post-coronavirus world. Breeding cultivars that are suited to the constraints imposed by PFALs could provide a breakthrough, with the goal of selecting cultivars that are fast growing, high yielding, high quality, functional, and tolerant of low light with a high edible fraction and high light and nutrient use efficiency. For example, Kwon et al. (2020) bred a compact, early-flowering tomato variety suitable for urban agriculture by editing tomato genes for stem length regulation (SlER), condensed shoots, rapid flowering (SP5G), and precocious growth termination (SP) [7].
PFALs serve a variety of functions in urban horticulture, including provisioning (e.g. horticultural produce and biopharmaceutical supply), social activity (e.g. popularization of horticultural knowledge), and cultural activity (e.g. recreation and amenities) (Fig. 1b). Horticultural plants cultivated in PFALs contribute to a plentiful "vegetable basket" and healthy diets for urban dwellers. Because pre-harvest factors such as lighting, temperature, humidity, CO 2 , fertilizers, and fertigation conditions can be precisely adjusted in PFALs, it is possible and feasible to produce horticultural products with multiple objectives, including high concentrations of specific phytochemicals and high-yield products. Vaccines have played a critical role in preventing disease during the COVID-19 pandemic. Currently, PFAL-based manufacturing systems are recognized as a viable alternative source of biopharmaceutical materials and vaccine candidates, and they are being proposed for the automated, standardized, high-yield production of a variety of biopharmaceuticals, including peptide antigens, recombinant vaccines, and virus-like particles [8]. For example, virus-like particle vaccines containing functional hemagglutinin for pandemic influenza can be produced in plants within a month, whereas egg-and cell-based vaccines take 4-6 months to develop. When combined with gene-editing technology, PFALs may enable the development of numerous novel, necessary vaccines. PFALs can also allow people to farm on and inside urban buildings and support the recovery and productive transition of vacant office space. This type of agricultural system is often associated with leisure and recreational functions [9]. PFALs could also be established inside shopping buildings, be equipped with highly modern vertical cultivation technologies, and integrate production of quality horticultural food with services such as workshops, catering, and entertainment events.
In conclusion, PFALs represent rapidly developing and promising horticultural crop cultivation systems that can produce fresh crops in an environmentally friendly manner. PFALs show great potential for addressing the most challenging issues in agricultural science and associated fields, such as population growth, water scarcity, loss of arable land, food safety, and supply chain challenges. They will thus undoubtedly play an important role in the agricultural revolution, food security, carbon neutralization, and the future of humankind. PFALs employ several green and sustainable approaches, thereby representing a shift away from horticultural practices based on human intuition and experience towards urban and modern horticulture based on precise data management. Innovation within PFALs will promote the future integration of current agricultural practices with other rapidly developing techniques (e.g. artificial intelligence, advanced materials, synthetic biology) and help to achieve the global objectives of sustainable agriculture. Thus, PFALs represent a critical solution for urban horticulture in the post-coronavirus world.
"Computer Science"
] |
Influence of Film Coating Thickness on Secondary Electron Emission Characteristics of Non-Evaporable Getter Ti-Hf-V-Zr Coated Open-Cell Copper Foam Substrates
The application of vacuum materials with a low secondary electron yield (SEY) is one of the effective methods to mitigate the electron cloud (EC). In this study, a Ti-Hf-V-Zr non-evaporable getter (NEG) film was deposited on open-cell copper foams with different pore sizes to suppress electron multipacting effects. In addition, the influence of film thickness on the secondary electron emission (SEE) characteristics of Ti-Hf-V-Zr NEG film-coated open-cell copper foam substrates was investigated for the first time. The results highlight that all uncoated and NEG-coated foamed porous copper substrates achieved a low SEY (<1.2), at least 40% lower than that of traditional copper plates, and the foamed porous coppers with a 1.34-µm-thick NEG coating had the lowest SEY. Moreover, the surface chemistry and the morphological and structural properties of foamed porous coppers of different pore sizes, with and without Ti-Hf-V-Zr NEG films, were systematically analyzed.
Introduction
Low-energy electrons generated and accumulated in accelerators interact with the circulating beam, leading to the formation of an EC, accompanied by other detrimental effects such as an increase in vacuum pressure [1], additional heat loads on the cryogenic vacuum system [2], and an adverse impact on the stability of the accelerated beam [3,4]. Beyond a certain electron density threshold, the EC can seriously affect the beam quality, inducing beam instability and even disruption, as well as vacuum degradation [5,6]. These effects are among the major limitations of present high-energy colliders, such as the Large Hadron Collider (LHC) [7], the PEP-II collider [8], the CERN Super Proton Synchrotron (SPS) [9], and Super KEKB [10,11]. Therefore, searching for reliable materials with low SEE to mitigate electron cloud effects (ECEs) is an essential technological objective.
The characteristics of high strength [12], large surface area [13], excellent thermal properties [14], radiation features [15], and energy absorption capability [16] make open-cell metal foams (OCMFs) attractive materials for radiation shielding and beam liner design [17,18] in particle accelerators. It is known that geometrical modifications of a material surface can effectively suppress SEE [19-22]. Multiple reflections may take place when an electron hits a rough surface, which correspondingly enhances the probability of capturing electrons. Meanwhile, porous metal foams have the potential to create a rougher surface with a larger area and a higher depth-to-spacing ratio than current smooth-surface substrates [23]. R. Cimino et al. [24] proposed and studied some characteristics of foamed porous coppers with a pore size greater than 500 µm, especially their qualification in terms of SEY. Moreover, other characteristics of foamed porous coppers for beam screens were demonstrated as well, such as excellent vacuum behavior, low surface resistance, and good mechanical structural properties [17,24]. These benefits suggest a potential use of foamed porous coppers as an EC moderator in accelerator technology. The effect of foamed porous coppers with a pore size of less than 500 µm on the SEY property is studied in this article.
It is known that the enhancement and sustainability of the vacuum are important for particle accelerators, which can be effectively addressed by coating non-evaporable getters (NEGs) of micron/sub-micron thickness on the inner wall of the vacuum pipe [25-27]. Research by Malyshev et al. [28,29] demonstrated that a Ti-Hf-V-Zr NEG film with a columnar thin-film structure showed good pumping capacity and a low electron-stimulated desorption (ESD) yield after heating at an activation temperature of 140-150 °C for 24 h. In addition, the NEG film can not only adsorb residual gas in the vacuum chamber but also provide a low secondary electron yield [30]. In this study, the Ti-Hf-V-Zr NEG film and the open-cell copper foam were proposed to intrinsically produce low-SEY surfaces and simultaneously maintain vacuum stability in the accelerator. The macroscopically grooved surfaces of foamed porous coppers coated with Ti-Hf-V-Zr films were proposed and studied to reduce the SEY and hence mitigate ECEs. In addition, the Ti-Hf-V-Zr coatings deposited on foamed porous copper substrates with a large surface area also provide a good pumping capacity in the beam vacuum system.
Here, the SEY of porous copper substrates with different pore sizes below 500 µm and the influence of NEG coatings of different equivalent thicknesses on the SEY of foamed porous coppers are discussed for the first time. The trends of the SEY (δ) with changes in film thickness were quantified to reveal the differences among Ti-Hf-V-Zr NEG coatings of different NEG equivalent thicknesses. Thus, the Ti-Hf-V-Zr NEG films were deposited on foamed porous copper substrates with different pore sizes, in each case with a range of thicknesses. In addition, the surface morphology, cross-sectional morphology, surface microstructure, and chemistry of the foamed porous coppers before and after NEG coating were analyzed.
Sample Preparation and Cleaning Procedures
Foamed porous coppers with average pore diameters of ~100, ~300, and ~500 µm, produced by an electrodeposition process, were purchased from Longshengbao Electronic Materials Co., Ltd. (Kunshan, China). Polished Si wafers were purchased from Topvendor Technology Company (Beijing, China) to measure the film thickness. Before deposition, all samples were ultrasonically cleaned in acetone and ethyl alcohol solutions for 10 min each to remove impurities.
Film-Coating Equipment
To evaluate the effect of different Ti-Hf-V-Zr film thicknesses and different pore sizes of the coated foamed porous coppers on the SEY, Ti-Hf-V-Zr films were deposited on Si<111> single-crystal and foamed porous copper substrates by direct current (DC) magnetron sputtering (Chuangshiweina, Beijing, China). Si substrates were used to evaluate the sputtering rate as a reference, and the thickness of the films was controlled by the deposition duration. An alloy target consisting of Hf, Ti, Zr, and V with an atomic ratio of 1:1:1:1 was used. The deposition was carried out with a background pressure of ~5.8 × 10⁻⁴ Pa, a working gas pressure of 0.5 Pa, an Ar gas flow of ~20 sccm, a discharge power of 280 W, a cathode voltage of ~371 V, a sputtering current of ~0.76 A, and a sample-to-target distance of 8 cm. Owing to the unevenness of the porous copper surface, the "NEG equivalent thickness" is used in this study to represent the "film thickness." During the Ti-Hf-V-Zr NEG film deposition, the temperature of the sample disc increased from 19 °C to 71 °C. The target was pre-sputtered for 3 min to remove impurities from its surface while the substrates were shielded by movable disks. Information on the samples is given in Table 1.
Characterization Method
The surface and cross-sectional morphologies of the foamed porous copper samples before and after Ti-Hf-V-Zr NEG film deposition were measured with a JEOL 7800F Schottky field-emission scanning electron microscope (SEM, Japan Electron Optics Laboratory, Tokyo, Japan), and the crystal structure was characterized with an X-ray diffractometer (XRD, SHIMADZU, Kyoto, Japan) using copper Kα radiation (λ = 0.154 nm). The 2θ data were collected between 30° and 80° at a speed of 2°/min. The cross-sectional elemental compositions of the NEG film coatings were characterized using a Zeiss GeminiSEM 500 field-emission scanning electron microscope (FE-SEM, Carl Zeiss GmbH, Hallbergmoos, Germany) with an energy dispersive spectrometer (EDS) system. The surface chemical state was analyzed using an AXIS Ultra DLD X-ray photoelectron spectrometer (XPS, Kratos, Manchester, UK) at a working pressure of ~10⁻⁷ Pa. An individual sample with an area of 10 × 10 mm² was mounted on the sample holder. All XPS data were obtained with a 150 W Al Kα X-ray source and a 45° analyzer, with a beam size of 300 × 700 µm². The test area of the sample was 2 × 2 mm². The XPS results were analyzed with CasaXPS (2.3.17PR1.1, 2015, Casa Software Ltd, Devon, UK). The C 1s peak with a binding energy (BE) of 284.8 eV was used for binding energy calibration. Based on multiple sets of test data, the error of the XPS chemical composition results was about ±0.3 at%. The SEY measurement was carried out with a primary electron dose of 7 × 10⁻⁶ C·mm⁻² at a current of 8 nA. During the SEY test, the base pressure in the test chamber was below 2 × 10⁻⁷ Pa, and the beam spot diameter was 1 mm. The test error of the SEY was within ±5%. The SEY measuring facility is introduced in detail in Ref. [31].
Surface and Cross-Section Morphology
SEM and XRD were utilized to study the structures and morphologies of all the samples. The structures of the Ti-Hf-V-Zr NEG films deposited on Si and foamed porous copper substrates were investigated by XRD, as shown in Figure 1. The peaks of the NEG films were similar in both cases. The spectrum indicates that Ti, Zr, and V adopted hexagonal close-packed (hcp) structures, while Hf had a body-centered cubic (bcc) structure. The peaks at 2θ = 43.29°, 50.43°, and 74.13° in the XRD pattern were ascribed to the (111), (200), and (220) planes of copper, respectively (the same as seen in previous reports [32-34]). The major XRD peaks of the NEG film lie in the 2θ range of 30-40° [28,29,35]. Here, the broad peaks associated with the Ti-Hf-V-Zr film occur at 2θ = 36.5° and 2θ = 37.1° for the Si and foamed porous copper substrates, respectively. Calculated with the Scherrer equation, the average grain sizes of the Ti-Hf-V-Zr coatings deposited on Si and foamed porous copper substrates were 1.40 nm and 1.32 nm, respectively. This means that the surface conditions have a negligible effect on the crystallite size and structure of the Ti-Hf-V-Zr coatings.
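The quoted grain sizes follow from the Scherrer equation, $D = K\lambda/(\beta\cos\theta)$. The sketch below evaluates it for the film peak at 2θ = 36.5° with the Cu Kα wavelength used here; the shape factor K = 0.9 and the peak width (FWHM) are assumed inputs, not values reported in this work.

```python
import numpy as np

def scherrer_grain_size(two_theta_deg, fwhm_deg, wavelength_nm=0.154, K=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)), beta in radians."""
    theta = np.radians(two_theta_deg / 2)   # Bragg angle theta from 2-theta
    beta = np.radians(fwhm_deg)             # peak broadening (FWHM)
    return K * wavelength_nm / (beta * np.cos(theta))

# An assumed FWHM of ~6 deg for the broad film peak at 2-theta = 36.5 deg
print(f"D = {scherrer_grain_size(36.5, 6.0):.2f} nm")  # ~1.4 nm, of the order quoted
```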
The morphologies of the as-prepared cleaned foamed porous coppers with and without coating were obtained by SEM (Figure 2). Figure 2a,e,i reveals that the foamed porous coppers are composed of connective pores (like a sponge) forming a three-dimensional (3D) reticulated structure. At higher resolution, a wave-like sintered texture can be seen on the surface of the foamed porous coppers. Besides the macropores, spherical micropores with diameters below 3 µm (Figure 2f) can also be observed on the surface. After deposition of a 0.65-µm-thick Ti-Hf-V-Zr layer (Figure 2c,d,g,h,k,l), the 3D structure remained almost the same but featured a rougher surface. The EDS mappings (Figure 3a-h) show the elemental analysis of the 0.65-µm-thick Ti-Hf-V-Zr NEG film coated on the foamed porous copper substrates. Some dark areas appear on the surface, and the Ti, Hf, V, Zr, C, O, and Cu elements can be seen as well; this may be caused by uneven grooves on the sample surface and the accuracy of the EDS instrument [36]. Overall, the EDS results suggest that the Ti-Hf-V-Zr NEG coating was successfully deposited on the surface.
Figure 4 shows SEM images of different areas on the surface of the foamed porous copper with a pore size of 500 µm and a 2.65-µm-thick NEG film (Sample #9), which facilitates the exploration of the deposition behavior of NEG on foamed porous coppers. The NEG film has an island-like structure with feature sizes ranging from nanometers to micrometers (Figure 4a-c). As shown in Figure 4c, the film is deposited on the scaffold in the form of island-like aggregates, whereas, in sharp contrast, it is densely deposited on the scaffold of the foamed porous copper in Figure 4a,b. Figure 4d-f show that, within the same deposition time, the film density varies greatly across the local surface of the sample (emphasized by red circles in Figure 4e,f), which is attributed to the self-shadowing effect caused by the 3D structure of open-cell metal foams. It is worth mentioning that, according to our SEM results, the growth morphology of films of varied thicknesses on different substrates is the same.

The samples were brittle-fractured at room temperature (RT) to produce fresh cross-section surfaces after immersion in 77 K liquid nitrogen for ~5 min. The cross-sectional SEM micrographs and the EDS line scans of Samples #2, #5, and #8 are shown in Figure 5. The EDS line scan results show that the coating was deposited mainly on the outer surface: the Cu signal was barely detected there, whereas the inner surface was only partially coated. Additionally, the contents of the Ti, V, Hf, and Zr elements decreased markedly from the outer surface of the substrate toward the inner surface. Compared with Samples #5 and #8, it is noteworthy that the line scan of Sample #2 showed a low content of Ti, V, Hf, and Zr on the substrate, and only weak Ti, V, Hf, and Zr signals were observed on the inner pore surface. For open-cell porous materials, the outer surface area is much smaller than the inner porous surface area [16]. The micrographs in Figure 5 show that the film covers the inner surface of the 3D structure of the porous copper, suggesting that the 3D structure is beneficial for increasing the surface area available for film growth. Besides, Figure 5 also reveals that this foam metal skeleton has a hollow through-hole structure.
Surface Composition

It has been reported that impurities such as carbon and oxygen introduced on the sample surface may result in a further increase in the SEY [37,38]. As an example, the XPS-determined surface states of NEG-coated and uncoated foamed porous coppers with a pore size of 300 µm were characterized. As shown in Figure 6b, after coating, the shake-up peaks corresponding to Cu(II) at 933 eV almost disappeared. In detail, the relative contents of Cu, C, O, Ti, V, Hf, and Zr in each state of Samples #4-6 were compared by XPS calculations, as shown in Table 2.
SEY Results

The SEY properties of foamed porous coppers before and after NEG coating are summarized in Table 3, and the dependence of the maximum SEY (δmax) on the incident electron energy is shown in Figure 7a-d. As shown in Table 3, the primary electron energy (Ep) varied from 100 to 3000 eV, and the δmax values of the uncoated foamed porous coppers with pore sizes of 100, 300, and 500 µm were 1.19, 1.18, and 1.20, respectively. Compared with δmax ≈ 2.0 at Ep ≈ 300 eV, reported for flat coppers [39-42], the δmax of the foamed porous copper substrates decreased by about 40%.

Secondary electrons are induced when primary electrons impact the surface of the material. As shown in Figure 8, when they are induced into the uneven 3D surface of the foamed porous copper, the pores and uneven grooves trap secondary electrons (Figure 8a,c); only some partially escape to the outside (Figure 8b), while the rest are trapped in the network-like open-pore structure. Therefore, the probability that electrons can escape from the open-cell copper foam surface is reduced, resulting in a lower SEY.

SEY thresholds have been evaluated for the Large Hadron Collider arcs, with a δmax of less than 1.5 [43]; beyond this value, beam-induced multipacting occurs. However, to suppress the build-up of the EC, a surface treatment with a SEY no greater than 1.0 may be required in the Future Circular Collider: the Hadron Collider (FCC-hh) [44]. There is no doubt that open-cell copper foam, as a simple and economical material, possesses unique advantages in terms of SEY reduction. However, the trend of the SEY of the uncoated foamed porous copper substrates with pore sizes ranging from 100 to 500 µm was not obvious.
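As a quick arithmetic check of the ~40% reduction quoted above, using the flat-copper reference δmax ≈ 2.0 and the measured foam values:

```latex
\frac{2.0 - 1.19}{2.0} \approx 0.41,
\qquad
\frac{2.0 - 1.20}{2.0} = 0.40
```

which is consistent with the stated decrease of about 40%.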
An earlier study [45] showed that the δmax of a 2.2-µm-thick quaternary Ti-Hf-V-Zr NEG film coated on flat copper substrates was 1.37 at Ep ≈ 300 eV, and the δmax of the foamed porous coppers covered by a 2.65-µm Ti-Hf-V-Zr film (Samples #3, #6, and #9) decreased by about 0.19-0.22 compared to that of the NEG-coated flat coppers. Moreover, after being coated with the Ti-Hf-V-Zr NEG film at thicknesses of 0.65, 1.34, and 2.65 µm, the NEG-coated foamed porous copper substrates have SEYs that are 2.5-5.1%, 5.8-8.4%, and 1.7-3.4% lower, respectively, than those of the uncoated foamed porous copper substrates. This indicates that, among the thicknesses investigated in this study, foamed porous coppers with a 1.34-µm-thick Ti-Hf-V-Zr NEG coating have the lowest SEY. It was also found that the 1.34-µm-thick Ti-Hf-V-Zr NEG coating reduces the δmax of the 100, 300, and 500 µm foamed porous coppers from 1.19, 1.18, and 1.20 to 1.09, 1.10, and 1.13, respectively.
As shown above, coating foamed porous copper with the Ti-Hf-V-Zr NEG film is a promising method of reducing the SEY to below 1.2 for electron cloud mitigation, and it can be considered for application in accelerator vacuum systems.
Conclusions
Open-cell copper foams before and after Ti-Hf-V-Zr NEG coating were proposed to mitigate the EC in accelerators and other vacuum devices. The NEG film was deposited on foamed porous copper substrates of three different pore sizes, in each case with three different thicknesses. For the first time, the relationship between the quaternary NEG film thickness on foamed porous copper samples and the SEY was systematically investigated. The results of this study are summarized as follows:

(1) The 3D reticulated structure of the foamed porous copper surface with a pore size of 100-500 µm can significantly reduce the SEY. The network structure of foam geometries reduces the SEY because the uneven 3D surfaces and pores trap the secondary electrons. Compared with flat coppers, the SEY of the open-cell copper foam can be reduced by at least 40%, and the foamed porous copper has promising application potential for reducing the SEY to below 1.2. However, the δmax of foamed porous coppers with pore sizes below 500 µm decreases only insignificantly.

(2) The δmax of the NEG-coated foamed porous copper substrates is lower than that of the uncoated ones. After being coated with the Ti-Hf-V-Zr NEG film at NEG equivalent thicknesses of 0.65, 1.34, and 2.65 µm, the maximum SEYs of the foamed porous coppers were reduced by 2.5-5.1%, 5.8-8.4%, and 1.7-3.4%, respectively. The foamed porous coppers coated with the 1.34-µm-thick NEG film had the lowest SEY.

(3) The Ti-Hf-V-Zr films have nanocrystalline structures on both Si and foamed porous copper substrates. XPS results show that the NEG films contained impurities such as carbon and oxygen and were partially oxidized. Besides, the Ti-Hf-V-Zr NEG film was unevenly deposited on the foamed copper substrate. The EDS line scan maps indicate that the inner surface of foamed porous coppers with high depth-to-spacing ratios was also covered by the quaternary NEG film, which thus provides a large specific surface area. Moreover, the combination of the NEG film with foamed porous copper substrates could further reduce the SEY. Therefore, NEG-coated open-cell copper foam walls appear to be a promising way to address the high-vacuum and EC problems of future particle accelerators.
"Physics"
] |
An algorithm for finding and adding boundary conditions with the aim of solving the contact problem in computational mechanics
— In computational mechanics, the contact problem between moving bodies is always an interesting topic. The reason is simple: there is no unique solution to contact problems that works in every case for arbitrary geometry, applied boundary conditions, external loads, material properties, etc. In addition, the process of finding the parts of the outer surfaces of the bodies that are in contact is always a programming challenge. Keeping in mind that numerical programs are written mostly in the Fortran programming language, which lacks the powerful tools of some modern languages, the scale of the problem becomes clearer. Of course, such algorithms could be written in an up-to-date programming language with powerful libraries for geometry, searching, etc., but in that case other problems arise, related to code compatibility, data transfer, execution speed, parallelization, etc. This paper presents an algorithm for finding contact entities between two or more different bodies. The algorithm is implemented within the PAK software package for finite element analysis [ref]. The PAK software is written in Fortran and uses its own skyline solver for systems of linear equations, but it also incorporates the MUMPS parallel sparse solver [ref].
I. INTRODUCTION
Numerical software based on the finite element method can be used to solve different types of partial differential equations: Laplace's equation, the wave equation, Maxwell's equations, the Helmholtz equation, Poisson's equation, etc. Those equations, with the application of variational methods, are commonly used to solve different types of scalar or vector field problems, such as electrostatics, electromagnetism, sound wave propagation, diffusion, etc. Similarly, in solid mechanics, using the principle of virtual work and constitutive equations, the unknown displacement vector field can be solved for prescribed initial and boundary conditions. In all cases, the system of differential equations is reduced to a system of linear equations by applying numerical integration methods.
This concept is common and solves many different problems. However, it also imposes one significant limitation: to solve any complex problem, the domain of the problem must be fully connected. This means, for example, that if we solve the problem of wave propagation through some medium, the medium must be completely connected, so that the waves propagate to all boundaries of the finite element mesh. If the considered domain consists of two separate finite element meshes, the waves would not propagate from one mesh to the other, even if the meshes are in geometric contact. The equations in the nodes belonging to different meshes are not connected, although the nodes may be in the same geometric position. Similarly, if we analyze the interaction between two solid bodies, where the boundary conditions can be prescribed displacements, velocities, accelerations or forces on a body, the bodies are not aware of each other's existence. Their mutual interaction must be ensured by appropriate additional equations, boundary conditions or modification of existing equations [ref]. Since the movement of the bodies takes place over time, at each time point the bodies are in a different configuration and spatial position. This means that for each time point the relative positions of the various bodies should be examined. For the boundary nodes of the finite element mesh that are found to be in contact with the nodes of another body's mesh, an appropriate action should be performed: adding a new equation, adding a new boundary condition, or modifying existing equations [ref].
Finding the finite element nodes of different bodies that interact with each other is one of the demanding tasks that needs to be done in order to enable the contact mechanism between those bodies. This paper presents one way this can be done using geometric entities such as planes, vectors, points, polygons, etc. Figure 1 gives a simple illustration of the problem. There are three different bodies represented by three independent finite element meshes: two simple plates and a wired structure placed between them. This example is a simple test used in medical stent development. The test procedure consists of compressing the stent between two flat plates and measuring the displacement of characteristic points and the contact forces as the test output. However, the test cannot give any information about the state of stress or deformation within the material. For this purpose, numerical simulations are very often used to support mechanical tests such as the one mentioned above. Some facts and conclusions can be determined from the simulations that cannot be observed in experiments, for example, the state of internal forces, stress or deformation, etc. But in order for such a numerical test to be possible, it is necessary to develop a finite element algorithm that allows contact between two bodies. Otherwise, the bodies can pass through each other without any limitation. Such an algorithm requires the mutual relationship of the bodies as input information, i.e., which nodes (or outer faces) of one body are in contact with the nodes (or outer faces) of another body. Finding and calculating that information is the topic of this work.
As mentioned in the abstract, all the work was done in the Fortran programming language, which was a challenge in itself. The main reason it was done this way is that the rest of the finite element code is written in the same language. For the purpose of solving examples with a huge number of nodes and finite elements, the code is adapted for running on supercomputers by using the MUMPS parallel solver [ref] [ref].
II. METHODS
In order to provide information about the mutual contact of the bodies, it is necessary to have information about the outer polygons, i.e., the polygons that make up the outer surface. The known data are the geometric positions of points in space and the polygons with a corresponding list of points (point IDs).
In general, there are two ways the contact between two bodies can be recognized. The first way is to find a node of one body that penetrates the boundary surface of another body. The second way is very similar, but instead of a node, the center of gravity of a face is observed; in that case, contact is detected if the center of gravity of a face of one body's surface penetrates the boundary surface of another body. Both concepts are similar in implementation; the difference is only in the set of points that should be tested. Figure 2 shows the case where point C of body 1 is placed inside body 2. This is the simplest case of contact between two bodies. In order to determine whether point C is in body 2, we should examine their relative positions. For a cube and a point it is pretty simple: based on the angles between the cube sides and the coordinate axes, we can calculate the angle of rotation of the cube so that it takes a position parallel to the axes, rotate the cube and the point, and determine from the new coordinates whether the point is inside the cube or not.
However, finite elements can have arbitrary tetrahedral or hexahedral geometry. In computational mechanics, hexahedral elements are used more often than tetrahedral elements due to the accuracy of the calculation of postprocessing variables like stress or strain. The paper deals with hexahedral elements, but the approach is also applicable to tetrahedral elements. This kind of element has eight nodes, and their position in space is such that the nodes of one side of the element do not have to lie in the same plane (Figure 3).
Figure 3: Hexahedral finite element with arbitrary geometry
For an element of such a shape, it is not possible to determine whether a point is inside based on its global coordinates alone.
To solve this problem, we must define a new curvilinear coordinate system with the origin at the center of gravity of the element. The axes of this coordinate system follow the geometric orientation of the element. For such a coordinate system, the values of the (r, s, t) coordinates are in the range from -1 to 1. The criterion for determining whether a point is within an element is the value of its coordinates in the local curvilinear coordinate system: if the values are in the range [-1, 1], the point is inside; otherwise, the point is outside. The algorithm we used for finding the values of the coordinates is the multivariate Newton-Raphson method [ref]. When a point that falls inside a boundary element is found, the contact pair (node to face, or center of face to face) is defined.
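A minimal sketch of this inversion for an 8-node hexahedron, written here in Python/NumPy for readability (the paper's implementation is in Fortran); the trilinear shape functions, the node ordering, and the convergence tolerance are standard textbook choices, not taken from the paper:

```python
import numpy as np

# Corner sign pattern of the 8-node hexahedron in local (r, s, t) space.
SIGNS = np.array([[-1, -1, -1], [1, -1, -1], [1, 1, -1], [-1, 1, -1],
                  [-1, -1, 1], [1, -1, 1], [1, 1, 1], [-1, 1, 1]], dtype=float)

def shape(rst):
    """Trilinear shape functions N_i(r, s, t) = (1/8) prod(1 + sign * rst)."""
    return np.prod(1.0 + SIGNS * rst, axis=1) / 8.0

def shape_grad(rst):
    """Derivatives dN_i/d(r, s, t) as an 8x3 matrix."""
    grad = np.empty((8, 3))
    for k in range(3):
        factors = 1.0 + SIGNS * rst
        factors[:, k] = SIGNS[:, k]            # differentiate the k-th factor
        grad[:, k] = np.prod(factors, axis=1) / 8.0
    return grad

def local_coords(nodes, point, tol=1e-10, max_iter=20):
    """Newton-Raphson inversion of x(r,s,t) = sum_i N_i(r,s,t) X_i.

    nodes: 8x3 global coordinates of the element nodes;
    point: global coordinates of the tested point.
    """
    rst = np.zeros(3)                          # start at the element center
    for _ in range(max_iter):
        residual = shape(rst) @ nodes - point
        if np.linalg.norm(residual) < tol:
            break
        jacobian = nodes.T @ shape_grad(rst)   # 3x3 matrix dx/d(r,s,t)
        rst -= np.linalg.solve(jacobian, residual)
    return rst

def point_inside(nodes, point, eps=1e-8):
    """The point is inside if all local coordinates lie in [-1, 1]."""
    return bool(np.all(np.abs(local_coords(nodes, point)) <= 1.0 + eps))
```

For an element with planar, axis-aligned faces the iteration converges in a single step, since the mapping is then affine; distorted elements typically need a handful of iterations.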
It is now possible to calculate the distance from the contact node to the contact face. This can be done by placing a plane through the face nodes. In the first case, when the four face nodes lie in a plane, the face normal vector is obtained as the cross product of two edge vectors defined by any three face nodes. Since this vector is normalized, the choice of the three nodes is completely arbitrary: it does not matter which product is taken, as all such cross products give a vector with the same direction.

In the second case, when the face nodes do not lie in a plane, the face normal vector is calculated as the mean value of two normalized vectors, obtained from the two triangles made by dividing the quadrilateral face.
The resulting vector is obtained as the normalized sum of these two vectors:

$\vec{n} = \dfrac{\vec{n}_1 + \vec{n}_2}{\lVert \vec{n}_1 + \vec{n}_2 \rVert}$

The way the quadrilateral face is divided does not affect the direction of the resulting normal vector. Once we have the gravity center of the face (the point whose coordinates are the mean values of the face node coordinates) and the face normal vector, we can define the plane used to calculate the distance from the boundary node of the other body that penetrates the observed face, as well as the normal projection of that node onto the face plane. These data are necessary for setting up the elastic supports that enable contact between the two bodies. The plane parameters can be defined using the calculated normal vector $\vec{n} = (a, b, c)$ and the center of gravity $T = (x_0, y_0, z_0)$:

$a(x - x_0) + b(y - y_0) + c(z - z_0) = 0, \qquad d = -(a x_0 + b y_0 + c z_0)$

where $T$ is the face center of gravity and $(x, y, z)$ is any point on the plane.
A plane defined in this way allows us to calculate the depth of entry of a point of one body into another body. The depth is calculated by:

$\delta = \dfrac{a x_A + b y_A + c z_A + d}{\sqrt{a^2 + b^2 + c^2}}$

where $a$, $b$, $c$, $d$ are the plane parameters obtained from equation [eq] by substitution of the face normal and the gravity center, and $(x_A, y_A, z_A)$ are the coordinates of the penetrating node. It is now possible to calculate the stiffness and force values necessary to return a node that has breached the boundary of another body to that boundary. Figure 5 shows the mutual relationship of two bodies in contact. During body movement, nodes of one body may pass through the boundary surface of another body. In this simple example both bodies consist of only one finite element. Node A has passed through the shaded face. When this happens, the node needs to be returned to the boundary of the body. This is done by projecting node A onto the face plane defined by the normal vector calculated using equation [eq]. The criteria for defining the contact between two bodies are listed below (a code sketch of the plane and depth computation follows the criteria):
Figure 5: Contact between two bodies: The node from one body entered another body
1. The boundary node of one body is located inside a finite element of another body;
2. The projection of the mentioned boundary node onto the face plane of the other body falls inside the face.
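The face-plane construction and depth computation described above can be sketched as follows (again in Python/NumPy for readability; splitting the quadrilateral along the 0-2 diagonal is an arbitrary but typical choice, which, as noted above, does not affect the resulting direction):

```python
import numpy as np

def face_plane(quad):
    """Plane (n, d) through a possibly non-planar quadrilateral face.

    quad: 4x3 array of corner coordinates, ordered around the face.
    The normal is the normalized sum of the two triangle normals
    obtained by splitting the quad along the 0-2 diagonal.
    """
    quad = np.asarray(quad, dtype=float)
    n1 = np.cross(quad[1] - quad[0], quad[2] - quad[0])
    n2 = np.cross(quad[2] - quad[0], quad[3] - quad[0])
    n = n1 / np.linalg.norm(n1) + n2 / np.linalg.norm(n2)
    n /= np.linalg.norm(n)
    center = quad.mean(axis=0)          # face center of gravity
    d = -n @ center                     # plane offset parameter
    return n, d

def penetration_depth(quad, node):
    """Signed distance from a (penetrating) node to the face plane."""
    n, d = face_plane(quad)
    return n @ node + d                 # n is unit length, so no division

def projection_on_plane(quad, node):
    """Normal projection of the node onto the face plane."""
    n, d = face_plane(quad)
    return node - (n @ node + d) * n
```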
The first condition must be satisfied in order to test the second one. If the first condition is satisfied and the second one is not, we have the first special case, described in Figure 6. In order to give a simpler explanation, the figure is given in two dimensions.
Figure 6: Relation of the nodes and elements that causes the first special case
As can be seen from Figure 6, it is possible for a boundary node of the first body to be inside an element of another body while the projection of that node onto the plane of the element's boundary face is completely outside the element. In that case, an adjacent face must be found for which the projection of the node onto the plane of the face falls within the face itself.
In the example given in Figure 6, node N is located inside element E2, but its projection onto the plane of the boundary face is outside the element. However, the projection of the node onto the plane of the boundary face of element E1 is inside that face, so that is the geometric position to which node N should be moved. The second special case is shown in Figure 7.
Figure 7: Second special case: Node N is inside element E2, which has no boundary face
This case appears when node N falls inside element E2, which does not have any boundary face. If such an element is found, it is necessary to find all its surrounding elements that have boundary faces and to calculate the mean value of the normal vectors of all those boundary faces. In the case shown in Figure 7, the normal vector would be calculated as the mean of the normal vectors of the boundary faces of elements E1 and E3. The situation can be more complicated in 3D problems, because an element like E2 can be surrounded by more than two elements, but the normal vector is calculated in a similar way. By visiting all boundary nodes of one body and checking whether they enter a boundary element of another body (including the neighbors of boundary elements, due to the special cases), we can find the appropriate relations and calculate the directions in which to move the boundary nodes that broke the boundary of a neighboring body, based on the calculated normal vectors (or the mean value of normal vectors in the second special case).
III. ELASTIC SUPPORTS
Now, for each boundary node we have two pieces of information: 1. whether the node is in contact with another body or not; 2. the value of the normal vector at each node that is in contact with another body.
Based on this information, the stiffness matrix for the elastic supports can be calculated, using the theory of one-dimensional finite elements [ref].
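A minimal sketch of how such an elastic support could enter the global system: a one-dimensional spring of stiffness k acting along the contact normal n contributes k n nᵀ to the 3x3 diagonal block of the penetrating node, and a restoring force proportional to the penetration depth to the right-hand side. The stiffness k is a penalty-style parameter chosen by the analyst, and the exact force terms in PAK may differ; this is an illustration, not the package's actual routine:

```python
import numpy as np

def add_elastic_support(K, F, node_id, normal, depth, k):
    """Add a 1D spring along `normal` for a penetrating boundary node.

    K       -- global stiffness matrix (ndof x ndof, 3 dofs per node)
    F       -- global load vector
    node_id -- index of the boundary node that broke the boundary
    normal  -- unit normal vector of the target face plane
    depth   -- penetration depth of the node behind the plane
    k       -- spring stiffness (penalty parameter)
    """
    dofs = slice(3 * node_id, 3 * node_id + 3)
    n = np.asarray(normal, dtype=float)
    K[dofs, dofs] += k * np.outer(n, n)   # spring stiffness block
    F[dofs] += k * depth * n              # force pushing the node back
```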
Figure 8: Elastic supports placed at boundary nodes that broke the outer boundary of the neighboring body
In some cases, depending on the geometry, it may be more appropriate to observe the relations of the centers of gravity of the faces of one body to the outer faces of another body. The contact algorithm is very similar to the one explained previously. In this case, the positions of the centers of gravity of the faces are calculated based on their corner nodes. The other searches and calculations are the same as in the previously explained algorithm.
IV. RESULTS
As a result, a finite element model of a medical stent is presented here. Medical stents are wired structures used in the treatment of cardiovascular diseases caused by the narrowing of blood vessels. During stent development, a large number of experiments are performed to achieve an optimal stent design. These experiments are quite expensive and require a fairly long period of time for testing. They are usually performed until complete failure of the stent, or by applying a cyclic load whose number of cycles is calculated based on the heart rate and the number of years the stent should survive in the human body. Computer modeling plays a very important role here in stent development. For instance, in experiments it is not possible to see what happens in the stent material: the state of internal forces caused by external loading is not available information. This information can be obtained by numerical tests using the finite element method with a contact algorithm. In that sense, the development of medical stents can be significantly improved by computer simulations.
In order to be able to perform a virtual experiment, it is necessary to provide a contact algorithm able to simulate the interaction of the stent with the rest of the equipment for the mechanical testing of the stent, i.e., the interaction of the finite element mesh of the stent with the finite element meshes of the other pieces of testing equipment.
There are several different tests in which stents are loaded with pressure, cyclic loading, passage through tubes that simulate blood vessels, etc. For any type of test it is possible to make a numerical simulation, which provides a detailed insight into the state of stress or internal forces in the stent itself.
The aim of this paper was to show the way in which all the necessary data used to modify the system of finite element equations are provided. Figure 9 shows the test example. The example consists of bending a stent between three cylinders: the so-called three-point bending test. The boundary conditions of the finite element meshes are:
1. two fixed cylinders placed below the stent;
2. one cylinder with prescribed displacement placed above the stent;
3. three contact surface pairs between the stent and the three supporting cylinders.

Figure 9: Three-point bending stent test: The stent is exposed to a bending load using two fixed cylinders and one movable cylinder.

Figure 10 shows a detail of the moving cylinder at different time steps during the simulation. At the beginning of the simulation there are no contact elements between the cylinders and the stent. During the movement of the upper cylinder, contact first appears between the upper cylinder and the stent, and then between the stent and the lower cylinders. The number of contact elements increases during the simulation, which is shown visually in Figure 10 and also in Figure 11.

V. CONCLUSION

This paper presents an algorithm for finding contact pairs between the finite element meshes of two bodies. We proposed a way to find information about contact between different bodies modelled by the finite element method. Based on that information, the system of finite element equations is modified by adding elastic supports, which ensure that a body does not break the boundary of another body.
The source code is written in Fortran 95, to ensure maximum compatibility with numerical software written earlier. The project is implemented as a separate module, which can be applied completely independently of the remaining code. The source code is available to all interested scientists who want to use it. By calling the appropriate subroutines, it can be easily adapted to any finite element code, since the input to the module is information about the finite element mesh and the displacements of all nodes, and the output is the positions, spatial orientations and values necessary to calculate the stiffness of the elastic supports and to add them into the existing system of finite element equations.
In the results section, a numerical model of a stent subjected to the three-point bending test is presented. This is just one example where the developed contact algorithm can be applied. There are many other examples and areas of application, for instance: numerical simulation of a car crash test, plastic deformation processing, tire rolling, gear or belt transmissions, contact of bone and cartilage, kicking a ball in various sports, etc. The presented algorithm is general and can be applied in any of the stated fields of research.
For modeling more complex examples, it is possible to introduce various nonlinear material models, in order to make a more realistic model with as few approximations as possible. It is also possible to use structural finite elements such as shells, beams, plates, membranes, etc., in order to reduce the number of degrees of freedom and simplify the model.
The numerical software presented here was created as a result of research on the international scientific project InSilc. The software solution will be uploaded to a public repository provided by the European Commission. In this way, it will be available to the whole research community interested in the field of contact problem modeling in computational mechanics. Since there is no unique software solution for contact problems that provides a solution for each specific problem, and especially not an open-source one, this software solution can be very interesting for further upgrades and application to problems from other engineering fields.
"Engineering",
"Computer Science"
] |
Interpretable network-guided epistasis detection
Abstract

Background

Detecting epistatic interactions at the gene level is essential to understanding the biological mechanisms of complex diseases. Unfortunately, genome-wide interaction association studies involve many statistical challenges that make such detection hard. We propose a multi-step protocol for epistasis detection along the edges of a gene-gene co-function network. Such an approach reduces the number of tests performed and provides interpretable interactions while keeping type I error controlled. Yet, mapping gene interactions into testable single-nucleotide polymorphism (SNP)-interaction hypotheses, as well as computing gene pair association scores from SNP pair ones, is not trivial.

Results

Here we compare 3 SNP-gene mappings (positional overlap, expression quantitative trait loci, and proximity in 3D structure) and use the adaptive truncated product method to compute gene pair scores. This method is non-parametric, does not require a known null distribution, and is fast to compute. We apply multiple variants of this protocol to a genome-wide association study dataset on inflammatory bowel disease. Different configurations produced different results, highlighting that various mechanisms are implicated in inflammatory bowel disease, while at the same time, results overlapped with known disease characteristics. Importantly, the proposed pipeline also differs from a conventional approach where no network is used, showing the potential for additional discoveries when prior biological knowledge is incorporated into epistasis detection.
Background
Genome-wide association studies (GWAS) have identified over 70 000 genetic variants associated with complex traits [1]. Often these variants altogether do not explain the whole variance of a trait. A representative example is inflammatory bowel disease (IBD), comprising Crohn's disease and ulcerative colitis. Pooled twin studies estimate their heritabilities at 0.75 and 0.67, respectively [2]. Yet, despite large GWAS that identified over 200 IBD-associated loci [3], only a low proportion of their variance has been explained [4]. Possible explanations include a large number of common variants with small effects, rare variants with large effects not covered in GWAS, unaccounted-for gene-environment interactions, and genetic interactions [5]. In this article we explore the latter, called epistasis, which has been linked to IBD in the past [6,7,8,9,10,11]. Often, two types of epistasis are described: biological and statistical epistasis [12]. Broadly described, biological epistasis refers to a physical interaction between two biomolecules that has an impact on the phenotype. Statistical epistasis refers to departures from population-level linear models describing relationships between predictive factors such as alleles at different genetic loci.
Key Points
• We propose an epistasis detection protocol that exploits prior knowledge on how genes relate to each other (using a gene-gene network), and how SNPs relate to genes (positional overlap, expression regulation, or chromatin interactions).
• The proposed protocol reduces the number of tested interactions and provides more interpretable results.
• Applied to an inflammatory bowel disease GWAS dataset, our protocol recovers both known genes and interactions previously reported, as well as potentially novel mechanisms.
Few statistical epistasis discoveries have been reported to date, possibly owing to the large number of tested interactions, the low statistical power, or the absence of a widely accepted GWAIS protocol. Even in the absence of statistical challenges, GWAIS are usually conducted on single-nucleotide polymorphisms (SNPs), and SNP interactions often lack a straightforward functional interpretation. Moving from SNP- to gene-level tests, which jointly consider all the SNPs mapped to the same gene, might address both shortcomings. First, aggregating SNP pair statistics into gene pair statistics is likely to increase the statistical power when dealing with complex diseases [13]. Second, converting statistical findings into biological hypotheses [14] may facilitate their functional interpretability [15].
To both reduce the number of tests and improve the interpretability of significant SNP interactions, some authors propose examining only pairs of SNPs likely to be functionally related [16]. Such approaches use prior biological knowledge, for instance of SNPs in genes whose products establish a protein-protein interaction [17]. Yet, limiting studies to one particular kind of gene-gene interaction might be reductive. To tackle that issue, Pendergrass et al. [18] developed Biofilter, a gene-gene co-function network which aggregates multiple databases. Additionally, such approaches also require a proper mapping of SNPs to genes.
In this article, we propose guiding the search for statistical epistasis using plausible biological epistasis. Taking exclusively interactions reported by at least 2 different sources in Biofilter, we compile a subset of gene-gene interactions that are biologically plausible. Then, we exclusively search for those interactions in a GWAIS dataset, reducing the multiple testing burden and improving the interpretability. We investigate different ways of mapping SNPs to genes and use the adaptive truncated product method [19] to estimate the association of gene pairs. Network and pathway analyses are used to further assist in the interpretation of epistasis findings. The proposed pipeline is applied to GWAS data from the International IBD Genetics Consortium [3].
Data description
We investigated the IIBDGC dataset, produced by the International Inflammatory Bowel Disease Genetics Consortium (IIBDGC). This dataset was genotyped on the Immunochip SNP array [20]. We performed quality control as in Ellinghaus et al. [21], hereby reducing the number of SNPs from 196 524 to 130 071. The final dataset contains 66 280 samples, of which 32 622 are cases (individuals with IBD) and 33 658 are controls. The large sample size of this dataset helps overcome the issue of reduced statistical power that is common in GWAIS.
The IIBDGC dataset aggregates different cohorts and contains potentially confounding population structure. As in Ellinghaus et al. [21], we used the first 7 principal components to model population stratification. Because several epistasis detection methods, such as those implemented in PLINK [22], cannot include covariates in their logistic regression models, we instead adjusted the phenotypes by regressing out those principal components. In other words, we derived adjusted phenotypes from the logistic regression model by subtracting model-fitted values from observed phenotype values, i.e., response residuals (see Supplementary Fig 1).
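A minimal sketch of this adjustment, using scikit-learn with a large C to approximate an unpenalized fit; the function and variable names are illustrative, not the authors' code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def adjust_phenotype(y, pcs):
    """Response residuals of a binary phenotype on principal components.

    y   -- phenotype vector (0 = control, 1 = case)
    pcs -- n_samples x 7 matrix of the first principal components
    Returns y minus the model-fitted case probabilities, a quantitative
    "adjusted phenotype" usable by methods that cannot take covariates.
    """
    model = LogisticRegression(C=1e6, max_iter=1000).fit(pcs, y)  # ~unpenalized
    fitted = model.predict_proba(pcs)[:, 1]    # model-fitted values
    return y - fitted                          # response residuals

# Example with random data standing in for the real cohort:
rng = np.random.default_rng(0)
pcs = rng.normal(size=(1000, 7))
y = rng.integers(0, 2, size=1000)
y_adj = adjust_phenotype(y, pcs)
```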
Analysis

SNP to gene mapping: Chromatin contacts map more SNPs per gene than other mappings
In this article, we present a pipeline to detect gene epistasis across the edges of a network. We extract interacting pairs of genes from the gene-gene Biofilter network to obtain candidate gene epistatic pairs (gene models). We considered three ways to match genes to SNPs and obtain SNP models from them: Positional, eQTL and Chromatin (detailed in section From gene models to SNP models). Chromatin produced the largest number of unique SNP-gene mappings (2 394 590), an order of magnitude more than eQTL (411 120) and Positional (174 879) (Table 4). The Chromatin mapping had on average the largest number of SNPs mapped onto a gene, followed by eQTL and Positional (Fig 1A). Nonetheless, the number of SNPs mapped to a gene varied considerably across genes (Fig 1B). In addition, the number of SNPs mapped to the same gene varied considerably across mapping methods (Fig 1C, D and E): in general, the genes with most SNPs mapped using the eQTL mapping had relatively few SNPs mapped in the Chromatin mapping, and vice versa.
The Positional analysis does not recover any SNP interaction
The aforementioned SNP-gene mappings, and combinations of them (cross-mappings), yielded seven sets of SNP models. Running our pipeline on them resulted in seven epistatic SNP-SNP networks, described in Table 1 (for visualization, see Supplementary Fig 2). We also conducted what we call a Standard analysis, which reflects a conventional epistasis detection procedure: we exhaustively searched for epistatic interactions between all the SNPs that passed quality control, and then used positional mapping to assign gene interactions to the significant SNP interactions. Strikingly, while the Standard analysis generated the largest SNP-interaction network (55 nodes/SNPs and 57 edges/interactions), the Positional + eQTL one was the largest by number of interactions (76). The Positional analysis produced no significant interactions at all.
Gene epistasis: "functional" mappings boost discovery and interpretability
Findings of a GWAIS are often presented as a network, with nodes indicating SNPs and an edge between two nodes present when the analysis protocol identifies the corresponding SNP pair as significantly interacting with the trait of interest. We converted SNP model networks into gene model epistasis networks (Fig 2), adding an edge between two genes whenever the corresponding gene model significance was ascertained through an ATPM approach. The largest network was obtained under the Standard analysis (26 edges). For both eQTL and Standard, most of the significant SNP models mapped to exclusively one gene model, removing possible sources of ambivalence (Fig 3A). That was less the case under the Chromatin analysis, where it was more common for the same SNP model to map to different gene models. We also investigated the relationship between significant gene models and the number of significant SNP models that mapped to them (Fig 3B). Most significant gene interactions were supported by relatively small numbers of SNPs: either few in number, or few with respect to the total number of SNP models for that significant gene model.
Significant SNP pairs are near each other and near loci with main effects
Notably, the SNPs in significant SNP interactions are located near each other in the genome (the median distance between the pair of SNPs in Chromatin, eQTL and Positional + eQTL + Chromatin was 161 kbp). Moreover, they tend to overlap with GWAS main-effect loci (Fig 4A). To investigate whether main effects could be driving some of the signals, even when in imperfect LD with epistatic SNP pairs (a phenomenon sometimes referred to as "phantom epistasis" [24]), we conducted a linear regression-based test, including a vector of polygenic risk scores as a covariate. The observed effect of many significant SNP models notably decreased when we conditioned on single SNPs in this way (Fig 4B), but not for all of them. The latter suggests that not all detected interactions can be explained away by main effects.
The type I error is controlled
To evaluate the statistical relevance of the detected gene interactions, we studied whether the proposed protocol controlled the type I error. For that purpose, we performed a permutation analysis based on 1 000 permutations for each of the datasets, permuting the phenotypes and running the entire protocol to detect significant gene interactions. This permutation procedure is independent of the one used in the proposed protocol to compute significance thresholds. When at least one significant gene interaction was observed in a permutation, that permutation was counted as a false positive (FP). This allowed us to compute the type I error rate as #FP / 1 000. The type I error was under control in all tested experimental settings, with estimates ≤ 6.6% (Table 3).
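The estimate reduces to a simple count over permutations; a sketch, with `run_protocol` standing in for the full detection pipeline (a hypothetical helper, not from the paper):

```python
import numpy as np

def type_i_error(genotypes, phenotype, run_protocol, n_perm=1000, seed=0):
    """Fraction of phenotype permutations yielding >= 1 significant gene pair."""
    rng = np.random.default_rng(seed)
    false_positives = 0
    for _ in range(n_perm):
        permuted = rng.permutation(phenotype)      # break genotype-phenotype link
        significant_pairs = run_protocol(genotypes, permuted)
        if len(significant_pairs) >= 1:            # any hit is a false positive
            false_positives += 1
    return false_positives / n_perm
```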
Biofilter boosts discovery of interpretable hypotheses
Searching for epistatic interactions exclusively across the edges of the Biofilter network greatly reduces the number of tests. Yet, this gain in statistical power might not lead to greater discoveries, as it potentially disregards new interactions absent from databases. Hence, we tested whether exhaustively searching for epistasis on the datasets not reduced to Biofilter models, but using each mapping, led to similar results. At the SNP level (Fig 5A, upper panel), only a small proportion of the significant interactions were still detected when the network was not used. Strikingly, that difference got smaller at the gene level (Fig 5A, lower panel). This suggests that the significant SNP models, even if fewer in number, are strong enough to lead to the detection of the gene models.
In a similar vein, we studied the overlap between the significant models detected in the different analyses. Including more SNP-gene mappings in the analysis was mostly beneficial with respect to considering one mapping at a time, since both at the gene and at the SNP level the significant interactions in Positional + eQTL + Chromatin highly overlapped with those of the other analyses (Fig 5B). Nonetheless, a few interactions were also missed in this joint analysis, in particular 20 significant SNP models detected in the eQTL analysis.
Positional + eQTL + Chromatin and Standard analyses partially replicate previous studies on IBD
In the past, several genetic studies have investigated epistasis in IBD [6,7,9,10,11,25]. We compared them to our results at the gene level, the minimal functional unit at which we expect genetic studies to converge. Several epistatic alterations involving interleukins have been reported [6,10,11]. Our Standard analysis also resulted in interactions involving three interleukins (IL-19, IL-10 and IL-23), although interacting with different genes than in the aforementioned studies. Positional + eQTL + Chromatin recovered five interleukins (IL-4, IL-5, IL-13, IL-…). In addition, Lin et al. [25] detected interactions involving NOD2, with both IL-23R and other genes. Our Standard analysis highlighted two potentially new epistatic interactions involving NOD2.
Discoveries in the proposed protocol are guided by plausible biological interactions. Hence, every significant gene model can be traced back to a biological database, thereby producing biological hypotheses. For instance, the gene model MST1-MST1R is significant in multiple pipelines. Both genes have been linked to IBD, both by themselves [26,27] and in interaction with other genes [28]. MST1R is a surface receptor of MST1, and through physical interaction they play a role in the regulation of inflammation.
Pathway analyses highlight the involvement of the extracellular matrix in IBD
Pathway enrichment analyses of each interaction's neighborhood allowed us to identify broader biological mechanisms that the significant interaction pairs might be involved in. The eQTL analysis produced multiple significant pathways (see Supplementary Table 1), involving the triangle of interactions formed by two genes located in 3p21.31 (HYAL1, HYAL3) and one in 7q31.32 (SPAM1) (Fig 2). The affected pathways were related to the extracellular matrix, and specifically to glycosaminoglycan degradation. Links between the turnover of the extracellular matrix and IBD-related inflammation have been reported [29]. More specifically, glycosaminoglycan [30] and hyaluronan [31] degradation products lead to an inflammatory response. When restricting attention to pathways of minimum gene size 10 and maximum gene size 500 to avoid imbalances and non-normality, four pathways are removed: cellular response to UV-B, hyaluronoglucosaminidase activity, hexosaminidase activity and CS/DS degradation. The Chromatin mapping and the Standard pipeline did not produce significant pathways. The Positional + eQTL + Chromatin analysis produced 71 significant pathways (Supplementary Table 2), involving the neighborhoods HYAL3, HYAL1, HYAL2 and PLA2G2E, PLA2G5, PLA2G2C.
The proposed pipeline increases robustness
We studied whether our proposed pipeline led to more robust results. For that purpose, we ran the whole protocol again on a random subset of the data containing 80% of the samples. We repeated this experiment 10 times for each SNP-gene mapping. In each subset, 49% of the individuals were cases, respecting the initial proportion of cases and controls of the entire dataset. Conservatively, we used the same SNP and gene significance thresholds as for the corresponding entire dataset.
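The stratified 80% subsampling can be sketched as follows (hypothetical variable names, with 0/1 phenotype coding assumed):

```python
import numpy as np

def stratified_subsample(y, fraction=0.8, seed=0):
    """Indices of a subsample preserving the case/control proportion."""
    rng = np.random.default_rng(seed)
    keep = []
    for label in (0, 1):                           # controls, then cases
        idx = np.flatnonzero(y == label)
        n_keep = int(round(fraction * idx.size))
        keep.append(rng.choice(idx, size=n_keep, replace=False))
    return np.sort(np.concatenate(keep))
```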
The Standard pipeline, which does not include Biofilter network information, produced on average 11.4 significant gene models (standard error (SE) 1.1). With the eQTL (respectively Chromatin) analysis, we detected on average 5.8 gene pairs (respectively 3.2) with SE 0.1 (respectively 0.4). With the Positional + eQTL + Chromatin mapping process, we detected on average 8.6 gene pairs with SE 1.3.

Table 3. Type I error of the protocol presented in Gene interaction detection procedure, estimated over 1 000 random permutations, as explained in section The type I error is controlled.

Fig 6 shows that pipelines including biological knowledge recover, on average, more than 60% of the gene pairs detected with the entire cohort (83% for eQTL, 60% for the Chromatin mapping and 64% for Positional + eQTL + Chromatin), whereas without this knowledge (Standard) we recover less than 40% of the pairs. Hence, the Standard analysis appears to be the least robust in terms of conservation of gene pairs. This shows that filtering does increase robustness at the gene level.
Tissue-specific mappings do not recover many new interactions
To analyze the impact of tissue-specific mappings, we ran three analyses using exclusively eQTL and Chromatin mappings obtained from relevant tissue types, and a combination of these eQTL and Chromatin mappings with the Positional one. Specifically, we used mappings obtained from organs and tissues of the nervous and digestive systems (Supplementary Table 3). While the tissue-specific Chromatin analysis did not produce any significant gene pair, the tissue-specific eQTL and Positional + eQTL + Chromatin analyses produced 4 and 3 significant gene pairs, respectively (Supplementary Fig 4). Nonetheless, only one is novel with regard to the organism-wide analyses: IL18RAP-IL18R1.
Discussion
In this article we proposed a new protocol for epistasis detection, based on a variety of functional filtering strategies, and studied its application to GWAS data for inflammatory bowel disease. The protocol includes several components to control the type I error, thereby strengthening our belief in the discovered genetic interactions.
A common theme in the interpretation of epistasis results consists of linking the associated variants to an altered gene function. In this article, we considered 3 such SNP-gene mappings. Notably, the number of SNP-gene correspondences provided by each mapping differed by orders of magnitude. Moreover, the different mappings described genes unevenly; for instance, genes that had most SNPs mapped using the chromatin contact map had comparatively few eQTL SNPs. To combine these different perspectives on the epistasis process, we combined multiple mappings into one analysis (Positional + eQTL + Chromatin). This joint analysis allows the detection of biologically interesting interactions, like a SNP in a distal enhancer of a gene (captured by the Chromatin mapping) interacting with another gene's eQTL. For the most part, this complementary approach improved the analysis by recovering most of the interactions that were significant in the analyses using one mapping at a time. Importantly, our results display the benefits of going beyond one single SNP-gene mapping (often, genomic position) to interpret epistasis results. To our surprise, tissue-specific analyses using exclusively eQTL and Chromatin mappings from tissues related to IBD resulted in fewer significant gene pairs. Despite this setback, we believe that more targeted analyses (e.g., using only interactions from open chromatin in relevant cell types) might lead to novel discoveries.
Restricting the tested interactions to functionally plausible pairs of genes and SNPs joins two faces of epistasis: searching for statistical epistasis, yet exclusively on plausible candidates for biological epistasis. This has several advantages. First, a more targeted input dataset reduces the number of tests and, in consequence, the multiple testing burden. In contrast, the high dimensionality of GWAIS data requires a much more stringent multiple testing correction and limits the detection of epistasis with low effect sizes. Adopting one of the proposed analyses may reduce the number of SNP interactions to test by more than half (Fig 1). Yet, the Standard analysis, which does not use Biofilter, produced the most significant gene models. Second, the proposed protocol addresses the robustness issues widespread in GWAIS by producing results that are consistent at the gene and pathway levels (Fig 5). Indeed, we observed an increased analytic robustness when using Biofilter gene models, in line with previous reports [32]. In particular, eQTL and Chromatin mappings increased said robustness. Third, restricting the search for epistasis to biologically plausible interactions yields results that are biologically interpretable and strikingly different from those obtained without functional filtering (Fig 2). Not surprisingly, different mappings also provided very different interaction signals and resolve information on different genes. In particular, we corroborated that the significant gene models from different functional filters were relevant to the biology of IBD. This was especially true for the Chromatin analysis (but also the eQTL analysis), giving rise to interactions with seemingly meaningful biological underpinnings and stressing the relevance of regulatory variants in susceptibility to IBD. In contrast, the Standard analysis detected multiple interactions that were hard to interpret; for instance, several interactions involved RNA genes of unknown function (e.g., LOC101927272 or LINC02178).
Remarkably, while the Standard analysis produced rich results, the Positional analysis did not lead to any significant SNP model. Both use genomic position to map SNPs to genes, but the Positional analysis is restricted to gene models in Biofilter. We note that the Positional analysis does not coincide with how Biofilter is typically used on GWAS data for epistasis detection. The latter involves pooling all SNPs mapped to genes that occur in Biofilter's proposed gene interaction models, and subsequently exhaustively screening those SNPs for pairwise interactions. These pairs may also involve gene pairs that were not highlighted by Biofilter, in contrast to our Positional analysis. We evaluated the impact of Biofilter on the final results. No significant SNP interactions were detected in the Positional analysis. In the analysis without biofiltering (dataset reduced to SNPs mappable via genomic proximity, but not reduced to Biofilter gene pairs), 62 pairs were significant. Also, of the 86 SNP interactions that passed the experimental threshold in the Standard analysis (dataset reduced neither to mappable SNPs nor to Biofilter gene pairs), only 57 are mappable to gene interactions using genomic proximity. Hence, 66% of significant SNP pairs in the Standard analysis are mappable via genomic proximity.
An important component of our protocol is the conversion of SNP-based tests to gene-based tests. The most popular approach consists in aggregating SNP-level P-values into gene-level statistics, which can be done in different ways (see Ma et al. [33] for some early examples, and Vsevolozhskaya et al. [34] for recent developments). Here, we developed a generic approach that exploits a permutation strategy to define a P-value cutoff for SNP interactions, at a FWER of 5%, and then followed the original implementation of the adaptive TPM (ATPM) to accommodate several truncation thresholds at once [19], using permutations instead of the bootstrap of Yu et al. [35]. The two algorithms are very similar, but we favored the TPM over the rank truncated product method of Yu et al. [35], which employs the product of the L most significant P-values, because the TPM only requires P-values smaller than a specified threshold, which is in line with the output of PLINK epistasis detection and saves storage space. Following both protocols and the recommendation of Becker and Knapp [36], we included measures derived from the observed data when computing statistics under the null.
Remarkably, our proposed procedure keeps type I error under control without additional corrections for multiple testing at the gene-model level. We hypothesize that this stems from two reasons. First, we apply a stringent correction for multiple testing at the SNP level. Second, when moving from SNP model significance to gene model significance, the ATPM only considers gene models that map to at least one significant SNP model. However, alternative strategies could have been considered: for instance, not restricting ourselves to significant SNP models, and hence conducting the ATPM on all gene models. This could have led to increased discovery in cases where the P-values of the SNP models mapped to a gene tend to be low, albeit non-significant. However, it may also lead to an increased type I error. Accounting for that would require a multiple testing correction at the gene level. In turn, such a correction would be difficult since the dependency between the tests is unknown. Additionally, common multiple testing correction procedures would require a much higher number of permutations in order to obtain the necessary numerical precision.
How to best perform a pathway analysis of epistasis results is understudied. Often, all genes belonging to any significant gene pair are simply pooled together into a joint enrichment analysis. This approach discards the gene-gene interaction information that was, indeed, the object of analysis. Hence, in our procedure we adapted the network neighborhood search protocol from Yip et al. [37], which considers the topology of the network using the shortest paths between the studied genes. It should be noted that we only used the topology to derive a neighborhood for each significant gene pair; we then discarded the edge information. Yet, there are several directions for improvement. One is to exploit the topology of the epistasis network beyond the creation of a neighborhood. Another is to take into account the gene size (or the number of SNPs per gene), for instance by performing a weighted version of the statistical test. Jia et al. [38] suggested a method for gene set enrichment analysis of GWAS data that adjusts for gene length bias or the number of SNPs per gene. In our data, we did observe a link between the significance of the gene models and the number of SNPs mapped to the gene. For instance, in the eQTL analysis, the only one producing significant pathways, the median number of SNPs per gene is 385 among genes in significant pairs, versus 3 SNPs per gene genome-wide.
Several protocol changes may impact the final results. As reported elsewhere [32], these choices include the modelling framework (parametric, non-parametric, semi-parametric), the encoding of the genetic markers, as well as LD handling. With regards to the first, we used a linear regression. Since the IIBDGC dataset is case-control, a more natural choice would have been logistic regression, including the 7 main PCs to account for population structure. However, the tool of choice (PLINK) did not allow covariates to be included. To work around this, we took the binary phenotype as a continuous variable, regressed out the 7 largest PCs, and fitted a linear regression to this adjusted phenotype. Although this approach works well in practice, it is sub-optimal, and more flexible frameworks might account for the population structure more accurately. With regards to the encoding, we used an additive encoding scheme (0, 1, 2 indicating the number of copies of the minor SNP allele), a popular choice in part because of its computational efficiency. However, this encoding scheme has been reported to tend to increase false positives (for instance [39]). This observation is based on type I error studies with data generated under the null hypothesis of no pairwise genetic interactions but in the presence of main effects (see for instance [40]). Here, we investigated the type I error control of our protocols under a general null hypothesis of no genetic associations with the trait (no interactions and no main effects) and established adequate control. Consequently, this does not guarantee that our generated SNP interaction results were not overly optimistic. To this end, we adjusted SNP-level epistasis P-values for main effects as comprised in a polygenic risk score. Not only does such a post-analysis adjustment via conditional regression reduce over-optimism due to inadequate control for lower-order effects, thus addressing phantom epistasis [24], but it may also occasionally highlight the masking of SNP interactions (as was shown in Fig 4B - eQTL). More work is needed to investigate the impact on gene-level interaction results derived accordingly. For convenience, we used the regression framework to identify SNP interactions and relied on earlier recommendations regarding LD handling [41].
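To make the workaround concrete, the following is a minimal sketch of the phenotype adjustment described above, assuming a binary phenotype vector and a matrix of the top 7 principal components; variable names are illustrative, not taken from the original pipeline.

```python
# Minimal sketch of the phenotype-adjustment workaround (assumed inputs:
# `pheno`, a 0/1 case-control vector, and `pcs`, an n x 7 array of PCs).
import numpy as np
import statsmodels.api as sm

def adjust_phenotype(pheno: np.ndarray, pcs: np.ndarray) -> np.ndarray:
    """Regress the binary trait on the top PCs; the residuals serve as the
    continuous, population-structure-adjusted phenotype used downstream."""
    X = sm.add_constant(pcs)
    fit = sm.OLS(pheno.astype(float), X).fit()
    return fit.resid
```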
Our protocols are built on output from Biofilter, which can be represented as a co-functional gene network. One of the motivations was its proven ability to highlight meaningful interactions in a narrower alternative hypothesis space, at the expense of leaving parts of the interaction search space unexplored. The database that Biofilter built contained 37 266 interactions. This is notably smaller than other gene interaction databases, like HINT [42] (173 797 interactions) or STRING [43] (11 759 455 interactions). Testing gene interactions with other (combinations of) biological interaction networks was beyond the scope of this paper. Furthermore, Biofilter analysis and exhaustive screening may lead to non-overlapping results. An example within a regression context is given in [32].
Potential implications
In this study we presented a protocol to enhance the interpretation of epistasis screening from GWAS. It includes gene-level epistasis discoveries with type I error under control, as well as a network-guided pathway analysis. Moreover, it improves the robustness of the results. While SNP pairs from a GWAIS study rarely replicate in other cohorts and arrays, results at the gene and pathway level are more likely to be reproducible. This can be achieved directly by applying the proposed protocol, or by testing, in a new cohort, SNP models obtained via the SNP-gene mapping of interest from the gene pairs and pathways found significant in other studies. Aggregating SNP-level results into gene-level epistasis is challenging, but allows us to include information from biological interaction databases. Based on that, we conducted multiple analyses that use different sources of prior biological knowledge about SNP-to-gene relationships and gene interaction models, as well as rigorous statistical approaches to assess significance. Each of them offers a different, albeit complementary, view of the disease, which leads to additional insights.
Their application to GWAS data for inflammatory bowel disease highlighted the potential of our strategy, including network-guided pathway analysis, as it recovered known aspects of IBD while capturing relevant and previously unreported features of its genetic architecture. These strategies will contribute to identifying gene-level interactions from SNP data for complex diseases, and to strengthening our confidence in these findings.
Gene interaction detection procedure
As we describe in more detail below, we applied different functional filters to the available data. These filters use plausible interactions between genes, and three different ways of mapping SNPs to those genes, and hence, to these interactions. These three mappings exploit different degrees of biological knowledge to map SNPs to genes, and are referred to as Positional, eQTL and Chromatin. For each of the three SNP-to-gene mappings, we only analyzed the pairs of SNPs corresponding to a gene pair with prior evidence for interaction. Throughout this article, we compared our findings in these scenarios to a Standard scenario, in which we exhaustively searched for epistasis between all 38 225 SNPs that passed quality control (Table 4). We mapped the resulting significant SNP interactions to potential gene interactions using the positional mapping. An overview of the entire pipeline is presented in Fig 7.
From gene models to SNP models
Although the unit of analysis in GWAIS is the SNP, biological interactions are often characterized at the gene level. Hence, we mapped all SNPs in the dataset to genes using FUMA [44], a post-GWAS annotation tool. To perform such a mapping on all the SNPs, we created an artificial input in which every SNP is significant. We performed three SNP-gene mappings using FUMA's SNP2GENE: positional, eQTL and 3D chromatin interaction (Table 4). In the Positional mapping, we mapped a SNP to a gene when the genomic coordinates of the former fell within the boundaries of the latter ± 10 kb. The eQTL mapping uses eQTLs obtained from GTEx [45]. We mapped an eQTL SNP to its target gene when the association P-value was significant in any tissue (FDR < 0.05). Lastly, in the Chromatin mapping, we mapped a SNP to a gene when a contact had been observed between the former and the region around the latter's promoter (250 bp upstream and 500 bp downstream from the transcription start site) in the 3D structure of the genome, in any of the Hi-C datasets included in FUMA (FDR < 10⁻⁶). This mapping might contain new, undiscovered regulatory variants which, like SNPs obtained through the eQTL mapping, regulate the expression of a gene.
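As an illustration, a positional mapping of this kind reduces to a coordinate overlap test; the sketch below is an assumption-laden example, not the actual FUMA implementation, and the data frames and column names (`chrom`, `pos`, `start`, `end`) are hypothetical.

```python
# Illustrative positional SNP-to-gene mapping (gene boundaries ± 10 kb).
import pandas as pd

def positional_mapping(snps: pd.DataFrame, genes: pd.DataFrame,
                       window: int = 10_000) -> pd.DataFrame:
    """Return (snp, gene) pairs where the SNP position falls within the
    gene's [start - window, end + window] interval on the same chromosome."""
    merged = snps.merge(genes, on="chrom")
    hits = merged[(merged["pos"] >= merged["start"] - window) &
                  (merged["pos"] <= merged["end"] + window)]
    return hits[["snp", "gene"]].drop_duplicates()
```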
Co-function gene and SNP networks
We used Biofilter 2.4 [18] to obtain candidate gene pairs to investigate for epistasis evidence. Biofilter generates pairs of genes likely to interact (gene models), with evidence of co-function across multiple publicly available biological databases. It includes genomic locations of SNPs and genes, as well as known relationships among genes and proteins such as interaction pairs, pathways and ontological categories, but does not use trait information. As per Biofilter's default, we used gene models supported by evidence in at least 2 databases. Additionally, we removed self-interactions, as the detection of within-gene epistasis requires special considerations and is beyond the scope of this paper.
Given this set of gene models, and three different ways of obtaining SNP models from it, we removed all the SNPs that did not participate in any SNP model. Subsequently, we created eight datasets. In one dataset no filter was applied (Standard analysis), i.e. no Biofiltering nor any SNP-to-gene mapping. Hence, the original SNP set was used. We also created one dataset exclusively for each SNP-to-gene mapping (Positional, eQTL and Chromatin). Lastly, we created four datasets using joint mappings: one with all the mappings (Positional + eQTL + Chromatin); and three with only two of them (eQTL + Chromatin, Positional + eQTL and Positional + Chromatin).
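The step from gene models to candidate SNP models can be sketched as follows; `gene_models` and `snp2gene` are hypothetical stand-ins for the Biofilter output and the chosen SNP-to-gene mapping.

```python
# Sketch: every cross-pair of SNPs mapped to the two genes of a Biofilter
# gene model becomes a candidate SNP model under the chosen mapping.
from itertools import product

def snp_models(gene_models, snp2gene):
    """Yield ((snp_a, snp_b), (gene_a, gene_b)) for each gene model."""
    for gene_a, gene_b in gene_models:
        for snp_a, snp_b in product(snp2gene.get(gene_a, ()),
                                    snp2gene.get(gene_b, ())):
            if snp_a != snp_b:
                yield (snp_a, snp_b), (gene_a, gene_b)
```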
Regardless, all risk SNPs described in Liu et al. [46] were included, even when they did not meet the aforementioned epistasis quality control criteria. Then, when the two SNPs of a SNP model were located in the HLA region, we discarded the pair, as it is difficult to differentiate between main and non-additive effects in this region [47]. Lastly, we discarded models where the SNPs were in strong linkage disequilibrium (r² > 0.75), as motivated in Gusareva and Van Steen [41].
SNP-level epistasis detection and multiple testing correction
We used PLINK 1.9 to detect epistasis through a linear regression on the population-structure-adjusted phenotypes with the option --epistasis:

Y = β₀ + β₁ gA + β₂ gB + β₃ gA·gB,    (Eq. 1)

where gA and gB are the genotypes under additive encoding for SNPs A and B respectively, Y is the adjusted phenotype, and β₀, β₁, β₂ and β₃ are the regression coefficients. PLINK performs a statistical test to evaluate whether β₃ = 0. It only returns SNP pairs with a P-value lower than a specified threshold; we used the default of 0.0001. Except in the Standard approach, only SNP pairs forming SNP models were considered.
To correctly account for multiple testing, the P-value threshold of significance had to be dataset-dependent, as the number of tested SNP pairs changed from dataset to dataset. We obtained these thresholds through permutations, as in Hemani et al. [48] (Fig 7). More specifically, for each dataset, we permuted the phenotypes 400 times and fitted the aforementioned regression-based association model. This produced a null distribution of the extreme P-values for this number of tests, given the LD structure in the data. For each dataset, we took the most extreme P-value from each of the 400 permutations and set the threshold for a 5% family-wise error rate (FWER) to the 5th percentile of these most extreme P-values. Subsequent experiments showed that a higher number of permutations, 1 000, barely changed the empirical threshold (data not shown). Hence, 400 was a sufficient number of permutations to obtain an adequate threshold.
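The threshold computation can be summarized in a few lines; `min_p_from_plink` is a hypothetical helper that would rerun PLINK --epistasis on a permuted phenotype and return the smallest interaction P-value.

```python
# Sketch of the permutation-based FWER threshold (400 permutations).
import numpy as np

rng = np.random.default_rng(42)

def fwer_threshold(pheno, min_p_from_plink, n_perm=400, alpha=0.05):
    """5th percentile of the per-permutation minimum P-values."""
    extremes = []
    for _ in range(n_perm):
        permuted = rng.permutation(pheno)  # breaks the genotype-phenotype link
        extremes.append(min_p_from_plink(permuted))
    return np.quantile(extremes, alpha)
```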
ATPM is an adaptive variant of the Truncated Product Method (TPM) of Zaykin et al. [49], which uses as its statistic the product of the P-values smaller than some pre-specified threshold (here, those of the significant SNP interactions). More specifically, given a truncation point τ and a number N of significant SNP interactions, the test statistic is given as

W(τ) = ∏_{i=1}^{N} p_i^{I(p_i ≤ τ)},

where I(·) is the indicator function. The TPM is interesting in our context because it does not require P-values for all SNP pairs, but only for the most strongly associated ones.
The distribution of W(τ) under the null hypothesis is unknown when the individual tests are not independent, which is clearly the case here, but an empirical P-value π̂(τ) can be estimated through permutations. Because the choice of τ is arbitrary, the adaptive version of the TPM (ATPM) explores several values of τ and selects π̂* = min_τ π̂(τ). The distribution of π̂* under the null hypothesis can again be determined through permutations [50].
In our procedure, which is detailed below for a given gene pair, we used B = 999 permutations and τ ∈ {0.001, 0.01, 0.05}. Notably, following the suggestion of Becker and Knapp [36], the null distribution includes both the statistic from the observed dataset and those from the 999 permutations.
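A minimal sketch of this step for a single gene model is given below, assuming `obs_pvals` holds the P-values of the significant SNP models mapped to the pair in the observed data, and `perm_pvals[b]` the analogous values from permutation b; variable names are illustrative.

```python
# Hedged sketch of the (A)TPM computation for one gene model.
import numpy as np

def tpm(pvals, tau):
    """Zaykin's truncated product: product of the P-values <= tau."""
    p = np.asarray(pvals, dtype=float)
    kept = p[p <= tau]
    return kept.prod() if kept.size else 1.0

def atpm(obs_pvals, perm_pvals, taus=(0.001, 0.01, 0.05)):
    """Return pi* = min over tau of the empirical P-value of W(tau).
    Per Becker and Knapp, the observed statistic joins the null distribution."""
    pi_hats = []
    for tau in taus:
        w_obs = tpm(obs_pvals, tau)
        null = np.array([tpm(p, tau) for p in perm_pvals] + [w_obs])
        pi_hats.append(np.mean(null <= w_obs))  # smaller W = stronger signal
    return min(pi_hats)  # its significance is again calibrated by permutation
```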
Studying the impact of confounding main effects
The SNPs from some detected interactions were near SNPs with main effects. To assess the impact on the results, we studied the difference between β₃ in Eq. 1 and in the following model:

Y = β₀ + β₁ gA + β₂ gB + β₃ gA·gB + β₄ PRS.    (Eq. 2)
Here, PRS is the polygenic risk score computed for the sample. We expect the PRS to capture the variance explained by all main effects.
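A sketch of the conditional regression of Eq. 2 could look as follows, with `y` the adjusted phenotype, `gA` and `gB` additive genotype vectors, and `prs` the PRSice score (all assumed inputs).

```python
# Sketch: re-estimate the interaction coefficient beta_3 while adjusting
# for main effects captured by the polygenic risk score (Eq. 2).
import numpy as np
import statsmodels.api as sm

def interaction_beta(y, gA, gB, prs):
    """Fit Y ~ gA + gB + gA*gB + PRS and return beta_3 with its P-value."""
    X = sm.add_constant(np.column_stack([gA, gB, gA * gB, prs]))
    fit = sm.OLS(y, X).fit()
    return fit.params[3], fit.pvalues[3]
```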
We computed the PRS with PRSice-2 [51], using the default options. Since it requires GWAS summary statistics, we used PLINK --assoc to compute the association of each SNP in the original dataset (130 071 SNPs and 66 280 individuals, with the trait adjusted for PCs). Since the adjusted phenotype is quantitative, PLINK computes the linear regression coefficients and assesses their significance using the Wald test. PRSice performs clumping to remove SNPs that are in LD with each other. The r² values computed by PRSice are based on maximum likelihood haplotype frequency estimates. Of the 130 071 initial variants, 28 389 remained after clumping (--clump-kb 250, --clump-p 1, --clump-r2 0.1). We used the average effect size method to calculate the PRS, with high-resolution scoring.
Pathway analysis
A pathway enrichment analysis on the neighborhood of a significant gene model can inform about the broader framework in which gene epistasis occurs. To define such neighborhoods, we adapted the network neighborhood search protocol from Yip et al. [37]. We computed the neighborhood of two genes as the list of all genes that (1) participate in any of the shortest paths between the two studied genes in the Biofilter network, once the direct link between them is removed; and (2) are also involved in a significant interaction with at least one other gene on these paths. We restricted our attention to neighborhoods containing at least 3 genes, including the 2 from the considered gene model. For each of these, we conducted a gene set enrichment analysis in all human gene sets from the Molecular Signature Database (MSigDB version 7) [52,53]. We performed the enrichment analysis using a hypergeometric test, which compares the observed overlap between two sets to the overlap expected when taking equally-sized random sets from the universe of genes. We favored the hypergeometric test over the chi-square test used in Yip et al. [37] because the sample sizes of the neighborhoods were small and because the chi-square test is an approximation, whereas the hypergeometric test is exact. The universe set was analysis-dependent. For the Standard analysis, it contained all genes that belong to an annotated pathway and can be mapped via genomic proximity to a SNP in the dataset; for the other analyses, it contained all genes that are present in Biofilter gene models, belong to an annotated pathway and can be mapped to a SNP in the dataset via the appropriate SNP-to-gene mapping. Finally, pathways were called significant when the corresponding test P-value was lower than the Bonferroni threshold (0.05/(# pathways × # tested gene neighborhoods)), where only pathways containing at least one gene of the neighborhood were counted. We also made this pipeline available in the bio.tools (id: network_epistasis) and SciCrunch (network_epistasis, RRID: SCR_021794) databases.
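For reference, the hypergeometric test for one neighborhood and one pathway reduces to an upper-tail probability; the sketch below uses SciPy and assumes the four counts have already been computed.

```python
# Sketch of the one-sided hypergeometric enrichment test:
# N = universe size, K = pathway genes in the universe,
# n = neighborhood size, k = overlap between neighborhood and pathway.
from scipy.stats import hypergeom

def enrichment_pvalue(N: int, K: int, n: int, k: int) -> float:
    """P(X >= k) when drawing n genes without replacement from the universe."""
    return hypergeom.sf(k - 1, N, K, n)
```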
Availability of source code and requirements
The code necessary to reproduce this article's results and analyses is available on GitHub at https://github.com/DianeDuroux/BiologicalEpistasis.
Data Availability
The data set supporting the results of this article is available upon request from the International Inflammatory Bowel Disease Genetics Consortium (https://www.ibdgenetics.org/). GWAS summary statistics are publicly available. Snapshots of the code are available in the GigaScience GigaDB repository [54].
"Computer Science",
"Biology"
] |
Foucault and Agamben on Augustine, Paradise and the Politics of Human Nature
This article focuses on Foucault's and Agamben's readings of Augustine's account of human nature and original sin. Foucault's analysis of Augustine's account of sexual acts in paradise, subordinated to will and devoid of lust, highlights the way it constitutes the model for the married couple, whose sexual acts are only acceptable if diverted by the will away from desire and towards the tasks of procreation. While Agamben rejects Augustine's doctrine of original sin and reclaims paradise as the original homeland of humanity, his reappropriation of paradise remains conditioned by our turn towards our true nature, from which we have been estranged by sin. Agamben's politics of reclaiming paradise necessarily involves the demand for obedience to this originary model of human nature. It therefore follows to the letter Augustine's description of paradisiacal sex, in which the will prevails over desire by applying itself to, and curtailing, itself.
Introduction
Giorgio Agamben's (1998: 9) claim in the first volume of the Homo Sacer series to correct and complete Michel Foucault's theory of biopolitics gave rise to prolific commentary in philosophy and political theory that sought to elucidate the relation between the theoretical orientations and methodological approaches of the two authors (Ojakangas, 2005; Snoek, 2010). Besides the two authors' common interest in divergent approaches to biopolitics, Foucault has also influenced Agamben's work in other fields, most notably the method of archaeology (Agamben, 2009b; see Frost, 2019), the focus on styles of existence (Agamben, 2016; see Van der Heiden, 2020) and truth-telling (Agamben, 2009a; see Crosato, 2020).
What has received less attention in this discussion is the relation of the two authors to questions of political theology. The interest in theology is of course more explicit in the case of Agamben than Foucault, despite the latter's turn to 'political spirituality' in his later work (Foucault, 2005: 15; Foucault and Bremner, 2020). Nonetheless, the recent publication of the fourth volume of Foucault's History of Sexuality, Confessions of the Flesh (2021), and Agamben's The Kingdom and the Garden (2020) permits us to analyse in more detail the two authors' approaches to the Christian theological tradition, and specifically Augustine's doctrine of original sin.
In their readings of Augustine, Foucault and Agamben address the same set of questions: what is the human nature that was lost with original sin and the expulsion from paradise, how did the loss take place, and can its outcome be mitigated or even overcome? Nonetheless, they pose these questions in the context of rather different theoretical projects, respectively genealogical and messianic. For Foucault, Augustine's doctrine of original sin marks a key episode in the genealogy of the subject of desire in the Western tradition, whereby sexual relations became the site for both the 'veridictive' interrogation of the subject's inner truth and the 'jurisdictive' regulation of its behaviour. Agamben is interested less in charting the role of the idea of sin in the constitution of the subject than in the possibilities of overcoming sin as such and reclaiming paradise for humanity. He therefore opposes to Augustine's doctrine the more affirmative visions of human nature developed by Eriugena and Dante.
Despite these divergent orientations, addressing the two readings together is highly instructive insofar as it illuminates the problems involved in Agamben's attempt at the messianic politics of the return to paradise. As we shall demonstrate in this article, Agamben's attempted refutation of Augustine maintains his identification of sin with disobedience, which entails that the project of reclaiming paradise must presuppose the requirement of obedience that characterized human behaviour before the fall. Agamben's vision of paradise regained carries an uncanny resemblance to Foucault's image of the Christian marital couple whose sexual life imitates what Augustine thought sex was like in paradise: obedient, reasonable and wholly subject to the will. Foucault's genealogical approach thus offers a sobering corrective to Agamben's enthusiastic affirmation of the 'cancellation of sin' that leaves its logic intact.
Our argument in this article unfolds in three steps. Firstly, we shall address Foucault's reading of Augustine's account of original sin, starting from his unorthodox interpretation of sexual acts in paradise, devoid of desire and entirely controlled by the will. The disobedience of God in original sin puts an end to this sexual experience, introducing into human nature involuntary sexual urges (concupiscence). The expulsion from paradise thus entails a split in the will that can never be entirely overcome but only mitigated in the context of marriage by diverting one's will from desire in one's sexual acts, subjecting them entirely to the legitimate tasks of procreation and avoidance of fornication. While original sin cannot be cancelled in this life, it can be tempered by practising sexual acts in a voluntarist and dutiful manner, as Adam and Eve did before sinning.
In the second section, we turn to Agamben's reading of Augustine's doctrine, which, similar to that of Foucault, highlights Augustine's rejection of any possibility of overcoming original sin in this life and thereby reclaiming the earthly paradise. Agamben opposes this doctrine with the more heterodox approaches to paradise developed by Eriugena and Dante, for whom human nature has not been corrupted by sin but only deviated from in our acts of will, and for whom paradise, consequently, remains accessible to us, perhaps as a metaphor for our human nature itself, which may be abused and distorted by will but can always be recovered by applying the will against itself.
In the final section, we shall discuss the way Agamben posits this recovery as the task of a messianic politics that identifies the advent of the Kingdom, which the theological tradition has deferred to heaven after the end of time, with the return to the Garden. Insofar as this return presupposes the cancellation of sin, which for Augustine consists of disobedience, we shall conclude that Agamben's messianic politics is necessarily characterized by the requirement of obedience, which Foucault analysed as the key feature of subjectivation in early Christianity.
The Will (Not) to Sin: Foucault on Paradisiacal and Marital Sex
Foucault's Confessions of the Flesh is the concluding volume in his History of Sexuality project. Foucault's original plan for the six-volume series devoted to the regulation of sexual practices in 17th-19th century Europe was abandoned after the publication of the introductory volume (Foucault, 1990a) in favour of a much longer historical perspective, going back to ancient Greek (Foucault, 1990b), Roman (Foucault, 1990c) and early Christian (Foucault, 2021) sources. The focus of the study also shifts from the 'objectifying' regulation of sexual behaviours towards the emergence of the subject of desire in various technologies of the self. Although it is the final volume in the series, Confessions of the Flesh was actually completed before the second and third volumes (Elden, 2018; Gros, 2021: x-xi) and is thematically closest to the problematic of the first volume, which also addressed confession as the key technology of subjectivation in the Western tradition (Foucault, 1990a: 60-63). In Confessions, Foucault addresses three episodes in the genealogy of the subject of desire: the emergence of the notion of the 'flesh' in the Christian tradition that supplants the idea of pleasure (aphrodisia) in antiquity, the formation of the practice of virginity in monastic settings and the procedures of truth-telling associated with it, and, finally, the problematization of sexual acts in the marital relationship.
In our analysis of Foucault's reading of Augustine we shall focus on the concluding chapter of Confessions of the Flesh, in which Foucault addresses the question of the 'libidinization' of sex, its constitution as no longer merely a bodily pleasure but rather an experience of lust or concupiscence. In order to understand the formation of this experience we must first consider the possibility of sexual relations deprived of it, which leads Foucault to Augustine's description of prelapsarian sexuality. In his Reply to Two Letters of Pelagians, Augustine discusses four possible forms that sexual relations could have taken in paradise.
[While] maintaining, you Pelagians, the honorableness and fruitfulness of marriage, determine, if nobody had sinned, what you would wish to consider the life of those people in Paradise, and choose one of these four things. For beyond a doubt, either as often as ever they pleased they would have had intercourse; or they would bridle lust when intercourse was not necessary; or lust would arise at the summons of will, just at the time when chaste prudence would have perceived beforehand that intercourse was necessary; or, with no lust existing at all, as every other member served for its own work, so for its own work the organs of generation also would obey the commands of those that willed, without any difficulty. (Augustine, 1887a: 1.34) The first two possibilities are immediately ruled out by Augustine. The first of these would render God's creatures enslaved to lust, while the second would be incompatible with the understanding of paradise as a place of bliss due to the emphasis it places on self-restraint. This leaves two options, the sole difference between which concerns desire. In the third option, humans could, of their own volition, bring forth desire at the appropriate moment. In the fourth option, humans could have sexual relations in the absence of any desire and only obeying the orders of the will so that 'there is the highest tranquility of all the obedient members without any lust' (Augustine, 1887a [c. 380-410]: 1.35).
In Foucault's argument, Augustine clearly prefers the fourth option and only mentions the third as a concession to his Pelagian adversaries. Yet, even in the third option, the desire in question is produced by the will. In both of these variants we are therefore dealing with an activity that is entirely voluntary and hence can be contrasted with fallen humanity's experience of concupiscence as involuntary desire that the subject must confront and struggle with. Foucault (2021: 261) takes particular care to differentiate this understanding of the sexual act from any idea of natural spontaneity: [Now], if this absence [of desire] is assumed, what would the sexual act consist in? In a natural and spontaneous unfolding that nothing would disturb? Not at all. The text says it without any ambiguity: one must imagine an act whose every element is placed under the exact and unfailing control of the will. Let us not imagine man, in the sexual union of paradise, as a clueless being moved by the urges whose innocence is guaranteed insofar as they are beyond his grasp, but as a skilful artisan who knows how to use his hands. Ars sexualis. If sin had left him the time, he would have been, in the Garden, a diligent sower. Paradisiacal sex was obedient and reasonable like the fingers of the hand.
Even when Augustine concedes to his interlocutors the possibility of humans developing 'sexual urges' by their own volition, the sexual act remains 'subject to the empire of the will' (p. 262), which alone decides when these urges are appropriate. As the title of Chapter 35 of the Reply clearly indicates, 'desire in paradise was either none at all or it was obedient to the impulse of the will' (Augustine, 1887a [c. 380-410]). The voluntarist character of paradisiacal sex distinguishes it from sexual acts after the fall, which are characterized by involuntary urges of concupiscence. This involuntary character, addressed at length in Book XIV of The City of God, is itself a result of the fateful act of will on the part of Adam and Eve, i.e. the original sin. For Augustine, the original sin did not consist of eating the forbidden fruit as such, which was of no particular importance either to God or to Adam and Eve, but in the rebellion against God that this act manifested. Consequently, the punishment that God imposed on the disobedient humans is 'exactly fitted to the sin' (Foucault, 2021: 262): the disobedience that the first humans showed to God will be replicated within human existence itself, which will from now on be split between the will and what escapes it. The sexual act is a prime site for the manifestation of this split, insofar as it is now characterized by involuntary urges, the 'rush of concupiscence' that the first humans never experienced before (p. 264). This is why Adam and Eve blush when observing each other naked for the first time: not merely because their genitals were formerly covered by a 'garment of grace' (Augustine, 1993 [412 and 426 CE]: XIV 17), but also because, having been stripped of it, they now experience previously unfamiliar urges arising from them.
But when they were stripped of this grace, that their disobedience might be punished by fit retribution, there began in the movement of their bodily members a shameless novelty, which made nakedness indecent: it at once made them observant and made them ashamed. (Augustine, 1993: XIV 17) The familiar activity of sex, wholly subjected to the will and devoid of all desire, is now subverted by the involuntary aspect that does not contribute anything to it but functions solely as punishment, showing human beings, who have already learned how to disobey, what it is like to be disobeyed. 'In short, sex "springs forth", arisen in its insurrection and offered to the gaze. It is for man what man is for God: a rebel' (Foucault, 2021: 264). What is shameful is neither the organ, which was fully formed and active before, nor the act, which was already practised wilfully, but the urge of the organ to act: 'the involuntary form of a movement is what makes the sexual organ the subject of an insurrection and the object of the eye's gaze. Visible and unpredictable erection' (p. 265).
It is notable that, while it is manifested in the sexual act, concupiscence is not intrinsically tied to it: 'it is an element which the transgression, the fall and the principle of "reciprocity of disobedience" tied to the act synthetically' (p. 266). Thus, sin has nothing to do with sexual difference or the sexual act, which both pre-existed it, but consists entirely of the disobedience of God, which is in turn punished by resigning man to the resurrection of the sexual organs against their will. The origin of desire is therefore not to be found in the body but in the will itself, which makes human beings want to be their own masters, turning away from God, who made them what they are. This excess of the will ultimately leads to the degradation of human nature itself, which becomes all the more deficient the more it disobeys its creator (p. 269).
Thus, the line separating the voluntary and the involuntary does not pass between soul and body, subject and nature, but rather is from the outset placed within the subject's will itself. While animal acts of copulation are at first glance similar to human sexual acts in being involuntary, the involuntariness in animals is of a different kind, since it does not mark a division in the soul. It is only human desire that divides the self against itself, resigning the human to involuntarily imitating the movements of animal copulation. 'What is involved is a will whose voluntary deviation from what maintains it in being allows it to exist in the element that tends to destroy it - the involuntary' (p. 270). Desire is the very form of the will insofar as, by willing itself, it ends up willing the opposite. This is why Augustine can view concupiscence as at once sui juris and imputable to the subject: 'the "autonomy" of concupiscence is the law of the subject when it wills its own will. And the subject's powerlessness is the law of concupiscence. This is the general form of imputability, or rather its general condition' (p. 271).
The establishment of this general condition of imputability permits the emergence of an entirely new rationality of governing sexual behaviours, no longer in terms of excess and continence as in antiquity, but in terms of the individual's relationship to his or her own concupiscence. Foucault traces two consequences of Augustine's theory of concupiscence for the government of sexuality. Firstly, the notion of consent (consensus) makes it possible to impute the act to the subject. Even if all subjects exist under the law of desire, it takes a certain exercise of the will on itself to bring the sexual act to presence. In this exercise of consent, the will 'wills that will that has the form of concupiscence; it takes itself for an end as fallen will; it assumes its own condition as concupiscent will' (p. 279). Consent is not simply a matter of actualizing desire but of the constitution and confirmation of oneself as the desiring subject, to whom its involuntary movements may now be imputed. This emergence is a crucial episode in Foucault's history of sexuality since it also marks a point of descent of the subject of law (p. 280).
Secondly, the notion of use (usus) permits us to rethink the regulation of sex in marriage, which becomes no longer a matter of the exercise of the right of the use of the body of the other, but instead a matter of the use of one's own concupiscence in a manner that is different from the volition of one's concupiscence. While the form of the sexual act will remain the same, its ends can be diverted from consenting to one's desire and instead consist in engendering children or keeping one's partner from falling into fornication - both legitimate objectives of sexual acts for Augustine. In this manner, one can make use of an evil in a good or an evil manner, and it is this manner of usage that will determine whether an act is sinful: It is possible to make an utterly non-concupiscent use of concupiscence, but the latter will not be done away with for all that. It often happens that one makes use of it just for concupiscence, so that the latter seems to carry the day, but this usage will nonetheless remain a specific and imputable act. (p. 283) Foucault concludes that this separation between the evil of concupiscence and the manner of its use opens a range of possibilities for the juridification of marital sexual relations, which in the period of antiquity were viewed as the least problematic and therefore remained the least regulated (p. 284). As sexual acts are neither a good in itself (to be regulated by their 'natural' function) nor simply evil (to be regulated by strict demands for continence), they can be classified and codified according to their use, ends, circumstances, etc. (p. 283). Of course, this codification actually took place hundreds of years after Augustine (from the 13th century onwards), yet, in Foucault's view, it was nonetheless prepared by Augustine's theory of concupiscence, which made possible [a] very precise codification of the moments, the initiatives, the invitations, the acceptances, the refusals, the positions, the gestures, the caresses, even the words that can take place in sexual relations. Sex in marriage thus becomes the object of juridiction and veridiction. (p. 284) Moreover, since the notions of consent and use do not define the relations between the spouses directly but rather define the relation each spouse establishes to their own desire, the regulation of sexual behaviours ultimately ends up founded on the relationship the individual maintains with themselves, not with others. This is how the 'problematization of sexual behaviours . . . becomes a problem of the subject' (p. 285), the subject of desire who is at once the subject of law, both the truth of its desire and the goodness of its actions discoverable on the basis of its relation with itself.
Augustine's idiosyncratic conception of paradisiacal sex marks an essential moment in the genealogy of this subject. Entirely devoid of desire and thoroughly subjected to will, it demonstrates the possibility of sex without sin, albeit a possibility that is no longer available to fallen humanity, for whom sexual acts will always be accompanied by involuntary movements of desire that mark them as evil. And yet, while this evil cannot be effaced entirely, it can be mitigated, not only through the simple cessation of all sexual activity (pp. 117-134), but also, in the context of marriage, through the redirection of one's will to concupiscence towards the legitimate ends of procreation and avoiding fornication. The sin of disobedience that resulted from the abuse of the will is thus tempered, but not effaced, by the exercise of the will on itself that diverts desire from willing only itself towards more respectable ends. In this manner, marital sex may acquire at least a partial resemblance to sex in paradise: while it might not be entirely devoid of desire, it nonetheless remains subjected to the will and is hence just as 'diligent' and 'obedient' as the sex enjoyed, if the word is appropriate, by Adam and Eve prior to their expulsion from paradise. Tormented by the urges of concupiscence, the subject of desire may at least find consolation in the fact that, by directing these urges to the tasks of procreation and avoidance of fornication, one gets as close as possible to returning to paradise.
Untouched and Pure: Agamben on Paradise and Human Nature
Whereas Foucault's aim in the reading of Augustine is to trace the emergence of the subject of desire, Agamben's (2020) reading of the doctrine of original sin in The Kingdom and the Garden raises the stakes in seeking nothing less than the reappropriation by humanity of the paradise that, according to Augustine, it has lost forever. If paradise is no longer accessible due to original sin, the entire promise of redemption is deferred both temporally (to the afterlife) and spatially (to heaven). The only way this deferral can be halted and reversed is through the demonstration of the possibility for human nature to free itself from original sin in this life. This, according to Agamben, is the meaning of Bosch's famous triptych The Garden of Earthly Delights. While the left panel of the painting shows God introducing Eve to Adam in paradise and the right panel depicts the torments of sinners in hell, the central panel features an expansive landscape of the Garden teeming with human and animal figures engaged in various amorous activities. While this central panel has been interpreted in various ways, Agamben follows Wilhelm Fränger's reading in approaching it as a depiction of the 'restoration of the Edenic innocence, which humanity had enjoyed in the earthly paradise' (Agamben, 2020: 2; see Fränger, 1951). In The Kingdom and the Garden, Agamben (2020) draws on a variety of sources, of which Eriugena and Dante are the most important, to affirm the possibility of such a restoration and refute Augustine's doctrine of the original sin.
As we have seen, for Augustine the original transgression of the first men is transmitted to all of humanity both synchronically, from Adam to all men, and diachronically, from the parents to children (cf. Foucault, 2021: 272-273). Thus, sin no longer pertains to a person's acts or even their character, but to their very life and nature: [That] wound, which has the name of sin, wounds the very life, which was being righteously lived. This wound was at that fatal moment of the fall inflicted by the devil to a vastly wider and deeper extent than are the sins which are known among men. Whence it came to pass, that our nature having then and there been deteriorated by that great sin of the first man, not only was made a sinner, but also generates sinners; and yet the very weakness, under which the virtue of a holy life has drooped and died, is not really nature, but corruption; precisely as a bad state of health is not a bodily substance or nature, but disorder; very often, indeed, if not always, the ailing character of parents is in a certain way implanted, and reappears in the bodies of their children. (Augustine, 1887b [c. 380-410]: 2.57) Adam's sin produces a change in human nature itself, substituting a lapsed and corrupt nature for the original Edenic nature. This change cannot be healed without God's grace, yet even after such an intervention in the form of baptism, this corruption remains in human nature in the form of the above-discussed urges of concupiscence that drive human beings to sin in an involuntary manner, the disobedience of their organs recalling Adam's original disobedience to God. Agamben (2020: 31) notes that this reading endows human will with the extraordinary power of transforming the nature created by God, but gives it no power whatsoever to reverse or negate this transformation: Man is the living being that can corrupt its nature but not heal it, thus consigning himself to a history and to an economy of salvation, in which the divine grace dispensed by the Church through its sacraments becomes essential.
In the aftermath of the first man's disobedience, human nature remains corrupt and can only be remedied in this life through divine grace dispensed by the Church. Indeed, this dispensation is the only justification for the very existence of the Church: 'if human nature were capable of not sinning without grace, then the Church, which dispenses it through its sacraments, would not be necessary' (p. 34).
From this perspective, paradise now appears forever lost and, if it exists at all in the present time, it can only exist in vain, forever devoid of human dwellers. At the moment of Judgement, human beings will either be transported to the Kingdom in heaven or spend eternity in hell. Messianic hope is thus relegated from reclaiming paradise on earth towards attaining a Kingdom in heaven: 'while the Kingdom to come is the central paradigm of the history of humanity, the Garden is deprived of any meaning whatsoever for that history' (p. 50). It stands vacant as testimony to man's originary transgression and 'the cherubim with the flaming sword keeps watch so that man does not seek to penetrate it undeservedly' (p. 73).
Faced with this paradoxical status of the Garden, Agamben chooses to entertain an alternative hypothesis: '[If] the human soul has preserved its possibility of not sinning, then man is in some way still in relation with the originary justice that he had possessed in paradise' (p. 50). In order to investigate this possibility, Agamben turns to Joannes Scotus Eriugena's (John the Scot's) Periphyseon (John the Scot, 2011 [c. 866-867]), reading this text as an esoteric refutation of the Augustinian doctrine. In contrast to Augustine, who viewed creation as a single act, Eriugena viewed creation as a two-step operation, even though the steps take place at the same time. The first creation produces a spiritual and immortal body of the kind we will have in the resurrection, while the second produces the mortal and corruptible 'clothing', which is added to the first one in advance of any possible sin (pp. 800A-808A, 263-264). Thus, our mortal and corruptible bodies do not exist as punishment for Adam's eventual disobedience of God, but were created, with a sort of sublime irony, by the divine wisdom before any sinful event whatsoever, as an animal body that was added to the first not as a second body but as a mutable and corruptible clothing that always already clothes the spiritual body. (Agamben, 2020: 64) This account of creation permits Eriugena to advance a highly provocative claim that man, still dwelling as he is in this mortal and corruptible clothing, has never actually lived in paradise (John the Scot, 2011 [c. 866-867]: 809A, 265). While Augustine conceived of paradise as a real place on earth, albeit no longer accessible to us, Eriugena approaches paradise as a metaphor for the first creation of human nature. Thus, paradise has always existed but, since human beings have always dwelled in their mortal bodies, it has remained forever empty. Even Adam and Eve did not stay there long but immediately gave in to sin: Had there been any duration to his stay, man would have been able to acquire sufficient perfection to prevent sin from occurring. We conclude, then, that he never really was in the paradise of human nature. What is written as if describing the first man is really a reference to what is to come at the end of the world. (pp. 809A, 265) Both the sin and the fall must then have happened outside paradise and therefore could not have affected the first creation of human nature, which remains safe and intact in paradise. In this manner, Eriugena clearly breaks with the Augustinian doctrine and affirms a version of Pelagianism, positing human nature as not corrupted or even corruptible by sin: Humanity is both one and many: one in its cause, the highest Good, and indefinitely multiple in the effects of that cause. Since the highest Good is wholly present everywhere, so is its image. Humanity, therefore, is diffused wholly in all men and is wholly present in each. There is no more humanity in the good man than in the evil. (pp. 942C, 321) It is evident that, in this understanding, human nature could never be affected by sinful actions: 'for both just and unjust, the spiritual body will be the same, purified of all animality, equally incorruptible, equally beautiful, equally eternal. Human nature retains its integrity in all' (pp. 946A, 322-323). In this approach, sin is not an activity or disposition punished by exile from paradise but rather man's own exit from paradise, which is coextensive with the entire history of humanity. However, this very exit ensures that human nature remains intact: 'There is not a sin that could corrupt human nature, because man is always already descendens, in exit from it' (Agamben, 2020: 69).
Moreover, the fact that we have exited the Garden does not in any way entail its inaccessibility to us but, on the contrary, points to its continuing availability: [Man] - the living being that still does not have access to its own nature, because, by abusing its goods, it has always already abandoned it - will necessarily end up returning to it, when all things will be restored to their cause. Paradise - human nature - is that to which man must return without ever truly having been there. (p. 71) This paradoxical return to where human beings have never been is not an event deferred to the future, let alone to the end of the world, but something available in this life: '[Man] can enter again into the Garden, in order to encounter there original innocence and original justice' (p. 127). Thus, even though, for Eriugena, paradise does not refer to a real place on earth, it is ultimately not distinguishable from the earth as such, except by what Eriugena calls the 'difference of beatitude' (John the Scot, cited in Agamben, 2020: 72).
The reference to beatitude or happiness leads Agamben to his second key interlocutor in The Kingdom and the Garden, namely Dante, who identified 'terrestrial' paradise with the 'happiness of this life', contrasting it with the 'celestial paradise' that consists of the 'enjoyment of the countenance of God' (Alighieri, 1904 [1312-1313]: 3.16.4). In Agamben's (2020) reading, this understanding of earthly paradise has a clear political significance insofar as Dante's idea of universal monarchy becomes thinkable as the project of a reappropriation by all humanity of its own nature and, consequently, of the 'return of originary justice on earth' (p. 96). Once again, we are dealing with a return to something that is at once originary, since it formed part of creation, and formerly inaccessible, as long as we keep exiting and turning away from it.
Thus, in contrast to the Augustinian model, in which the Garden as the original homeland of humanity remains forever out of reach, replaced by the ever deferred Kingdom in heaven, for both Eriugena and Dante paradise is never lost but perpetually awaiting our return, which is at the same time the first ever turn toward our very nature: Paradise - life in all its forms - was never lost: it is always in its place and remains as an untouched model of the good even in the continual abuse that man makes of it, without managing in any case to corrupt it. The heavenly paradise, which is not distinguished from the earthly one, into which man has not yet penetrated, coincides with the return to the originary nature that, untouched and pure, awaits all humanity from the beginning of time. (pp. 73-74) This approach clearly offers a refutation of the Augustinian doctrine of paradise. Yet, there remains a question of whether it also exemplifies the 'negation of the theologians' paradise' (p. 127), i.e. whether the paradise that Eriugena and Dante consider accessible is the negation of Augustine's paradise that he considers inaccessible. Divergence on access aside, how much do the two concepts of paradise differ? At first glance, Eriugena's notion of paradise appears to differ from Augustine's concept in being clearly metaphorical or 'figurative', referring to human nature as such and not a particular garden someplace on earth. It therefore becomes easier to present it as awaiting our return, since it is not a determinate place we could actually return to and dwell in. Nonetheless, this accessible but unreal paradise continues to be thought according to the same logic as Augustine's real but inaccessible one. After all, Agamben's claim that we 'return to the originary nature, untouched and pure' evidently implies that we return, albeit for the first time, to that very nature augmented by grace that for Augustine characterized the first men's existence in paradise.
For Augustine, this return is unthinkable in earthly life, while Eriugena and Dante consider it possible, yet the condition to be returned to appears to be the same: nature reunited with grace and hence no longer lacking or deficient. It is therefore difficult to agree with Agamben that in Augustine's reading the earthly paradise already marked a constitutive lack in human nature: the natura integra is something from which it suffices to subtract its clothing of grace for it to exhibit its faulty nudity, and sin is nothing but the operator of a defectiveness that was inscribed in it from the beginning. The earthly paradise, from which man, for this reason, could only be expelled, is not the cipher of human nature's perfection so much as, instead, its constitutive lack. (p. 125) Surely, for Augustine, it was the loss of paradise that signified this lack while, prior to this loss, there was hardly any lack to speak of, hence the integrity of natura integra. Augustine's account of paradise is only negative to the extent that it describes a condition that is lost forever and cannot be recovered. While Agamben's reading of Eriugena and Dante overturns this negativity by making the Garden available to us again, it does not succeed in modifying Augustine's concept of it. The nature that awaits us in the Garden is the nature that has never sinned by disobeying and therefore could not be punished by the relocation of this disobedience within the human being, where it would manifest itself in involuntary concupiscence. If we follow Agamben, Eriugena and Dante in affirming the incorruptible character of human nature, then on our return to the Garden, shedding our corruptible and mortal 'clothing', we will end up in that 'empire of the will' that for Augustine characterized prelapsarian sexuality, in which nothing like concupiscence could ever arise.
The Garden of Diligent Sowers: Obedience and the Cancellation of Sin
What is at stake in Agamben's affirmation of the return to paradise is more than a matter of theological exegesis. Agamben's intention is not merely to affirm an alternative conception of paradise but also to derive from it a messianic politics. This politics affirms the messianic Kingdom prophesied in both Judaism and Christianity, yet rather than equate this Kingdom with the existence of the Church (Augustine) or defer it to heaven after the end of the world (Tertullian), it finds it on earth.
The model for the Kingdom, to be established in the present or near future of messianic time (Agamben, 2005), is nothing other than the Garden as the site of 'originary nature, untouched and pure'. The theological tradition has kept the pre-historic Garden apart from the (ever deferred) post-historical Kingdom, ensuring the reign of the Church throughout history: The Garden must be driven back into an arch-past, which it is no longer possible to obtain in any way; the Kingdom, when it is not simply flattened into the Church and in this way neutralized, is projected into the future and displaced into the heavens. (Agamben, 2020: 152) In contrast, Agamben affirms the two poles as two aspects of the same experience of the present: Only the Kingdom gives access to the Garden, but only the Garden renders the Kingdom thinkable. We grasp human nature historically only through a politics, but this latter, in its turn, has no other content than paradise, which is to say, in Dante's words, 'the beatitude of this life'. (p. 152) These concluding sentences of The Kingdom and the Garden clearly demonstrate the stakes of rethinking the status of paradise for Agamben's political theory. We can return to the Garden only by instituting a messianic Kingdom through political action, yet the content of this political action is exhausted by the affirmation of human nature as it has always already been in the Garden, undivided and devoid of all lack, knowing neither one's own disobedience of God nor the disobedience of one's sexual organs. While humanity spent its entire history in a self-imposed exit from paradise, dwelling in the mortal and corruptible 'clothing' and stripped of the divine clothing of grace, there always remains a possibility of shedding this clothing and reuniting with its originary nature through what Agamben calls the 'cancellation of sin' (p. 94).
In Agamben's reading of Dante, this cancellation can be attained in the absence of any sacraments or any other assistance from the Church. Contrary to the theological doctrines that reject the possibility of human happiness in this life (p. 105), Dante (Alighieri, 1995 [1308-1321]: 7.116) argues that the incarnation of Christ makes it possible for human nature to restore itself to its original condition, 'lift [itself] up again'. Sin can be overcome without the help of the sacraments, simply by the exercise of free will, which is the 'greatest gift bestowed by God upon human nature', through which 'we attain to joy here as men, and to blessedness there as gods' (Alighieri, 1904 [1312-1313]: 1.12.3). Humanity can cancel sin, restore its nature and attain the 'beatitude of this life' by itself, through the exercise of free will that will lead it to happiness.
Since sin was nothing other than the abuse of our free will in the disobedience of God, the cancellation of sin proceeds by applying our free will to itself, correcting its abuses that kept us away from our integral nature. In this manner, it cancels nothing other than the revolt of human will against God that initiated the corruption of human nature for Augustine, as natura integra was stripped of divine grace. While the 'theological tradition' maintains that this corruption has rendered the Garden forever inaccessible, leaving us with the expectations of the Kingdom in heaven at the end of this world and the reign of the Church in the meantime, Agamben insists that the Kingdom remains possible on earth and consists of entering the Garden for the first time, redeeming our nature in acts of free will that cancel the sinful excesses of the will itself.
It is important to reiterate that, despite repeated references to the 'originary nature', Agamben's approach has nothing to do with any return to the origin understood as a determinate moment or condition in the past. Our originary nature is something that we never had (aside from the first man and woman) and never left (since our nature cannot be altered by sin). For this reason, Agamben's messianic politics has no use for either nostalgia for return or forward-looking progress. Return and progress, the Garden and the Kingdom, pre- and post-history are now brought together and coincide in the human nature that is untouched and pure as a 'model', even when it is constantly abused by various apparatuses governing human existence. Rendering these apparatuses inoperative (Agamben, 2005: 95-112; 2016: 277) will pave the way for our reappropriation of what we have always been. This is a key point of Agamben's theory that has attracted little attention due to the vaguely emancipatory overtones of his messianic politics. Rendering the apparatuses of 'abuse' inoperative will not deliver us to the freedom to experiment with diverse models of human nature or the freedom to dispense with such models altogether, but only to the appropriation of our incorruptible nature as it is and has always been. This incorruptible nature can only be re-(dis)covered, but never altered, other than in a corrupting way of 'descendence' that would entail us leaving paradise once again.
It is difficult to see how it could be otherwise. Let us recall that, for Augustine, sin does not consist of the sexual acts themselves, but of the disobedience of God. What happens when this disobedience is 'cancelled' in the return to the Garden? Since sin was an act of will, its cancellation must involve cancelling at least some aspect of the will itself. In Eriugena's words, 'sin is not natural although it is voluntary. It is uncaused in the sense that it lacks all natural causes. It is caused in that it, along with its punishment, is the result of the evil will' (John the Scot, 2011 [c. 866-867]: 944A, 322).
While Dante shares this identification of sin with will, he differs from Eriugena in also viewing it as the condition of our redemption: free will, which was God's greatest gift to human nature, actually makes it possible for us to 'lift ourselves up' from our present condition and return to the Garden. The return takes place by the negation of free will (as the agent of disobedience) in an act of free will - an operation that is strictly analogous to that performed by the married couple in Foucault's reading of the theory of concupiscence. One wills (freely, using God's greatest gift) not to will (disobedience, or sin, which leads one astray from one's nature), thereby diverting will away from its power to disobey and directing it to the task of lifting ourselves up to our originary nature. The analogy is so rigorous that one cannot but expect to find the reclaimed paradise full of married couples dutifully imitating Adam and Eve before the fall.
While the return to the Garden makes use of the 'greatest gift' of free will, it also curtails this will by suppressing the potentiality not to lift oneself up to the model of one's nature. This potentiality not to has been identified by Agamben himself as a necessary condition of any meaningful experience of freedom: 'freedom is freedom for both good and evil' (Agamben, 1999: 182-183). The cancellation of sin can only mean that this 'freedom for evil', potentiality not-to, ends up effaced. We are left with only the positive aspect of potentiality, which is capable of happiness and beatitude but not their negation. This is why, to recall Foucault's expression, there are in paradise only 'diligent sowers', obedient and reasonable subjects that have wilfully truncated their free will in order to remain true to their incorruptible nature. It matters little whether paradise is thought literally or metaphorically or whether Adam and Eve ever dwelled there or not. As long as we envision our (re)turn to paradise in terms of the recovery of a certain model of human nature through the cancellation of sin, any Kingdom thus constituted will inevitably demand obedience to this model, curtailing the very will that we rely on to lift ourselves up to it. Whereas Adam and Eve once sinned by willing to disobey and were banished from the Garden, the subject of messianic politics now wills not to will disobedience in order to be allowed to return to it. While Agamben's messianic politics skilfully evades the Church and the sacraments and even manages to avoid any reference to the divine, its reliance on the idea of incorruptible nature ensures that it maintains at least one aspect of the theological tradition it otherwise seeks to subvert, i.e. the imperative of obedience that Foucault traced in his analysis of Augustine's theory of concupiscence and Christian techniques of subjectivation more generally (Foucault, 2021: 91-94). It might appear that in Agamben's Garden-Kingdom this requirement of obedience is somewhat tempered by the fact that what one must obey is ultimately one's own nature and not some exterior agency. Moreover, the description of the actual features of that nature is noticeably scant, leaving us with little knowledge of what it is we must obey or renounce disobeying.
Yet, this is precisely the crux of the problem: whatever we might think human nature consists of, the Kingdom modelled on the Garden can only be constituted and maintained by renouncing our potentiality to disobey it by acting contrary to it. Since humanity has never dwelled in the Garden, we have no way of knowing what the nature that survives there is like, and we ought to take all descriptions of it with a grain of salt. What we do know, beyond any doubt, is that we can only lift ourselves up to this nature by renouncing and continuing to renounce all disobedience to it. Even if Agamben's messianism is read as merely metaphorical and his theory is developed beyond its theological sources, the understanding of politics in terms of the recovery of something 'incorruptible', 'pure and untouched', cannot but end up demanding obedience to it in order to prevent a new descent into its abuse. After all, what would be the point of returning to the Garden only to keep exiting it again and again?
"Philosophy"
] |
Poor Coding Leads to DoS Attack and Security Issues in Web Applications for Sensors
Riphah Institute of Systems Engineering, Riphah International University, Islamabad 44000, Pakistan Department of Computer Science, National University of Computer and Emerging Sciences, Islamabad, Pakistan Department of Computer Science and Information Technology, Islamic Azad University, Mahdishahr Branch, Mahdishahr, Iran Department of Information Technology, Quaid-E-Awam University of Engineering, Science and Technology, Nawabshah 67450, Pakistan Department of Information Technology, University of Sufism and Modern Sciences, Bhitshah 70140, Pakistan
Introduction
With the growing number of mobile app users, everyone is trying to develop their business apps as soon as possible. These mobile apps are used to track users' activities, obtain information on vehicle locations, and track logistics. For tracking vehicles, different types of mobile apps or sensors are used. For the full functionality of these apps or sensor devices, supporting software is required, such as Personal Home Page (PHP) for the backend server, MySQL for data storage, and NodeJS for other functionality as required by clients or devices. Many different open-source frameworks are used for backend functionality; if old versions of these third-party tools are used, they may contain well-known vulnerabilities that can be exploited by attackers or adversaries.
Above all, Structured Query Language (SQL) injection attacks are the most dangerous for web applications and for other devices (such as mobile apps or sensors) that use them as web services.
This attack sits at the top of the injection family of web application attacks. In this attack, the weakness of input fields is exploited by attackers. It is performed by inserting an SQL query command into an input field, or by appending the query to the targeted Uniform Resource Locator (URL). These inputs are transformed into SQL code inserted by the attacker [1,2]. This injection vulnerability is the main entry point for web application security exploitation. These loopholes remain in web applications because the input boxes are not sanitized properly [1]. If an old PHP version is used during the development and testing phase, it will also make the web application vulnerable. A web application developed on a local system with the latest version of PHP and then deployed on a production server running an old version may lead to the unavailability of web application services for users.
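To make the injection mechanics above concrete, the following minimal sketch contrasts a query built by string concatenation with a parameterized one. It uses Python's built-in sqlite3 module purely for illustration (the application described in this paper runs on PHP/MySQL), and the table, column, and credential values are hypothetical.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name, password):
    # UNSAFE: user input is concatenated into the SQL text, so an input
    # such as "' OR '1'='1" changes the structure of the query itself.
    query = ("SELECT * FROM users WHERE name = '" + name +
             "' AND password = '" + password + "'")
    return conn.execute(query).fetchall()

def login_safe(name, password):
    # SAFE: placeholders keep user input as data; the driver never
    # interprets it as SQL code.
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchall()

# A classic tautology-based injection bypasses the vulnerable check ...
print(login_vulnerable("alice", "' OR '1'='1"))  # returns the row
# ... but not the parameterized one.
print(login_safe("alice", "' OR '1'='1"))        # returns []
```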
This may disturb other applications during the upgrade of the PHP version if a shared server is used to save hosting costs; likewise, an old version of a PHP framework such as WordPress in use can be even more dangerous for web application services [3]. Through these vulnerable frameworks, an attacker can delete databases or demand a ransom to restore those databases or the encrypted code. Another issue for the performance and security of web applications is the sockets used for communication between the web server and sensors. The nature of sensors is heterogeneous; due to this, using the same protocol for communication is not possible [4]. These sensors are more vulnerable to exploitation because of their low computing power and storage.
These sensors are used for different types of services, such as monitoring patients' conditions in health systems [5], tracing vehicle locations, and checking river water levels in water management. As the usage of IoT sensors grows in every area of life, the number of attacks also increases. These attacks are performed on different layers of IoT sensors to stop services for legitimate users or to forward fake information. This research uses sensors installed in vehicles to track them. The sensors rely on socket-based communication for sending and receiving information between clients and the server. With these sensors, it becomes easier to monitor taxis and obtain information on peak hours and the busiest business areas for taxi services. As the number of requests increases and socket connections go into the backlog, the server can no longer bind the socket port to the Internet Protocol (IP) address. To fix this port-binding error, the backlog must be cleared in one of two ways: either kill all backlog connections manually, or restart the socket connection program's service. While either method is performed, the sensor connection services are down for users; it can be called a self-created Denial of Service (DoS) attack on the services [6]. Moreover, due to this type of SQL injection attack, not only is a specific application disturbed, but the complete database server crashes, so that all other applications become inaccessible to valid users, which amounts to a DDoS attack on the database servers. Another issue with sockets is that the system firewall must be configured to allow socket connections to the server. This is a critical security issue for the web server, because everyone is then allowed to make a socket connection with the server, and this can be exploited by an attacker for malicious activities.
This paper is organized as follows: Section 2 describes related work on the most threatening web application attacks, the usage and security issues of sensor-based devices, and WebSocket-related problems. Section 3 describes the proposed methodology; Section 3.1 defines how the database server crashes due to poor coding. Section 4 presents the results and discussion; Section 4.2 covers major issues related to WebSockets when using PHP programming. Section 5 concludes the paper.
Related Work
For one and a half decades, the SQL injection attack has been at the top of all attacks on web applications. Attackers target SQL injection vulnerabilities in a web application to exploit them and take control of all services related to the web server.
This attack is performed by appending or inserting malicious SQL queries alongside a legitimate query. The authors of [7] proposed WAVES to test for SQL injection vulnerabilities in web applications using a black-box method.
WAVES finds every SQL injection entry point in a web application using web crawling. After that, it applies a predefined set of methods and attack techniques to those vulnerabilities. In the last step, WAVES monitors the web application traffic to check the reaction to the attack, and machine learning is used to further improve the attack methods. Another researcher proposed a method for tautology-based SQL injection in which application-layer-created queries are analyzed with a static method combined with an automated process [8]; however, it is limited to tautology-based SQL injection and will not detect or prevent other attack types. The AMNESIA model, proposed in [9], is based on static and dynamic analysis of queries. In the static analysis process, AMNESIA inspects the application's legitimate queries for every connection to the database and builds a prototype of those queries. In the dynamic analysis process, AMNESIA intercepts all queries before they are forwarded to the database and compares each query with the static prototype model already created. If any query falls outside the scope of this model, it is considered an SQL injection attack and is not executed on the database server. However, this model produces a higher ratio of false positives or false negatives if queries are encrypted by developers. The ARDILLA tool was proposed for the detection of SQL injection and Cross-Site Scripting (XSS) attacks in real time [10]. ARDILLA was developed for input testing of PHP scripts only, and sessions are not handled by this tool.
The Web Application SQL injection Protector (WASP) was proposed to detect SQL injection attacks from stored procedures with a real-time configuration [11]. However, this tool needs considerable improvement to protect web applications from SQL injection and XSS attacks.
As the demand for an easier life has increased, the usage of IoT devices and sensors has increased as well. Some people need information about their business, such as tracking goods, vehicles, and cab services, or monitoring patients' health conditions. The latest version of Homecare, known as E-Homecare, offers services such as injection timing, diet management, exercise routines, and monitoring of health conditions [12]. The "SmartPill" wireless capsule is utilized to transmit intraluminal pH, pressure, and temperature information at regular intervals to the SmartPill GI Monitoring framework [13]. The Titan implantable hemodynamic sensor (IHM) is a device the size of a pencil eraser that can be embedded in the heart of a patient to measure basic parameters such as temperature and then wirelessly transmit this information to a protected database [14]. An intelligent vehicle is an arrangement of technological applications to gather data on the position, kinematics, and dynamics of the vehicle, the state of the environment, and the state of the driver and passenger, to assess such data and make decisions based on it. It is capable of duplex communication with roadside infrastructure and other vehicles, can use digital map applications and satellite positioning systems, and has an active internet connection and its own physical address [15]. A Smart Sustainable City (SSC) using Information and Communication Technologies (ICTs) provides a better life, efficient urban facilities, and competitiveness among cities; with this, current and future needs can be met with respect to economic, social, and environmental changes [16]. The authors in [17] proposed a 2-axis magnetometer (MAG) for detecting vehicle driving direction. A high detection rate of 99% was observed when traveling vehicles passed close to the sensor; performance degraded to 89% as the signal-to-noise ratio (SNR) decreased. A two-threshold, four-state machine algorithm was presented in [18] for vehicle detection using a 3-axis MAG.
The WebSocket protocol was created as part of the HTML5 initiative to facilitate communication channels over TCP. WebSocket is neither a request/response nor a publish/subscribe protocol. In WebSocket, a client initiates a handshake with a server to set up a WebSocket session. The handshake itself is similar to HTTP, so web servers can handle WebSocket sessions as well as HTTP connections through the same port [19]. Once the WebSocket connection is established between client and server, they can send and receive data to each other in full duplex. This connection remains active for an unlimited time and can be closed by the client or server at will [20]. The WebSocket Application Program Interface (API) gives websites great functionality to establish a connection and transmit data to any server [21]. Due to this functionality, it is easy and effortless for a developer to use WebSockets in websites for transmitting data. The major drawback of WebSocket is that it does not add the HyperText Transfer Protocol (HTTP) header along with the connection. Because of this, the origin-based resource verification policy no longer provides a secure connection, since origins can be spoofed [22]. Another security issue is cache poisoning with WebSockets; to protect against this vulnerability, the protocol working group introduced the method of frame-masking [21,23]. With the addition of frame-masking, the cross-site scripting injection attack has been blocked, but WebSocket information cannot be transferred in plain text between client and server. Frame-masking protects WebSockets from cache poisoning attacks, but it makes the detection of malicious data via firewalls and other virus detection tools harder [24]. Firewalls can be bypassed by attackers to compromise the targeted user's browser and turn it into a WebSocket proxy between the attacker and the targeted organization's network [25]. WebSocket is also vulnerable to the more common attack type of Denial of Service (DoS): attackers try to overwhelm the clients or server with bursts of information or too many connection requests, so that legitimate users cannot complete their requests. Furthermore, on web applications that use WebSockets, XSS vulnerabilities open up a few new dangers. For example, with an XSS vulnerability, the attacker may be able to override the callback functions of a WebSocket connection with custom ones [22]. This approach permits the attacker to sniff the traffic, manipulate the data, or mount a man-in-the-middle attack against WebSocket connections.
When InnoDB is used with MySQL, SQL injection attacks can create serious issues that may lead to a complete crash of the database server. Therefore, a solution is needed to prevent this type of SQL injection attack.
Proposed Methodology
This research describes the practical deployment of WebSockets for the tracking of vehicles with sensors installed in them.
The complete deployment scenario of the vehicle tracking application is defined in this section. To address the major constraint of low power storage, the sensors operate in an idle-state condition, i.e., as passive-mode sensors. The web application and MySQL server are deployed on Ubuntu 19.04 with all operating system updates applied, and all other tools were also up to date as of the deployment of this application about six months ago. Furthermore, in the proposed method, the latest PHP version 7.3, Laravel framework 5.4.36, MySQL 5.7, and NodeJS v13.3.0 are installed for the proposed application (see Figure 1). The aim is to avoid well-known vulnerabilities in the operating system, web application framework, database server, and the PHP version used for WebSockets. To avoid fake sensors, an authentication process has been implemented with the help of the Laravel web application: drivers or vendors of vehicles must register at the web application with their personal information and the sensor identification number, which may be its serial number. Due to the constraints of the sensors, they accept only WebSockets for communication instead of any API. So, for this communication between vehicles and server, the WebSocket program is developed in custom PHP, as described in the results and discussion section. For the security of the web application at the operating system level, iptables or Uncomplicated Firewall (UFW) is used to block unauthorized users. For more protection at the system level, the versions of the Apache web server and the operating system are configured as hidden in apache2.conf (see Figure 2, options circled in red).
The users' login information, sensor details, and vehicle movements are stored in the MySQL database, which is hosted on the same server as the web application. For optimization of the MySQL database, this research uses the InnoDB storage engine, which provides foreign key relationships between tables. The features and drawbacks of the InnoDB and MyISAM storage engines are explained in the next section on crashing the database server. All critical information regarding users, sensors, and vehicles is stored in the database, so security is implemented at the system level, such as disallowing remote root or normal-user logins to the database from any IP address. The default databases in MySQL have been removed, and the database users have been created with complex password authentication to avoid brute-force attacks on the MySQL databases. For protection from cache poisoning or man-in-the-middle attacks, encryption has been implemented for the WebSocket communication between the sensors and the web application server. To compare WebSocket issues regarding performance and the backlog closing of sensor connections, NodeJS has also been implemented.
3.1. Crashing Database Server.
The most critical part of any web application is its database, because it is the main store of information regarding users, user sessions, integrated third-party applications, financial information, locations of users or vehicle tracking information, and much more. As per the last two decades of research on web application attacks and the OWASP Top 10 reports of 2013 and 2016 [26,27], the SQL injection attack sits at the top of all of them. This attack is highly dangerous for web applications in the form of information stealing, DoS attacks [28], system crashing, alteration of database records to insert fake information, traffic redirection, and gaining root rights on the system. This attack is easy for attackers to perform with little effort; that is why everyone tries to exploit this injection vulnerability. The SQL injection attack is performed on web applications that have weak validation on input fields, such as the login form; these input fields are not sanitized properly. For better performance and optimization of the MySQL database, two storage engines are used, namely InnoDB and MyISAM; which one is used depends on the requirements of the web application.
The advantages and disadvantages are explained as follows.
3.2. InnoDB Storage Engine.
For a transactional database or relationships between tables, the InnoDB storage engine is used [29]. It is suited to write-heavy operations on databases, such as insert and update, and it solves the table-locking weakness. InnoDB is used in applications where data integrity is in high demand for the users, which is achieved with the help of its relationship and transaction functionality. It supports faster write operations into databases because it locks tables at the row level for better integrity. It is the most fitting storage engine for high-concurrency and high-transaction workloads.
3.3. MyISAM Storage Engine.
The default storage engine for MySQL is MyISAM, used for read-heavy workloads. An issue with it, however, is that it supports fewer transactional features and only a low level of concurrent write operations. If an application needs big tables and few changes, then the MyISAM storage engine is preferred [29]. If anyone wants to use it transactionally, they need to add the extra MySQL SQL extensions LOCK TABLES and UNLOCK TABLES. It is used for high-speed reads and is simple to implement, which makes it the most popular engine for general-purpose usage. The engine choice is made per table at creation time, as sketched below.
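The sketch below illustrates the per-table engine choice discussed in these two subsections, assuming a reachable MySQL server and the mysql-connector-python package; the host, credentials, and table definitions are placeholders, not the paper's actual schema.

```python
import mysql.connector

# Placeholder connection details (assumptions, not the paper's setup)
conn = mysql.connector.connect(host="localhost", user="app",
                               password="***", database="tracking")
cur = conn.cursor()

# InnoDB: row-level locking, transactions, foreign keys. Suited to the
# write-heavy sensor and vehicle-location tables of this application.
cur.execute("""
    CREATE TABLE IF NOT EXISTS sensors (
        id INT AUTO_INCREMENT PRIMARY KEY,
        serial VARCHAR(64) NOT NULL
    ) ENGINE=InnoDB
""")
cur.execute("""
    CREATE TABLE IF NOT EXISTS locations (
        id INT AUTO_INCREMENT PRIMARY KEY,
        sensor_id INT NOT NULL,
        lat DOUBLE, lon DOUBLE,
        FOREIGN KEY (sensor_id) REFERENCES sensors(id)
    ) ENGINE=InnoDB
""")

# MyISAM: table-level locking, no transactions. Acceptable for a large,
# rarely updated, read-mostly table.
cur.execute("""
    CREATE TABLE IF NOT EXISTS audit_archive (
        id INT AUTO_INCREMENT PRIMARY KEY,
        message TEXT
    ) ENGINE=MyISAM
""")
conn.commit()
```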
In this research paper, we have used the InnoDB storage engine for the vehicle tracking application, because relationships between tables and frequent write operations are needed. The user login information, sensor information, vendors' details, and the location tracking of vehicles are stored in this database. Following previous studies of the SQL injection attack, the input fields were sanitized to protect against malicious queries, and the latest versions of the framework and MySQL were used. Still, the database server crashed after one wrong value was entered at the user login page. That wrong password value, with special characters, is shown in bold (see Figure 3).
In 2008 or earlier, a new method for two-way communication was developed to move beyond plain HTTP. Abusing HTTP for bidirectional communication leads to flawed utilization of HTTP connections, causing unnecessary issues for the communicating parties. To solve this issue, a facility was added to working draft 10 of HTML5 in June 2008, and that program function was named TCPConnection, based on the Transmission Control Protocol (TCP) socket API [30]. TCPConnection was renamed WebSocket in late July 2008. Originally, WebSocket was created by the World Wide Web Consortium (W3C) and the WHATWG group, but it was transferred to the Internet Engineering Task Force (IETF) for further development in February 2010. After many revisions, the IETF published the final version as the WebSocket protocol in Request For Comments (RFC) 6455 in December 2011 [31]. The communication methods used are given below [32].
Request or Response Method.
It is a system where the client sends a request to the server and gets a response. This procedure is driven by some user interaction, for example, the click of a button on the web page to refresh the entire page. When Asynchronous JavaScript and XML (AJAX) entered the picture, it made web pages dynamic through the use of JavaScript automation and helped in loading part of the page without loading the whole page again.
Polling Method.
It is a mechanism for situations where information should be refreshed without user interaction, for example, the score of a football match. In polling, the data is fetched after a set time interval, and the client keeps hitting the server whether or not the data has changed. This makes unnecessary requests to the server, opening a connection and then closing it every time. It is related to WebSockets in that it shows how they handle user requests.
Long Polling Method.
It is a mechanism built on Request/Response where the connection is kept open for a specific time frame. When the client uses long polling, the server responds to the client only after the data is ready to be sent, in contrast to the conventional Request/Response strategy, where the response is sent to the client right after the request. This is one of the approaches to achieve real-time communication; however, it works only with known time intervals.
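A rough client-side sketch of the two patterns may help; it assumes the third-party requests package and a hypothetical endpoint, and is not taken from the paper's code.

```python
import time
import requests

URL = "https://example.com/score"  # hypothetical endpoint

def short_polling(interval_s=5):
    # Hits the server on a fixed schedule whether or not anything changed,
    # opening and closing a connection each time.
    while True:
        print(requests.get(URL, timeout=10).json())
        time.sleep(interval_s)

def long_polling():
    # Each request is held open by the server until new data is ready
    # (or a long timeout expires), then immediately re-issued.
    while True:
        try:
            resp = requests.get(URL, params={"wait": "true"}, timeout=60)
            if resp.status_code == 200:
                print(resp.json())
        except requests.exceptions.Timeout:
            continue  # no update within the window; ask again
```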
This research used a custom PHP program for the WebSocket communication between the vehicle sensors and the web application server.
This connection is used for tracking vehicles, to obtain more information regarding passengers' peak hours for taxis and vehicle movement information for the vendors. The connection between the web server and the vehicle sensors has no time limit after which it is closed; a few log entries of the CLOSE_WAIT state are given (see Figure 4).
As the above log entries regarding the CLOSE_WAIT state (the long-polling state of a WebSocket connection) show, far too many connections remain in this state, which causes many problems for the web server. The main issue is that the IP address cannot be bound to port 25001 of the WebSocket for new connection requests from sensors, and the sending and receiving of information from the vehicle sensors also stops because of this binding issue. The WebSocket connection creation code, the temporary fix to this custom PHP program, and the permanent solution to this problem are discussed in the next section on results and discussion.
Results and Discussion
This section discusses the issue of a wrong user entry into the SQL database, which crashed its InnoDB storage engine; how the WebSocket connections are created in the custom PHP program; and the issue of connections staying in the CLOSE_WAIT state for an unlimited time. The temporary solution to this problem is to apply a timer to unused open connections and close those WebSocket connections. The permanent solution is a NodeJS-based application that handles unlimited WebSocket connections without any overhead on the server. To the best of our knowledge, this research used the latest software and tools for this vehicle tracking web application to avoid known vulnerabilities in SQL injection, PHP frameworks, the Apache web server, and the operating system. First, we discuss how the MySQL database server was crashed by a single entry at the login page.
4.1. InnoDB Crashed.
The InnoDB storage engine is used for transactional operations and relationships between tables. It is used in web applications in which write operations are performed frequently, with support for table locking for the integrity of data. Nevertheless, the database server crashed because of a single wrong entry into the database at the login page, as described in Figure 3. Due to that wrong entry, the InnoDB storage engine crashed. The MySQL InnoDB crash is shown in Figure 5.
As Figure 5 shows, it crashed due to the wrong value that was inserted into the database; this was the main reason behind the crash. The value of key_buffer_size was set much larger than its normal value, and because of this, InnoDB crashed. Figure 6 gives more details regarding the database crash.
To recover from this issue, we applied two methods: first, changing the storage engine from InnoDB to MyISAM, and second, changing the value in the my.cnf file to recovery mode for InnoDB. When the database structure was changed from InnoDB to MyISAM, all relationships between tables were deleted by this operation, and transactional insert operations were also disturbed. This process took down the web service that the sensors use to track the location of vehicles, and because it is a shared hosting server, other web applications also became unavailable to users. This is, in effect, a self-inflicted DoS attack on the organization's web services to its clients. As shown in Figure 7, the recovery-mode entry was added to the my.cnf file as a temporary solution for the crashed InnoDB. InnoDB was set to recovery mode to fix the crashing issue of the storage engine, but due to these changes, insert, update, and delete operations were locked in the databases. All databases using the InnoDB storage engine were disturbed by these changes, and the users of those web applications were unable to write data into the database. To recover from this issue, the MySQL database server was reinstalled after taking a backup of all databases on the server. As experienced, the single wrong value entered by a user at the login page created this problem. To protect the database from this issue in the future, we applied a limit on the value of that login page field and went through the validation of each input field of the web application again to guard against malicious record entry.
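The following sketch illustrates the kind of server-side check just described: capping the field length and whitelisting accepted characters before a value ever reaches the database layer. The limits, pattern, and function name are assumptions for illustration; such validation complements, but never replaces, parameterized queries.

```python
import re

# Assumed limits and pattern, not the paper's actual values
MAX_PASSWORD_LEN = 64
USERNAME_RE = re.compile(r"^[A-Za-z0-9_.-]{3,32}$")

def validate_login_input(username: str, password: str) -> bool:
    if not USERNAME_RE.fullmatch(username):
        return False          # reject unexpected characters outright
    if len(password) > MAX_PASSWORD_LEN:
        return False          # an oversized value triggered the crash above
    return True

assert validate_login_input("driver01", "correct horse battery")
assert not validate_login_input("x'; DROP TABLE users;--", "pw")
```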
4.2. WebSocket Long Polling Issue.
As mentioned earlier regarding the states of WebSocket communication between client and server, the first two states, request/response and polling, work fine in the custom PHP program for the connection between the vehicle sensors and the tracking web application. The code section for WebSocket connection creation between sensors and the web application is shown in Figure 8.
If a WebSocket connection has already been created binding the same port to the IP address, a message is shown that the WebSocket cannot be created. The connection creation timeout has been set to 300 seconds, counted down in decrementing order. Once the connection succeeds, the host IP address and port are bound, and communication starts between the sensors and the web server to store information regarding vehicle tracking, insurance details, and so on.
We faced issues with the long-polling connections: they go into the backlog for an unlimited period. As shown in the logs earlier, there are too many connections in the CLOSE_WAIT state; because of this, new connections cannot be created, and old connections are unable to send or receive the required data. This problem occurs when more than 10 users try to connect to the web server at the same time. This is unacceptable for production servers, as in real-time operation about 0.3 million users will have to use this service for sharing their location information, insurance details, and other details required for user security. The issue of binding the IP address to the port occurred as the number of users increased, as shown in Figure 9. We show just a single error message about IP address binding, but there are very many error messages of the same type.
To work around this IP-address-to-port binding issue of the WebSocket, a temporary solution was applied: we created a service for the WebSocket connections (see Figure 10). This service was added to crontab job scheduling and restarted every two hours to kill the backlog of WebSocket connections (see Figure 11).
During this restart, the WebSocket connection service is unavailable to users, because all processes related to these sensor connections must be killed. Restarting the WebSocket service every few hours is not good for production servers or applications, and for a real-time application such as a vehicle tracking system it is not considered good practice.
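A less disruptive variant of the timer idea mentioned above is to track per-connection activity and close only idle sockets instead of restarting the whole service. The sketch below shows this pattern with Python's standard selectors module purely for illustration (the paper's server is custom PHP); port 25001 is taken from the setup above, while the idle limit is an assumption.

```python
import selectors
import socket
import time

IDLE_LIMIT_S = 300          # assumed idle threshold
sel = selectors.DefaultSelector()
last_seen = {}

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # eases rebinding
server.bind(("0.0.0.0", 25001))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ)

while True:
    for key, _ in sel.select(timeout=1):
        if key.fileobj is server:
            conn, _addr = server.accept()
            conn.setblocking(False)
            sel.register(conn, selectors.EVENT_READ)
            last_seen[conn] = time.monotonic()
        else:
            conn = key.fileobj
            data = conn.recv(4096)
            if data:
                last_seen[conn] = time.monotonic()  # handle sensor payload here
            else:
                # Peer closed: close our side at once instead of letting the
                # socket linger in CLOSE_WAIT.
                sel.unregister(conn); conn.close(); last_seen.pop(conn, None)
    # Sweep and close connections idle longer than the limit
    now = time.monotonic()
    for conn in [c for c, t in last_seen.items() if now - t > IDLE_LIMIT_S]:
        sel.unregister(conn); conn.close(); del last_seen[conn]
```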
4.3. Permanent Solution for WebSocket Connections.
To solve this issue, we implemented the WebSocket connections between the sensors and the web server using NodeJS version 13.3.0. This research had faced the issue of a backlog of too many connections in the CLOSE_WAIT state, due to which new connection requests could not be completed and old connections were unable to send or receive vehicle tracking data. The connection code in NodeJS is shown in Figure 12.
In the above connection code, a function is created for the WebSocket connections of the sensors installed in the vehicles. With the help of NodeJS, an automated test of more than 100K WebSocket connection requests was run without any backlog issue or IP-address-to-port binding error (see Figure 13).
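The paper's permanent fix is the NodeJS program of Figure 12, which is not reproduced here. For illustration only, the following asyncio sketch mirrors the key behaviour in Python: each sensor connection is served concurrently and closed as soon as its payload has been handled, so no long-polling backlog accumulates. The timeout and payload handling are assumptions.

```python
import asyncio

async def handle_sensor(reader, writer):
    try:
        data = await asyncio.wait_for(reader.read(4096), timeout=30)
        if data:
            # persist vehicle location / insurance details here (placeholder)
            writer.write(b"ACK")
            await writer.drain()
    except asyncio.TimeoutError:
        pass  # idle client: fall through and close instead of lingering
    writer.close()
    await writer.wait_closed()  # connection ends; nothing stays in CLOSE_WAIT

async def main():
    server = await asyncio.start_server(handle_sensor, "0.0.0.0", 25001)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```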
In production, currently almost 5K requests are handled by the server without any port-binding errors. The long-polling connections are no longer kept open for an unlimited time between the sensors and the web server: as soon as the data regarding vehicle location tracking, insurance details, and vendor details is shared and inserted into the database, the connection is closed. The confusion matrix for the methodology proposed in this paper is given in Table 1. The confusion matrix, also known as the error matrix, is used for quantitative analysis of static data. The proposed change of the MySQL database structure from InnoDB to MyISAM performs best against the attack mentioned in the sections above. The overall accuracy of the proposed method is 96.154%.
Conclusions
For ease of business and to facilitate customers, everyone wants a presence on web applications and mobile apps. The trend of monitoring has increased, for security reasons and to obtain more data on traffic jams and on the peak hours at which customers hire taxis, delivery services, health services, online education, and so on; because of this, the usage of IoT devices has also increased. With the use of these devices, some existing security issues and some new ones have arisen: for communication, WebSockets were introduced back in 2008, and SQL injection remains an existing type of attack. As we have experienced, merely using the latest tools, frameworks, or operating system is not a complete solution to security breaches for a web application or sensor devices. Another factor in service unavailability is poor coding and a poor choice of programming platform for the WebSocket connections between the sensors and the web server. The highest security risk to web applications is the SQL injection attack, performed on web applications whose input fields are not sanitized properly; as we have seen, just a single wrong value entered at the login page crashed the MySQL InnoDB storage engine. Due to this, the vehicle tracking services remained unavailable to clients for a long time. To come back online temporarily, the storage engine was changed from InnoDB to MyISAM, but with this change the performance of transactional operations decreased and the relationships between tables were deleted. There was then a need to switch MySQL into recovery mode for InnoDB, and due to this, other hosted websites also went down. For protection from injection-type attacks on web applications, the input fields need to be sanitized so that malicious users are unable to insert malicious scripts into the targeted web applications. Secondly, the WebSocket connection program written in custom PHP created another issue of binding the IP address to ports: new WebSocket connections could not be created, and old connections were unable to send or receive data. As a temporary solution, we implemented a WebSocket connection service restart in the CronJob of Ubuntu, but this was not a good solution for the production server. So, this research changed the program for the WebSocket connections to NodeJS as a permanent solution to this issue. Now the web server can handle 100K+ requests without any problem of binding the IP address to port numbers.
Data Availability
Data used to support this study are available from the corresponding author upon request via email (bux.khuda@gmail.com).
Conflicts of Interest
The authors declare no conflicts of interest.
"Computer Science",
"Engineering",
"Environmental Science"
] |
Clinically significant prostate cancer detection and segmentation in low-risk patients using a convolutional neural network on multi-parametric MRI
Objectives To develop an automatic method for identification and segmentation of clinically significant prostate cancer in low-risk patients and to evaluate the performance in a routine clinical setting. Methods A consecutive cohort (n = 292) from a prospective database of low-risk patients eligible for active surveillance was selected. A 3-T multi-parametric MRI at 3 months after inclusion was performed. Histopathology from biopsies was used as the reference standard. MRI positivity was defined as PI-RADS score ≥ 3; histopathology positivity was defined as ISUP grade ≥ 2. The selected cohort contained four patient groups: (1) MRI-positive targeted biopsy-positive (n = 116), (2) MRI-negative systematic biopsy-negative (n = 55), (3) MRI-positive targeted biopsy-negative (n = 113), (4) MRI-negative systematic biopsy-positive (n = 8). Group 1 was further divided into three sets and a 3D convolutional neural network was trained using different combinations of these sets. Two MRI sequences (T2w, b = 800 DWI) and the ADC map were used as separate input channels for the model. After training, the model was evaluated on the remaining group 1 patients together with the patients of groups 2 and 3 to identify and segment clinically significant prostate cancer. Results The average sensitivity achieved was 82-92% at an average specificity of 43-76% with an area under the curve (AUC) of 0.65 to 0.89 for different lesion volumes ranging from > 0.03 to > 0.5 cc. Conclusions The proposed deep learning computer-aided method yields promising results in identification and segmentation of clinically significant prostate cancer and in confirming low-risk cancer (ISUP grade ≤ 1) in patients on active surveillance. Key Points • Clinically significant prostate cancer identification and segmentation on multi-parametric MRI is feasible in low-risk patients using a deep neural network. • The deep neural network for significant prostate cancer localization performs better for lesions with larger volumes (> 0.5 cc) as compared to small lesions (> 0.03 cc). • For the evaluation of automatic prostate cancer segmentation methods in the active surveillance cohort, the large discordance group (MRI positive, targeted biopsy negative) should be included. Electronic supplementary material The online version of this article (10.1007/s00330-020-07008-z) contains supplementary material, which is available to authorized users.
Introduction
The standard clinical procedure for diagnosing prostate cancer (PCa) is a systematic transrectal ultrasound-guided (TRUS) biopsy, indicated by an elevated prostate-specific antigen (PSA) level and/or an abnormal digital rectal examination (DRE) [1]. However, this procedure has low sensitivity and specificity [2,3], leading to underdiagnosis of clinically significant PCa and overdiagnosis of insignificant PCa. Recently, multi-parametric magnetic resonance imaging (mpMRI) has been reported as a more accurate alternative for PCa characterization and detection [4][5][6]. A recent Cochrane review and meta-analysis has shown that mpMRI before prostate biopsy can aid in the selection of patients at risk of having clinically significant PCa [4]. In addition, MRI-targeted biopsy improves detection of significant PCa [5].
Radiologists use the Prostate Imaging Reporting and Data System (PI-RADS) v2 for visual lesion characterization on mpMRI [7]. PI-RADS v2 assessment uses a 5-point Likert scale ranging from 1 (highly unlikely to be malignant) to 5 (highly likely to be malignant) [7]. However, visual interpretation of mpMRI by radiologists suffers from large inter- and intra-observer variability [8]. Decreasing this variability is critical to improve PCa diagnosis and monitoring [9]. A computer-aided analysis of prostate mpMRI may improve PCa identification and may aid in standardization of MRI interpretation [10]. Ultimately, it may contribute to improving the diagnostic chain [11] and thereby reducing over- and underdiagnosis and over- and undertreatment in prostate cancer management [10].
Different computer-aided methods [12][13][14][15] have been proposed to accurately identify PCa on mpMRI using a radiomics approach or a deep learning network. The performance, quantified by the area under the receiver operating characteristic curve (AUC), ranges from 0.93 to 0.97 [14,15]. The main limitation of these studies is that the selected patient cohorts consist of intermediate- and high-risk patients. These patients have primarily obvious and large (volume > 0.5 cc) lesions on MRI and were mostly treated with a radical prostatectomy. There is no general agreement on the definition of clinically significant prostate cancer. According to PI-RADS v2, a clinically significant PCa should have histopathology ISUP grade ≥ 2 and/or volume ≥ 0.5 cc and/or extraprostatic extension [7]. Most studies [12][13][14] excluded tumor volumes < 0.5 cc; therefore, these methods cannot be generalized to smaller-volume PCa, which can be high grade and should be monitored in an active surveillance program. In the daily diagnostic workup and MRI reading, the number of obvious cases is limited; moreover, these cases do not cause the substantial reading variability. Furthermore, the challenging cases with discordance between the PI-RADS score and the histopathological findings were not included in these studies.
We hypothesize that the potential additional clinical value of an MRI-based computer-aided method will be most substantial in low-risk patients who opt for active surveillance. Active surveillance is considered a treatment option for patients diagnosed with a clinically insignificant PCa [16,17]. These low-risk patients most likely do not have high-volume or clinically significant tumors; however, they may benefit from a timely diagnosis to prevent tumor progression to a clinically significant PCa. Current active surveillance protocols require monitoring with regular clinical evaluations and prostate biopsies. The mpMRI is increasingly used to non-invasively monitor low-risk PCa patients on active surveillance and to enable targeted biopsies [18][19][20]. Assistance in identification and segmentation of clinically significant PCa may reduce MRI-reading variability in active surveillance patients.
In this study, we aim to detect and segment clinically significant PCa in a prospective clinical cohort of low-risk patients on active surveillance using an MRI-based deep learning approach and evaluate its performance in a routine clinical setting.
Patient cohort
The study was HIPAA compliant and written informed consent with a guarantee of confidentiality was obtained from the participants. Initially, 377 patients with low-risk PCa (defined as International Society of Urological Pathology (ISUP) grade 1) were prospectively enrolled in our in-house database from 2016 to 2019 as part of the global MRI-PRIAS protocol (www.prias-project.org), a web-based active surveillance study with defined criteria for inclusion and follow-up [21]. All participants received a multi-parametric MRI and targeted biopsies of visible suspicious (PI-RADS ≥ 3) lesions at baseline (3 months after diagnosis) and during every repeat standard TRUS-guided biopsy, scheduled at 1, 4, 7, and 10 years after diagnosis. A detailed description of the clinical workup was recently published [22].
For each patient, two MRI sequences, i.e., a T2-weighted image (T2w) and a high b-value diffusion-weighted image (DWI) at b = 800, and the apparent diffusion coefficient (ADC) map were selected. Histopathology data from MRI-targeted biopsies were also extracted and considered the reference standard. Patients who refused or had no biopsy procedure or whose MR images showed artifacts were excluded from the study (Fig. 1).
The remaining cohort (n = 292) was divided into four groups (Fig. 1). Clinically non-significant and significant PCa were defined based on histopathology-defined ISUP grade or Gleason score [23]. The patient characteristics, grouped by the found ISUP grade, are listed in Table 1. The total numbers of lesions, divided over the two zones (peripheral and transition), are also reported in Table 1. A sub-cohort analysis of the transition zone vs. the peripheral zone was done and is presented as supplementary material.

Magnetic resonance imaging and pre-processing

The MRI protocol included T2-weighted imaging (T2w), diffusion-weighted imaging (DWI) from which apparent diffusion coefficient (ADC) maps were constructed, and dynamic contrast-enhanced (DCE) imaging, according to the PI-RADS v2 guidelines [7]. Details of the acquisition parameters are presented in the supplementary material (Table S3). Experienced operators performed the biopsy procedures. One expert uropathologist reviewed biopsy specimens according to the ISUP 2014 modified Gleason score [23]. For every patient in our cohort, suspicious lesions were evaluated according to PI-RADS v2 guidelines, with the DWI and ADC maps as the dominant sequence for peripheral zone lesions and T2w images for the transition zone lesions [7]. All manual delineations of suspected lesions were translated to T2w images using AW server 2.0 (GE Healthcare). Delineated T2w images are necessary in the MRI/US fusion method to provide image guidance for the targeted biopsy procedure, as T2w images contain more anatomical information compared to DWI or ADC maps. The manual delineation of the suspicious lesion on T2w images was used as the reference ground truth (binary mask) for each lesion having ISUP grade ≥ 2. For each patient, the DWI images with ADC values were manually rigidly co-registered to the T2w images. Moreover, the mpMRI images (T2w image, DWI (b800), and ADC) with the reference ground truth were resampled to a uniform voxel spacing of 0.371 × 0.371 × 3.3 mm. Furthermore, the 3D images were cropped to the whole-prostate region of interest with dimensions of 192 × 128 × 24 voxels along the x, y, and z directions.
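A minimal sketch of the resampling and cropping steps just described, assuming the SimpleITK package; the file name is a placeholder, and the fixed center crop merely stands in for the actual prostate ROI selection.

```python
import SimpleITK as sitk

TARGET_SPACING = (0.371, 0.371, 3.3)   # mm, (x, y, z)
TARGET_SIZE = (192, 128, 24)           # voxels, (x, y, z)

def resample(image, spacing=TARGET_SPACING):
    # Recompute the grid size so the physical extent is preserved
    old_size, old_spacing = image.GetSize(), image.GetSpacing()
    new_size = [int(round(osz * osp / nsp))
                for osz, osp, nsp in zip(old_size, old_spacing, spacing)]
    rf = sitk.ResampleImageFilter()
    rf.SetOutputSpacing(spacing)
    rf.SetSize(new_size)
    rf.SetOutputOrigin(image.GetOrigin())
    rf.SetOutputDirection(image.GetDirection())
    rf.SetInterpolator(sitk.sitkLinear)
    return rf.Execute(image)

def center_crop(image, size=TARGET_SIZE):
    # A fixed center crop; the study cropped around the prostate ROI
    arr = sitk.GetArrayFromImage(image)          # numpy order: (z, y, x)
    tz, ty, tx = size[2], size[1], size[0]
    z0 = max((arr.shape[0] - tz) // 2, 0)
    y0 = max((arr.shape[1] - ty) // 2, 0)
    x0 = max((arr.shape[2] - tx) // 2, 0)
    return arr[z0:z0 + tz, y0:y0 + ty, x0:x0 + tx]

t2w = resample(sitk.ReadImage("patient001_t2w.nii.gz"))  # placeholder file
t2w_roi = center_crop(t2w)  # numpy array of shape (24, 128, 192)
```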
Convolutional neural network
The developed convolutional neural network (CNN) [24] takes three MRI inputs: the T2-weighted (T2w) image, the diffusion-weighted image (DWI), and the apparent diffusion coefficient (ADC) map, and treats each sequence as a separate input channel to generate a PCa segmentation (Fig. 2a).
The network contains twelve single 3D convolution layers with a 3 × 3 × 3 kernel size, each followed by a Rectified Linear Unit (ReLU). In the down-sampling and up-sampling blocks, at the last two layers of the network, a 3 × 3 × 1 kernel size was used due to the small image size along the z-axis. Batch normalization (BN) was added after each 3D convolution to improve convergence speed during training [25]. A concatenation with the corresponding computed feature map from the down-sampling part was performed after up-sampling. In the final layer, a 3D convolution with a 1 × 1 × 1 kernel size was used to map the computed features to the predicted PCa segmentation. In each convolution layer, appropriate padding was used. A schematic representation of the CNN used is shown in Fig. 2b.
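A condensed Keras sketch of such an encoder-decoder may clarify the layout. The filter counts, pooling sizes, and overall depth below are assumptions (the paper's exact twelve-layer configuration is not fully specified here); only the 3 × 3 × 3 convolutions with BN and ReLU, the 3 × 3 × 1 kernels near the output, the skip concatenations, and the final 1 × 1 × 1 convolution follow the description above.

```python
from tensorflow import keras
from tensorflow.keras import layers

def conv_block(x, filters, kernel=(3, 3, 3)):
    # 3D convolution + batch normalization + ReLU, with 'same' padding
    x = layers.Conv3D(filters, kernel, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.Activation("relu")(x)

# Input: T2w, DWI (b800) and ADC stacked as three channels
inputs = keras.Input(shape=(192, 128, 24, 3))

# Down-sampling path (filter counts are assumptions)
d1 = conv_block(inputs, 16)
p1 = layers.MaxPooling3D(pool_size=(2, 2, 2))(d1)
d2 = conv_block(p1, 32)
p2 = layers.MaxPooling3D(pool_size=(2, 2, 1))(d2)   # spare the thin z-axis
bottleneck = conv_block(p2, 64)

# Up-sampling path with skip concatenations
u2 = layers.UpSampling3D(size=(2, 2, 1))(bottleneck)
u2 = conv_block(layers.Concatenate()([u2, d2]), 32)
u1 = layers.UpSampling3D(size=(2, 2, 2))(u2)
u1 = conv_block(layers.Concatenate()([u1, d1]), 16, kernel=(3, 3, 1))

# Final 1x1x1 convolution maps features to a voxel-wise lesion probability
outputs = layers.Conv3D(1, (1, 1, 1), activation="sigmoid")(u1)
model = keras.Model(inputs, outputs)
```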
The training of the network was implemented in Keras (version 2.0.2) with TensorFlow (version 1.0.1) as backend in Python (version 3.5.3). The training and prediction were performed on a GeForce GTX TITAN Xp GPU (NVIDIA). The loss function during training was the binary cross-entropy metric, optimized using the Adam optimizer [26] with a learning rate of 0.01. As the amount of annotated data was limited, data augmentation was implemented: rotation (0-5°, along the x-, y-, and z-axes) and shearing (along the x-, y-, and z-axes) with rigid transformation and 50% probability for all images during training. This allows the network to learn invariance to such deformations and also helps to prevent overfitting and to generalize better. The total number of epochs was set to 500. The output of the trained network was a binary segmentation of clinically significant PCa lesions. (Fig. 3 caption: threefold cross-validation using different set combinations; the evaluation was performed on the left-out positive set and the negative cases from group 2 (n = 55) and group 3 (n = 100); since the systematic biopsy locations were not available, patients found with significant PCa based on systematic biopsies in group 3 (n = 13) and group 4 (n = 8) were excluded from training and testing.)
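Continuing the architecture sketch above, the stated training configuration translates to a few lines; the batch size and the random placeholder arrays are assumptions, and in practice a generator applying the stated augmentations would feed fit().

```python
import numpy as np
from tensorflow import keras

# 'model' refers to the builder in the previous sketch
model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.01),
              loss="binary_crossentropy")

# Random arrays stand in for the augmented mpMRI volumes and binary masks
x_train = np.random.rand(2, 192, 128, 24, 3).astype("float32")
y_train = np.random.randint(0, 2, size=(2, 192, 128, 24, 1)).astype("float32")

# Rotation (0-5 degrees) and shearing along x/y/z, each with 50% probability,
# were applied during training; 500 epochs as stated above.
model.fit(x_train, y_train, batch_size=1, epochs=500)
```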
Prostate cancer segmentation
In the experiments, the MRI-positive targeted biopsy-positive group (n = 116) was randomly divided into three sets (Fig. 3). The CNN model was trained as described in the "Convolutional neural network" section, in threefold cross-validation using different combinations of these sets. The three trained networks were named model 1, model 2, and model 3. The MRI-negative systematic biopsy-negative group 2 (n = 55) was not used in training because of the absence of PCa lesions in these images. Also, group 3 (i.e., patients with a positive MRI but negative targeted biopsy) was not included in the training set due to the absence of ISUP grade ≥ 2 prostate cancer. After training, each trained model was used to predict PCa on the corresponding test data. The systematic biopsy locations were not available; therefore, patients found with significant PCa based on systematic biopsies in group 3 and group 4 (n = 21) were excluded from testing (Fig. 3).
Statistical analysis
To evaluate the performance of the method, the sensitivity and the specificity were calculated and receiver operating characteristic (ROC) curves were plotted for three different lesion volumes (0.03 cc, 0.1 cc, and 0.5 cc) of the segmented lesions. For each of the three lesion volumes, the sensitivity was calculated only for the patients with lesion volumes higher than the threshold volume. The lesion volume thresholds were selected based on the minimum significant PCa lesion volume (0.031 cc) in our data and the standard lesion volume threshold for clinically significant PCa based on the PI-RADS v2 definition. The lesion volume was calculated by multiplying the total number of voxels in the lesion by the voxel size (0.371 mm × 0.371 mm × 3.3 mm). The lesion segmentation was considered true positive when the overlapping lesion volume between the reference ground truth and the segmented lesion was larger than 0.01 cc. (Fig. 6 caption: all images show the same axial slice as a 2D view of the mpMRI images (a, e: T2w images; b, f: DWI b800; c, g: ADC map) of the prostate with the reference ground truth (d) and the false PCa lesion segmented by the model (h).)
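The volume and overlap computations are simple enough to state directly. The sketch below, assuming NumPy and SciPy, computes per-lesion volumes from a binary mask and applies the 0.01-cc overlap criterion; the toy mask at the end is illustrative only.

```python
import numpy as np
from scipy import ndimage

VOXEL_CC = (0.371 * 0.371 * 3.3) / 1000.0  # voxel volume in cc (mm^3 to cc)

def lesion_volumes_cc(mask):
    # Label connected components in a binary mask and return their volumes
    labels, n = ndimage.label(mask)
    return [(labels == i).sum() * VOXEL_CC for i in range(1, n + 1)]

def is_true_positive(pred_mask, gt_mask, min_overlap_cc=0.01):
    # A segmented lesion counts as a hit when its overlap with the
    # reference ground truth exceeds 0.01 cc, as defined above
    overlap_cc = np.logical_and(pred_mask, gt_mask).sum() * VOXEL_CC
    return overlap_cc > min_overlap_cc

# Toy example: a 10 x 10 x 10-voxel lesion is roughly 0.45 cc
toy = np.zeros((24, 128, 192), dtype=bool)
toy[5:15, 40:50, 60:70] = True
print(lesion_volumes_cc(toy))  # -> [~0.454]
```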
Patient cohort analysis
The division of patients into the different ISUP grade groups and their relation to the assigned PI-RADS score (Fig. 4a) show that many patients (n = 101) were scored PI-RADS ≥ 3 by the radiologist but had no significant PCa based on targeted biopsies (specificity = 33%). Also, some patients (n = 8) were assigned a PI-RADS score ≤ 2 and had significant PCa based on the systematic biopsy procedure (sensitivity = 94%). The lesion volume distribution (Fig. 4b, c, d) of the significant PCa from > 0.03 to > 0.5 cc showed that the study data contained a wide range of lesion volumes (0.031-12.06 cc) and that approximately 81% of them had ISUP grade = 2.
Prostate cancer segmentation
For patients with tumor volumes > 0.03 cc (number of lesions = 135), the average sensitivity was 82% at an average specificity of 43% with an AUC of 0.65. For patients with tumor volumes > 0.1 cc (number of lesions = 123), the average sensitivity was 85% at an average specificity of 52% with an AUC of 0.73. This further improved to 94% sensitivity and 74% specificity with an AUC of 0.89 for patients with tumor volumes > 0.5 cc (number of lesions = 51). The cutoff points used to calculate the above-stated sensitivity and specificity for the three ROC curves are shown in Fig. 5d.
To illustrate the performance of the method visually, three examples (a true positive, a false negative, and a false positive) of PCa segmentation are shown in Fig. 6(A-C). In the true positive example (Fig. 6A), the model successfully segmented both the large and the small lesions delineated by the radiologist and proven by targeted biopsy to be significant PCa. In some cases, PCa segmentation was unsuccessful, leading to a false negative (Fig. 6B). In the false positive example (Fig. 6C), the model segmented a lesion in the peripheral zone that matches the radiologist's delineation; however, the targeted biopsy found no significant PCa. (Fig. 6 caption: all panels show the same axial slice as 2D views of the mpMRI images (a, e T2w; b, f DWI b800; c, g ADC map) of the prostate with the reference ground truth (d) and the false PCa lesion segmented by the model (h).)
Discussion
The use of mpMRI has increased in the early diagnosis of PCa because of its ability to identify suspicious lesions for image-guided biopsy. MRI-targeted biopsies can improve PCa detection compared to random TRUS biopsies [4,27]. However, to exploit the full benefits of the MRI pathway in the PCa diagnostic process, it is important to increase work efficiency and to optimize the mpMRI analysis, resulting in a reduction of both underdiagnosis and overdiagnosis. Optimization of the diagnostic and monitoring process is particularly necessary in low-risk patients on active surveillance, where fear of undergrading is present. An objective qualification and quantification of suspicious lesions on mpMRI may have a positive influence on the monitoring protocol and the (redundant) number of repeated biopsies. Therefore, an automatic approach for monitoring MRI-suspicious lesions over time in low-risk patients on active surveillance is indispensable.
In this study, a computer-aided method based on a deep learning convolutional neural network to identify PCa in patients on active surveillance was presented. The method used mpMRI (T2w, DWI, ADC map) to segment PCa with ISUP grade ≥ 2. The performance of the method was evaluated by calculating the sensitivity, specificity, and AUC over the total prostate. The average sensitivity achieved by the method was 82-92% at an average specificity of 43-76%, depending on the lesion volume threshold (ranging from > 0.03 to > 0.5 cc). The AUC for the average models varied from 0.65 to 0.89. The results showed that large lesions (> 0.5 cc) can be detected and segmented relatively easily compared to the smallest lesion volume threshold (≥ 0.03 cc).
In the literature, different computer-aided methods have been presented to localize PCa [4,7,8,13]. The databases used in these studies mostly contained patients who underwent radical prostatectomy (i.e., high-grade and large tumors). Therefore, the usefulness of these methods is limited in an active surveillance population, as they cannot deal with the daily reading difficulties of low-risk and small-size PCa. Algohary et al [13] showed that radiomics features from biparametric MRI (T2w and ADC map) could accurately detect clinically significant PCa in an active surveillance cohort. However, only a limited number of patients (n = 56) were included. Furthermore, patients with lesions assigned PI-RADS suspicion score 3 and with lesion volumes ≤ 0.5 cc were excluded from that study. The authors showed in two different patient groups that 80% of the positive cases were correctly identified as having clinically significant PCa and that 60% of the negative cases were correctly identified as not having clinically significant PCa. With our proposed method, we achieved a higher average sensitivity of 92% at a specificity of 76% while including this subgroup (Fig. 5).
Our study has some limitations. First, our model was specifically trained on an active surveillance cohort; therefore, the results on other patient cohorts (e.g., cohorts at initial diagnosis) may differ. Second, we had access to 116 positive cases, sufficient for algorithm development; however, an increase in the amount of training data may improve results. Third, in our data, most of the patients' PCa lesions (81%) had ISUP grade = 2 (Gleason score = 3 + 4, where 3 represents the predominant pattern in the biopsy). During training, the network learned features from the dominant non-significant part of the PCa (Gleason score 3) and will segment it in the test data, particularly in the discordance group 3, which led to a limited specificity. By providing more patient data with high-grade PCa (ISUP grade ≥ 3), the number of false positive segmentations might decrease. Furthermore, the reference ground truth is limited by two factors. First, the accuracy of the MRI-ultrasound fusion technique (Koelis UroStation™) is reported to range from 3.8 to 5.6 mm [28], with a mean of 4.5 mm. Second, the mean needle placement error is reported to be 2.1 mm [29]. The average combined error will therefore be in the range of 5 mm (0.13 cc). This could affect the localization accuracy of the reference ground truth and may also influence the results, as can be seen for the lower volume thresholds (Fig. 5a, b).
Implementing the proposed method in the daily clinical routine has the potential to improve the diagnostic accuracy and monitoring process of prostate cancer. The proposed method can be utilized as a second reading, confirming, adding to, modifying, or even changing the original decision. Furthermore, the automatic identification and segmentation of the lesions during surveillance will provide consistent quantitative analysis over time, alerting clinicians to significant changes in volume or conspicuity. The eventual real-world value will need to be established in prospective clinical use.
Conclusion
This study presents a deep learning-based computer-aided diagnostic method with acceptable diagnostic accuracy to identify and segment significant (ISUP grade ≥ 2) prostate cancer in patients on active surveillance. The evaluation of the method showed that an average sensitivity of 92% can be achieved with a specificity of 76% at the lesion volume threshold > 0.5 cc. The proposed deep learning computer-aided method yields promising results in the automatic identification and segmentation of significant (ISUP grade ≥ 2) prostate cancer in low-risk patients. Low-risk patients may benefit from this objective qualification and quantification of MR images by computer-aided methods, since MRI readings are most difficult in low-volume and low-grade tumors.
"Medicine",
"Engineering"
] |
Quantitative Lithiation Depth Profiling in Silicon-Containing Anodes Investigated by Ion Beam Analysis
The localisation and quantitative analysis of lithium (Li) in battery materials, components, and full cells are scientifically highly relevant, yet challenging tasks. The methodical developments of MeV ion beam analysis (IBA) presented here open up new possibilities for the simultaneous elemental quantification and localisation of light and heavy elements in Li and other batteries. We describe the technical prerequisites and limitations of using IBA to analyse and solve current challenges, using Li-ion and solid-state battery-related research and development as examples. Here, nuclear reaction analysis and Rutherford backscattering spectrometry can provide spatial resolutions down to 70 nm and 1% accuracy. To demonstrate the new insights to be gained by IBA, SiOx-containing graphite anodes are lithiated to six states-of-charge (SoC) between 0 and 50%. The quantitative Li depth profiling of the anodes shows a linear increase of the Li concentration with SoC and a match between injected and detected Li-ions. This unambiguously proves the electrochemical activity of Si. Already at 50% SoC, we derive C/Li = 5.4 (< LiC₆) when neglecting Si, proving a relevant uptake of Li by the 8 atom% Si (C/Si ≈ 9) in the anode, with Li/Si ≤ 1.8 in this case. Extrapolations to full lithiation show a maximum of Li/Si = 1.04 ± 0.05. The analysis reveals that all element concentrations are constant over the anode thickness of 44 µm, except for a ~6-µm-thick separator-side surface layer. Here, the Li and Si concentrations are a factor of 1.23 higher compared to the bulk for all SoC, indicating preferential Li binding to SiOx. These insights are so far not accessible with conventional analysis methods and are a first important step towards in-depth knowledge of quantitative Li distributions on the component level and a further application of IBA in the battery community.
Introduction
Si has stayed in the limelight as a promising material to supersede graphitic carbon, the currently predominant element of the negative electrode in commercial lithium-ion batteries (LIB). The almost factor-10 higher specific capacity and nearly three times higher volumetric capacity, as well as its stability, natural abundance, and a proper operating potential range (~0.2-0.4 V vs. Li/Li⁺), make this material stand out [1-3]. Nonetheless, Si has rarely been applied in commercial LIBs, especially with regard to electric vehicle applications, due to several severe drawbacks: a 200-300% volume change during lithiation and delithiation [4,5], a large Li loss during the first cycle [6,7], and a high ionic and electrical resistance, which in turn accelerates cell degradation and degrades the rate capability. A number of strategies have been proposed to alleviate these defects in a wide variety of ways: constructing nanocomposites [8-11], coating Si particles with a robust material [12,13], and developing new binders [14-16] or electrolyte additives [17-20]. Even though they increased the feasibility of applying Si in commercialized cells, a Si-based anode has yet to displace the current state-of-the-art anode active material, graphite, from a practical point of view. Introducing silicon oxides (SiOx) has been regarded as one of the viable alternatives to utilizing pure Si, since the oxides generated during cycling (Li₂O, Li₄SiOx, etc.) act as a buffer material that significantly reduces the volume change [21-26]. As a result, SiOx shows more stable cyclability than pure Si, despite its lower specific capacity [1,24,27]. On the other hand, due to the additional reactions forming lithium silicates, the lithiation reactions in SiOx are more complex, hindering the deeper understanding needed to drive improvements. This work therefore studies an anode with SiOx added to graphite, a highly feasible material whose lithiation process still needs to be understood. Since Si starts to be reduced at a higher potential than graphite [28], and the decomposition of the electrolyte and its additives dominates at the beginning of lithiation [29], it is particularly valuable to elucidate the underlying phenomena at low state-of-charge (SoC) levels. Hence, this work focuses on the concentration gradients of the anode composition at early lithiation stages to determine the actual lithiation degree of Si and graphite.
The investigation of the lithium distribution during the (dis-)charging of battery cells, especially along the depth direction, remains a difficult task, since it requires a method sensitive to this light element while providing accuracy in the 1% range, sub-µm spatial resolution, and the fast time resolution required by fast charging and discharging processes. The 3D nature of the Li migration and the layered cell structure potentially induce non-constant depth profiles and lateral variations due to inhomogeneous migration rates, electrical resistances, and fundamental limits. In battery cells, this can limit the battery performance or even lead to fatal failure mechanisms such as dendrite formation [30]. Consequently, knowledge of the migration dynamics enables revealing fatigue processes and bottlenecks within the cell stack. A deeper understanding of the local SoC and state-of-health (SoH) is desired for the development of new cell types (e.g., all-solid-state cells), commercial purposes, and further performance improvements.
Numerous techniques exist that fulfil these tasks to certain extents. Discussions of various methods can be found in reviews, e.g., [31]. However, most of the widely used methods only allow analysing the distribution of the electrode composition at a specific surface level or in a limited volume range. For example, even though data from X-ray photoelectron spectroscopy (XPS) contain chemical bonding states as well as their relative quantities, this information mostly comes from the outermost atomic layers of the sample. The distribution can be conveniently captured by energy-dispersive X-ray spectroscopy (EDX), but it only covers a small range with limited depth resolution and has a lower accuracy, especially for light atoms, in particular lithium. Inductively coupled plasma-optical emission spectroscopy (ICP-OES) allows quantifying the sample composition, but requires dissolving the sample into solution, which precludes spatially resolved measurements [32]. One of the recent strategies to investigate the electrode composition along the depth direction is to utilize glow discharge optical emission spectroscopy (GD-OES). This method analyses spectroscopic data from excited atoms released by argon plasma sputtering. It provides an elemental depth profile in a porous electrode within a micrometer-scale depth range, but suffers from the matrix effect, requiring complementary methods or complex calibration steps [32,33]. Neutron depth profiling (NDP), as a nuclear technique, offers superior depth resolution, detection limits, and direct quantification [34]. Its non-destructive nature and the negligible heat load induced by the thermal neutrons allow for operando analysis of cells. Its analytical range remains limited, though: the neutrons penetrate deeply, but the emitted ions have a limited range of the order of µm. The count rates limit the time resolution to about 5-15 min per point. NDP works best with thick and large cells and only detects specific isotopes, namely ⁶Li for lithium-based batteries, limiting its capability to determine full stoichiometries.
Ion-beam analysis (IBA) offers a possible solution to the other methods' limitations, or at least provides complementary information. IBA has the fundamental advantage of providing the complete compositional depth-resolved information together with the lithium information, enabling the full stoichiometry of every cell component to be revealed in a single measurement. With this, its application is in no way limited to Li-based batteries, but this work will focus on Li batteries as the most common example. NDP and GD-OES offer a better depth resolution (no projectile straggling), but IBA offers a better lateral resolution (focussing of the beam with spot sizes down to ~1 µm) compared to NDP and GD-OES. The IBA information depth depends on the projectile and the products, allowing for a higher degree of flexibility in terms of range and spatial resolution compared to optical methods or NDP by proper selection of projectile ion species and energy. Therefore, IBA potentially provides valuable information on cell dynamics, degradation processes, and manufacturing aspects as an input to advanced cell modelling, research and development, and quality assurance.
Several challenges are connected with IBA of batteries, due to the requirement of low beam-induced heat loads in sensitive battery materials, which conflicts with high counting statistics, accuracy, and small beam spots. Furthermore, limited analysis ranges and the requirement of vacuum impose practical drawbacks, in particular when compared to photon-based methods. Careful optimisation of the analysis setup enables a feasible solution in this optimisation space, even allowing for operando electrical connections of the cells [35].
In the literature, IBA of cathodes [36,37], electrolytes [38], and full cell assemblies with µm resolution of their components, ex-situ and operando, was demonstrated quite recently by a few groups [39-41]. This work first goes a step back and investigates the fundamental possibilities of several IBA methods and parameters suitable for conducting such an analysis. Different reactions and projectiles are compared to find the optimal solution for lithium battery-specific questions. Nuclear reaction analysis (NRA) and Rutherford backscattering spectrometry (RBS) are then applied ex-situ to a 20 wt.% SiOx-doped graphite anode material charged in a coin-cell set-up to six different SoCs from 0 to 50% to investigate its lithium depth profiles and other compositional aspects. The data analysis methodology and uncertainties are presented in order to conclude on future prospects of the method in the battery field.
IBA Method in View of Lithium Analysis
Battery materials typically consist of mostly light elements (Li, C, O), a few 10% of intermediate elements (Si, Ni, Co, Ti, ...), and sometimes minimal amounts of heavy elements (La, Ta, ...). The latter two can be analysed using particle-induced X-ray emission analysis (PIXE) and RBS with theoretically available cross-sections and a single known reaction (RBS) or peak group (PIXE). The light-element analysis requires NRA or particle-induced gamma-ray emission analysis (PIGE). NRA and PIGE feature several reaction options for each light element and require measured cross-sections for each of them. Consequently, the analysis of batteries requires combining at least two IBA methods and selecting the right reactions for the light elements.
For IBA of lithium, several nuclear reactions are possible, each with an individual range, resolution, and detection limit, see Table 1. Only depth probing by penetration is considered, as the depth resolution for side analysis of cross-cuts depends only on ion beam focussing and spot size and is typically worse. The depth resolutions for the individual reactions are analysed using the software RESOLNRA 1.7 from the SimNRA 7 (Garching, Germany) package [42]. The analysis employed 3 MeV projectiles at perpendicular incidence and a reaction angle of 165° with a 15 keV FWHM detector resolution, if not stated differently, probing into pure LiCoO₂. LiCoO₂ is taken as an example material for Li-based cell materials since it features a representative Li concentration and stopping power. An ion beam only sees traversed atoms, not traversed distance. This makes IBA insensitive to porosity or crystallographic volume changes, e.g., upon dis-/charging or phase transitions. Therefore, a standard LiCoO₂ density of 5.05 g/cm³, corresponding to 1.25 × 10²³ atoms/m² per µm, can be used for recalculating the depth in units of atoms/m² to the geometric depth, or the other way round, for any given mass density of the sample. Table 1 compares the possible reactions with H, D, and ³He projectiles for Li analysis by NRA. The ⁷Li(p,α₀)⁴He reaction offers the best compromise between resolution, range, and practical aspects. The ⁷Li(d,α₀)⁵He reaction has similar properties but so far lacks cross-section data, and the decay of ⁵He potentially leads to increased background levels and lower accuracy. The ⁶Li(p,³He)⁴He reaction has the best depth resolution, but due to the low Q-value resulting in low-energy products, an overlap with the RBS signals of heavier elements such as Co and their pile-up strongly limits its practical range. For following the isotopic ratios in ⁶Li-enriched materials, it might be suitable due to higher signal levels, though. Below 2200 keV proton energy, a window without overlap of the ⁶Li(p,³He)⁴He peak with typical battery elements such as Mn exists, which was used to test the existing cross-sections for this reaction using thin films. The Chia-Shou Lin data [43] for 147.1° appear to match the experimental spectra at 150°, but they offer only a few data points. The Bashkin data [44] offer more data points, but were found to be largely incorrect, at least for 150°. The data suggest a better sensitivity due to larger cross-sections at higher reaction angles. The application of deuterium ions and beam energies >3 MeV potentially provides an improved range or depth resolution, but such an application is so far limited by the lack of reaction cross-sections. All three considered ions provide suitable reactions for the separate analysis of ⁶Li and ⁷Li and hence could be applied to enrichment-based lithium migration studies. Reactions with ³He projectiles have a significantly lower range, but similar depth resolution to H- and D-induced reactions. In contrast to the Li resolution, the detection and resolution for other elements is four times better with ³He compared to H and D.
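Because IBA counts traversed atoms rather than distance, converting between areal density and geometric depth is a single division; a minimal sketch using the LiCoO₂ value quoted above (any other mass density scales linearly):

```python
ATOMS_PER_M2_PER_UM = 1.25e23  # LiCoO2 at 5.05 g/cm^3, from the text

def areal_to_depth_um(areal_density, scale=ATOMS_PER_M2_PER_UM):
    """Convert an areal density (atoms/m^2) to a geometric depth (um)."""
    return areal_density / scale

# Example: the near-surface resolution quoted for the standard setup
print(areal_to_depth_um(1.7e22))  # ~0.136 um = 136 nm in LiCoO2
```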
In addition to these reactions, all ions can also excite the first nuclear level of ⁷Li for PIGE analysis via ⁷Li(p,p'γ₁₋₀)⁷Li at 477.6 keV. A relevant PIGE cross-section is given for >1 MeV protons with a mostly monotonous increase towards higher energies (no resonances). PIGE enables ranges from 280 × 10²² at./m² at 2 MeV up to about 6000 × 10²² at./m² at 10 MeV proton energy (~0.5 mm in LiCoO₂). The 477.6 keV photon energy results in negligible photon absorption (~1%) even for the thickest samples. A depth resolution can be generated by using several beam energies and taking differences. The depth resolution then depends on the ratio of beam energy width to stopping power at a certain depth. This value varies from ~3 to ~600 × 10²² at./m² in the range stated above, resulting in a generally much lower depth resolution for Li compared to NRA.
For most of the Li reactions discussed here, especially the ⁷Li(p,α₀)⁴He reaction, the product energy is hardly influenced by the reaction angle or projectile energy. Hence, a narrow aperture according to the setup described in [35] is chosen to reduce the detector's angular acceptance, leading to a better depth resolution (geometrical straggling). In general, larger reaction angles (towards 180°) lead to improved depth resolution, but are technically more challenging due to the detector size. ⁷Li(p,α₀)⁴He can achieve a near-surface resolution of 1.7 × 10²² at./m² = 136 nm in LiCoO₂ in our standard setup; hence the analysed cell systems can be thin-film or bulk cells. The maximum probing depth at 3 MeV is ~22 µm, but below 12 µm the signal will interfere with the RBS part of the heavier constituents. This situation is mostly independent of the beam energy.

Table 1. Comparison of the possible NRA reactions of lithium with 3 MeV light ions. For recalculation to length use, e.g., the dense LiCoO₂ density of 0.08 µm/(10²² at./m²) corresponding to 5 g/cm³, or the porous Si/graphite anode used later with 0.164 µm/(10²² at./m²).

Figure 1 demonstrates the scaling of the depth resolution with the analysis setup properties and the depth inside LiCoO₂. The reaction products can be detected to a depth of 310 × 10²² at./m², but the RBS edges begin to overlap with the NRA reaction in this range with ~100 times higher intensity, limiting the final range. Depending on the setup and the elements present in the material, pulse pile-up can further reduce the effective range. In fact, ~3 MeV protons provide the highest analysis depth range, since for lower energies the cross-section strictly decreases, while for higher energies the elastic scattering edges shift to higher energies while the ⁷Li(p,α₀)⁴He product energy remains mostly constant. Only methodical additions such as PIGE would allow further extending this range. For analysing Li in the Si-doped carbon anode material investigated later, the range, in terms of atoms/m², is within 10% of the values depicted above. The range in terms of µm is significantly higher, depending on the material density. For the anode material density used below of 1.4 g/cm³, we expect a Li probing range of ~60 µm. Generally, we expect similar ranges and resolutions in units of atoms/m² for all typical lithium battery materials due to the similar relevant elemental ranges and compositions. In conclusion, for the analysis of lithium, the ⁷Li(p,α₀)⁴He reaction provides the best compromise in terms of resolution, range, and practical properties. It requires high reaction angles, as close as possible to 180°, in order to optimize depth resolution and sensitivity. Typical commercially available silicon-based detector energy resolutions of 10 keV FWHM suffice for the analysis, with only little gain expected from an improved energy resolution. On top of that, every (dis-)charged Ah corresponds to moving a certain number of lithium nuclei, and every lithium nucleus corresponds to a certain (small) number of counts in the spectrum, further limiting the detection properties statistically (minimum resolvable Ah step). The µNRA setup [35] provides up to 200 nm depth resolution, which is sufficient considering the limits of counting statistics, although ~70 nm is technically possible with an optimised setup employing a maximum reaction angle and energy resolution.
For assessing the radiation damage in the material, the SRIM 2013 code (Quick Calculation of Damage mode, 10⁵ particles) is applied according to [47,48]. We use the output "Total Target Vacancies", which equals an integral over the depth, as the displacements per incident ion (DPI) [49]. As the exact values for the displacement threshold are not known, the standard SRIM value of 28 eV for carbon is used in a representative material of 64% C and 9% each of H, O, Si, and Li. The ions penetrate up to 95 µm, assuming a density of 1.83 g/cm³, inducing 23.3 DPI. If we restrict the calculation to a depth closer to our actual material thickness in this study of 50 µm, the calculation yields a mostly constant 4.2 DPI throughout the target depth. For comparison, the typical active material LiCoO₂ with an assumed displacement threshold of 25 eV yields a relatively constant 7.9 DPI up to 35 µm depth, with a total range of 47 µm and 26 DPI. Calculating the integral damage during analysis with 10 µC on a 3.14 mm² spot area yields a total damage of 1.3 × 10⁻⁵ displacements per atom. A fraction of 1.5 × 10⁻⁴ of the H projectiles remains in the sample, amounting to a total of 3 × 10¹⁵ H/m². These small numbers seem negligible compared to typical defect densities and H impurities. In conclusion, neither displacement damage nor H implantation can significantly influence the composition or structure of battery materials during our analysis.
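An order-of-magnitude check of this dose bookkeeping, assuming only the DPI, dose, and spot area given in the text; the displacements-per-atom figure additionally depends on the assumed density and probed depth, so treat this as a sketch rather than a reproduction of the SRIM result:

```python
E_CHARGE = 1.602e-19                 # C per proton
ions = 10e-6 / E_CHARGE              # 10 uC dose -> ~6.2e13 protons
displacements = ions * 4.2           # 4.2 DPI within ~50 um depth (from text)

# Retained hydrogen: 1.5e-4 of the projectiles stay in the sample.
h_areal = 1.5e-4 * ions / 3.14e-6    # spot area 3.14 mm^2 = 3.14e-6 m^2
print(f"{h_areal:.1e} H/m^2")        # ~3e15 H/m^2, consistent with the text
```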
The upper limit of ion beam-induced heating can be estimated by equating Planck's radiation law and the beam power as described in [50]. This conservative approach assumes zero heat conduction and losses only by thermal radiation. An ion-beam energy of E_ion = 2960 keV, a beam current of I_Beam = 5 nA, an emissivity ε = 1 (black surface), and a beam spot area of A_Beam = 3.14 mm² yield a maximum sample temperature of 364 K. Finite element simulations show that conduction in the copper substrate reduces the sample temperature increase during analysis to a few K above room temperature. In conclusion, the ion beam-induced temperature increase could influence the ionic and electronic conductivity of the cell in extreme cases during in-situ analysis, but it will not induce chemical changes or reactions influencing the typical post-mortem analysis.
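For reference, the zero-conduction balance described above can be restated as the following equation (our notation; the emissivity and the effective radiating area assumed in the estimate determine the resulting maximum temperature):

```latex
\frac{E_{\mathrm{ion}}\, I_{\mathrm{Beam}}}{e}
  = \varepsilon\, \sigma_{\mathrm{SB}}\, A_{\mathrm{Beam}}
    \left( T_{\max}^{4} - T_{0}^{4} \right)
```

where σ_SB is the Stefan-Boltzmann constant and T₀ the ambient temperature; solving for T_max gives the conservative upper limit on the sample temperature.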
Material and Electrode Preparation
The anode slurry contains 20 wt.% of silicon oxide SiOx (AS2D, ShanShan Tech, Shanghai, China); 70 wt.% of graphite (SMG-A5, Hitachi, Japan); 2 wt.% of carbon black (C65, Imerys, France); and 8 wt.% of binders composed of carboxymethyl cellulose (CMC 2000PA, DOW Chemical, Midland, USA), styrene-butadiene rubber (SBR) (BM-451B, Zeon Europe GmbH, Düsseldorf, Germany), and lithium-substituted polyacrylic acid (LiPAA). Graphite and SiOx have similar densities of 2.24 and 2.26 g/cm³, but different particle sizes of 11-23 and 5-7 µm, respectively. The slurry is then coated on a copper foil (10 µm thickness), dried at 60 °C, and subsequently calendered. The density of the produced anode is 1.375 g/cm³ or 0.164 µm/(10²² at./m²). The anode is punched to 14 mm diameter using a high-precision electrode cutter (EL-CELL) and then dried for longer than 10 h at 80 °C in a glass oven B-585 Drying (Buchi). The dried anode is transferred to an argon-filled glove box to be assembled with a lithium foil (PI-KEM, Wilnecote, UK) in a coin cell set-up (2032-type). A glass microfiber filter (GF/C, Whatman, UK) is used as a separator, and 1.0 M LiPF₆ in ethylene carbonate (EC):ethyl methyl carbonate (EMC) (3:7, w/w) (E-Lyte Innovations GmbH, Münster, Germany) mixed with 10 wt.% of fluoroethylene carbonate (FEC) (E-Lyte, Germany) as the electrolyte. Figure 2 demonstrates the cell assembly. (Figure 2 caption: Schematic diagram of sample preparation. In this study, the part named "anode" is analysed for its elemental composition. The coin cell set-up is used for the lithiation process. The anode is extracted from the set-up and rinsed with EMC.)
Lithiation Process
The assembled coin cells are lithiated in galvanostatic mode at a 0.2 C-rate until the lithiation capacity of the cells reaches the designed degree, as shown in Figure 3a. Table 2 describes the lithiation parameters for the different SoCs. Prior to this process, the full capacity of the anode is measured to be 622 mAh/g with a cut-off potential of 0.01 V vs. Li/Li⁺, which is defined as 100% SoC in this study. This value agrees with a calculation using 340 mAh/g for C and 1700 mAh/g for SiOx together with the deposited masses. The open circuit voltage (OCV) of each cell is monitored for 10 h after lithiation to check the potential recovery, as shown in Figure 3b, which may reflect lithium migration inside the structure. When the increasing trend of the OCV subsides, the cells are carefully disassembled to take the lithiated anodes out. The anodes are rinsed with EMC for 5 min and dried under vacuum at room temperature.
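A hedged sketch of the galvanostatic arithmetic implied here: at a 0.2 C-rate the time to reach a target SoC is SoC/0.2 hours, and the current follows from the measured full capacity. The anode active mass below is an illustrative assumption, not a value from the text.

```python
FULL_CAPACITY_MAH_PER_G = 622   # measured 100% SoC capacity (from the text)
C_RATE = 0.2

def lithiation(target_soc, active_mass_g=0.010):  # 10 mg: assumed mass
    current_ma = C_RATE * FULL_CAPACITY_MAH_PER_G * active_mass_g
    time_h = target_soc / C_RATE
    return current_ma, time_h

print(lithiation(0.50))  # -> (1.244 mA, 2.5 h) to reach 50% SoC
```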
Similar coin cells are tested for cycling stability, see Figure 3c,d. After 200 cycles, the capacity retention is 80% of the initial discharging capacity with a constant Coulombic efficiency close to 100%. The anode volume expands linearly towards 50% SoC, with an approximately 37% increase compared to the pristine anode (from 44 to ~60 µm). These lithiation-related volume changes of the anodes are not expected to affect IBA since it is not sensitive to distances.
Experimental Setup
After sample preparation, the charged samples are sealed in glass bottles and sent to the post-mortem analysis facility [35]. A few weeks pass between both steps; hence an equilibration of the lithium profiles is possible. For analysis, 2960 keV protons with ~5 nA ion current and 10 µC doses are applied with 2 mm diameter beam spots positioned in the anode centre, see Figure 4. For the lateral scan along one sample, 0.9 mm diameter spots with 4 µC of ion dose are applied. The 11 keV FWHM resolution RBS detector is used to obtain the RBS and NRA spectra. The spectra are analysed using SimNRA 7.03 [51] together with the Paneta cross-sections for lithium [45] and RBS or SigmaCalc [52] cross-sections for all other elements. The depth profile analysis employed six layers arranged to cover the full IBA analysis range as outlined in Section 2. Their composition is optimised using MultiSIMNRA [53]. Here, two individual fitting regions are set for the elastic and the inelastic parts of the spectrum. The inelastic part is weighted with a factor of 100 in order to equalize its importance for the χ² optimizer with that of the elastic region, which has intrinsically more counts. The factor 100 is chosen according to the rough ratio of counts per channel in the two regions. Figure 5 shows an exemplary fit following this procedure, resulting in χ² = 87.4 for a spectrum with 3.5 × 10⁶ counts in the elastic and ~90,000 counts in the inelastic region. The high counting statistics result in negligible statistical measurement uncertainties and relative uncertainties of ~5% between different samples, dominated by uncertainties of Particle×Sr, detector live-time, and fitting quality. Absolute uncertainties relate mostly to the Particle×Sr value derived from the RBS part and the nuclear reaction cross-section data. These uncertainties sum up to ~7%. Figure 6 demonstrates the depth profiles obtained from this fit for the 50% SoC sample (a) and the comparison of the Li concentration between all six samples (b). The sample thickness is evaluated by assuming a stack of six layers with increasing thickness towards the sample backside. We see 7.7 at.% Si, 7.6% O, 14% Li, and the remaining 70.7% C in this sample. With the lower Li content in the other samples, the other elements increase their share with constant ratios within uncertainties. The 0% SoC sample features 8.6 at.% Si, 8.6% O, 0.2% Li, and the remaining 82.6% C. The detection of H is not directly possible with the applied method, but it can be detected indirectly as a missing element for a consistent sample composition. Since RBS, NRA, and PIXE can actively detect practically all other elements such as N, C, O, or Si, it is highly probable that the missing fraction can be attributed to H. Nevertheless, this indirect nature significantly increases the uncertainty of the H concentrations shown in Figure 6 in comparison to the other elements. The measurements show a few % H in the samples, with most of it sitting near the surface in the reaction layer.
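The region weighting can be sketched as follows, assuming Poisson-weighted residuals; the actual MultiSIMNRA objective may differ in detail, so this only illustrates how the factor of 100 equalizes the two regions:

```python
import numpy as np

def weighted_chi2(measured, simulated, elastic_idx, inelastic_idx, w=100.0):
    """Chi-square with the inelastic (NRA) window up-weighted by w."""
    r2 = (measured - simulated) ** 2 / np.maximum(measured, 1)  # Poisson weights
    return np.sum(r2[elastic_idx]) + w * np.sum(r2[inelastic_idx])
```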
Results
In all cases except the 10% case, the Li concentration remains mostly constant apart from a thin surface reaction layer. The 10% case shows a slight dip in the Li concentration of 2 at.% starting after the surface reaction layer, with ~3.2% in the remaining regions. It remains unclear whether the increased Li concentration close to the surface relates to the intercalation process (since the Li infiltrates the anode from this side, a maximum concentration could be expected here), to air exposure (since H and O are also higher in this region), or to the formation of a solid-electrolyte interface (SEI). Figure 7 compares the average Li concentration, excluding the first 40 × 10²² at./m², and the ratio of the maximum Li concentration to this average. Table 2 compares the injected amount of Li derived from the charged capacity with the total amount of Li found by IBA. Interestingly, we find a mostly constant ratio of maximum to mean Li concentration of 1.23. The Li depth profile of the 0% SoC sample is constant within the somewhat larger uncertainties, with no pronounced surface peak/maximum. Equation (1) depicts the resulting linear fit of Figure 7a with R² > 0.999.
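A hedged illustration of the linear fit behind Equation (1); the SoC/concentration pairs below are read approximately from the text (0.2 at.% at 0%, ~3.2 at.% at 10%, 14 at.% at 50% SoC) rather than the authors' full six-sample dataset:

```python
import numpy as np

soc = np.array([0.0, 10.0, 50.0])        # % SoC (subset, illustrative)
li_at_pct = np.array([0.2, 3.2, 14.0])   # bulk Li concentration in at.%
slope, intercept = np.polyfit(soc, li_at_pct, 1)
print(f"Li [at.%] = {slope:.3f} * SoC [%] + {intercept:.2f}")
```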
The Si depth profiles are shown in Figure 8. Similar profiles are obtained for O. The results indicate a 20% enrichment of SiOx at the surface (up to 30 × 10²² at./m² depth), although the results scatter somewhat due to inaccuracies in the data fitting in this region, and the mass resolution of the given RBS setup cannot distinguish between Si and Al enrichment. The O concentration is also higher near the separator side.
The lateral homogeneity of the 50% sample is checked using a radial line scan of 10 points with ~0.8 mm step length. The lateral analysis yields no significant variations of the layer composition within uncertainties. The observed copper signal suggests slight variations in the anode layer thickness, but the roughness and the thickness of the anode layer being close to the IBA range make quantification impossible. Figure 9 shows representative SEM pictures of the separator-side sample surface after lithiation. The anodes feature significant porosity and consist of SiOx and C particles. A negligible amount of sub-µm-sized Si-containing fibres is found, but these originate from the applied fibre-glass separator. A histogram analysis of the number of pixels with light and dark grey levels yields about 15% surface coverage by SiOx particles and 80% by graphite, with the remainder related to dark areas of the porosity. The accuracy of this approach is limited due to the high porosity and the grey levels it introduces into the image. The result agrees quite well with the educt mixture, but it can neither support nor disprove the observed ~20% Si surface enrichment. Figure 10 extends this surface investigation using a focussed ion beam (FIB) cut through the anode, and Figure 11 shows a cross-cut. The porosity extends through the anode depth. The given FIB cross-sectional area only features 11 SiOx grains, hardly allowing for any statistically sound profile information. The cracked surface offers more statistics, but the grey-scale and EDX analyses show no clear depth profile or trend. The natural roughness of a cracked surface limits the accuracy of this approach, though. The EDX analysis shows a surface-near (few µm) presence of Al, but the sample structure prevents a quantification. This supports the assumption of an Al signal overlap with the Si signal in the above RBS analysis, but due to the missing quantification of the EDX data it remains unclear whether the quantity of Al is sufficient to explain the apparent enrichment of Si near the separator interface.
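A minimal sketch of the grey-level histogram classification, assuming an 8-bit SEM image and illustrative thresholds (the actual thresholds depend on imaging conditions):

```python
import numpy as np

def coverage(image, t_pore=50, t_siox=180):
    """Classify pixels as pore (dark), graphite (mid), or SiOx (light)."""
    pore = np.mean(image < t_pore)
    siox = np.mean(image >= t_siox)
    return {"pore": pore, "graphite": 1.0 - pore - siox, "SiOx": siox}
```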
IBA Method
The study used MeV ion-beam analysis (IBA) to investigate the lithiation depth and lateral profiles of a SiOx-containing graphite active material lithiated in a coin cell. The problems observed in mixed cathode and electrolyte analysis of significant immobile lithium, which deteriorates the accuracy for the mobile lithium fraction [40], are not relevant for the analysis of anodes/anode materials presented here. The low immobile Li background of 0.2% is negligible for the overall signal beyond a few % SoC. The lithiation degree was successfully determined over the total sample thickness of about 44 µm. The IBA data for Li concentrations are proportional to the electro-chemically injected charge (Ah) at all SoC (Table 2). All samples showed a reduction of the Li concentration with depth at the separator side, proving the depth-resolved analysis of the lithiation degree using the selected ⁷Li(p,α₀)⁴He reaction. Calculations of the irradiation-induced changes show the possibility to apply orders of magnitude higher ion doses and currents without relevant changes to the sample structure or temperature, confirming the non-destructive nature of IBA as supported by the visual inspection after analysis. Only in situations requiring µm-sized spots could the local heat load become a critical factor. The good counting statistics observed in our data leave room for reducing ion current and doses to <<10 µC without a significant impact on the total uncertainty of 5%, to further mitigate such problems.
In conclusion, our results indicate no fundamental limitation for the application of IBA to lithium and lithium-ion batteries. Besides the Li-based examples shown here, IBA can be used to analyse any cell type and component, such as sodium-based cells, metal anodes, liquids, solids, and powders, without major methodical changes. The analysis range is limited due to fundamental constraints of the nuclear reactions and the product detection. For ⁷Li(p,α₀)⁴He, we obtained a 46 µm range for Li in the carbon-based anode material, which is supported by calculations and by finding identical amounts of Li ions in the anodes through lithiation (mAh/cm²) and IBA. The calculations show a perspective for even higher ranges when using deuterons or IBA methods other than NRA. The missing cross-section data should be determined for those options as soon as possible. In the future, the experiments will be extended towards an in-situ analysis of the lithiation depth profiles during (dis-)charging for revealing the lithium kinetics and the migration of other elements during cell operation. The current data suggest the possibility of a time resolution of 5-10 min (up to 12 C-rate) at a relative accuracy of ~2%.
Si-Doped Anodes for Li Batteries
The amount of injected charge recalculated from mAh to Li-ions shows a good agreement with the amount of Li atoms found in the samples by IBA. Combining the knowledge of the maximum loading of Si and C with the Si and C quantities determined by IBA allows determining the anode's SoC with a method independent of the electro-chemistry. The analysis revealed C/Li = 5.4 (when neglecting Si) and Li/Si = 1.8 (when neglecting C) at 50% SoC (compared to Li/Si ≤ 3.75 for pure Si). IBA cannot disentangle whether the Li preferably binds to Si/SiOx or C, but since already at 50% SoC the C/Li ratio is lower than the typical limit of C/Li = 6 (LiC₆), the SiOx definitely binds at least a share of the injected Li at the highest investigated lithiation degree, as would be expected from electrochemistry. The presence of inactive C from the binder is not even considered here. We can further investigate this question by adding information from the electro-chemical analysis. Extrapolating the Li loading ratios with the given C/Li = 6 limit and the electro-chemically determined capacity of 622 mAh/g, we obtain a limit for Li storage in the used SiOx of Li/Si = 1.04 ± 0.05. This Li uptake of ~1:1 in the used SiOx is about four times lower than the value of Li/Si = 3.75 for pure Si, but still represents a 67% increase in the specific capacity of the investigated C/Si ≈ 9 (IBA elemental ratio) material compared to pure graphite (372 mAh/g). The depth analysis revealed a constant Li enrichment factor at the anode-separator interface. The gradient consistently shows a factor of 1.23 when comparing the surface-near maximum with the in-depth average, independent of the SoC (up to the investigated 50% SoC). The 0% SoC sample (which was not installed into a cell) features a constant Li depth distribution, indicating that the surface-near maximum in the other samples relates to their installation into the coin cell and/or the charging process. The gradient extends from the separator interface to ~6 µm depth in the anode, a depth corresponding to the centroid of the SiOx particle size distribution. Since the samples are exposed to air for installation into the IBA device, this enriched region could also originate from air reactions at this side. In that case, however, we would expect a constant fraction of Li related to the presence of, e.g., a LiOH layer, not the observed SoC-dependent concentration. The formation of a solid-electrolyte interface (SEI) also cannot explain this effect. Assuming a constant porosity through the depth, the SEI present on the internal surfaces would add a constant contribution to the Li depth profile, not only a surface-near contribution, since it grows equally throughout the anode depth. Another possible explanation is a higher Si concentration near the surface, to which the Li would preferentially bind. This could originate from the different particle sizes of graphite and SiOx (a factor of ~2.6), which is known to induce particle size segregation, but the SEM data are inconclusive on this particle depth distribution due to the limited particle-counting accuracy. The preferential binding of Li to Si compared to C is already indicated by the C/Li ratio discussed above, supporting a connection between Si and Li. The data on the Si depth profiles show a significant scatter, but within uncertainties, on average matching Li and SiOx enrichment profiles are found at the separator interface.
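The stoichiometry bookkeeping of this paragraph can be sketched as below, using the layer-averaged 50% SoC fractions quoted earlier (70.7% C, 7.7% Si, 14% Li); since the paper's C/Li = 5.4 derives from the full depth-resolved data, the averaged fractions reproduce it only approximately:

```python
def li_partition(c_frac, si_frac, li_frac, c_limit=6.0):
    """Ratios from atomic fractions; Li beyond the LiC6 limit is assigned to Si."""
    c_li = c_frac / li_frac                          # C/Li neglecting Si
    li_si_all = li_frac / si_frac                    # Li/Si neglecting C
    li_si_min = max(li_frac - c_frac / c_limit, 0.0) / si_frac
    return c_li, li_si_all, li_si_min

print(li_partition(0.707, 0.077, 0.14))  # ~ (5.1, 1.8, 0.3)
```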
Unfortunately, due to the limited atomic mass resolution of the IBA setup, the measurements remain inconclusive as to whether the Si enrichment near the separator side exists or whether there is an Al deposition on the anode, as the EDX analysis suggests. This Al, probably originating from the fibre-glass separator as suggested by the presence of sub-micron glass-fibre fragments on the anode surface, could bind additional Li or even show a particularly beneficial behaviour in combination with Si/SiOx. Lastly, the gradient could originate from the diffusion of Li into the anode with a depth-dependent concentration given by the lithiation rate and temperature, but further measurements under different conditions would be required to confirm this option. Consequently, at least one modification of the anode composition in terms of Al or SiOx concentration induced inhomogeneities in the Li depth profiles and potentially in the local SoC.
"Materials Science",
"Engineering",
"Physics"
] |
Spectroscopic detection of single Pr³⁺ ions on the ³H₄−¹D₂ transition
Rare earth ions in crystals exhibit narrow spectral features and hyperfine-split ground states with exceptionally long coherence times. These features make them ideal platforms for quantum information processing in the solid state. Recently, we reported on the first high-resolution spectroscopy of single Pr³⁺ ions in yttrium orthosilicate nanocrystals via the ³H₄−³P₀ transition at a wavelength of 488 nm. Here we show that individual praseodymium ions can also be detected on the more commonly studied ³H₄−¹D₂ transition at 606 nm. In addition, we present the first measurements of the second-order autocorrelation function, fluorescence lifetime, and emission spectra of single ions in this system as well as their polarization dependencies on both transitions. Furthermore, we demonstrate that by a proper choice of the crystallite, one can obtain narrower spectral lines and, thus, resolve the hyperfine levels of the excited state. We expect our results to make single-ion spectroscopy accessible to a larger scientific community.
Introduction
Rare earth ions are ubiquitous in many technologies such as solid-state lasers, amplifiers for optical telecommunication, and magnetic materials. They have also played a central role in the development of high-resolution laser spectroscopy methods such as hole burning and photon echo [1]. Transitions to the lowest-lying excited states of an isolated rare earth ion take place within its 4f shell and are dipole-forbidden. In a solid matrix, however, these transitions become weakly allowed, with long excited-state lifetimes of up to milliseconds. Moreover, because the 4f intrashell electrons are well shielded by the outer electrons, they are less sensitive to environmental perturbations, resulting in long spin coherence times of the order of hours [2]. In addition, rare earth ions possess transitions at various wavelengths in the visible and near infrared, where optical detectors are very sensitive. Furthermore, they are remarkably photostable even at room temperature and high excitation powers.
The combination of the above-mentioned features makes rare earth ions very appealing for emerging applications in quantum information processing. Here, one would often like to access quantum states of well-defined spin via optical interactions and have the possibility of storing qubits for long times. In fact, a number of interesting effects have recently been demonstrated in rare earth-doped bulk crystals [3-8]. For the ultimate control of quantum states at the level of single material and light particles, however, it is desirable to detect and manipulate single rare earth ions. Interestingly, although rare earth ions have been on the wish list of single-particle microscopy and spectroscopy since the mid-1980s, this goal was only very recently achieved [9-14].
Single emitters can be selected spectrally by tuning a narrow-band laser through the inhomogeneous band of the sample, even if the concentration is higher by a few orders of magnitude. In this scheme, every time an emitter becomes resonant with the excitation light, it fluoresces. This signal can be detected on a low background because the other emitters in the excitation volume do not absorb light at that frequency. In our previous work [12], we employed this method, pioneered by Moerner and Orrit for the detection of single molecules [15,16], to detect single Pr³⁺ ions in yttrium orthosilicate (Y₂SiO₅, YSO). As illustrated in the energy level scheme of figure 1(a), we used the ³H₄−³P₀ transition at a wavelength of 488 nm (blue). The figure shows an example of the excitation spectra of several individual ions over 4 GHz.
Because of limited access to samples with suitable concentration, in that work we used YSO nanocrystals to keep the background fluorescence at a manageable level. The spectral diffusion caused by cracks and surface defects in these samples resulted in a broadening of the resonances from the lifetime-limited linewidth of 82 kHz to several MHz [17,18]. As a result, the hyperfine levels of the excited state could not be resolved. Furthermore, the weak signal of only tens of photon counts per second prevented us from performing various studies, which we now present in this article. In our current work we have increased our signal level beyond 500 counts per second (see figure 1(a)) by using a single photon counting module (SPCM) with a significantly higher quantum efficiency but also a higher dark count rate. This leads to faster measurement times and an overall 1.8-fold improvement of the signal-to-noise ratio. We now provide new spectra recorded in larger YSO crystallites (diameter 1-5 μm) with a praseodymium doping level of 0.0001%, which is 50 times lower than that of the nanocrystals used earlier. As sketched in the inset of figure 2(a), the frequencies of three laser beams detuned by the energy differences of the ground-state hyperfine levels are scanned synchronously. The three peaks in figure 2(a) show that the hyperfine levels of the ³P₀ state of a single ion are resolved. The measured separations of 5.65 and 2.96 MHz are in very good agreement with bulk hole-burning spectra presented in our previous publication [12]. A fit to the spectrum composed of three Lorentzian functions yields a transition linewidth of 1.3 MHz, which is considerably smaller than what we obtained in our former study [12]. Figure 2(b) reveals that although the fast components of the spectral diffusion have been reduced, a certain level of slow diffusion remains on the time scale of minutes. The fact that the dynamics of transitions to all three hyperfine levels are correlated indicates that the diffusion is not caused by spin instabilities. We find it unlikely that the slow resonance frequency variations are caused by temperature changes at the sample because we monitor and maintain the temperature of the cold finger at 4.3 K. In future experiments, we should be able to eliminate spectral diffusion fully by optimizing the YSO crystal size and quality. […] state and, thus, a broader transition, which makes the requirements on the laser linewidth less stringent. Considering that our current state of the art in linewidth is limited to about 1 MHz by spectral diffusion, we decided to set up a new dye laser system (without frequency stabilization) to see whether it would also be possible to detect single ions via the ³H₄−¹D₂ transition. As in [12], sidebands at −10.19 and +17.3 MHz were added to the central excitation laser frequency to prevent shelving of the population in other hyperfine levels of the ground state. Furthermore, we employed a longpass filter designed for 609 nm with a 4 nm edge to suppress the excitation light. Figure 1(b) shows an example of sharp resonances obtained when sweeping the frequency of the excitation laser over 4 GHz, whereas figure 1(c) displays a zoom of the excitation spectrum of a single ion. These measurements were performed in the very same nanocrystal that was used for our previous work [12]. As a result, the hyperfine levels of the ³H₄−¹D₂ transition could not be resolved here.
Antibunching and fluorescence lifetime measurements
A robust proof that the fluorescence signal at each resonance stems from a single emitter can only be provided by the observation of strong antibunching in the second-order autocorrelation function g⁽²⁾(τ) of the emitted photons. Commonly, the short fluorescence lifetimes of molecules, quantum dots, or color centers in the nanosecond regime are comparable with the afterpulsing and dead times of SPCMs. To get around this problem, one uses a Hanbury Brown-Twiss configuration and searches for a lack of coincidences on two detectors within short time intervals. The long lifetimes of the excited states in rare earth ions permit, however, the use of a single SPCM.
First, we present measurements of the fluorescence lifetime decay. To obtain these data, we tuned the frequency of the excitation laser beam to a narrow resonance (see figure 1) and used an AOM to pulse the excitation light with a pulse duration and repetition period of 5 μs and 1.5 ms, respectively. The average laser power was set to achieve 10% of the continuous-wave fluorescence. The fluorescence was detected with the SPCM and the photons were time-tagged with a time-correlated single photon counting system. Figure 3(a) shows the resulting decay for an ion detected on the blue transition […] [19,23]. Figure 3(b) shows the same measurement as in (a) but for an ion detected on the red transition. This decay curve was also fitted with two exponentials, yielding 1/e times of 25 and 277 μs. The long component has a weight of about 96%, which we attribute to the lifetime of the ¹D₂ state. Interestingly, 277 μs is about 1.7 times longer than the value expected for this state in the bulk. This difference might be caused by local inhomogeneities of the environment, in a similar fashion to how single dye molecules show varying fluorescence lifetimes in dielectrics with disorder [24]. Furthermore, the spontaneous emission rate is also sensitive to the local density of states and can be slowed down in subwavelength dielectric geometries [25,26]. Access to such variations at the level of individual emitters is one of the fundamental advantages of single-ion spectroscopy. In our current work, we found that the lifetimes do vary from ion to ion in the nanocrystal. However, a proper quantitative investigation of this effect requires further studies. We also note that the weight of the contribution of the shorter lifetime component only amounts to 4%. We could not localize the source of this weak trace of fluorescence. (Figure 3 caption: g⁽²⁾(τ) normalized by the peak height for the single ions studied in (a) and (b), integrated for 2.5 (blue) and 4 (red) hours, respectively. The afterpulsing probability was ≈0.1%. The detector dead time of 42 ns only affects the first bin and was omitted in the analysis. Since all photons in the time-tagged photon collection were used to calculate correlation events, the data are symmetric about τ = 0 ms. The red lines represent fits to the data, which are used for normalization and the determination of g⁽²⁾(0).)
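A minimal sketch of the two-exponential fit used for these decay curves, with synthetic data built from the quoted 25/277 μs components; real data and initial guesses would of course differ:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0.0, 1500.0, 300)             # delay after the pulse, in us
counts = biexp(t, 0.04, 25.0, 0.96, 277.0)    # synthetic stand-in for data
popt, _ = curve_fit(biexp, t, counts, p0=(0.1, 10.0, 0.9, 200.0))
w_long = popt[2] / (popt[0] + popt[2])        # weight of the long component (~96%)
```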
To generate clean autocorrelation functions, we removed all photons arriving within 7 μs of the leading pulse edge and chose time bins of 5 and 25 μs for the blue and red transitions, respectively. The center bin at τ = 0 was discarded to remove the majority of the SPCM afterpulses. Figure 3(c) shows the resulting photon correlation for excitation via the blue transition. The measured histograms were fitted by a piecewise exponential decay-and-rise function with the baseline, overall peak and zero-peak heights as variable parameters. The data were further normalized to set the baseline and the peaks of the fit to 0 and 1, respectively. We note that the data for the blue and the red transitions were not recorded on the same ion (see section 5). The measurement on the red channel is somewhat noisier because of a lower fluorescence collection efficiency, since the decay into the lowest crystal field level of ³H₄ is filtered out. The resulting values of g^(2)(0) = 0.07 and 0.36 are well below 0.5, confirming that in each case the fluorescence stems from a single ion. More importantly, these measurements show that even though rare earth ions are orders of magnitude weaker emitters than alkali atoms, molecules, quantum dots or color centers, it is possible to study their photon statistics, which is an important quantity for quantum optical investigations.
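A single-detector autocorrelation of this kind can be sketched as below; this is an illustrative reconstruction (function name, binning and toy input are assumptions, not the authors' code), and it relies on the fact that the microsecond excited-state lifetimes far exceed the detector dead time:

```python
import numpy as np

def g2_histogram(arrival_times_us, bin_us, max_tau_us):
    """Coincidence histogram from the time tags of a single SPCM."""
    t = np.sort(np.asarray(arrival_times_us, dtype=float))
    edges = np.arange(0.0, max_tau_us + bin_us, bin_us)
    hist = np.zeros(len(edges) - 1)
    for i, ti in enumerate(t):
        # pair each photon with all later photons inside the window
        j = np.searchsorted(t, ti + max_tau_us, side='right')
        dt = t[i + 1:j] - ti
        hist += np.histogram(dt, bins=edges)[0]
    return edges, hist

# usage on a Poissonian toy photon stream; discard the tau = 0 bin
# (afterpulses) before normalization, as done in the text
times = np.cumsum(np.random.exponential(200.0, 5000))  # mean gap 200 us
edges, hist = g2_histogram(times, bin_us=5.0, max_tau_us=500.0)
hist[0] = np.nan
```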
Polarization dependence
Having demonstrated the feasibility of single-ion spectroscopy on both the ³H₄-³P₀ and the ³H₄-¹D₂ transitions, we asked whether it was possible to address the same ion through both channels. Here, one might hope to identify the same spectral landscapes (see figure 1) within the inhomogeneous bands of the two transitions. However, after an extended effort, we found no correlation between them. Furthermore, we found that spectral holes burnt in a bulk sample at 488 nm could not be addressed by light at 606 nm and vice versa. We, thus, suspected that each ion experiences a different local electromagnetic environment at the two transition wavelengths. Investigations of the polarization behavior of the two excitation channels shed light on this hypothesis.
The blue (top) and red (bottom) maps in figure 4 display the inhomogeneous spectral spread of the ³H₄-³P₀ and ³H₄-¹D₂ transitions, respectively, as a function of the excitation polarization. The measurements were performed by focusing the laser beams a few micrometers below the surface of a bulk crystal without using a solid-immersion lens. We clearly see that the maxima of the two channels are offset by 90°. The orthogonality of the dipole moments associated with the two transitions implies that they are influenced by different components of the local crystal fields and fluctuations [27]. As a result, one cannot expect any correlation between the details of the spectra within the two inhomogeneous distributions.
Emission spectroscopy
Next we turn to the emission spectra of single ions. To record such data, we sent the collected fluorescence to a grating spectrometer equipped with a Peltier-cooled EM-CCD camera. The spectra were integrated over 2 s and accumulated 800 times, whereby we occasionally corrected for drifts between the laser frequency and the ion resonance. The fluorescence background was registered by repeating the measurement with the laser frequency detuned from the ion resonance by 50 MHz. Figure 5 shows the single-ion spectra for the blue (a) and red (b) transitions. Considering the weak fluorescence rate and the strong dispersion of the emitted photons in a grating spectrometer, such measurements require long integration times and, thus, a high degree of spectral stability.
To identify the origins of the individual peaks, we have also recorded spectra from a bulk sample (0.005% Pr doping). Figures 5(c), (d) display these spectra recorded with the optimum excitation polarization for the respective transitions (see section 5) and detection polarizations along the crystal axes D1 (cyan) and D2 (magenta). Each peak is attributed to a transition in agreement with previously published data [19,28]. We find a clear correspondence between the single-ion and ensemble measurements for each spectral feature, although the relative peak heights differ substantially. Here, it is important to note that the emission intensities of different transitions are strongly polarization dependent. In the case of the single-ion spectra, the arbitrary orientation of the host crystallite and the strong polarization-dependent coupling of the emission into the solid-immersion lens make it difficult to quantify the relative peak heights.
Conclusion
In the present work we have reported on the first detection and spectroscopy of single praseodymium ions via the ³H₄-¹D₂ transition, which has been most widely used in ensemble measurements. In addition, we have demonstrated narrower linewidths than the previous report of single-ion spectroscopy [12], allowing us to resolve the hyperfine levels of the ³P₀ state. Furthermore, we have presented several new measurements that are usually desirable at the single-emitter level. These include the measurement of the fluorescence lifetime, the autocorrelation function with strong photon antibunching, as well as emission spectra of single ions. Finally, we have examined the polarization dependence of the excitation and emission channels and have provided evidence that a given ion experiences different local fields and frequency shifts on its various transitions. The findings of this work further fuel the recent emergence of activities on the detection and spectroscopy of rare earth systems at the single-ion level [9][10][11][12][13][14]. | 3,551.8 | 2015-08-10T00:00:00.000 | [
"Physics",
"Materials Science"
] |
Analysis on performances of the optimization algorithms in CNN speech noise attenuator
In this paper, we studied the effect of the weight-coefficient optimization algorithm on the performance of a CNN (Convolutional Neural Network) noise attenuator. This system improves noise-attenuation performance through a deep learning algorithm that uses a neural-network adaptive predictive filter instead of a conventional adaptive filter. Speech is estimated from a single input speech signal containing noise using a 64-neuron, 16-filter CNN and an error backpropagation algorithm, exploiting the quasi-periodic nature of the voiced sections of the speech signal. In this study, to verify the performance of the noise attenuator for each optimization algorithm, a test program using the Keras library was written and training was performed. As a result of simulation, this system showed the smallest MSE value when using the Adam algorithm among the Adam, RMSprop, and Adagrad optimization algorithms, and the largest MSE value with the Adagrad algorithm. This is because the Adam algorithm requires a lot of computation but has an excellent ability to estimate the optimal value by combining the advantages of RMSprop and Momentum SGD.
INTRODUCTION
Noise attenuation removes the noise contained in a speech signal, and various studies have been conducted on noise-attenuation technology so far. Among noise-attenuation methods, there are the spectral subtraction method [1,2] and the Wiener filter method [3,4], both based on short-term spectrum estimation. These methods subtract the spectrum of the noise estimated from the input speech signal, or estimate the clean speech spectrum, and are advantageous when the statistical characteristics of the noise and the speech signal are known in advance. Another approach is to use a comb filter [5] or an adaptive filter [6,7] exploiting the quasi-periodic characteristics of the speech signal. The comb filter method is used for noise occupying a specific frequency band, while the adaptive filter method automatically adjusts the filter coefficients without knowing the statistical characteristics of the noise in advance. A single-input adaptive noise attenuator with one sensor receives a voice signal from one microphone and estimates the voice signal using the quasi-periodic characteristics of the voiced sound sections.
Recently, deep learning models have been making great achievements as technologies for training neural networks with many hidden layers have been developed. By using the error backpropagation algorithm to train multi-layer neural networks, even deep neural networks composed of many layers can be trained [8]. The CNN [9] has proven to be a reliable tool for generalization in real-world noise-attenuation problems [10]. The CNN is the most widely used deep learning model at present and can estimate the characteristics of speech well. In 2016, a model based on SNR (Signal-to-Noise Ratio)-aware CNNs for speech enhancement was published [11]. This CNN model can efficiently process the local temporal and spectral structures of speech, and thus effectively separates speech and noise from the input signal. Two SNR-aware algorithms were proposed using CNNs to improve the generalization ability and accuracy of these models. The first algorithm incorporates a multi-task learning framework: given noisy speech as input, it reconstructs the noise-free speech while also estimating the SNR level. The second algorithm performs SNR-adaptive noise attenuation: it first calculates the SNR level and then, based on the calculated level, selects an SNR-dependent CNN model to reduce the noise. The two proposed SNR-aware CNN models outperform a plain deep neural network. In 2017, a CNN model for complex spectrogram enhancement was proposed to solve the phase-estimation difficulty [12]. The proposed model restores clean real and imaginary spectrograms from noisy spectrograms, which are then used to generate speech with very accurate phase information. The basic idea is that any signal can be represented as a function of its real and imaginary spectrograms.
Optimization algorithms [13,14,15,16] for updating the weight coefficients of CNN filters include Stochastic Gradient Descent (SGD), Momentum SGD, Nesterov momentum SGD, Adagrad, RMSprop, and Adam. In this study, we identify the best-performing algorithm by examining the effect of the optimization algorithm on performance when noise is attenuated using the deep learning algorithm of a CNN neural-network filter instead of the adaptive filter of an adaptive noise attenuator. The remainder of this paper covers the adaptive noise attenuator in Section II, the linear prediction of speech signals in Section III, the structure of the CNN neural-network filter in Section IV, and the weight-coefficient update algorithms in Section V. Section VI describes the simulation of the optimization algorithms and its results, and finally a conclusion is drawn in Section VII.
ADAPTIVE NOISE ATTENUATOR
Figure 1 shows a single-input noise attenuator that estimates the current voice sample from signals delayed by one or more samples, using an adaptive prediction method based on the quasi-periodic characteristics of the voice signal. A speech signal delayed by one or two pitch periods has a high correlation with the current sample, but almost no correlation with the white-noise component. That is, the filter converges so that the speech estimate has the least squared error with respect to the target value, independently of the noise. The output of the CNN filter estimates the speech component contained in the input signal, and this signal is subtracted from the input signal to form the error. This error signal is used to update the weights of the CNN filter, and its average power is given by Equation (1):

E{e²} = E{(s + n − ŝ)²}, (1)

where E{·} denotes the average (expectation), s is the speech signal, n is the noise, and ŝ is the filter output. Assuming that the speech signal and the noise are independent of each other,

E{e²} = E{(s − ŝ)²} + E{n²}. (2)

Since the noise energy E{n²} in an arbitrary section is a fixed value, minimizing E{e²} amounts to minimizing the speech estimation error E{(s − ŝ)²}, and the filter output ŝ then estimates the speech signal best. Therefore, the minimization of E{(s − ŝ)²} also means minimizing E{(e − n)²}, so that the error signal becomes an estimate of the noise.
LINEAR PREDICTIVE CODING ANALYSIS OF SPEECH SIGNAL
Linear predictive coding (LPC) analysis is a method used in various fields such as speech analysis and synthesis, and can accurately express the characteristics of the speech spectrum with a relatively small number of parameters. Assuming that the speech sample at discrete time n is s(n), and the predicted value of the speech sample at time n is ŝ(n), the prediction can be expressed as Equation (4) [17]:

ŝ(n) = Σ_{i=1}^{p} a_i s(n − i). (4)
Therefore, the present value of the voice signal can be predicted from its previous p values using Equation (4). Denoting the prediction error, i.e., the difference between the actual input value and the predicted value, by e(n), it can be expressed by Equation (5):

e(n) = s(n) − ŝ(n) = s(n) − Σ_{i=1}^{p} a_i s(n − i). (5)
Here the a_i are the linear prediction coefficients, which are calculated so that the mean squared value of e(n) is minimized. In this paper, the speech signal is estimated using the characteristic low-frequency spectral structure of voiced sounds.
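A least-squares solution of Equation (5) can be sketched in a few lines of Python; the helper below is illustrative (the function name, the toy signal and the order p are our assumptions, not part of the system described in this paper):

```python
import numpy as np

def lpc_coefficients(s, p):
    """Solve for a_1..a_p minimizing the mean squared prediction error."""
    N = len(s)
    # Row n of X holds the past samples [s(n-1), ..., s(n-p)] for n = p..N-1.
    X = np.column_stack([s[p - i - 1:N - i - 1] for i in range(p)])
    y = s[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

# usage with a toy quasi-periodic signal
t = np.arange(2000)
s = np.sin(2 * np.pi * 0.01 * t) + 0.05 * np.random.randn(2000)
a = lpc_coefficients(s, p=10)
# one-step prediction: s_hat(n) = sum_i a_i * s(n - i)
s_hat = np.convolve(s, np.concatenate(([0.0], a)), mode='full')[:len(s)]
```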
STRUCTURE OF CNN NEURAL NETWORK FILTER
The neural-network filter in Fig. 2, used in this paper, has a three-layer structure with 16 CNN filters. The first (CNN) layer consists of 64 neurons and 16 feature filters; the kernel size is 16 samples and a kernel is applied at every sample interval (stride 1). The input for every output sample is a window of 64 samples, and ReLU is applied as the activation function at the output. The output of the CNN layer is flattened into one dimension by the following Flatten layer, yielding 49×16 = 784 nodes. These 784 signals are input to a fully connected neural network (FNN) hidden layer with 64 neurons, where the ReLU function is applied again; the last FNN layer then combines the 64 hidden outputs into a single output signal. To reduce the amount of computation, the batch size was set to 30 and the bias parameter of each layer was omitted. The weight parameters of this model number 256 (=16×16) in the CNN layer, 50,176 (=784×64) in the hidden layer, and 64 in the output layer, for a total of 50,496. The weights are updated with the Adam and error backpropagation algorithms. This system is classified as supervised learning and prepares training data and learning target values from the single input data.
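The architecture above can be written compactly in Keras; the following is a minimal sketch consistent with the stated layer sizes and parameter counts (the function name and the input shape of (64, 1) are our assumptions):

```python
from tensorflow.keras import layers, models

def build_cnn_filter():
    # Window of 64 past samples in, one predicted sample out; no biases.
    model = models.Sequential([
        layers.Input(shape=(64, 1)),
        layers.Conv1D(filters=16, kernel_size=16, strides=1,
                      use_bias=False, activation='relu'),  # 16*16 = 256 weights
        layers.Flatten(),                                  # 49*16 = 784 nodes
        layers.Dense(64, use_bias=False, activation='relu'),  # 784*64 = 50,176
        layers.Dense(1, use_bias=False),                   # 64 weights
    ])
    return model

model = build_cnn_filter()
model.summary()  # total parameters: 50,496, matching the text
```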
THE ALGORITHM FOR WEIGHT COEFFICIENTS UPDATING
A multi-layer perceptron has the structure of a multi-layer neural network with one or more hidden layers; the bias terms are omitted here. Let the weighted sum input to the j-th hidden neuron be α_j, the weighted sum input to the k-th output neuron be β_k, and let the hidden neurons use the ReLU activation function denoted by φ, while the output neurons use no activation function. Then the output values of the hidden and output neurons can be expressed by Equations (6) and (7):

z_j = φ(α_j) = φ(Σ_i w¹_{ji} x_i), (6)

y_k = β_k = Σ_j w²_{kj} z_j. (7)
If we denote all weights collectively by the parameter θ, the value of the k-th output neuron for a given input x can be written as a function f_k(x, θ).
The error backpropagation learning algorithm [18] is the algorithm for training the multi-layer perceptron. In supervised learning of the multi-layer perceptron, an error function is defined as the difference between the values output by the network and the given learning target values. When the learning data and target output values are given as input-output pairs (x_n, t_n) (n = 1, ⋯, N), the error over the whole learning data set X can be defined as the mean square error:

E(X, θ) = (1/N) Σ_{n=1}^{N} ‖f(x_n, θ) − t_n‖².
In the above equation, the error function E(X, θ) takes a single value for a given data set X and parameter vector θ. The data set X is given from outside, and the target of optimization is θ; the error can therefore be written as E(θ). The backpropagation learning algorithm uses the gradient descent method to find the parameters that minimize the error function E(θ). Gradient descent is an algorithm that iteratively finds the parameter values minimizing a cost function:

θ(t+1) = θ(t) − η ∇E(θ(t)).
where η is the learning rate that controls the speed of learning. In the multi-layer perceptron, backpropagation learning applies stochastic gradient descent, updating the weights for each data point using the per-sample error function E(x_n, θ).
In the above equations, the weights W² between the hidden layer and the output layer and the weights W¹ between the input layer and the hidden layer are the parameters to be corrected through learning.
As another method, the momentum method updates the parameters with a velocity term that combines the current gradient with a fraction of the previously used update:

v(t) = γ v(t−1) − η ∇E(θ(t)), θ(t+1) = θ(t) + v(t).
Here, γ is the reflection (momentum) coefficient with 0 < γ < 1. The Nesterov momentum method predicts the direction of movement before calculating ∇E(θ), and evaluates the gradient after moving in that direction in advance:

v(t) = γ v(t−1) − η ∇E(θ(t) + γ v(t−1)), θ(t+1) = θ(t) + v(t).
In neural-network training, if the learning-rate value is too small, learning takes a long time; if it is too large, learning does not proceed properly. A simple way to address this is the learning-rate decay method, which lowers the learning-rate value of all parameters over time. The Adagrad method instead adjusts the learning rate according to the number of updates of each weight, giving larger changes to parameters that have changed less:

G(t) = G(t−1) + (∇E(θ(t)))², θ(t+1) = θ(t) − η ∇E(θ(t)) / (√G(t) + ε).
where ε is a very small value that prevents division by zero. G(t) continuously accumulates the squares of the gradients. Among the elements of the parameter vector, significantly updated elements receive a lower learning rate, applied differently to each element. However, as learning is repeated, the accumulated squared-gradient value keeps growing, so the update strength weakens and eventually approaches zero at some point; the disadvantage is that learning may then not proceed any further.
RMSprop changes G(t), which in Adagrad is obtained by summing the squared gradients, to an exponential moving average instead of a sum. Unlike in Adagrad, G(t) then does not grow indefinitely, and the relative magnitudes of recent changes between variables are maintained:

G(t) = γ G(t−1) + (1 − γ) (∇E(θ(t)))².
where γ is called the decaying factor and has a value of 0.9 to 0.999. In Adagrad, since G(t) is defined as the sum of all changes up to the current time, it increases as time passes and the learning rate keeps decreasing. In RMSprop, however, G(t) is defined as the exponential average of the previous and current changes, so a sudden decrease of the learning rate is prevented.
Finally, Adam is an optimizer created by combining RMSprop, which adapts the learning rate, and Momentum, which modifies the update path. Like Momentum, it stores an exponential average of the gradients calculated so far, and like RMSprop it stores an exponential average of the squared gradients:

m(t) = β₁ m(t−1) + (1 − β₁) ∇E(θ(t)), v(t) = β₂ v(t−1) + (1 − β₂) (∇E(θ(t)))²,

θ(t+1) = θ(t) − η m̂(t) / (√v̂(t) + ε),

where m̂ and v̂ are the bias-corrected moment estimates.
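The three update rules compared in this paper can be summarized in a few lines of NumPy; the snippets below are illustrative sketches of the textbook rules above (the default hyperparameters are common choices, not values prescribed by this paper):

```python
import numpy as np

# g is the gradient dE/dtheta at the current step; lr is the learning rate eta.
def adagrad_step(theta, g, G, lr=0.01, eps=1e-8):
    G = G + g * g                        # accumulated squared gradients
    return theta - lr * g / (np.sqrt(G) + eps), G

def rmsprop_step(theta, g, G, lr=0.001, gamma=0.9, eps=1e-8):
    G = gamma * G + (1 - gamma) * g * g  # exponential moving average
    return theta - lr * g / (np.sqrt(G) + eps), G

def adam_step(theta, g, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    # t is the 1-based step counter used for bias correction.
    m = b1 * m + (1 - b1) * g            # momentum-like first moment
    v = b2 * v + (1 - b2) * g * g        # RMSprop-like second moment
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```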
RESULTS OF SIMULATIONS
In this study, a simulation program was written using the Keras library to verify the performance of the noise attenuator for each optimization algorithm. The input signal, a mixture of voice and noise, was sampled at 8 kHz and consisted of 300,000 samples (37.5 s). Since this system corresponds to supervised learning, the input data are internally composed of an input array of 64×499,901 samples and 499,901 target values. To evaluate the performance of the system, the mean square error (MSE) between the target value (the input signal) and the speech prediction was used, and the MSE curves were compared. Figure 3 shows the MSE curves for the Adam, RMSprop, and Adagrad algorithms when the SNR is 20 dB; the Adam curve is black, the RMSprop curve blue, and the Adagrad curve red. The figure shows that, as the updates progress, the MSE decreases rapidly at first and then decreases gradually from about 3,000 batches onward. The Adam algorithm gives the smallest MSE and the Adagrad algorithm the largest. Next, Figure 4 shows the MSE curves for the three algorithms when the SNR is 10 dB, with performance similar to the 20 dB case: the Adam algorithm performs best and the Adagrad algorithm worst. Figure 5 shows the MSE curves when the SNR is 5 dB, i.e., with more noise mixed in. The performance differences between the algorithms are smaller than in the previous figures, but the Adam algorithm is still the best and the Adagrad algorithm the worst. Finally, Figure 6 shows the MSE curves for heavily mixed noise at an SNR of 1 dB. In this case, the MSE increased significantly no matter which algorithm was used; when the noise is this large there is almost no difference between the algorithms, the performance is poor, and the noise is not removed well. In addition, Table 1 summarizes the MSE values for each optimization algorithm. From this table, it can be seen that as the SNR decreases the MSE increases, and when the SNR reaches 1 dB almost no noise is removed. The Adam algorithm performs best, the RMSprop algorithm is slightly inferior to Adam, and the Adagrad algorithm performs worst.
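A comparison of this kind can be reproduced with a short Keras loop; the sketch below is hypothetical (the arrays x and y stand in for the windowed input and target samples described above, and build_cnn_filter is the model sketch from Section IV):

```python
import numpy as np

# Placeholder data standing in for the windowed speech-plus-noise samples.
x = np.random.randn(1000, 64, 1).astype('float32')
y = np.random.randn(1000).astype('float32')

histories = {}
for name in ['adam', 'rmsprop', 'adagrad']:
    model = build_cnn_filter()               # sketch from Section IV
    model.compile(optimizer=name, loss='mse')
    h = model.fit(x, y, batch_size=30, epochs=1, verbose=0)
    histories[name] = h.history['loss']      # MSE curve per optimizer
```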
CONCLUSIONS
In this paper, the effect of the optimization algorithm on the performance of a noise attenuator using CNN deep learning technology was investigated. The noise attenuator was implemented using a 64-neuron, 16-filter CNN and an error backpropagation algorithm. The model was coded using the Keras library, and the change of the MSE value according to the optimization algorithm was observed. As a result of simulation, this system showed the smallest MSE value when using the Adam algorithm among the Adam, RMSprop, and Adagrad optimization algorithms, and the largest MSE value with the Adagrad algorithm. This is because the Adam algorithm requires a lot of computation but has an excellent ability to estimate the optimal value by combining the advantages of RMSprop and Momentum SGD. | 3,833.2 | 2021-12-15T00:00:00.000 | [
"Computer Science"
] |
Hot Corrosion Behavior of a Powder Metallurgy Superalloy Under Gas Containing Chloride Salts
The hot-corrosion behavior of a powder metallurgy superalloy (Alloy 1) under gas containing chloride salts at 700 °C, 750 °C and 800 °C was investigated via weight-gain measurements, X-ray diffraction (XRD), scanning electron microscopy (SEM), and electron probe micro-analysis (EPMA). Hot-corrosion tests of a similar alloy (Alloy 2) at 800 °C under the same conditions were also carried out for comparison. The experimental results showed that the average mass gain of Alloy 1 increases as the temperature rises. The corrosion kinetics followed a linear power law at 700 °C, 750 °C and 800 °C. The corrosion layers obtained after 100 h of hot corrosion were mainly composed of Cr2O3, TiO2, Al2O3, NiO and NiCr2O4. The cross-sectional morphologies and corresponding elemental maps indicated that the chloride salts penetrated into the corrosion product and caused many cavities and cracks. According to these results, the hot corrosion of Alloy 1 under gas containing chloride salts is confirmed to be an accelerated oxidation process due to the damage to the integrity of the oxide film caused by continuous corrosion with chloride salts. Compared to Alloy 2, the increased Co and Al content of Alloy 1, which shows better hot corrosion resistance at 800 °C, promoted the rapid formation of continuous Cr2O3 and Al2O3 protective films on the alloy surface, in which Co inhibited the internal oxidation of Al through the third-element effect.
Introduction
Nickel-based superalloys, consisting of a continuous disordered γ matrix and ordered γ′ precipitates, are widely used in aircraft turbine engines and subjected to a highly aggressive environment, in which both good mechanical properties and high oxidation resistance are required [1][2][3]. Powder metallurgy superalloys have quickly become the preferred material for advanced aero-engine disks owing to their uniform structure, absence of macro-segregation, high yield strength and decent fatigue performance. At the same time, among current turbine manufacturing technologies, powder metallurgy is regarded as a mature and reliable method for high-performance turbine disks. In order to improve the service temperature and comprehensive performance of powder metallurgy superalloys, the third generation of powder metallurgy superalloys, e.g., Alloy 1 (its composition is listed in Table 1), was invented by the Beijing Institute of Aeronautical Materials on the basis of the second generation, e.g., Alloy 2 (Table 1). The main difference between the two alloys lies in the contents of Co, Al, Cr, etc. Alloy 1 is the key material for ensuring the high performance and reliability of the most advanced aero-engines.
Normally, the oxidation/corrosion resistance of Ni-based superalloys is primarily determined by the content of Cr/Al due to the thermally induced growth of continuous protective Cr2O3/Al2O3 scales [4][5][6][7]. Compared to Cr2O3 scales, Al2O3 scales provide superior oxidation resistance at temperatures above 871 °C [8,9]. However, Cr2O3 exhibits better hot-corrosion resistance to sulfates than Al2O3, because Cr-rich (γ) phases tend to be more resistant to sulfate-induced corrosion and Cr2O3 is able to establish protective behavior faster than Al2O3 [10][11][12]. Co improves mechanical properties at high temperature, but the effects of Co on oxidation behavior are unclear, as different researchers have reached differing conclusions. For example, Choi et al. [13] indicated that the addition of Co could slightly increase the isothermal oxidation resistance of a Ni3Al-base alloy but decreased the cyclic oxidation resistance at 1000-1200 °C. Weiser et al. [14] reported that the replacement of Ni by Co in a Ni-9Al-8W-8Cr (at.%) alloy enhanced the oxidation resistance at low temperature (≤ 850 °C). Ismail et al. [15] indicated that an increase in Co concentration decreased the oxidation performance of Co-Ni base superalloys at 800 °C. Therefore, the effect of Co on oxidation resistance is complex and may be affected by other elements, environments, and temperatures. However, the addition of Co can improve the hot-corrosion resistance of Ni-based superalloys [13,16,17].
In marine environments, gas hot corrosion caused by chloride salts can significantly reduce the service life of powder-metallurgy turbine disks [18]. Hot corrosion became a topic of important and popular interest in the late 1960s as gas turbine engines of military aircraft suffered from corrosion during operation over seawater [19][20][21][22][23][24][25][26]. For wrought and cast superalloys, hot corrosion has been studied extensively [15,[27][28][29][30][31][32][33]. However, there are few studies on the high-temperature corrosion of powder metallurgy superalloys, for which the mechanism is still not clear. At present, investigations of Alloy 1 are mainly concentrated on microstructure, preparation process and mechanical properties, while the resistance and mechanism of gas hot corrosion of Alloy 1 remain unclear.
In this study, gas hot corrosion tests at different temperatures were performed to obtain the corrosion kinetics of Alloy 1 and evaluate its corrosion resistance. Phase constitutions of the corrosion layers were detected by X-ray diffraction (XRD). Surface and cross-sectional morphologies as well as elemental distributions in the corrosion layers were observed by scanning electron microscopy (SEM) and electron probe micro-analysis (EPMA), respectively. Since the service temperature of turbine disks made from Alloy 1 will reach 800 °C, gas hot corrosion tests of Alloy 2 at 800 °C were also carried out to compare the hot corrosion resistance at this temperature. The experimental results were analyzed to uncover the hot-corrosion mechanism and promote a better understanding of the corrosion resistance of Alloy 1.
Specimens Preparation
The chemical compositions of Alloy 1 and Alloy 2 were analyzed and listed in Table 1. Specimens with dimensions of 30 mm in length, 10 mm in width and 2 mm in thickness were used. Before gas hot corrosion tests, specimens were ultrasonically cleaned in acetone and ethanol, then dried at 120 °C for 1 h.
Gas Hot Corrosion and Oxidation Tests
The gas hot corrosion tests were carried out with reference to China Aviation Standard HB 7740-2004, the standard most widely used by the Chinese aviation industry to evaluate the corrosion resistance of superalloys and high-temperature protective coatings. The principle of the gas hot corrosion test is to use special test equipment (Fig. 1) to form a corrosive gas through the continuous mixed combustion of compressed air, atomized aviation fuel and artificial seawater; the gas is then sprayed onto the surface of the specimen to cause corrosion. The equipment is an atmospheric low-speed gas corrosion test device, which allows accurate control of the test parameters over a wide adjustment range. It can well simulate the temperature, corrosion medium and thermal cycling of the working environment of turbine components. According to standard HB 7740-2004, one cycle consists of continuous corrosion for 55 min at the test temperature and cooling outside the furnace for 5 min. Chloride salts in artificial seawater, mainly NaCl and KCl, were used in the present paper, since powder metallurgy superalloys suffer severe hot corrosion from high-temperature gas containing chloride salts during operation over seawater.
The original artificial seawater containing chloride salts was prepared according to the composition in Table 2. Specimens were inserted in the gas hot corrosion test equipment for corrosion of 25 h, 50 h, 75 h, 100 h at 700 °C, 750 °C and 800 °C. Afterward, specimens were air cooled to room temperature and weighed using an electronic balance with an accuracy of 0.0001 g. Oxidation of 100 h in air at 700 °C, 750 °C and 800 °C was also carried out for comparison.
To calculate the corrosion rate at each temperature, at least three specimens tested for 100 h were washed with an alkali of 40% NaOH + 60% Na2CO3 at 500 °C, removing the corrosion products until the specimen surface was metallic. The weight of the corrosion products was calculated by subtracting the weight after alkali washing from the original weight of the specimen, from which the average corrosion rate was obtained.
Characterization
Phase constitutions of the corrosion layers were determined by XRD. Cross-sectional specimens were then embedded in epoxy, ground with water-based sandpapers of different particle sizes, and finally polished to a mirror-like finish with a polishing agent. SEM and EPMA were employed to investigate the cross-sectional morphologies and the elemental distributions in the corrosion layers.
Results and Discussion
Corrosion Kinetics and Characterization

Figure 2 illustrates the corrosion kinetics of Alloy 1 at 700 °C, 750 °C and 800 °C. It is clear that the mass gain increases gradually as the time extends and the temperature rises. It is suggested that the mass gain of a Ni-based superalloy during an isothermal oxidation process follows a relationship of the form [34][35][36]

(ΔW)ⁿ = Kt, (1)

where ΔW is the mass gain per unit area, n is the rate exponent, t is the exposure time at a particular temperature, and K is the isothermal rate constant. Mohanty and Shores [37] confirmed that the corrosion kinetics of high-alloy stainless steels can also be described by Eq. (1). Moreover, they found that the corrosion kinetics follow a square power law at low temperature and a linear one at high temperatures. According to Fig. 2, it is clear that the corrosion kinetics follow a linear power law (n = 1) at 700 °C, 750 °C and 800 °C. Thus, the isothermal rate constants K can be obtained by regression analysis (see Table 3). In addition, the correlation coefficients R of the regression analyses are nearly equal to 1, indicating that the linear power law is applicable to Alloy 1. Figure 3 illustrates the oxidation/corrosion rates of Alloy 1 at different temperatures in air and in gas containing salts. It is evident that the rate of Alloy 1 during oxidation in air is much lower than that during gas hot corrosion, the former being only about one tenth of the latter, which implies that the corrosion layer is not protective.
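For concreteness, the regression used to extract K under the linear law can be sketched as follows; the mass-gain values below are illustrative placeholders, not measured data from this study:

```python
import numpy as np
from scipy import stats

# Hypothetical mass gains (mg/cm^2) after 25, 50, 75 and 100 h of exposure.
t = np.array([25.0, 50.0, 75.0, 100.0])
dW = np.array([0.21, 0.43, 0.62, 0.85])

# Under the linear law (n = 1), K is the slope of delta_W versus t.
res = stats.linregress(t, dW)
print(f"K = {res.slope:.4f} mg/(cm^2 h), R = {res.rvalue:.4f}")

# For a general exponent n, fit log(dW) against log(t):
# log(dW) = (1/n) log t + (1/n) log K, so n = 1 / slope.
log_res = stats.linregress(np.log(t), np.log(dW))
n_est = 1.0 / log_res.slope
```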
Figure 4 illustrates the corrosion rates of Alloy 1 and Alloy 2 at different temperatures. It is clear that the corrosion rates of Alloy 1 are significantly lower than those of Alloy 2 at 800 °C, indicating that Alloy 1 has superior hot-corrosion resistance at this temperature.

Surface Morphologies and Phase Constitutions of Corrosion Layer

Figure 5a illustrates the surface morphology of Alloy 1 after gas hot corrosion for 100 h at 700 °C. Fine particles of corrosion products cover almost the entire surface of the specimen, indicating that the corrosion products have not yet formed a film on the surface of Alloy 1. Figure 5b shows the XRD spectra of Alloy 1 after hot corrosion for 25 h and 100 h at 700 °C. The corrosion product after 25 h is Cr2O3, while NiO, TiO2 and NiCr2O4 appear after 100 h. This confirms that the types of corrosion products change during 100 h of hot corrosion. Furthermore, the corrosion products demonstrate that the hot-corrosion process is mainly oxidation at this temperature. Figure 6a depicts the surface morphology of Alloy 1 after hot corrosion for 100 h at 750 °C; the corrosion products have formed a dense film on the surface. Figure 6b also confirms that the hot-corrosion process is essentially oxidation. Figure 7a depicts the surface morphology of Alloy 1 after hot corrosion for 100 h at 800 °C, where the corrosion film has become loose and porous. Figure 7b shows the XRD spectra of Alloy 1 after hot corrosion for 25 h and 100 h at 800 °C; the corrosion products after 25 h and 100 h are the same as those at 750 °C. This confirms again that the hot-corrosion process is mainly oxidation.
It is known that NiCr2O4 can be formed by the following reaction: NiO + Cr2O3 = NiCr2O4.
Cross-Sectional Morphologies
Together with the measurements of mass gain and phase constitution, the severity of the hot-corrosion attack was studied via the cross-sectional morphologies of the corrosion layers. Figure 8a displays the cross-sectional morphology of Alloy 1 after 100 h of gas hot corrosion at 700 °C. The corrosion layer is neither complete nor dense, and there are voids in it. There is a clear gap between the corrosion layer and the substrate, which means the adhesion of the corrosion layer is poor. The surface of the substrate under the corrosion layer has an area with a color contrast different from the substrate, with a depth of about 2.2 μm; EDS analysis shows that this area is a Cr-poor zone. Figure 9 illustrates electron-probe elemental maps of the cross section. It shows that the corrosion layer is composed of Cr2O3, TiO2 and Al2O3, and also that Na appears in the corrosion layer, which means the salts have penetrated into it. Figures 8b and c display the cross-sectional morphologies of Alloy 1 after 100 h of gas hot corrosion at 750 °C and 800 °C, respectively. The integrity of the corrosion layer clearly deteriorates: it is divided into many pieces by cracks and tends to spall along them. There are a large number of cavities between the corrosion layer and the substrate, which means the adhesion of the corrosion layer is poor. Figure 12 displays the cross-sectional elemental maps of Alloy 1 and Alloy 2 after 25 h of hot corrosion at 800 °C. In the initial stage of corrosion, a continuous corrosion layer consisting of Cr2O3 and Al2O3 appears on the surface of Alloy 1, without obvious internal oxidation; the thickness of the oxide layer is about 4 μm. In contrast, the Cr2O3 layer of Alloy 2 is thinner, about 2.5 μm. Furthermore, no continuous Al2O3 layer is formed on its surface; instead, the internal oxidation of Al is serious, and Al2O3 penetrates into the substrate like fingers.
Corrosion Mechanism of Alloy 1
It is evident from the comparison between oxidation in air and gas hot corrosion that the salts accelerate the failure of Alloy 1. Normally, the oxidation of alloys is selective, depending on the content of a given element and its free energy of oxidation [38]. The products of the oxidation of Alloy 1 in air are mainly the oxides Cr2O3, TiO2 and Al2O3. Moreover, it is evident that the corrosion rate of Alloy 1 during oxidation is much lower than that during gas hot corrosion (Fig. 3). Therefore, the hot-corrosion behavior under gas containing chloride salts is actually accelerated oxidation, which is clearly different from the hot-corrosion mechanism of cooperating oxidation and sulfurization of a powder superalloy in molten NaCl-Na2SO4 salts [39]. This is because the salts in this study contain mainly chlorine, with only a small amount of sulfur from the aviation fuel.
The presence of chloride salts such as NaCl causes great damage to the protective oxide film. NaCl can react with the oxide film, converting Cr2O3 into volatile chromium oxychloride [40]. CrO2Cl2 may evaporate in gaseous form or become chromate:

CrO2Cl2 + 2NaCl + 2H2O(g) = Na2CrO4 + 4HCl. (6)

Fig. 11 Cross-sectional elemental maps after hot corrosion of 100 h at 800 °C

It can be seen from the cross-sectional element-distribution maps that the corrosion layer contains Na but no Cl (Figs. 9, 10, 11), which implies that CrO2Cl2 was likely to evaporate in gaseous form. Due to these reactions, the oxide films became loose and porous, further promoting salt penetration.
At the beginning of hot corrosion, Alloy 1 is corroded slightly by selective oxidation, and oxides of Cr, Al and Ti appear on the surface. As time passes, chloride salts penetrate the oxide film and react with it, destroying its integrity and making it less protective. Moreover, the oxide film formed during gas hot corrosion spalls easily from the substrate. Normally, a higher oxidation rate develops more interfacial cavities [41]. In this work, a higher oxidation rate was found during gas hot corrosion than during oxidation in air (Fig. 3). Therefore, compared with oxidation in air, more interfacial cavities occurred in the oxide layers formed by gas hot corrosion (Fig. 8), causing poorer adhesion and a higher driving force for spallation. Once the oxide layer spalls, the corrosion of the substrate accelerates.
Compared to Alloy 2, Alloy 1 has better gas hot-corrosion resistance at 800 °C (Fig. 4). In the initial stage of corrosion, a continuous corrosion layer composed of Cr2O3 and Al2O3 appeared on the surface of Alloy 1, without obvious internal oxidation (Fig. 12a). In contrast, for Alloy 2, significant internal oxidation of Al occurred beneath the Cr2O3 layer (Fig. 12b). This is mainly due to the difference in Co and Al content. Firstly, the Co content of Alloy 1 is significantly higher than that of Alloy 2 (Table 1). Previous studies indicated that the diffusion coefficient of Cr in Co exceeds that in Ni at lower temperatures (≤ 850 °C); therefore, the increase of Co content promotes the outward diffusion of Cr to the oxidation front and enhances the oxidation resistance at low temperature (≤ 850 °C) [14]. Furthermore, increased Co raises the Cr content in the γ matrix as a result of the increased γ′ fraction [41]. Moreover, the initial fast outward diffusion of Co due to oxidation also increased the Cr content below the scale. All of these factors are believed to favor the growth of a continuous Cr2O3 scale on Alloy 1 within a shorter transient stage (Fig. 12a). With the growth of the Cr2O3 scale, the oxygen partial pressure at the interface between the Cr2O3 scale and the underlying substrate decreased further; the selective oxidation of Al then started, and a continuous Al2O3 layer formed on Alloy 1 (Fig. 12a). These observations imply that the increase of Co also slightly promoted the growth of Al2O3 through the third-element effect. The mechanism of the third-element effect is not yet clear, but proposed ideas include inhibition of the internal oxidation of Al, complete solubility between Al2O3 and Cr2O3, and inhibition of the growth of external oxides [42,43]. These ideas were basically confirmed by comparing the hot corrosion of the above two alloys. Secondly, there is a significant difference in Al content between Alloy 1 (3.6 wt%) and Alloy 2 (2.2 wt%); the extra Al in Alloy 1 might be enough for a continuous alumina layer to form and protect the alloy [44].
According to the preceding analyses, the hot corrosion of Alloy 1 under gas containing chloride salts is confirmed to be an accelerated oxidation process due to the damage to the integrity of the oxide film caused by continuous corrosion with chloride salts. Compared to Alloy 2, Alloy 1 has better hot-corrosion resistance at 800 °C, which is attributed to its increased Co and Al content. The increased Co and Al content promotes the rapid formation of continuous Cr2O3 and Al2O3 protective films on the alloy surface, in which Co inhibits the internal oxidation of Al through the third-element effect.
Conclusions
In this study, hot corrosion tests under gas containing chloride salts at different temperatures were performed. Phase constitutions, surface and cross-sectional morphologies, and elemental distributions of the corrosion layers were studied. The conclusions can be summarized as follows:

• The average mass gain of Alloy 1 increased as the temperature rose. The corrosion kinetics followed a linear power law at 700 °C, 750 °C and 800 °C. The corrosion rates during oxidation in air were much lower than those during gas hot corrosion.

• The corrosion layers obtained after 100 h of hot corrosion were mainly composed of Cr2O3, TiO2, Al2O3, NiO and NiCr2O4. The cross-sectional morphologies and elemental maps indicated that the chloride salts penetrated into the corrosion layers and caused many cavities and cracks.

• The hot corrosion of Alloy 1 under gas containing chloride salts was confirmed to be an accelerated oxidation process due to the damage to the integrity of the oxide film caused by continuous corrosion with chloride salts.

• Compared to Alloy 2, Alloy 1 has better hot-corrosion resistance at 800 °C, which is attributed to its increased Co and Al content. The increased Co and Al content promotes the rapid formation of continuous Cr2O3 and Al2O3 protective films on the alloy surface, in which Co inhibits the internal oxidation of Al through the third-element effect. | 4,859.4 | 2022-08-08T00:00:00.000 | [
"Materials Science"
] |
Design, Evaluation and Implementation of an Islanding Detection Method for a Micro-grid
Correct and fast detection of micro-grid (MG) islanding is essential, since the operation, control, and protection of the MG depend on its operating mode, i.e., interconnected mode or islanding mode. This study describes the design, evaluation and implementation of an islanding detection method for an MG which includes a natural gas-fired generator, a doubly fed induction generator (DFIG) type wind generator, a photovoltaic generator, and some associated local loads. The proposed method is based on the instantaneous active and reactive powers at the point of common coupling (PCC) of the MG. During the islanding mode, the instantaneous active and reactive powers at the PCC are constants, which depend on the voltage of the PCC and the impedance of the dedicated line. The performance of the proposed method is verified under various scenarios, including islanding conditions for different outputs of the MG and fault conditions varying the position, type, inception angle and resistance of the fault, using the PSCAD/EMTDC simulator. The paper concludes by implementing the proposed method on a TMS320C6701 digital signal processor. The results indicate that the proposed method successfully detects islanding of the MG in islanding conditions and remains stable in fault conditions.
Introduction
Distributed generations (DGs) are small-scale power generators, considered a promising approach to solving the economic and environmental issues of conventional power systems [1][2][3]. DGs provide economic advantages by reducing the amount of energy lost in transmitting electricity, as well as the number and size of power distribution lines, since the electricity can be generated near where it is consumed. In addition, the integration of distributed renewable energy sources, i.e., wind energy, solar energy, bio-energy, hydraulic energy and so on, into the traditional electric power system can reduce greenhouse-gas emissions and thus mitigate environmental problems. However, a limitation of integrating DGs is that they must comply with the operating conditions of the interconnected distribution networks. When the grid breaks the connection between the distribution network and the DGs, the DGs must detect the islanding condition and immediately stop producing power, which is called anti-islanding. Otherwise, grid operators may not realize that a circuit is still powered by DGs, and the automatic re-connection of devices may be prevented. This reduces the capacity factor of DGs and increases the power lost in transmission, since the loads that were powered by DGs must then be powered by other, remote power sources.
Various fault conditions, varying the position, type, inception angle and impedance of the fault, are analyzed using the PSCAD/EMTDC generated data. Lastly, this paper concludes by implementing the proposed method on a TMS320C6701 digital signal processor (DSP).
Calculation of the Instantaneous Active and Reactive Power
The three-phase instantaneous active power (p3ph) delivered from the MG to the dedicated line, defined in Equation (1), can be calculated from the voltages (va, vb and vc) and currents (ia, ib and ic) measured at the PCC:

p3ph = va·ia + vb·ib + vc·ic. (1)
The instantaneous reactive power could be calculated by taking the imaginary part of the complex power, whose real part represents the instantaneous active power [21,22]. However, this calculation of the instantaneous reactive power is invalid under unbalanced operating conditions [23,24]. To correctly calculate the instantaneous reactive power even under an unbalanced fault condition, the instantaneous reactive power (q3ph) delivered from the MG to the dedicated line is calculated from the voltages (va', vb' and vc'), which respectively lag va, vb and vc by a quarter of a period, and the currents (ia, ib and ic) [25], as shown in Equation (2):

q3ph = va'·ia + vb'·ib + vc'·ic. (2)
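A sample-by-sample evaluation of Equations (1) and (2) can be sketched as below, assuming arrays of PCC voltage and current samples at 32 samples/cycle; the helper name and the treatment of the delay start-up are our assumptions:

```python
import numpy as np

def instantaneous_powers(va, vb, vc, ia, ib, ic, samples_per_cycle=32):
    """p3ph and q3ph per Eqs. (1)-(2); q uses voltages lagging by T/4."""
    p = va * ia + vb * ib + vc * ic
    d = samples_per_cycle // 4            # quarter-period delay in samples
    va_d = np.roll(va, d)                 # va_d[n] = va[n - d]
    vb_d = np.roll(vb, d)
    vc_d = np.roll(vc, d)
    q = va_d * ia + vb_d * ib + vc_d * ic
    q[:d] = np.nan                        # invalid until delay buffer fills
    return p, q
```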
Islanding Detection Method Based on the Instantaneous Active and Reactive Power at the PCC
In this subsection, the proposed islanding detection method for an MG based on p3ph and q3ph is described. When the MG is disconnected from the distribution network by opening the circuit breakers at the grid side, p3ph and q3ph become constants, which depend on the voltages at the PCC as well as the impedance of the dedicated line. p3ph becomes almost zero because little resistance exists in the dedicated line. On the other hand, q3ph takes a value corresponding to the series inductance and the shunt capacitance of the dedicated line. Since the parameters of the dedicated line can be obtained, these constant active and reactive powers can be easily calculated. Therefore, if the calculated instantaneous active and reactive powers converge to the pre-calculated constant values, an islanding inception is detected. Figure 1 shows the islanding detection region. The reference value for the complex power (Sref) is given by

Sref = VPCC² / Zline*, (3)

where VPCC and Zline* represent the rated line-line voltage of the PCC and the complex conjugate of the dedicated-line impedance, respectively.
The criteria for islanding detection are given by

k1·|Sref| ≤ |s| ≤ k2·|Sref|, (4)

|∠s − ∠Sref| ≤ k3, (5)

where s = p3ph + j·q3ph is the measured complex power at the PCC, and k1 and k2 depend on the variation of the voltages at the PCC and the measurement ratio errors of the current transformer (CT) and the potential transformer (PT). The variation of the voltages at the PCC is ±20%, with full consideration of the voltage deviation both in the steady state and in the transient state after an islanding inception, plus sufficient margin. Thus, k1 and k2 are set by

k1 = (0.8/1.0)² × 0.97 × 0.94 ≈ 0.58, (6)

k2 = (1.2/1.0)² × 1.03 × 1.06 ≈ 1.57, (7)

where 1.0, 0.8, 1.2, ±3% and ±6% are respectively the per-unit value of the rated voltage, the lower and upper limits of the voltage variation, and the limits of the CT and PT ratio errors [26,27]. In (6), the minimal magnitude of the complex power at the PCC in islanding conditions is calculated when the actual voltages at the PCC are only 80% of the rated voltages, considering the maximal voltage variation, and the measured currents and voltages at the PCC are respectively 97% and 94% of the actual currents and voltages, considering the maximal ratio errors of the CTs and PTs. Meanwhile, as shown in (7), the maximal magnitude of the complex power at the PCC in islanding conditions is calculated when the actual voltages at the PCC are 120% of the rated voltages, and the measured currents and voltages at the PCC are respectively 103% and 106% of the actual currents and voltages. k3 is set to 15°, considering the limits of the phase errors of both the CTs and the PTs, also with sufficient margin [26,27]. These three coefficients, k1, k2 and k3, depend only on the limits of voltage variation and the limits of the measurement errors of CTs and PTs defined in the IEC Standards [26,27]. Thus, when the proposed method is applied to another MG, k1, k2 and k3 need not be changed; only Sref must be pre-calculated from the parameters of the new dedicated line. As described above, islanding is detected when the trajectory of the point (p3ph, q3ph) moves into and remains within the islanding detection region, which prevents mal-detection due to transient disturbances.
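The decision logic can be sketched as follows; this is an illustrative reconstruction (the function name and the consecutive-sample hold count are assumptions — the paper only states that staying within the region is required to reject transient disturbances):

```python
import numpy as np

def islanding_detected(p, q, S_ref, k1=0.58, k2=1.57, k3_deg=15.0, hold=16):
    """Flag islanding when (p3ph, q3ph) stays inside the detection region
    of Eqs. (4)-(5) for `hold` consecutive samples."""
    s = p + 1j * q
    mag_ok = (k1 * abs(S_ref) <= np.abs(s)) & (np.abs(s) <= k2 * abs(S_ref))
    ang_err = np.abs(np.angle(s / S_ref, deg=True))   # phase mismatch to Sref
    in_region = mag_ok & (ang_err <= k3_deg)
    run = 0
    for flag in in_region:
        run = run + 1 if flag else 0
        if run >= hold:                               # sustained, not transient
            return True
    return False
```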
Case Studies
An MG connected to a 22.9 kV, 60 Hz distribution network through a Y-Y transformer (22.9/6.6 kV) and a dedicated line (1 km) is shown in Figure 2. The dedicated line is modeled as ACSR 58 mm², whose series resistance, series inductance and shunt capacitance are 0.8316 Ω/km, 0.0022 H/km and 0.0021 µF/km, respectively. The MG is composed of three DGs, i.e., a 2 MW natural gas-fired generator, a 2 MW DFIG-type wind generator and a 1 MW photovoltaic generator, and some associated local loads. The system is modeled using the PSCAD/EMTDC simulator with a sampling rate of 32 samples/cycle. The signals of the currents and voltages measured at the PCC are passed through anti-aliasing RC low-pass filters with a cutoff frequency of 960 Hz, which is half the sampling frequency.
The performance of the proposed islanding detection method is verified under various islanding conditions for different outputs of the MG, as well as various fault conditions varying the fault position, fault inception angle, fault type and fault impedance, as shown in Tables 1 and 2. When an islanding incepts, the proposed islanding detection method should activate the relay as soon as possible; on the contrary, the method should not activate the relay in fault conditions. The fault cases of Table 2 are:

Case 8: dedicated line, 0°, single line-to-ground, 0 Ω (Figure 10)
Case 9: dedicated line, 0°, double line-to-ground, 0 Ω (Figure 11)
Case 10: dedicated line, 0°, line-to-line, 0 Ω (Figure 12)
Case 11: dedicated line, 0°, single line-to-ground, 1 Ω (Figure 13)
Case 12: dedicated line, 0°, single line-to-ground, 5 Ω (Figure 14)
Islanding Conditions
The performance of the proposed islanding detection method is verified under various islanding conditions for different outputs of the MG. The results of three cases, where the generating power of the MG is smaller than (Case 1), close to (Case 2) and larger than (Case 3) the local loads, are shown in Figures 3-5. Figure 3a indicates the voltages and currents measured at the PCC, where va (solid), vb (dashed) and vc (dotted) are shown in the upper subfigure and ia (solid), ib (dashed) and ic (dotted) in the lower subfigure. After the islanding inception, the voltages decrease slightly whilst the currents decrease to nearly zero. p3ph and q3ph are respectively shown in the upper and lower subfigures of Figure 3b. After the islanding inception, p3ph becomes almost zero after a slight fluctuation whilst q3ph directly settles to a very small value. Both p3ph and q3ph change and become stable again within several milliseconds after the islanding incepts. In Figure 3c, the trajectory of the point (p3ph, q3ph), shown with "o" marks, is located in the third quadrant prior to the islanding inception. This is because power is delivered from the grid to the MG prior to the islanding inception, which can also be confirmed in Figure 3b. When the islanding incepts, p3ph is nearly zero and q3ph takes a negative value because of the characteristics of the dedicated line. Therefore, the trajectory moves into the islanding detection region. In Figure 3d, where "0" and "1" respectively denote the interconnected and islanding modes, the islanding detection signal is activated 22.53 ms after the islanding inception. The results indicate that the proposed method can successfully and quickly detect islanding operation within 1.5 cycles of the islanding inception. Figures 4 and 5 show the results for Cases 2 and 3, where the generating power is close to and larger than the local loads of the MG, respectively. In both cases, islanding incepts at 33.33 ms. In Case 2, the power transmitted between the MG and the grid prior to the islanding inception is nearly zero. Owing to the small variation of the transmitted power in the dedicated line before and after the islanding inception, the magnitude and phase angle of the voltage and the frequency measured at the PCC do not change significantly. However, the proposed method can discriminate the islanding inception from normal load variations by considering the trajectory of the point (p3ph, q3ph). As seen in Figure 4d, the trajectory of the point moves into the islanding detection region 20.45 ms after the islanding inception. In Case 3, the generating power of the MG is larger than its local loads; thus, the point (p3ph, q3ph) is in the fourth quadrant prior to the islanding inception. As expected, the trajectory of the point enters the islanding detection region from the fourth quadrant 22.53 ms after the islanding inception, as shown in Figure 5c,d.
The results for Cases 1-3 indicate that the proposed method can successfully detect the islanding operation irrespective of the relationship between the generating power of the MG and its local loads. In addition, the detection speed, about 1.5 cycles after the islanding incepts, is much faster than that in [10].
Fault Conditions
The performance of the proposed islanding detection method is also verified under various fault conditions, varying the position and type of the fault. A qualitative analysis of the effect of the fault inception angle and fault resistance is also included in this subsection. In all fault cases, the power transmitted between the MG and the grid prior to the fault inception is nearly zero, as in Case 2. All faults occur at 33.33 ms, and the proposed method should not activate the islanding detection signal in fault conditions.
Faults with Different Position
In this subsection, three-phase faults with inception angles of 0° at different positions, i.e., the distribution line of the MG (Case 4) and the dedicated line (Case 5), are considered. The results are shown in Figures 6 and 7, respectively. Figure 6 shows the results for Case 4, in which a three-phase fault is assumed to occur at the distribution line in the MG. As shown in Figure 6a, the voltages decrease whilst the currents increase sharply when the fault occurs. From Figure 6b,c, both p3ph and q3ph fluctuate after the fault incepts due to the large fault current. The point (p3ph, q3ph) is near the origin prior to the fault inception, since the generating power of the MG equals the local loads. However, when the fault occurs, the point moves far away from the origin, since the fault currents are considerably large. The proposed method does not activate the islanding detection signal (Figure 6d). Figure 7 shows the results for Case 5, where a three-phase fault occurs at the dedicated line. The results are similar to those for Case 4: the trajectory of the point (p3ph, q3ph) does not move into the islanding detection region, as shown in Figure 7c, and thus the islanding detection signal is not activated.
The results indicate that the proposed method does not activate the islanding detection signal no matter where a fault occurs.
Faults with Different Inception Angle
In this subsection, three fault inception angles, i.e., 0° (Case 5), 45° (Case 6) and 90° (Case 7), are compared. The fault occurs at the dedicated line at 33.33 ms. The results of Cases 6 and 7 are shown in Figures 8 and 9. From the results and analysis in the previous subsection, the considerably large fault current is the reason the point (p3ph, q3ph) moves far away from the islanding detection region, so the islanding detection signal is not activated. For faults with different inception angles, even though the magnitudes and waveforms of the fault currents in Cases 6 and 7 differ from those in Case 5, as seen in Figures 8a and 9a, the fault currents are still considerably large. Consequently, the islanding detection signal is not activated, as shown in Figures 8d and 9d.
It can be concluded that the proposed islanding detection method can remain stable no matter when a fault occurs.
Faults with Different Type
In this subsection, four fault types, i.e., single line-to-ground (SLG, Case 8), double line-to-ground (DLG, Case 9), line-to-line (LL, Case 10) and three-phase (3P, Case 5), are considered. The fault occurs at the dedicated line, as in Case 5. The results for Cases 8-10 are shown in Figures 10-12.
Figure 10 shows the results for Case 8, where an A-phase SLG fault occurs. As seen in Figure 10a, the A-phase voltage decreases significantly, since the fault occurs very close to the PCC, whilst the voltages of the other phases do not change much. The A-phase current increases significantly, whilst the currents of the other phases increase only slightly compared with the faulted phase, due to the zero-sequence component of the fault current. Because of the unbalanced three-phase voltages and currents, p3ph and q3ph fluctuate even after the transient state has finished, as seen in Figure 10b. Hence, the trajectory of the point (p3ph, q3ph) neither remains stable at one point nor moves into the islanding detection region in Figure 10c, and the islanding detection signal is not activated. Figures 11 and 12 show the results for Cases 9 and 10, where a DLG fault and an LL fault occur at the dedicated line, respectively. Similar to the results for Case 8, p3ph and q3ph fluctuate in Figures 11b and 12b, and the trajectory of the point (p3ph, q3ph) does not move into the islanding detection region in Figures 11c and 12c. As expected, the islanding detection signal is not activated.
It can be concluded that the proposed islanding detection method can remain stable in fault conditions irrespective of the type of fault.
Faults with Different Fault Impedance
In this subsection, three SLG faults, whose inception angles are all 0°, with different fault resistances, i.e., 0 Ω (Case 8), 1 Ω (Case 11) and 5 Ω (Case 12), are analyzed together. The results for Cases 11 and 12 are shown in Figures 13 and 14, respectively.
From the analysis in the previous subsection, when an unbalanced fault, i.e., an SLG, DLG or LL fault, occurs, both p3ph and q3ph fluctuate, and the trajectory of the point (p3ph, q3ph) neither remains stable at one point nor moves into the islanding detection region. Similar results can be drawn even when fault resistance is present in these unbalanced fault conditions. In addition, fault resistance has no effect on the magnitude and waveform of the fault current in the case of a balanced fault (3P fault). Hence, it can be concluded that fault resistance does not affect the performance of the proposed islanding detection method.
The results for all fault cases indicate that the proposed islanding detection method does not activate the islanding detection signal under various fault scenarios. Hence, the proposed method remains stable, as expected, in fault conditions, irrespective of the position, type, inception angle and resistance of the fault.
Hardware Implementation
In practice, the measured three-phase voltage and current signals contain noise. As the currents flowing through the PCC in an islanding condition are extremely small, the effect of noise in the measured voltages and currents on the performance of the proposed method cannot be ignored. Therefore, to verify the performance of the proposed method in the presence of noise, the method is tested under practical conditions, and this section shows the results of the hardware implementation of the method on a TMS320C6701 DSP. Figure 15 shows the configuration of the hardware implementation. Three-phase voltages and currents generated by the PSCAD/EMTDC simulator are converted into analog signals using a PCI-1724U board and then injected into the Intelligent Electronic Device (IED) based on a TMS320C6701 DSP. The signals are passed through a first-order RC filter (fc = 960 Hz) to the 16-bit A/D converters operating at a sampling rate of 32 samples per cycle. All islanding detection calculations are performed in the IED. Figures 16 and 17 show the results of Case 1, in which islanding incepts at 33.33 ms and the islanding detection signal should be activated, and Case 4, in which a 3P fault occurs at 33.33 ms and the islanding detection signal should not be activated. As shown in Figure 16, the point (p3ph, q3ph) cannot remain stable at one point even when the transient state is over, because p3ph and q3ph fluctuate slightly due to the noise in the voltages and currents. To prevent mal-operation due to this noise, the islanding detection region is appropriately expanded and set to be a circle.
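The decision logic can be pictured as a point-in-circle test with a short confirmation window. The sketch below illustrates this; the circle centre, radius and confirmation count are illustrative placeholders, not the settings used in the paper.

```python
# Placeholder geometry for a circular islanding detection region in the (p, q)
# plane: centre near p = 0 with slightly negative q (dedicated-line charging).
CENTER = (0.0, -0.02)
RADIUS = 0.05

def in_detection_region(p, q, center=CENTER, radius=RADIUS):
    dp, dq = p - center[0], q - center[1]
    return dp * dp + dq * dq <= radius * radius

def islanding_signal(p_series, q_series, n_confirm=8):
    # Require several consecutive in-region samples before asserting islanding,
    # to ride through transients; n_confirm is an assumed setting.
    count = 0
    for p, q in zip(p_series, q_series):
        count = count + 1 if in_detection_region(p, q) else 0
        if count >= n_confirm:
            return True
    return False
```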
The results indicate that the proposed method successfully detects the islanding inception 17.71 ms after it occurs. In Figure 17, even though noise is present in the voltage and current signals, the proposed method does not activate the islanding detection signal, owing to the large p3ph and q3ph after the fault inception.
Conclusions
This paper proposes an islanding detection method for the MG based on the instantaneous active and reactive power delivered from the MG to the dedicated line. The instantaneous active and reactive power are calculated and used to monitor whether islanding has incepted. When the circuit breakers at the grid side open, the monitored instantaneous active power from the MG to the dedicated line converges to nearly zero, whilst the instantaneous reactive power takes a small value determined by the shunt capacitance and series inductance of the dedicated line. Therefore, the trajectory of the point (p3ph, q3ph) moves into the islanding detection region, and the islanding detection signal is consequently activated. By contrast, in a fault condition the trajectory of the point (p3ph, q3ph) moves to another point or fluctuates. The islanding detection region can be pre-defined considering the parameters of the dedicated line, the variation of the voltage, and the possible measurement errors of the CTs and PTs.
The performance of the proposed islanding detection method is verified with PSCAD/EMTDC-generated data under various islanding conditions for different outputs of the MG, as well as under various fault conditions varying the position, type, inception angle and resistance of the fault. The results indicate that the proposed method can successfully and quickly detect the islanding operation irrespective of the relationship between the generating power and the local loads. In addition, the proposed method does not mal-operate, irrespective of the position, type, inception angle and resistance of the fault. A prototype relay based on the described scheme successfully detects islanding inception.
The proposed method can correctly and quickly detect the islanding inception, so the operation and control strategies of the MG can be revised accordingly. In addition, the threshold values for the protection relays in the MG can be properly re-set according to whether the MG is islanded or interconnected, increasing the reliability of the protection system of the MG.
"Engineering"
] |
New modeling of a reconfigurable microstrip antenna using a hybrid structure of simulation-driven and knowledge-based artificial neural networks
Abstract: Knowledge-based modeling plays a critical role in embedding existing knowledge to improve modeling performance. Since a reconfigurable antenna can provide more operational frequencies than a classical antenna, a knowledge-based hybrid structure is used in this work to obtain an efficient model and to produce optimum new models for a reconfigurable N-shaped microstrip antenna (RNSMA). The hybrid structure consists of two phases. The first phase generates initial knowledge, which is used in the knowledge-based modeling structure to obtain the design parameters. A multilayer perceptron artificial neural network can generate the necessary knowledge for a knowledge-based model after the training process. Knowledge-based modeling improves the accuracy of the initial model in determining the design parameters corresponding to the design target. Source difference (SD), prior knowledge input (PKI) and prior knowledge input with difference (PKID) can be applied to realize an efficient knowledge-based strategy. 3D-EM simulation generates the new model in terms of the design parameters of the proposed application. The antenna has three switching states, organized by two resistor circuits representing ON/OFF states. Switch positions and geometrical parameters can be used to satisfy design targets between 1 GHz and 6 GHz for an efficient antenna design.
Introduction
The development of reconfigurable microstrip antennas is growing rapidly, especially in telecommunication technologies [1]. They have gained considerable attention in the IEEE 802.11n standard, MIMO radar systems, portable computers and cellular technologies such as WiMAX and long-term evolution (LTE) [2], [3]. By changing the shape of the antenna structure through connecting/disconnecting some radiating parts, the types of dielectric materials and the feeding systems, different characteristics can be realized, such as frequency bands, radiation patterns and directivity. In addition, they have excellent properties: light weight, easy fabrication, small electrical dimensions (length, breadth and height) and low price and profile compared to conventional antennas [2]. Various switching mechanisms have been applied in the design of reconfigurable antennas. Several of these (e.g. thin-film microstrip, MEMS, resistors, PIN diodes, varactors and smart materials) have played a major role in achieving different results, such as resonant frequencies, wide bandwidth and polarization diversity [1], [4]. In this study, three switching states realized with two resistor circuits are investigated: ON-ON, ON-OFF and OFF-OFF. These states cover a wide range of operating frequencies, which makes the proposed antenna more attractive than other reconfigurable antennas controlled by the different switching mechanisms studied in [5]-[8]. The idea of using resistor circuits for the switching states is that they can minimize the complexity and non-linearity of the proposed antenna relative to today's wireless communication systems [4]. Different types of antennas can be modeled and optimized using various optimization methods, among which ANN methods are some of the most important [9], [10]. They provide a general structure for modeling non-linear links between the outputs and inputs of a problem (control, remote sensing, pattern recognition, medical, telecommunications, speech processing and other applications). They are also much faster than 3D-EM simulation in producing results. Therefore, ANNs are used as an optimization and modeling process for antenna design, microwave devices, electronic circuits and signal integrity analysis [9]-[11]. ANNs are computational algorithms intended to simulate the behavior of different systems from data. The data should come from the designed/modeled application, sometimes represented by data collected from equivalent, covariance, undeformed and deformed models, in addition to semi-analytical, mathematical and empirical equations [10]. In knowledge-based/added ANN models, the present knowledge (data or other information) is integrated as inputs, and sometimes as target vectors, to be learned in the fine model [11], [12]. Such models are more accurate, often faster, interpolate and extrapolate better, and require less training data than a traditional ANN (e.g. the multilayer perceptron (MLP)) [9], [13]. This paper supports the growing application of ANNs in reconfigurable antenna design. ANNs depend on sufficient training data for modeling and optimizing the results of any microwave application, and their accuracy depends on the data presented during the training step. In this application, the training data are generated by the CST-EM simulation software, which is based on the Finite Integration Technique (FIT).
In this study, novel models (solutions) are presented by using the hybrid structure to model the reconfigurable N-shaped microstrip antenna (RNSMA) as an alternative to using only 3D-EM simulations [14]. The structure contains two phases; the first phase consists of two processing steps: MLP as a first training step, followed by knowledge-added methods as a second training step. The knowledge-added methods are: (1) prior knowledge input (PKI), (2) source difference (SD) and (3) prior knowledge input with difference (PKID) [9], [10], [14]. In the hybrid structure, the frequency samples are provided as the input and the geometrical characteristics of the proposed antenna model as the output, so the learning process is achieved inversely. Thus, the hybrid structure is reversed to process from right to left, as explained in Section 3. It is significant to note that the output of the MLP represents a coarse model in some cases, which is subsequently refined by the knowledge-added methods. Finally, as the second phase, the characteristic parameters obtained from the ANNs are designed with the 3D-EM CST software to find the targeted new models, and hence new operational frequencies, which can contribute to different wireless communication applications operating between 1-6 GHz, such as the L-, S- and C-bands. The proposed models prove especially useful when frequency samples are the only input used to produce several new models with different geometrical dimensions. The improvement in accuracy is quantified by the normalized mean absolute relative error (NMARE) between the predicted output and the target of the model for the return loss (RL).
Reconfigurable antenna design
The antenna being studied is a new reconfigurable microstrip antenna (RNSMA) whose radiating conductor is configured in the shape of the capital letter "N". The RNSMA is made up of three material sheets and a feeding system (coaxial cable). The radiating conductor (first sheet) consists of two facing triangles separated by a mid rectangular strip, as shown in Figure 1(a). The lengths of the parameters L1, L2, L3, W1 and W4 are set to 0.8 cm, while the width of the mid rectangular strip is set to 0.4 cm. They are printed on an FR-4 dielectric substrate (middle sheet) with a thickness of 0.2 cm and a relative permittivity of 4.3. The ground plane (last sheet) is printed on the back side of the dielectric. The empty spaces W2 and W3 between the two triangles and the mid conductor, set to 0.2 cm, contain the two resistor circuits. Each circuit contains two different resistors in parallel; the main role of the circuits is to control the flow of the electrical current and thereby allow new results to appear. Ordinarily, resistors work by dissipating power as heat and limiting the flow of electricity through them. They are passive electrical components that act as variable material in radio frequency (RF) and microwave wireless systems. When both switches in the circuits are off (OFF-OFF state), the current is distributed only on the mid conductor, which minimizes the non-linearity. When both switches are on (ON-ON state), the effective length of the RNSMA becomes larger, which maximizes the non-linearity. If one switch is off while the other is on (ON-OFF state), the effective conductors are the mid rectangle and the triangle linked to it, causing intermediate non-linearity (see Figure 2(a & b)).
The resistive value of the resistors is defined for forward-biased DC current only. One resistor has a large resistive value of 130 Ω so as not to allow the DC current to spread over the surface of the connected conductors, while the other has a small resistive value of 5 Ω and is used in the ON state, as shown in Figure 2(b). Therefore, the current flowing through each resistor differs, as specified by Ohm's law. The feeding system is located at the center (0, 0) of the middle conductor, with an inner radius of 65 x 10^-3 cm, as shown in Figure 1(b). Suppose the change in the resistive value of the conductor shapes and resistors is R and the feeding source is V (the voltage); these are varied independently according to the switching states, so the current I is a function of the two variables, I(V, R). According to Ohm's law, the resistive value of each resistor determines the amount of current flowing through that resistor, as shown in Figure 2. To accomplish this non-linear function, the boundaries (minimum and maximum values) of the generated training data for the antenna parameters are ranged as shown in Table 1.
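As a worked illustration of the Ohm's-law reasoning above, the snippet below compares the branch currents through the two resistor values quoted in the text; the bias voltage is an assumed placeholder, not a value given in the paper.

```python
# Branch currents through the two parallel resistors (values from the text).
V = 5.0                          # bias voltage in volts (assumed placeholder)
R_large, R_small = 130.0, 5.0    # ohms
I_large = V / R_large            # Ohm's law: I = V / R -> about 0.038 A
I_small = V / R_small            # -> 1.0 A: current flows mainly through the small resistor
```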
Proposed hybrid structure
The hybrid structure developed here uses neural networks and 3D-EM simulation as its basis. It consists of two phases: the first phase has two data training/processing steps in the neural network domain, followed by the 3D-EM simulator, which designs the outputs of the first and second steps of the first phase to obtain the novel RNSMA models, as shown in Figure 3. The input to the ANNs of the hybrid structure is only the frequency samples, while the outputs are the required geometrical dimensions of the reconfigurable antenna model (X_MLP is the result of the first step and X_KBNN the result of the second step), as shown in Figures 3 and 4. Furthermore, the output of the first training step is the input of the second training step, together with extra knowledge in the case of PKI and PKID, whereas in the SD method the extra knowledge is added to the target. The 3D-EM simulator serves as a common simulation part: it generates the training data before the first phase and designs the results of the first phase to obtain the novel models in the second phase of the structure. The result obtained from the MLP, known as the first training step, passes through the knowledge-based neural network (KBNN), known as the second training step, i.e., SD, PKI and PKID.
During the data processing of the ANNs, the required training steps are applied to adjust the synaptic weights and thresholds of the neurons. The weight coefficients w in the error function E(w) are updated iteratively as defined by Equation (1), where the two indices represent the iteration number and the training-data index for ANN modeling, respectively. The error measurement function is defined by Equation (2), where Y represents the input and f represents the ANN model, known here as the MLP and the KBNN. Finally, after the training process in the hybrid structure is finished, the trained model is given by Equation (3). Equations (1), (2) and (3) serve as the general framework for the following ANN methods.
The training data generated by the 3D-EM simulator comprise 312,500 samples, computed by Equation (4) from the five trained parameters (L and W) of the proposed application, as shown in Table 1.
Here n is the number of training samples, f is the number of frequency samples (equal to 100), and |L| and |W| are the numbers of values sampled for each of the five proposed antenna parameters L and W (equal to 5). Therefore, n = 100 x 5^5 = 312,500 samples. This large amount of training data was reduced to 3,125 samples, which still covers the data requirements. The reduction procedure relies on the artificial selection of the optimum frequency samples in the 1-10 GHz band that have the lowest return-loss values. The 100 frequency samples are the input of the presented models, and the output is the 5 geometrical parameters of the proposed RNSMA. Therefore, the connection between the inputs of the Y vector and the outputs of the X vector is multi-dimensional and non-linear with respect to the problem (RNSMA). In the testing stage, three testing data sets are selected in the ON-ON state. The first two testing data sets are selected inside the training data while the third is selected outside it; they are applied to examine the accuracy, the generalization capability and the final model for interpolation and extrapolation data sets, and to verify the actual predictive power of the neural structure. In the ON-OFF and OFF-OFF states, different methods of selecting the testing data are used.
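The sample counting above can be reproduced directly. The sketch below builds the full factorial grid; the numeric parameter ranges are placeholders, since the actual bounds are given in Table 1, which is not reproduced here.

```python
import itertools
import numpy as np

# Five geometrical parameters, five sampled values each; the numeric ranges
# below are placeholders for the actual bounds listed in Table 1.
grid = {name: np.linspace(0.6, 1.0, 5) for name in ("L1", "L2", "L3", "W1", "W4")}
geometries = list(itertools.product(*grid.values()))   # 5**5 = 3125 geometries
n_freq = 100                                           # frequency samples per geometry
n_raw = len(geometries) * n_freq                       # 312,500 raw samples (Eq. (4))
print(n_raw)
```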
Multilayer perceptron modeling method (MLP)
MLP is an important and simple modeling method. It is a class of feedforward neural network and forms the processing method of the first step of the first phase, as shown in the hybrid structure of Figure 3. In an MLP, neurons are grouped into interconnected layers: an input layer, one or more hidden layers and an output layer. In the first processing step of the hybrid structure it maps the Y variables to the X variables [10]. The relationship between the input and the output parameters can be presented functionally as X = f(Y). In the present research, the input parameter is Y = [f] (f is the 100 frequency samples) and the predicted output is X = [L1, L2, L3, W1, W4]^T.
Source difference modeling method (SD)
Unlike the previous network architecture, SD is the method of the second training step of the first phase and is considered a knowledge-based neural network (KBNN) [10]. The concept of SD is to combine two collected training data sets to form the target of the new model. These data sets are the 3D-EM simulation output X_f = [L1, L2, L3, W1, W4]^T, which represents the fine information, and X_MLP = [L1, L2, L3, W1, W4]^T, the output of the MLP. Therefore, the input parameters for the MLP in the first training step and for SD in the second training step are only the frequency samples Y_f = [f]^T.
The SD network output represents the predicted quantity, while the target is the difference ΔX = X_f − X_MLP, as shown in Figure 5. The relationships between the input and the output for the 3D-EM simulation design case and for the ANN model are presented functionally in the same way as before, with the optimum frequency obtained by designing the predicted model. The associated error measure expresses the absolute difference between the optimum and the target frequencies.
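A minimal sketch of the two training steps is shown below, with scikit-learn as an assumed stand-in framework: the Levenberg-Marquardt optimizer used in the paper is not available there, so the quasi-Newton 'lbfgs' solver is substituted, and synthetic arrays replace the simulator data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic stand-ins for the simulator data: Y_train holds 100-point frequency
# responses, X_train the five fine geometrical parameters [L1, L2, L3, W1, W4].
rng = np.random.default_rng(0)
Y_train = rng.uniform(size=(300, 100))
X_train = rng.uniform(0.6, 1.0, size=(300, 5))

def make_net():
    # 'lbfgs' stands in for Levenberg-Marquardt, which scikit-learn lacks.
    return MLPRegressor(hidden_layer_sizes=(60, 40), activation="tanh",
                        solver="lbfgs", max_iter=2000, random_state=0)

mlp = make_net()                      # step 1: coarse inverse model Y -> X
mlp.fit(Y_train, X_train)
X_mlp = mlp.predict(Y_train)

sd_net = make_net()                   # step 2 (SD): learn DeltaX = X_f - X_MLP
sd_net.fit(Y_train, X_train - X_mlp)

def predict_sd(Y):
    return mlp.predict(Y) + sd_net.predict(Y)
```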
Prior knowledge input modeling method (PKI)
In this method [10], the output of the MLP (X_MLP) is used as an input of the PKI network (the second training step), in addition to the original input Y_f, which serves as extra knowledge. The target output is the fine output (X_f). Therefore, the mapping is from the MLP output (X_MLP) together with Y_f to the PKI target. The special feature is that the input parameters of the fine model (Y_f) are included as additional inputs to the PKI in the second training step, so the PKI input parameters are (Y_f, X_MLP). As in the previous modeling case, the relationships between the input and the output for the 3D-EM simulation design case and for the ANN modeling are defined accordingly.
Prior knowledge input with difference modeling method (PKID)
PKID combines the advantages of the PKI and SD knowledge-based modeling methods, as introduced in [9], [13]. The quality of the mapping is enhanced by entering the knowledge obtained from the MLP (X_MLP) together with the knowledge of Y_f as the PKID inputs in the second training step; the target is the difference ΔX = X_f − X_MLP. Therefore, the input parameters are (Y_f, X_MLP).
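Continuing the SD sketch above (reusing mlp, X_mlp, Y_train, X_train and make_net), PKI and PKID differ only in how the inputs and targets are assembled; this remains an illustrative stand-in, not the authors' implementation.

```python
import numpy as np

# PKI: the coarse prediction X_MLP is appended to the original input; target is X_f.
pki_net = make_net()
pki_net.fit(np.hstack([Y_train, X_mlp]), X_train)

# PKID: PKI-style inputs with an SD-style (difference) target.
pkid_net = make_net()
pkid_net.fit(np.hstack([Y_train, X_mlp]), X_train - X_mlp)

def predict_pkid(Y):
    xm = mlp.predict(Y)
    return xm + pkid_net.predict(np.hstack([Y, xm]))
```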
Preprocessing and training of neural networks
Neural model development starts by generating and collecting data for the training and testing processes. Two kinds of data sets are therefore simulated: a training data set and interpolation/extrapolation testing data sets. Two hidden layers are used for all networks, with (60-40) hidden neurons for the MLP and the KBNNs. The feedforward network computes the outputs of the 100-sample inputs as compositions of the (60-40) hidden neurons. The number of hidden layers and neurons depends on the nature, non-linearity and complexity of the function/problem mapped by the network structure: highly non-linear problems need more hidden neurons, while regular problems need fewer. The output neurons present the results (models) obtained from the processing performed by the neurons in the preceding hidden layers. The ANNs are developed with the Levenberg-Marquardt algorithm (LMA), used to adapt the weights, and tangent-sigmoid transfer functions, used to map the input layer to the output layer within certain bounds, as in a biological neuron; both are inside the hidden-layer neurons. A purely linear function inside the output layer calculates the layer's output from its net input. The training parameters of the ANNs are optimized as shown in Table 2.
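The configuration described above can be expressed compactly as a preprocessing-plus-network pipeline. In this sketch, input standardisation is an assumption of the example (the paper does not state its preprocessing), and 'lbfgs' again substitutes for the LMA solver.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

# Two hidden layers of 60 and 40 tanh neurons with a linear output layer, as in
# the paper; scikit-learn's MLPRegressor output layer is linear by default.
model = make_pipeline(
    StandardScaler(),   # assumed preprocessing step, not stated in the text
    MLPRegressor(hidden_layer_sizes=(60, 40), activation="tanh",
                 solver="lbfgs", max_iter=2000, random_state=0),
)
```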
Results and discussion
The first phase of the hybrid structure is for training the ANN models. For stable results, the training process is run for 50 iterations. The geometrical parameters of the RNSMA are calculated on the testing data sets. The first phase of the hybrid structure is trained with 3,125 samples to achieve an accuracy similar to that obtained by 3D-EM simulation. The accuracy of the models is presented by the optimum values of the S-parameters (frequency and return loss), which result from simulating the geometrical parameters obtained by the hybrid structure for the interpolation and extrapolation test data sets. The numerical results shown in the following tables represent the values of the antenna parameters obtained by running the ANNs (MLP, SD, PKI and PKID) at the different ON/OFF switching states.
In the following subsections, each switching state is accompanied by numerical tables of antenna parameters, new geometries (models) and S-parameters for the RNSMA, in addition to the measured normalized mean absolute relative error (NMARE) shown in Table 10. With the RL of the fine model as the target and the predicted RL of the ANNs, the NMARE quantifies the prediction accuracy. To obtain an optimal model, the model simulations are completed under the same design conditions mentioned in Section 2. Geometric transformations appear on the radiating conductors of the models depending on the switching states, while the substrate material and the feeding system remain unchanged.
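The paper's exact NMARE formula is not reproduced in this extract, so the sketch below shows one plausible definition, with the normalisation by the maximum absolute target value being an assumption of the example.

```python
import numpy as np

def nmare(rl_pred, rl_target):
    # Mean absolute error between predicted and target return-loss curves,
    # normalised by the largest absolute target value (assumed convention).
    rl_pred, rl_target = np.asarray(rl_pred), np.asarray(rl_target)
    return np.mean(np.abs(rl_pred - rl_target)) / np.max(np.abs(rl_target))
```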
ON-ON switching state
In this state, two of the testing data sets are chosen from inside the training data (interpolation) and the third from outside it (extrapolation). The results shown in Table 3, Table 4 and Table 5 are designed by the 3D-EM simulator in the second phase to produce the new models, illustrated in their new forms with their S-parameters in Figures 8-13.
ON-OFF switching state
Two testing data sets are derived by choosing new samples adjoining the middle samples of the training data while keeping the minimum and maximum parameter values shown in Table 1 [9]; a similar process is used in the OFF-OFF state. Table 6 and Table 7 summarize the new RNSMA models, whose results are shown in Figures 14-17.
OFF-OFF switching state
Two interpolation testing data sets falling within the range of the existing training data are chosen. New RNSMA models are again obtained, as shown in Table 8 and Table 9 and in Figures 18-21. These results show that the OFF-OFF-state model has less non-linearity than the previous states. According to the results presented above, the fine model (before training the ANNs) and the model predicted by the hybrid structure (after training the ANNs) are in good agreement in the ON-ON state, and in excellent agreement in the ON-OFF and OFF-OFF states, with differences in the antenna forms and return losses. Table 10 shows the prediction accuracy and confirms the agreement between the ANNs and the proposed models. The complexity and non-linearity are clearly visible when all radiating parts of the antenna are connected by the resistor circuits (ON-ON state) but decrease gradually in the ON-OFF and OFF-OFF states. This explains the convergence of the numerical values of the parameters shown in the result tables and curves (S-parameters). The small difference in convergence between the simulated results is created by the switches while controlling the states, but there is a noticeable difference in antenna shapes between the simulated results obtained from the trained ANNs and the fine models. It is also noticed that the optimum frequency of the SD, PKI and PKID models is closer to that of the fine model than the MLP models. Consequently, increasing the data in the knowledge-based training step is necessary to obtain a model close to the fine model. As shown, the change of the operating frequency depends on the switching states, which change the configuration of the antenna.
From the above, ANNs can be used to generate new models (solutions) for communication applications. The results illustrate the benefits of the hybrid structure and support the use of such structures for designing reconfigurable and other types of antennas operating in the obtained frequency bands. The 1-6 GHz band is largely harmonized globally for short-distance licensed/unlicensed antenna applications. As such, an antenna designer can develop and market the same 1-6 GHz module throughout the world with minimal tuning of the states or changes to the type of control switches.
Conclusion
This study presents new models of the reconfigurable antenna, where any frequency samples between 1-10 GHz can be the input of the proposed hybrid structure, which includes ANN methods, to obtain new models. The 3D-EM simulation results agree with the ANN results but have different configurations. The proposed application is introduced as a single reconfigurable microstrip antenna operating over a wide range of frequencies. The studied antenna can be used for several wireless communication applications ranging from 1 GHz to 6 GHz. The hybrid structure can be further applied to different configurations of microstrip antennas.
"Computer Science"
] |
First observation of the decay $D^{0}\rightarrow K^{-}\pi^{+}\mu^{+}\mu^{-}$ in the $\rho^{0}$-$\omega$ region of the dimuon mass spectrum
A study of the decay $D^{0}\rightarrow K^{-}\pi^{+}\mu^{+}\mu^{-}$ is performed using data collected by the LHCb detector in proton-proton collisions at a centre-of-mass energy of 8 TeV, corresponding to an integrated luminosity of 2.0 fb$^{-1}$. Decay candidates with muon pairs that have an invariant mass in the range 675--875 MeV$/c^2$ are considered. This region is dominated by the $\rho^{0}$ and $\omega$ resonances. The branching fraction in this range is measured to be ${\cal B}$($D^{0}\rightarrow K^{-}\pi^{+}\mu^{+}\mu^{-}$) = $(4.17 \pm 0.12(stat) \pm 0.40(syst))\times10^{-6}$. This is the first observation of the decay $D^{0}\rightarrow K^{-}\pi^{+}\mu^{+}\mu^{-}$. Its branching fraction is consistent with the value expected in the Standard Model.
Introduction
Rare charm decays may proceed via the highly suppressed c → uμ + μ − flavour changing neutral current process. In the Standard Model such processes can only occur through loop diagrams, where in charm decays the GIM cancellation [1] is almost complete. As a consequence, the short-distance contribution to the inclusive D → Xμ + μ − branching fraction is predicted to be as low as O(10 −9 ) [2], making these decays interesting for searches for new physics beyond the Standard Model. However, taking into account long-distance contributions through tree diagrams involving resonances such as D → X V (→ μ + μ − ), where V represents a φ, ρ 0 or ω vector meson, the total branching fraction of these rare charm decays can reach O(10 −6 ) [2][3][4]. Their sensitivity to new physics therefore is greatest in regions of the dimuon mass spectrum away from these resonances, where the main contributions to the branching fraction may come from short-distance amplitudes. Angular asymmetries are sensitive to new physics both in the vicinity of these resonances and away from them [4][5][6][7][8] and could be as large as O(1%).
This Letter focuses on the measurement of the decay D0 → K−π+μ+μ− (the inclusion of charge-conjugate decays is implied). This will provide an important reference channel for measurements of the c → uμ+μ− processes D0 → π+π−μ+μ− and D0 → K+K−μ+μ−: precise branching fractions are easier to obtain if they are compared with a normalisation mode that has similar features. When restricted to the dimuon mass range 675 < m(μ+μ−) < 875 MeV/c2, where the ρ0 and ω resonances are expected to dominate, it can also be used to normalise the decays D0 → K−π+η(′)(→ μ+μ−). Measuring their branching fractions allows the coupling η(′) → μ+μ− to be determined. This contains crucial information for various low-energy phenomena and is an input to the prediction of the anomalous magnetic moment of the muon [9-11]. Focusing on this dimuon mass range also simplifies the analysis, which does not have to account for the variation of the selection efficiency as a function of m(μ+μ−). From previous measurements, the most stringent 90% confidence level upper limits on the decay D0 → K−π+μ+μ− are set by the E791 experiment [12]: B(D0 → K−π+μ+μ−) < 35.9 × 10−5 in the full K−π+ mass region and B(D0 → K−π+μ+μ−) < 2.4 × 10−5 in a restricted resonance region. The study presented here is based on data collected by the LHCb detector in proton-proton collisions at a centre-of-mass energy of 8 TeV, corresponding to an integrated luminosity of 2.0 fb−1. A subsample corresponding to an integrated luminosity of 1.6 fb−1 has been used to measure B(D0 → K−π+μ+μ−).
The remainder of the data set was used to optimise the selection.
The branching fraction B(D0 → K−π+μ+μ−) is measured relative to that of the normalisation decay D0 → K−π+π+π−, using the most accurate recent measurement of the latter.
Detector and simulation
The LHCb detector [14,15] is a single-arm forward spectrometer covering the pseudorapidity range 2 < η < 5, designed for the study of particles containing b or c quarks. The detector includes a high-precision tracking system consisting of a silicon-strip vertex detector surrounding the pp interaction region, a large-area silicon-strip detector located upstream of a dipole magnet with a bending power of about 4 Tm, and three stations of silicon-strip detectors and straw drift tubes placed downstream of the magnet.
The tracking system provides a measurement of momentum, p, of charged particles with a relative uncertainty that varies from 0.5% at low momentum to 1.0% at 200 GeV/c. The minimum distance of a track to a primary vertex, the impact parameter (IP), is measured with a resolution of (15 + 29/p T ) μm, where p T is the component of the momentum transverse to the beam, in GeV/c. Different types of charged hadrons are distinguished using information from two ring-imaging Cherenkov detectors. Photons, electrons and hadrons are identified by a calorimeter system consisting of scintillating-pad and preshower detectors, an electromagnetic calorimeter and a hadronic calorimeter. Muons are identified by a system composed of alternating layers of iron and multiwire proportional chambers [16].
The online event selection is performed by a trigger [17], which consists of a hardware stage, based on information from the calorimeter and muon systems, followed by a software stage, which applies a full event reconstruction. In the offline selection, requirements are made on whether the trigger decision was due to the signal candidate or to other particles produced in the pp collision. Throughout this Letter, these two non-exclusive categories of candidates are referred to as Trigger On Signal (TOS) and Trigger Independent of Signal (TIS) candidates.
Simulated samples of D0 → K−π+μ+μ− and D0 → K−π+π+π− decays have been produced. In the simulation, pp collisions are generated using Pythia [18] with a specific LHCb configuration [19]. Decays of hadronic particles are described by EvtGen [20], in which final-state radiation is generated using Photos [21]. The interaction of the generated particles with the detector, and its response, are implemented using the Geant4 toolkit [22] as described in Ref. [23]. No theoretical model or experimental measurement provides a reliable decay model for D0 → K−π+μ+μ−. This decay mode is therefore modelled as an incoherent sum of resonant and non-resonant contributions, such as K*0 → K−π+ and ρ0/ω → μ+μ−, motivated by the resonant structure observed in D0 → K−π+π+π− and D0 → K−π+π+π−π0 decays [24], and by the theoretical predictions of Ref. [4]. In the case of D0 → K−π+π+π−, a decay model reproducing the data was implemented using the MINT software package [25].
Event selection
The criteria used to select the D 0 → K − π + μ + μ − and D 0 → K − π + π + π − decays are as similar as possible to allow many systematic uncertainties to cancel in the efficiency ratio. At trigger level, only events that are TIS with respect to the hadron hardware trigger, which has a transverse energy threshold of 3.7 GeV, are kept. In the offline selection, the only differences between the signal and normalisation channels are the muon identification criteria.
The first-level software trigger selects events that contain at least one good-quality track with high pT and χ2IP, where the latter is defined as the difference in χ2 of the closest primary pp interaction vertex (PV) reconstructed with and without the particle under consideration. The offline selection requires that at least one of these tracks originates from either the D0 → K−π+μ+μ− or the D0 → K−π+π+π− decay candidate. The second-level software trigger uses two dedicated selections to reconstruct candidates for the two decay modes. These combine good-quality tracks that satisfy pT > 350 MeV/c and p > 3000 MeV/c. A muon pair is required to form a good-quality secondary vertex that is significantly displaced from the PV. In events where such a pair is found, two charged hadrons are subsequently added. The resulting four-particle candidate must have a good-quality vertex and its invariant mass must be consistent with the known D0 mass [24]. The momentum vector of this D0 candidate must be consistent with having originated from the PV. A preselection follows the trigger selections. Four charged particles are combined to form D0 candidates. Tracks that do not correspond to actual trajectories of charged particles are suppressed using a neural network. To reject the combinatorial background involving tracks from the PV, only high-p and high-pT tracks that are significantly displaced from any PV are used. This background is further reduced by requiring that the four decay products of the D0 meson form a good-quality vertex that is significantly displaced from the PV and that pT(D0) > 3000 MeV/c. These three criteria also reject candidates formed from partially reconstructed charm hadron decays, combined with either random tracks from the PV or tracks from the decay of another charmed hadron in the same event. This type of background is further reduced by requiring the D0 momentum vector to be within 14 mrad of the vector that joins the PV with the D0 decay vertex, ensuring that the D0 candidate originates from the PV. Finally, the invariant mass of the D0 candidate, which is reconstructed with a resolution of about 7 MeV/c2, is required to lie within 65 MeV/c2 of the known D0 mass. In the case of D0 → K−π+μ+μ−, m(μ+μ−) is restricted to the range 675-875 MeV/c2. The two backgrounds described above are referred to as the non-peaking background throughout this Letter.
After the preselection, a multivariate selection based on a boosted decision tree (BDT) [26,27] is used to further suppress the non-peaking background. The GradBoost algorithm is used [28].
The BDT uses the following variables: the p T and χ 2 IP of the final state particles; the p T and χ 2 IP of the D 0 candidate as well as the χ 2 per degree of freedom of its vertex fit; the significance of the distance between this vertex and the PV; the largest distance of closest approach between the tracks that form the D 0 candidate; the angle between the D 0 momentum vector and the vector that joins the PV with its decay vertex. The cut on the BDT response used in the selection discards more than 80% of the non-peaking candidates and retains more than 80% of the signal candidates that have passed the preselection.
Finally, the information from the RICH detectors, the calorimeters and the muon systems is combined to assign probabilities for each decay product to be a pion, a kaon or a muon, as described in Ref. [15]. A loose requirement on the kaon identification probability rejects about 90% of the backgrounds that consist of π+π−μ+μ− or π+π−π+π− combinations while preserving 98% of the signal candidates. In the case of D0 → K−π+μ+μ− decays, the muon identification criteria have an efficiency of 90% per signal muon and reduce the rate of misidentified pions by a factor of about 150. In the absence of muon identification, D0 → K−π+π+π− decays with two misidentified pions would outnumber signal decays by four orders of magnitude. After these particle identification requirements, this background is reduced to around 50% of the signal yield and is dominated by decays involving two pion decays in flight (π+ → μ+νμ). It is referred to as the peaking background throughout this Letter.
In addition to D0 → K−π+π+π− decays with two misidentified pions, backgrounds due to the decays of D+, D+s, D*+, τ leptons, and the Λ+c and other charm baryons are considered. These are studied using simulated events and found to be negligible.
The selection is optimised using data and simulated samples.
The BDT is trained using simulated D0 → K−π+μ+μ− events to model the signal. The sample used to represent the background consists of candidates with m(K−π+μ+μ−) above the signal peak, drawn from 2% of the total data sample. Candidates on the low-mass side of the signal peak are not used due to the presence there of peaking background decays, whose features are very close to those of signal decays. Optimal selection criteria on the BDT response and muon identification are found using another independent data sample corresponding to 20% of the total dataset. The fit described in Sect. 4 is used to estimate the signal yield (S) and the yields of peaking background (B_pk) and non-peaking background (B_npk) present in this sample in the region of the signal peak. The requirements on the muon identification and BDT response are chosen to maximise S/sqrt(S + B_pk + B_npk).
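The figure of merit above can be scanned over candidate cut values. The sketch below illustrates this; 'yields_at' is a hypothetical stand-in for the mass fit that returns the three yields at a given working point.

```python
import numpy as np

def significance(s, b_pk, b_npk):
    # Figure of merit maximised by the selection: S / sqrt(S + B_pk + B_npk)
    return s / np.sqrt(s + b_pk + b_npk)

def best_cut(cuts, yields_at):
    # 'yields_at(c)' is a placeholder returning (S, B_pk, B_npk) for cut c.
    return max(cuts, key=lambda c: significance(*yields_at(c)))
```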
The two samples described above consist of events chosen randomly from the 2012 data and are not used for the subsequent analysis. The remainder of the dataset (78%), which corresponds to an integrated luminosity of 1.6 fb−1, is used to measure B(D0 → K−π+μ+μ−). The D0 → K−π+μ+μ− sample obtained with this selection consists of 5411 candidates. For the normalisation channel, the much larger yield allows a small sample (3 pb−1), drawn randomly from the total dataset, to be used. The final D0 → K−π+π+π− sample consists of 121 922 candidates.
Determination of the yields
In each sample, the probability density function (PDF) fitted to the signal peak is a Gaussian function with power-law tails, where m_D0 and σ are the mean and width of the peak, and α_L, n_L, α_R and n_R parameterise the left and right tails. This function was found to describe accurately the m(K−π+μ+μ−) and m(K−π+π+π−) distributions obtained with the simulation, which exhibit non-Gaussian tails on both sides of the peaks. The tail on the left-hand side is dominated by final-state radiation and interactions with matter, while the right-hand tail is due to non-Gaussian effects in the reconstruction.
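The explicit formula is not reproduced in this extract; the parameters listed above match the standard generalised two-sided Crystal Ball shape, so the sketch below implements that form as an assumption rather than the paper's exact definition.

```python
import numpy as np

def double_tail_gauss(m, mu, sigma, aL, nL, aR, nR):
    # Unnormalised Gaussian core with independent power-law tails on each side,
    # continuous at t = -aL and t = +aR.
    t = (m - mu) / sigma
    core = np.exp(-0.5 * t * t)
    # Tail bases clipped to stay positive where the branch is not selected.
    bL = np.maximum(nL / aL - aL - t, 1e-12)
    bR = np.maximum(nR / aR - aR + t, 1e-12)
    left = np.exp(-0.5 * aL * aL) * (aL * bL / nL) ** (-nL)
    right = np.exp(-0.5 * aR * aR) * (aR * bR / nR) ** (-nR)
    return np.where(t < -aL, left, np.where(t > aR, right, core))
```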
The non-peaking background in the D 0 → K − π + π + π − sample is described by a first-order polynomial. In the case of D 0 → K − π + μ + μ − , a second-order polynomial is used.
Three peaking backgrounds due to misidentified D0 → K−π+π+π− decays are considered, categorised by the presence of candidates involving misidentified pions that did not decay in flight before reaching the most downstream tracking stations, or candidates where one or two pions decayed upstream of these stations. Candidates from the first category are described by a one-dimensional kernel density estimate [29]. This PDF is derived from the m(K−π+μ+μ−) distribution obtained using simulated D0 → K−π+π+π− decays reconstructed under the D0 → K−π+μ+μ− hypothesis. Candidates from the remaining two categories appear as tails on the lower-mass side of the m(K−π+μ+μ−) distribution and must be accounted for to avoid biases in the non-peaking background and in the signal yield measured by the fit. Due to the small number of such candidates in the simulated sample, simulated D0 → K−π+π+π− candidates where no pion decays in flight are altered to reproduce the effect of such decays, and the corresponding m(K−π+μ+μ−) distributions are determined. This is achieved by modifying the momentum vectors of either one or two of the pions in the D0 → K−π+π+π− final state according to the kinematics of π+ → μ+νμ decays. The m(K−π+μ+μ−) distributions obtained after this modification are converted into one-dimensional kernel density estimates. The fit model involves five yields: the signal yield, N_sig; the yield of normalisation decays, N(D0 → K−π+π+π−); the peaking and non-peaking background yields in the signal sample, N_pk and N_npk; and the yield of the non-peaking background in the normalisation sample. They are all free parameters in the fit. The model also involves 15 parameters to define the shapes of the PDFs. The parameters describing the widths and upper-mass tails are free in the fit and are common between the PDFs of the D0 → K−π+μ+μ− and D0 → K−π+π+π− peaks. The lower-mass tail parameters are determined separately: those used for D0 → K−π+π+π− candidates are allowed to vary in the fit, while for D0 → K−π+μ+μ− candidates this is not possible because of the overlap between the signal and the D0 → K−π+π+π− peaking background, so these parameters are fixed to the values obtained from the simulated sample. In total, there are 15 free parameters in the fit.
The relative yields of the three peaking background categories described above are fixed to values obtained by a fit to a large control sample. It consists of D 0 → K − π + μ + μ − candidates that are in the TOS category with respect to the muon hardware trigger, in contrast to the signal and normalisation samples that are in the TIS category with respect to the hadron trigger. All of the other selection requirements are the same as those described in Sect. 3. This TOS signal control sample consists of 28 835 candidates and contains approximately six times more D 0 → K − π + μ + μ − decays than the nominal TIS sample.
The fit results are summarised in Table 1 and the observed mass distributions are shown in Fig. 1, with fit projections overlaid. The main difficulties in this procedure are the similar shapes of the signal, peaking background and non-peaking background components, and the overlap between their distributions in m(K − π + μ + μ − ). However, the impact on the measurement presented in this Letter is limited, as can also be seen in Table 1.
Branching fraction measurement
The branching fraction of the decay D 0 → K − π + μ + μ − is obtained by combining the quantities presented in Table 2 with the branching fraction of the D 0 → K − π + π + π − decay according to

$$\mathcal{B}(D^0\to K^-\pi^+\mu^+\mu^-) = \mathcal{B}(D^0\to K^-\pi^+\pi^+\pi^-)\;\frac{N_{\mathrm{sig}}}{N_{D^0\to K^-\pi^+\pi^+\pi^-}}\;\frac{\varepsilon_{D^0\to K^-\pi^+\pi^+\pi^-}}{\varepsilon_{D^0\to K^-\pi^+\mu^+\mu^-}}. \qquad (1)$$

Table 1: Summary of the results of the fit described in Sect. 4. The yields measured in the D 0 → K − π + μ + μ − sample and the correlations between them, the yields measured in the normalisation sample, the common width fitted to the D 0 → K − π + μ + μ − and D 0 → K − π + π + π − peaks, and the relative uncertainty on B(D 0 → K − π + μ + μ − ) are presented. Uncertainties on the fitted parameters are statistical. The variation of the uncertainty on B(D 0 → K − π + μ + μ − ) when the background yields are fixed indicates to what extent it is enhanced by the need to separate overlapping contributions with similar shapes.
Fig. 1: The data are shown as points (black) and the total PDF (blue solid line) is overlaid. In (a), the two corresponding components of the fit model are the D 0 → K − π + π + π − decays (red solid line) and the non-peaking background (violet dashed line). In (b), the components are the D 0 → K − π + μ + μ − signal (green long-dashed line), the peaking background due to misidentified D 0 → K − π + π + π − decays (red solid line), and the non-peaking background (violet dashed line).

Table 2: Measured efficiencies and yields for the decay D 0 → K − π + μ + μ − in the dimuon mass range 675–875 MeV/c 2 , and for the decay D 0 → K − π + π + π − . The uncertainties are statistical; in the case of the efficiencies, they stem from the finite size of the simulated samples.
Table 3 (fit-related entries) — Systematic uncertainties on B(D 0 → K − π + μ + μ − ) [%]:
  Signal shape parameters       0.8
  Peaking background tails      1.5
  Signal PDF                    0.6
  Non-peaking background shape  2.1
  Quadratic sum (all sources)   9.6
Systematic uncertainties
The systematic uncertainties on B(D 0 → K − π + μ + μ − ) are summarised in Table 3. Those related to reconstruction and selection efficiencies are minimised thanks to the efficiency ratio in Eq. (1) and to the similarities between D 0 → K − π + μ + μ − and D 0 → K − π + π + π − decays. This is illustrated in Fig. 2, which shows the distributions of the BDT response for the D 0 → K − π + μ + μ − and D 0 → K − π + π + π − decays, both in data and simulated samples. In data, the background contributions are removed using the sPlot technique [30]. Also shown in this figure are the ratios between the D 0 → K − π + μ + μ − and D 0 → K − π + π + π − distributions. The BDT response, which combines all the offline selection variables (with the exception of muon identification criteria), is very similar for both kinds of decay and the differences are well described by the simulation. In cases where selection criteria depend on the nature of the decay products, data-driven methods are used, as described below. The uncertainty on the charged hadron reconstruction inefficiency is dominated by the uncertainty on the probability to undergo a nuclear interaction in the detector. This inefficiency is evaluated using simulated events. The corresponding uncertainty is derived from the 10% uncertainty on the modelling of the detector material [31].
The selection efficiencies based on the kinematical and geometrical requirements are derived from simulation. A systematic uncertainty to take into account imperfect track-reconstruction modelling is estimated by smearing track properties to reproduce those observed in data. Similarly, a systematic uncertainty on the efficiency of the BDT selection is assigned as the difference between the efficiency obtained in data and simulation.

Fig. 2: Distributions of the BDT response of D 0 → K − π + μ + μ − (circles) and D 0 → K − π + π + π − decays (triangles) in data (full markers) and simulation (open markers). In data, the background contributions are removed using the sPlot technique. The lower plot shows the ratio between the D 0 → K − π + μ + μ − and D 0 → K − π + π + π − distributions in data (full squares) and simulation (open squares).
The uncertainties in the decay models are estimated separately for the signal and normalisation channels. For the signal, this is carried out by reweighting simulated D 0 → K − π + μ + μ − decays to reproduce the distributions of m(K − π + ) and m(μ + μ − ) observed in data, with the difference in efficiency relative to the default being assigned as the systematic uncertainty. For D 0 → K − π + π + π − , the sensitivity to the decay model is studied by comparing the default efficiency with that obtained in an extreme case in which the decay model provided by the MINT package is replaced by an incoherent sum of the resonances involved in the decay, as given in Ref. [24].
To avoid dependence on the modelling of the hardware trigger in simulation, its efficiency is determined in data. The efficiency to be TIS with respect to the hadron hardware trigger is determined as the fraction of D 0 → K − π + μ + μ − decays that fulfil this requirement among D 0 → K − π + μ + μ − candidates that are TOS with respect to the muon hardware trigger. It is measured in 12 different regions defined in the (pT(D 0 ), Nt) plane, where Nt is the track multiplicity of the event. The overall hardware trigger efficiency for D 0 → K − π + μ + μ − decays is the average of these 12 efficiencies weighted according to the distributions of D 0 → K − π + μ + μ − candidates observed in data. The efficiency of the normalisation mode is obtained by weighting the same 12 efficiencies according to the distributions of D 0 → K − π + π + π − candidates. This procedure assumes that the probability for D 0 → K − π + μ + μ − decays to fulfil the TIS requirement is not enhanced by the requirement to also be in the TOS category, and that this TIS efficiency is the same in every region for D 0 → K − π + μ + μ − and D 0 → K − π + π + π − decays. No difference is found in simulation between the ε D 0 →K − π + π + π − /ε D 0 →K − π + μ + μ − ratio obtained with this method and the ratio of true efficiencies, obtained by directly counting the number of simulated D 0 → K − π + μ + μ − and D 0 → K − π + π + π − decays that fulfil the hadron trigger TIS requirement. To determine the systematic uncertainty associated with the hardware trigger efficiency, the uncertainty on this comparison is combined with the statistical uncertainties on the 12 measurements performed in data in (pT(D 0 ), Nt) regions.
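A minimal sketch of this weighting procedure is given below; the per-bin values are placeholders, not measured numbers, and serve only to show how the candidate-weighted averages and their ratio are formed.

```python
import numpy as np

def weighted_efficiency(eff_bins, candidate_counts):
    """Average per-bin TIS efficiencies, weighted by the number of
    candidates of the given decay mode found in each (pT(D0), N_t) bin."""
    w = np.asarray(candidate_counts, dtype=float)
    return float(np.sum(np.asarray(eff_bins) * w) / np.sum(w))

rng = np.random.default_rng(0)
eff = rng.uniform(0.2, 0.5, size=12)          # hypothetical per-bin TIS efficiencies
w_sig = rng.integers(10, 100, size=12)        # K-pi mu mu candidates per bin
w_norm = rng.integers(10, 100, size=12)       # K-pi pi pi candidates per bin
ratio = weighted_efficiency(eff, w_norm) / weighted_efficiency(eff, w_sig)
```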
A similar approach is employed in the case of the first level of the software trigger. A sample of D 0 → K − π + π + π − candidates is selected from data that satisfied the trigger requirements independently of these candidates. The fraction of D 0 → K − π + π + π − decays where at least one of the decay products also satisfies the requirements of this trigger is measured using this sample. This efficiency is measured in regions of p T (D 0 ) and weighted according to distributions of this variable in simulated D 0 → K − π + μ + μ − and D 0 → K − π + π + π − events. The variation in the efficiency ratio when these distributions are corrected to match the data is used to evaluate the corresponding systematic uncertainty.
The efficiency of the second-level software trigger for the signal decay is calculated relative to that of the normalisation decay. This ratio is measured using D 0 → K − π + μ + μ − decays in data and simulation and consistent results are obtained. The uncertainty on this comparison is therefore assigned as the systematic uncertainty on this trigger efficiency.
The efficiency of the muon identification criteria is determined in data using a large and pure sample of B → J /ψ(→ μ + μ − )X decays. Efficiencies measured in several regions of p T (μ), η (μ) and N t are weighted according to the distribution observed for the muon candidates from D 0 → K − π + μ + μ − decays. Several definitions of these domains are considered, with varying binnings. The different efficiencies obtained this way, as well as the efficiencies obtained in simulated samples, are compared to evaluate the corresponding systematic uncertainty. The same approach is used to evaluate the efficiency of the kaon identification requirement. In this case, the calibration kaons are provided by D * + → D 0 (→ K − π + )π + decays in data.
In the fit outlined in Sect. 4, the parameters of the function that describes the lower-mass tail of the D 0 → K − π + μ + μ − peak are fixed to values obtained from simulation. The corresponding systematic uncertainty is determined by repeating the fit using the values obtained by a fit to the signal TOS control sample; a similar difference is observed when the corresponding test is performed in simulation. The systematic uncertainty related to the description of the peaking background is determined by the change observed in B(D 0 → K − π + μ + μ − ) when the components due to the decay of one or two pions in flight are neglected, and when their yields relative to the rest of the peaking background are enhanced by twice their uncertainty.
Two other systematic uncertainties have been evaluated. To estimate the impact of the signal PDF employed, the fit is repeated using the Cruijff function [32] instead. Potential effects arising from non-peaking backgrounds are assessed by repeating the fits with the non-peaking backgrounds assumed to be linear in m(K − π + μ + μ − ). The values of the systematic uncertainties associated with the choice of fit model and its parameters were also further validated using pseudoexperiments.
The impact on the fit of the similarities between the shapes of the signal and background components was further controlled in two ways. First, fixing the background yields decreases the relative uncertainty on B(D 0 → K − π + μ + μ − ) from 2.9% to 2.4%. This variation is far lower than the total systematic uncertainty due to the yield determination (2.8%). Second, a further study is performed based on pseudoexperiments, generated with realistic values of the yields and PDF shape parameters. The fit proved able to return unbiased measurements of the generated value of B(D 0 → K − π + μ + μ − ) and an accurate estimate of the statistical uncertainty, consistent with the uncertainty obtained in data.
As can be seen in Table 3, the systematic uncertainties are dominated by the uncertainty on the D 0 → K − π + μ + μ − to D 0 → K − π + π + π − efficiency ratio, which is larger than the 2.9% statistical uncertainty on B(D 0 → K − π + μ + μ − ). As expected, this systematic uncertainty is primarily due to the different final state particles of the two decays. The trigger efficiencies, and the muon identification and track reconstruction efficiencies, are responsible for about 90% of this uncertainty. The uncertainties due to the yield determination and the knowledge of B(D 0 → K − π + π + π − ) represent secondary contributions.
This branching fraction can be compared to the Standard Model value calculated in Ref. [4], B(D 0 → K − π + μ + μ − ) = 6.7 × 10 −6 , in the full dimuon mass range. This is the first observation of this decay. The branching fraction is measured with an overall precision of 10% and is one order of magnitude lower than the previous most stringent upper limit. Precise measurements of the D 0 → π + π − μ + μ − and D 0 → K + K − μ + μ − decays are now possible in all regions of the dimuon invariant mass since they can be compared with a normalisation mode that has similar features and a precisely known branching fraction. This will allow more stringent constraints on new physics to be obtained using data already collected by the LHCb detector, and the sensitivity of future experiments to angular asymmetries to be assessed.
The distributions of the K − π + and μ + μ − invariant masses in D 0 → K − π + μ + μ − decays are shown in Fig. 3, where the background contribution is removed using the sPlot technique [30], taking the m(K − π + μ + μ − ) invariant mass as the discriminating variable. An amplitude analysis would be required for a full understanding of the decay dynamics. The distributions in Fig. 3 suggest the presence of additional contributions, including the ω resonance, beyond the K * 0 ρ 0 intermediate state that, according to Ref. [4], should strongly dominate the decay amplitude. | 7,959 | 2015-10-28T00:00:00.000 | [
"Physics"
] |
Explore Awareness of Information Security: Insights from Cognitive Neuromechanism
With the rapid development of the internet and information technology, increasingly diversified portable mobile terminals, online shopping, and social media have facilitated information exchange, social communication, and financial payment more than ever before. In the meantime, information security and privacy protection face severe new challenges. Although a variety of information security measures have been taken in both management and technology, their actual effectiveness depends first on people's awareness of information security and their cognition of potential risks. In order to explore new technology for the objective assessment of people's awareness and cognition of information security, this paper takes online financial payment as an example and conducts an experimental study based on the analysis of electrophysiological signals. Results indicate that the left hemisphere and the beta rhythms of the electroencephalogram (EEG) signal are sensitive to the degree of risk cognition in information security awareness, and may therefore be considered an indicator for assessing people's cognition of potential risks in online financial payment.
Introduction
Today's society is an information society. More and more people use information technologies in daily life and work, facilitated by increasingly diversified portable mobile terminals, online shopping, and social media for information exchange, social communication, and e-business. However, while people enjoy the convenience of information technology, they also face severe new information security challenges, such as internet intrusion, sensitive information leaks, and online payment fraud.
It is well known that information security is a complicated and systemic problem associated with technology, management, economics, and behavioral culture. Up to now, a substantial body of research has addressed this issue. Cavusoglu et al. studied risks related to information security; they pointed out that risks may have dire consequences, including corporate liability, monetary damage, and loss of credibility [1]. Ensuring information security has become one of the top managerial priorities in many organizations [2][3][4]. Kuner et al. took the PRISM project as an example showing that both offline and online activities have extensive privacy implications; they argued that both privacy and security should be protected, along with individuals' confidence in the rule of law [5]. Numerous studies have shown that the biggest hidden danger to enterprise information security is internal staff rather than software vulnerabilities, and that employees are often the weakest link in information security [6,7].
In fact, many information security incidents are not caused by technology alone; they often happen because of management oversights or people's weak awareness of information security. For example, weak passwords, neglected operating system patches, and the free use of unsafe mobile devices all reflect a lack of recognition of the potential risks to information security. Since the awareness of information security depends on the brain's cognition of potential risk, it is very important to study brain cognition. Many scholars have made notable achievements in cognitive research based on cognitive neuromechanisms. Qin and Han assessed the neurocognitive processes involved in environmental risk identification by using event-related potentials (ERP) and functional magnetic resonance imaging (fMRI); their findings show that early detection in the ventral anterior cingulate cortex and late retrieval of emotional experiences in the posterior cingulate cortex can help identify dreadful environmental risks [8]. Wang et al. designed and evaluated the vocal emotion of humanoid robots based on brain mechanisms; they found that audio stimulation is related to specific brain regions [9]. Dai studied the mechanism of public cognitive emotions when emergencies occur; he pointed out that the public's psychology and cognitive ability need to be considered so that messages are readily accepted when a city emergency breaks out [10]. In addition, some scholars have researched brain cognition in investment behavior, the framing effect, and microblog information spreading [11][12][13].
In our study, in order to explore new technology for the objective assessment of people's awareness and cognition of information security, this paper takes online financial payment as an example and conducts an experimental study based on the analysis of electrophysiological signals.
This paper is organized as follows. In Section 2, the theory and method of the cognitive model and EEG are presented. The experiment is introduced in Section 3. Analysis and results are shown in Section 4. Finally, we provide a summary and discussion of our work in Section 5.
Theory and Methodology
Awareness is the human mind's reflection of the objective material world; it comprises feeling, thinking, and other psychological processes. In other words, awareness is the human brain's response to a stimulus. In order to study information security awareness, cognitive psychology and EEG were used as the research theory and methods.
Cognitive Mechanism of Information Security
Cognition refers to all processes by which sensory input is transformed, reduced, elaborated, stored, recovered, and used [14]. Cognitive psychology usually takes the human cognitive process as its major subject. It studies cognitive activities from the viewpoint of information processing, including how humans learn, perceive, imagine, memorize, and think about problems; cognitive psychology is therefore also called information-processing psychology. Gagne is a famous scholar of information processing theory, well known for his outstanding contribution to the information processing model of learning. In Gagne's theory, the learning process is divided into eight stages, and each stage requires different information processing. Firstly, environmental stimuli affect learners; these stimuli are encoded and stored as images in the sensory register. These memory images can be retained for only hundredths of a second. Information then enters short-term memory and is encoded again; it can be maintained there for 2.5–3 seconds. However, short-term memory is limited to about seven "chunks" of information for most people. Once this number is exceeded, new information replaces the original information. To retain the original information, it can be rehearsed continuously; in this way, information in short-term memory can be kept longer, but not for more than one minute. Finally, the information enters long-term memory and is encoded once more. Most researchers believe that long-term memory can store information for a long time. Once the information is needed, it can be retrieved from long-term memory; from there it can directly enter the response generator, or it can go back to short-term memory. Meanwhile, expectation and executive control also affect this learning model [15]. After Gagne proposed the information processing model, the Model Human Processor (MHP) was presented and used in cognitive modeling. Because MHP can calculate the processing time required to perform a certain task, it is especially suitable for our study. The processing of MHP is shown in Figure 1 [16]. It can be seen that MHP includes three subsystems, and each subsystem has its own processors and memories.
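As an illustration of why MHP is convenient here, the model's nominal processor cycle times (the typical values quoted by Card, Moran and Newell: perceptual ≈ 100 ms, cognitive ≈ 70 ms, motor ≈ 70 ms) allow a rough task-duration estimate. The sketch below is a simplification; the cycle counts per task are assumptions for illustration only.

```python
# Nominal MHP cycle times in ms (typical values from Card, Moran & Newell).
TAU_PERCEPTUAL, TAU_COGNITIVE, TAU_MOTOR = 100, 70, 70

def mhp_task_time(n_perceptual, n_cognitive, n_motor):
    """Estimate task duration as the sum of subsystem cycles used."""
    return (n_perceptual * TAU_PERCEPTUAL
            + n_cognitive * TAU_COGNITIVE
            + n_motor * TAU_MOTOR)

# A simple stimulus-response task: perceive the stimulus, decide, press a key.
print(mhp_task_time(1, 1, 1))  # -> 240 ms
```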
Cognitive Framework for Information Security Awareness.
From the previous section, we know that information cognition can be viewed as a process of information processing. Previous research shows that visual stimuli can produce perceptual awareness [17][18][19]. Therefore, visual stimulation related to information security was used in our study. The cognitive framework for information security awareness is shown in Figure 2.
From Figure 2, it can be seen that the brain's cognitive mechanism is closely related to selective attention. For example, when a person perceives a stimulus from the field of information security, such as someone surfing the internet over public WiFi or a computer without a firewall installed, the brain starts to extract object features of the scene, and the selective attention mechanism begins running, which includes feeling, imagination, perception, and memory. Meanwhile, awareness arises as the brain's cognitive mechanism starts running.
EEG Signals Analysis
2.2.1. EEG Waves. The living human brain continuously produces electrical discharges, which are recorded as the electroencephalogram (EEG) [20]. These changes in electrical activity are a real-time manifestation of brain activity. Generally, the amplitude of the fluctuations reflects brain excitability, while latency reflects mental activity, processing speed, and time evaluation. The frequency range of human brain waves is 0.1–100 Hz, and the frequency and amplitude of the four basic brain waves are shown in Table 1 [21]. EEG is closely related to human consciousness, and the amplitude of an EEG rhythm increases or decreases as brain activity changes. Previous research has suggested that α rhythms appear in a relaxed state, β rhythms in an excited state, θ rhythms in a drowsy state, and δ rhythms usually in deep sleep [21].
EEG Signal Process
EEG signal processing mainly includes data cleaning, signal denoising, feature extraction, and classification. Denoising and feature extraction algorithms include power spectral density estimation, the wavelet transform (WT), common spatial patterns, multidimensional statistical analysis, and model descriptors. Classification methods include Fisher's linear discriminant, Bayesian methods, back-propagation neural networks [22], and support vector machines. In our study, WT was used.
WT is a multifunctional, multiscale analysis and filtering tool based on combined time-frequency analysis. It has the characteristic of multiresolution and can resolve different levels of detail by choosing different basic wavelets. The family of analysing functions is

$$\psi_{a,b}(t) = |a|^{-1/2}\,\psi\!\left(\frac{t-b}{a}\right),$$

where $\psi(t) \in L^2(\mathbb{R})$, $a, b \in \mathbb{R}$, $a \neq 0$; $\psi(t)$ is called the basic wavelet, $a$ is the expansion (scale) factor and $b$ is the translation factor.
For the discrete case, the DWT basis can be defined as

$$\psi_{j,k}(t) = 2^{j/2}\,\psi(2^{j}t - k),$$

where $\psi(t) \in L^2(\mathbb{R})$ and $j, k \in \mathbb{Z}$. In order to obtain high-quality EEG signals for analysis, we adopt the discrete wavelet transform and the Mallat algorithm to denoise the initial EEG signals. The Mallat decomposition algorithm is

$$A_{j+1} = H\,A_j, \qquad D_{j+1} = G\,A_j,$$

where $A_0$ is the initial signal, $A_j$ is the approximation signal after decomposition (low-frequency components), $D_j$ is the detail (error) signal after decomposition (high-frequency components), and $H$ and $G$ denote low-pass and high-pass filtering followed by downsampling by two.
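A minimal sketch of one Mallat analysis step using the PyWavelets library is shown below; the segment x is synthetic stand-in data rather than a recorded EEG trace.

```python
import numpy as np
import pywt

x = np.random.randn(256)        # stand-in for one EEG segment
# One analysis step: low-pass (H) and high-pass (G) filtering,
# each followed by downsampling by two.
cA1, cD1 = pywt.dwt(x, 'db5')   # approximation / detail coefficients
# Iterating on the approximation yields the full multi-level decomposition:
cA2, cD2 = pywt.dwt(cA1, 'db5')
```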
Experiment
The formation process of EEG in our trial is shown in Figure 3.
From Figure 3, we can see the experimenter watching a specific scene while the EEG device collects EEG signals from the experimenter. Once signal collection finishes, signal processing begins, and the EEG is finally displayed. The EEG signal acquisition settings are as follows: (i) sampling frequency: 128 Hz; (ii) amplitude-frequency characteristic: 0.53–60 Hz; (iii) electrode placement: electrodes were placed according to the international 10–20 system [23], which is shown in Figure 4; (v) a single-stage lead was used.
Experimental Overview.
Our research involved human subjects; we recruited 12 healthy adults to participate in our trial. Among them, four had received information security awareness training and eight had not. All held a bachelor's degree or above and had no history of mental illness. They were right-handed, with an average age of 27.1 years (variance 5.69). The testing procedure was explained to them before the experiment, and consent agreements were signed.
Experimental Design.
In order to research human awareness of information security, nine experimental scenes were designed for our trial. Testers make a choice when they take note of information-security-related pictures or hear fraudulent words. A tester may encounter fraudulent information in instant messaging, access a phishing website, receive a fraudulent text message on a mobile phone, or receive a fraudulent message while using online payment, and so forth. All of the above scenarios can be used as experimental scenes; sample pictures from the trial are shown in Figure 5.
The website shown above raises two suspicions. The left graph uses the link http://www.shbillow.cn/index.mobile.cc.htm, to which the suffix "mobile.cc" was added; it may be a phishing site. The right graph attracts customers with a low price, which is far below the normal price. A tester's information security awareness may be aroused when he/she notices these scenes.
Our experimental procedures are as follows: (i) The tester wears the electrode cap and positions the electrodes properly. (ii) The tester connects to the computer, opens the EEG signal processing software, and checks whether the software works correctly. If there is no problem, the experiment begins.
(iii) The tester closes his/her eyes, sits and rests, and calms down; when the brain waves are smooth, recording of the brain wave signal begins.
(iv) A picture is shown on the screen. The tester watches the picture and listens to the sound at a distance of 1 meter, responding to the prompt. After the current scene has been tested, another stimulus appears after a random interval of between 1000 ms and 2000 ms. During the interval, the screen background color is black, and the middle of the screen shows a white "+" symbol.
Experimental Records.
In our experiment, the records include the tester number, event number, duration, the eight electrode values, and the baseline electrode value. A sample of the experimental records is shown in Table 2.
EEG Signal Process.
Since the initial EEG signals include a lot of noise, they need to be processed. The processing usually includes denoising and characteristics analysis [25]. In order to remove noise from the collected EEG signals, we adopted two steps. Firstly, the baseline electrode voltage was replaced by the average electrode voltage, and every electrode voltage was recalculated accordingly; this removes some of the noise. The contrast between the initial EEG signal and the denoised EEG signal is shown in Figure 6. Secondly, the wavelet transform method was applied to these EEG signals. Because the EEG components below 30 Hz are the ones worth studying, we use wavelet filtering to remove components above 30 Hz. We select db5 as the wavelet and decompose the EEG signals into four layers. The best wavelet decomposition tree obtained in this process is shown in Figure 7.
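A minimal sketch of this wavelet filtering step with PyWavelets is shown below. At a 128 Hz sampling rate, the level-1 detail band of a 4-level decomposition covers roughly 32–64 Hz, so zeroing it approximates the removal of components above 30 Hz; the exact thresholding used in the study is not specified, so this is an assumption.

```python
import numpy as np
import pywt

def wavelet_lowpass(signal, wavelet="db5", level=4):
    """4-level db5 decomposition; discard the highest-frequency detail
    band (approx. 32-64 Hz at fs = 128 Hz) and reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    coeffs[-1] = np.zeros_like(coeffs[-1])   # cD1: ~32-64 Hz
    return pywt.waverec(coeffs, wavelet)
```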
Characteristic Analysis.
In order to analyze the correlation between EEG signals and security awareness, four types of rhythm signals were extracted from the wavelet transform, as shown in Figure 8.
For the selection of characteristic parameters, the rhythm energy and energy ratio of the four rhythm types were calculated, and both were used for characteristic analysis. A sample of the rhythm energy and energy ratio for two test tasks (online payment and online chat) is shown in Table 3.
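A sketch of how the rhythm energy and energy ratio can be computed from the wavelet sub-bands is given below. Note that at fs = 128 Hz the dyadic sub-bands only approximate the classical band edges (e.g. cD3 spans 8–16 Hz rather than the 8–13 Hz alpha band), so the band assignment is an approximation.

```python
import numpy as np
import pywt

def rhythm_energy_ratios(signal, wavelet="db5"):
    """Band energies from a 4-level DWT of a 128 Hz EEG segment.
    Approximate sub-band map: cA4 0-4 Hz (delta), cD4 4-8 Hz (theta),
    cD3 8-16 Hz (~alpha), cD2 16-32 Hz (~beta)."""
    cA4, cD4, cD3, cD2, _cD1 = pywt.wavedec(signal, wavelet, level=4)
    energy = {"delta": np.sum(cA4**2), "theta": np.sum(cD4**2),
              "alpha": np.sum(cD3**2), "beta": np.sum(cD2**2)}
    total = sum(energy.values())
    return energy, {band: e / total for band, e in energy.items()}
```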
It can be seen from Table 3 that the alpha rhythm energy and energy ratio are relatively low in both test tasks, which is consistent with previous studies. Previous biomedical research shows that the alpha rhythm becomes inhibited or disappears when people perceive external stimuli [26]. Our experiment confirmed that the beta rhythm is consistent with the scalp distribution characteristics. It also suggests that beta rhythms readily appear when the brain is thinking or excited. Since information security awareness relates to sustained, focused attention and alertness to stimuli over a prolonged period of time, during which the beta rhythm is more active, the beta rhythm can be used to study differences in brain cognition.
In addition, for a comparative analysis, we compared the beta-rhythm energy ratios of the two test tasks; the results are shown in Figure 9. From Figure 9, we can clearly see that the beta-rhythm energy ratio of the left hemisphere (FP1, T3, C3, O1) is higher than that of the right hemisphere, which shows that the left hemisphere is more involved in reading-related tasks.
From Figure 9, we also found that the beta-rhythm energy ratio of test task 1 (online payment) is higher than that of test task 2 (online chat). A reasonable explanation is that testers need more attention and feel more tension during online payment than during online chat. That is to say, visual stimuli are more likely to arouse the awareness of information security than aural stimuli. Furthermore, the energy ratio of the parietal region (T3, C3) is higher than that of other regions, which shows that the parietal region is involved in tasks related to information security awareness. Another finding from our experimental results is that the EEG signals of testers who had been trained in information security were more active than those of untrained testers.
Discussion
Promoting people's awareness of information security is the foundation and precondition of organizational information security. In order to explore new technology for the objective assessment of people's awareness of information security, this paper conducted a cognitive study of information security awareness based on the analysis of EEG signals. We first discussed the theory and methodology of EEG signals in cognitive studies and then presented a framework for describing awareness and cognition of information security according to brain mechanisms. On this basis, an experiment was designed to test the reaction of EEG signals to the awareness of hidden problems in information security. The findings showed that EEG signals can provide a good method for the objective assessment of people's awareness of information security.
In future studies, we suggest that EEG can be combined with fMRI (functional magnetic resonance imaging) [27], PET (positron emission tomography), and other measuring equipment to research individual cognition of information security.
"Computer Science",
"Psychology"
] |
Collective Dynamics and Homeostatic Emergence in Complex Adaptive Ecosystem
We investigate the behaviour of the daisyworld model on an adaptive network, comparing it to previous studies on a fixed topology grid, and a fixed small-world (Newman-Watts (NW)) network. The adaptive networks eventually generate topologies with small-world effect behaving similarly to the NW model – and radically different from the grid world. Under the same parameter settings, static but complex patterns emerge in the grid world. In the NW model, we see the emergence of completely coherent periodic dominance. In the adaptive-topology world, the systems may transit through varied behaviours, but can self-organise to a small-world network structure with similar cyclic behaviour to the NW model.
Introduction
In this paper, we examine connectivity changes in a complex adaptive ecosystem based on the daisyworld model, combining coupled map lattice (CML) and complex adaptive network models. Daisyworld, proposed by Watson and Lovelock (1983), is a simple mathematical system demonstrating planetary homeostasis - self-regulation of the environment by biota and self-sustainability of life through interaction with the environment. Daisyworld topologies in the literature are static, with only local connections (Wood et al., 2008). In our previous work (Punithan et al., 2011; Punithan and McKay, 2013), we have investigated ecological homeostasis in preconstructed static topologies with local and non-local long-range couplings - small-world networks. But complex networks in nature and society are adaptive, in that they exhibit feedback between the local dynamics of nodes (state) and the evolution of the topological structure (Gross and Blasius, 2008; Gross and Sayama, 2009). Examples include genetic, neural, immunity, ecological, economic and social networks, complex game interactions, etc.
The topology of our ecosystem evolves in response to local habitat states, and the evolved topology in turn impacts the habitat states. Our adaptive and self-maintaining ecosystem, based on CML, consists of a set of diffusively coupled habitats incorporating logistic growth of life with bidirectional biota-environment influences. Thus our ecosystem incorporates three kinds of feedback:
1. Life-environment feedback via the daisyworld model
2. State-topology feedback via an adaptive network model
3. Density-growth feedback via a logistic growth model
The topology of our ecosystem evolves with a simple local rule - a frozen habitat is reciprocally linked to an active habitat - and self-organises to complex topologies with small-world effect. In this paper, we focus on the emergent collective phenomena and properties that arise in egalitarian small-world ecosystems, constructed from a large number of interacting, adaptively linked habitats.
Background
Our model has three feedback loops determining its dynamics. We next detail the relevant background.
Daisyworld (homeostatic self-regulation of the environment by the biota). Daisyworld (Watson and Lovelock, 1983) is an imaginary planet where only two types of species live - black and white daisies. These biotic components interact stigmergically via an abiotic component - temperature. The different colours of the daisies influence the albedo (reflectivity) of the planet. In the beginning, the atmosphere of the daisyworld is cooler and only black daisies thrive, as they absorb all the energy. As the black daisy population expands, it warms the planet. When it is too warm for black daisies to survive, white daisies start to bloom, since they reflect all the energy back into space. As the white daisy cover spreads, it cools the planet. When it is too cold for the survival of white daisies, black daisies thrive again. This endless cycle, owing to the bi-directional feedback loop between life and the environment, self-regulates the temperature and thereby allows life to persist.
Adaptive Networks (dynamics on the network interacting with dynamics of the network). In most real-world networks, the topology itself is a dynamical system which changes in time and in response to the dynamics of the states of the nodes (dynamics of the network). The evolved topology in turn influences the dynamics of the states of the nodes (dynamics on the network), creating a feedback loop between the dynamics of the nodes and the evolution of the topology. Networks exhibiting such a feedback loop (mutual evolution of structure and state values) are called adaptive or coevolutionary networks (Gross and Blasius, 2008; Gross and Sayama, 2009). In road networks, the topology of the road influences the traffic flow, while traffic congestion influences the construction of new roads. In the vascular system, the topology of the blood vessels controls blood flow, while restrictions in blood flow influence the formation of new arteries (arteriogenesis). Numerous other examples are discussed in Gross and Blasius (2008).
Logistic Growth Model (density-dependent growth rate)
The discretised logistic growth model (Verhulst model) is key to population ecology.
$$P_{t+1} = r\,P_t\left(1 - \frac{P_t}{\kappa}\right),$$

where P ∈ [0, κ] is the population size (at times t and t + 1), r is the intrinsic growth rate (bifurcation parameter), and κ is the carrying capacity (maximum sustainable population beyond which P cannot increase). The parameter r amplifies population growth and the component $[1 - P_t/\kappa]$ dampens the growth due to overcrowding. Thus population density self-regulates the population growth rate. It is also well known that chaos emerges from this growth model (May, 1976) in spite of the built-in regulatory mechanism.
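A minimal sketch of the iteration is given below; the parameter values are illustrative, with r chosen in the chaotic regime.

```python
def logistic_step(P, r, kappa):
    """One step of the discretised Verhulst model."""
    return r * P * (1.0 - P / kappa)

P, r, kappa = 100.0, 3.9, 10_000.0
trajectory = []
for _ in range(50):
    P = logistic_step(P, r, kappa)   # stays within [0, kappa] since r*kappa/4 < kappa
    trajectory.append(P)
```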
Coupled Map Lattice
The coupled map lattice (Kaneko, 1985, 1992; Kaneko and Tsuda, 2001) incorporates discrete time evolution (map) in a discrete space (lattice or network), as in cellular automata (CA), but takes continuous state values, as in partial differential equation (PDE) models. CML is governed by the temporal nonlinear reaction (maps - f) and the spatial diffusion (coupling - ε).
If f(x) is a reaction function of a dynamical variable x, the update of the variable is computed by combining that reaction with discrete Laplacian diffusion. For a regular network with Moore neighbourhoods (k = 8), the update of x is computed as

$$x'_{(i,j,t)} = (1-\epsilon)\,f(x_{(i,j,t)}) + \frac{\epsilon}{k}\sum_{\substack{d_i,d_j \in \{-1,0,1\} \\ (d_i,d_j)\neq(0,0)}} f(x_{(i+d_i,\,j+d_j,\,t)}), \qquad (2)$$

where $x_{(i,j,t)}$ is the spatio-temporal distribution of the dynamical variable, $\epsilon \in [0, 1]$ is the coupling parameter (diffusion rate), k is the number of interacting neighbours, f(x) is a local non-linear function and $x'_{(i,j,t)}$ is the value after diffusion.
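A minimal vectorised sketch of this update rule on a periodic 2-D lattice (Moore neighbourhood, k = 8) is given below, with the reaction f left as a parameter; the coupling value is illustrative.

```python
import numpy as np

def cml_step(x, f, eps=0.2):
    """One CML update: x' = (1 - eps) * f(x) + (eps / 8) * sum of f
    over the eight Moore neighbours, with periodic boundaries."""
    fx = f(x)
    neighbour_sum = sum(np.roll(np.roll(fx, di, axis=0), dj, axis=1)
                        for di in (-1, 0, 1) for dj in (-1, 0, 1)
                        if (di, dj) != (0, 0))
    return (1.0 - eps) * fx + (eps / 8.0) * neighbour_sum
```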
Denoting the set of neighbours of (i, j) as ⟨l, m⟩, we can simplify equation 2 to

$$x'_{(i,j,t)} = (1-\epsilon)\,f(x_{(i,j,t)}) + \frac{\epsilon}{k}\sum_{\langle l,m\rangle} f(x_{(l,m,t)}). \qquad (3)$$

Small-world Phenomena
The co-occurrence of high clustering (as in regular networks) and low characteristic path length (as in random networks) defines a small-world structure (Watts and Strogatz, 1998). These small-world network properties, giving rise to the well-known "six degrees of separation" phenomenon (Milgram, 1967), are quantified by two statistical measures: the clustering coefficient C (measuring local cliquishness) and the characteristic path length L (measuring global connectedness). Their average values $\bar{C}$ and $\bar{L}$ for a network with n nodes are defined by

$$\bar{C} = \frac{1}{n}\sum_{v} \frac{|E(\Gamma_v)|}{\binom{k_v}{2}}, \qquad \bar{L} = \frac{1}{n(n-1)}\sum_{u \neq v} d_{uv},$$

where $\Gamma_v$ is the neighbourhood of a node v, $|E(\Gamma_v)|$ is the number of actual links in the neighbourhood of v, $k_v$ is the number of nodes in the subnetwork $\Gamma_v$ and $\binom{k_v}{2}$ is the number of possible links in $\Gamma_v$; $d_{uv}$ is the shortest path between a pair of nodes u, v.
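Both averages are available in standard graph libraries; a minimal sketch with networkx:

```python
import networkx as nx

def small_world_stats(G):
    """Average clustering coefficient C-bar and characteristic path
    length L-bar (the latter assumes a connected graph)."""
    return nx.average_clustering(G), nx.average_shortest_path_length(G)
```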
Degree Distribution
The degree of a node is the number of neighbours it is connected to. The degree distribution is defined as the normalised frequency distribution of degrees over the whole network. The degree distribution of a network is a simple property which helps to classify networks. A regular network with Moore neighbourhoods has the same degree (k = 8) for all nodes. The degree distribution of small-world networks (with p in the small-world regime (Punithan and McKay, 2013)) follows a Poisson distribution with an exponential tail. Networks in which most nodes have approximately the same number of neighbours are known as "egalitarian" networks (Buchanan, 2003).
Model
Our ecosystem is a complex dynamic system in which continuous-state habitats diffusively interact with their neighbours (coupled), evolve in discrete time (map) and are distributed in a discrete space (lattice). Initially, we construct a 2-D lattice with Moore neighbourhoods and periodic boundary conditions. Each point in the lattice represents a habitat with a maximum carrying capacity of 10,000 daisies. Each habitat in our ecosystem is a system. Elements such as life (black and white daisies) and environment (temperature) are interconnected and interdependent via reinforcing and balancing feedback loops. At each succession of a habitat, we compute the population of black and white daisies, and the temperature, based on equation 3.
Albedo:
The albedo A at a lattice point (i, j) and at time t is

$$A_{(i,j,t)} = \alpha_b A_b + \alpha_w A_w + \alpha_g A_g,$$

i.e. the average of the albedos $A_b$ of ground covered by black daisies, $A_w$ of ground covered by white daisies and $A_g$ of bare ground, weighted by $\alpha_b$, $\alpha_w$, $\alpha_g\,(= 1 - \alpha_w - \alpha_b) \in [0, 1]$, the relative areas occupied by black daisies, white daisies and bare ground at time t. We assume that $A_w > A_g > A_b$, with corresponding values of 0.75, 0.5, 0.25.
Growth:
The growth curve of the daisies ($\beta_c$) is an inverted parabola,

$$\beta_c(T_{(i,j,t)}) = 1 - k_\beta\,\bigl(T_{(i,j,t)} - T^{opt}_c\bigr)^2,$$

where $T_{(i,j,t)}$ is the local temperature, $T^{opt}_c$ is the optimal temperature of the species and $k_\beta$ sets the width of the growth response. The optimal temperature of the daisies depends on their petal colour c (phenotype). The optimal temperature for black daisies is lower than for white; the mean optimal temperature is assumed to be 295.5 K.
Temperature:
The temperature $T_{(i,j,t+1)}$ is computed as the sum of the temperature after Laplacian diffusion ($T'_{(i,j,t)}$), the difference between solar absorption and heat radiation incorporating $T'_{(i,j,t)}$, and Gaussian white noise:

$$T'_{(i,j,t)} = T_{(i,j,t)} + D\sum_{\langle l,m\rangle}\bigl(T_{(l,m,t)} - T_{(i,j,t)}\bigr), \qquad T_{(i,j,t+1)} = T'_{(i,j,t)} + g(T'_{(i,j,t)}) + \xi\,NL,$$

where $T_{(i,j,t)}$ is the local temperature, ⟨l, m⟩ represents the set of neighbours of (i, j) and $D = D_T/C$ is the thermal diffusion constant normalised by heat capacity C. $g(T'_{(i,j,t)})$ is the temperature update function (Wood et al., 2008), in which $T'_{(i,j,t)}$ is the temperature after diffusion:

$$g(T'_{(i,j,t)}) = \frac{S\,L\,\bigl(1 - A_{(i,j,t)}\bigr) - \sigma_B\,T'^{\,4}_{(i,j,t)}}{C},$$

where S is the solar constant, L is the luminosity, $A_{(i,j,t)}$ is the albedo, $\sigma_B$ is the Stefan-Boltzmann constant and ξ is additive Gaussian white noise (with mean 0 and standard deviation 1.0) multiplied by the noise level (NL).
Population size:
The local population update depends on dispersion, density-dependent growth rate and the feedback coefficient:
$$P_{c(i,j,t+1)} = h\bigl(P'_{c(i,j,t)}\bigr), \qquad P'_{c(i,j,t)} = (1 - D_c)\,P_{c(i,j,t)} + \frac{D_c}{k}\sum_{\langle l,m\rangle} P_{c(l,m,t)}, \qquad (10)$$

where $P_{c(i,j,t)}$ is the population size at location (i, j) and time step t, $D_c$ is the fraction of the population dispersed to its neighbours, c stands for the colour of the daisies and k is the number of neighbours. $h(P'_{c(i,j,t)})$ is the population growth function and $P'_{c(i,j,t)}$ is the population size after dispersion:

$$h\bigl(P'_{c(i,j,t)}\bigr) = P'_{c(i,j,t)} + r\,\beta_c(T_{(i,j,t)})\,P'_{c(i,j,t)}\left(1 - \frac{P'_{c(i,j,t)}}{\kappa}\right), \qquad (11)$$

where r is the population increase rate, $\beta_c(T_{(i,j,t)})$ is the feedback due to temperature and κ is the carrying capacity.
The Small-world Network Model
Small-world networks can be modelled in various ways - the Watts and Strogatz (1998) (WS) model, the Newman and Watts (1999) (NW) model, etc. Although the WS model was a breakthrough in network science, it may not guarantee connectivity owing to the rewiring process - deleting connections in the underlying network may result in disjoint nodes. Hence we use the later NW model, where we only add long-range connections. For each connection in the underlying ecosystem, a new reciprocal connection is added to a randomly chosen non-local habitat with probability p ∈ [0, 1]. In this work, we have chosen p = 0.05, since it is in the small-world regime and has proven to have interesting dynamics (Punithan and McKay, 2013).
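networkx ships a Newman-Watts constructor that applies the same add-only shortcut rule. Note that it uses a ring substrate rather than the 2-D Moore lattice of our ecosystem, so the sketch below is only an illustration; the node count matches the 100 × 100 lattice.

```python
import networkx as nx

# Add-only shortcuts with probability p = 0.05; no edges are removed,
# so the network is guaranteed to stay connected.
G = nx.newman_watts_strogatz_graph(n=10_000, k=8, p=0.05, seed=1)
```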
Adaptive Network Model
Adaptive networks are a class of dynamical networks whose topologies and states coevolve. Dynamic Linking (DL) is the key feature of adaptive networks, and can be modelled in a number of ways:
1. Active nodes grow and inactive nodes lose links
2. Active nodes lose links and inactive nodes grow them
3. Nodes never lose links; the network evolves by either:
(a) Adding new links to active nodes from inactive nodes
(b) Adding new links to inactive nodes from active nodes
(c) Adding reciprocal links between active and inactive nodes
By means of DL, we model the topology of our ecosystem itself as a dynamical system, changing in time according to a simple local rule (dynamics of networks). Each habitat, representing a dynamical system (dynamics on networks), is dynamically coupled according to the evolved topology.
In our ecosystem, we never remove connections between habitats; we add new reciprocal connections between frozen habitats and active habitats (i.e. method 3c). This simple rule gives rise to a complex topology.
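A minimal sketch of this dynamic-linking rule follows, for a networkx-style undirected graph G. The is_frozen predicate, which should flag habitats whose black and white daisy populations were unchanged for six consecutive epochs, is left as an assumption.

```python
import random

def adaptive_linking_step(G, is_frozen):
    """Each frozen habitat gains one reciprocal link to a randomly
    chosen active habitat; existing links are never removed."""
    nodes = list(G.nodes)
    active = [v for v in nodes if not is_frozen(v)]
    for v in nodes:
        if is_frozen(v) and active:
            G.add_edge(v, random.choice(active))  # undirected edge = reciprocal link
```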
In our model, only black and white daisies disperse via both local and long-range connections, created either statically or dynamically (by water, air, animal pollinator transport etc.), while temperature diffuses only locally.
Experiments Experiment Settings
The habitats are randomly initialised with a population size in [0, 100] for both species and with the temperature in [280, 310] K. We permit both species of daisies to coexist; hence we allow an overlap of 10% in the growth response to temperature. The overlap chosen determines the optimal temperature values of the daisies. The parameters and their values are described in Table 1.
We have investigated the daisyworld phenomenon in three different topological scenarios:
1. We start with an ecosystem where habitats are only locally connected (regular CML with Moore neighbourhood).
2. We add random non-local links to the underlying regular lattice, which introduces small-world effects into the ecosystem (Newman-Watts model in CML).
3. Each frozen node in the underlying regular lattice is dynamically and reciprocally linked to a randomly chosen active node (adaptive CML).
A node is said to be frozen when its local dynamics are static - black and white daisies maintain the same population size for six consecutive epochs. The links added either statically (NW) or dynamically (adaptive) are reciprocal (mutual) links. We ran 25 realisations of each network model, and present a typical example of a run of each model. Scenarios 1 and 2 were previously analysed in Punithan et al. (2011) and Punithan and McKay (2013), though with different overlaps (0% and 5%, respectively).
Visualisations
We capture snapshots from the evolution of daisyworld to inspect its spatio-temporal dynamics. Each snapshot represents the population structure of the ecosystem at a particular epoch. As it is impractical to show all the snapshots over 5,000 epochs, we plot the temporal dynamics of the daisy populations and temperature at a particular habitat, as well as the temporal dynamics of the average daisy populations and temperature of the whole ecosystem. These plots reflect the behaviour of the daisyworld.
In the visualisations, a habitat is shown as black if black daisies alone occupy that habitat, and correspondingly for white. If both daisies coexist at a habitat but black dominates, it is shown as dark grey; if white dominates, it is presented as light grey; and if the populations are equal, it is represented as medium grey.

Typically we observe periodic behaviour (Figure 10) similar to NW-CML. In the corresponding time series plots, the dynamics of both local and global temperature (Figure 11), local population (Figure 12) and global population (Figure 13) exhibit cyclic behaviour. The dynamically adapted reciprocal links are shown in Figure 9.
I. Daisyworld with Static Local Couplings (Regular Networks)
Why are NW-CML and adaptive CML similar? We saw very similar limit behaviours from NW-CML (Subsection II) and adaptive CML (Subsection III). We can gain understanding through analysing the topological quantifiers (degree distribution, clustering coefficient and characteristic path length) for their network topologies. The degree distributions of both follow a Poisson distribution with an exponential tail reaching zero, as shown in Figure 14(b).
The CC for NW-CML and that for the final epoch of adaptive CML are almost the same, as are the CPs. Figure 14 and Table 2 show the results. The finally-converged adaptive CML is an egalitarian small-world network. This is why we observe a drastic change in the dynamics of the system compared to the regular lattice. It also shows that the topology, constructed statically or dynamically, influences the collective behaviour of the system: relatively small changes in the linkage structure can generate vastly different dynamics. Sections II and III illustrate typical scenarios of NW-CML and adaptive CML. Table 3 shows averages over 25 realisations of the adaptive CML model and 25 of the NW-CML (p = 0.05) model. We ran 100 realisations of adaptive CML and picked the 25 that fell in the small-world regime (Punithan and McKay, 2013) for comparison purposes - CC in [0.98, 0.7] and CP in [0.3, 0.16]. CC and CP are normalised by the values for a regular lattice, as proposed in Watts and Strogatz (1998). The evolution of the topology continues until the stationary attractor (frozen local dynamics) of all habitats reaches a dynamical attractor - here a limit cycle. Some samples adapt quickly, reaching a stable topology around the 500th epoch (Figure 15(a) - Quick Adaptation), while a few evolve almost until the 5000th epoch (Figure 15(b) - Slow Adaptation). Their degree distribution (Figure 16), clustering coefficient and characteristic path (Table 4) show that both evolve to small-world networks, although at different rates.
Topological Evolution
The collective dynamics in both quick and slow adaptations (Figures 18 and 19) show that the shift in dominance is not as abrupt as in Figures 5 and 10.
Conclusion
We have analysed the connectivity changes in a complex adaptive ecosystem combining life-environment, state-topology and density-growth feedback loops. The results illustrate the capacity of the adaptive ecosystem to self-organise to a complex ecosystem (small-world network). Even a small change in the connectivity, with almost no effect on the mean degree of the ecosystem, leads to a drastic behaviour change from the grid network. It is much more like the real-world behaviour we see in social systems (the seasonal rise and fall of fads), economic systems (booms and busts), etc. This "small cause, large effect" behaviour draws analogies with the popular metaphors of the black swan (low-probability but high-impact events) (Taleb, 2010), the butterfly effect (sensitive dependence on initial conditions) (Hilborn, 2004) and the tipping point (little things make a big difference) (Gladwell, 2006). Though the collective dynamics change in varying ways, we still observe the emergent property - self-regulation of the temperature at around 295.5 K.
Figure 1: Regular CML: D = 0.2 and NL = 0.001 in 2D 100 × 100.

With only local couplings, we observe the formation of complex static patterns. The whole ecosystem freezes after epoch 1710. This scenario is clearly seen in the snapshots (Figure 1), in the global population dynamics (Figure 4) and in the global temperature dynamics (Figure 2(b)). The local population dynamics (Figure 3) and local temperature (Figure 2(a)) at a typical habitat (57, 50) show that the dynamics freeze even more quickly (epoch 1055). All trajectories show initial fluctuations but evolve to complete stationarity.
Figure 15: Adapted Reciprocal Links
Table 4: Clustering Coefficients and Characteristic Path | 4,951.8 | 2013-09-02T00:00:00.000 | [
"Environmental Science",
"Physics"
] |
Non-classical disproportionation revealed by photo-chemically induced dynamic nuclear polarization NMR
Abstract Photo-chemically induced dynamic nuclear polarization (photo-CIDNP) was used to observe the light-induced disproportionation reaction of 6,7,8-trimethyllumazine starting out from its triplet state to generate a pair of radicals comprising a one-electron reduced and a one-electron oxidized species. Our evidence is based on the measurement of two marker proton hyperfine couplings, Aiso(H(6α)) and Aiso(H(8α)), which we correlated to predictions from density functional theory. The ratio of these two hyperfine couplings is reversed in the oxidized and the reduced radical species. Observation of the dismutation reaction is facilitated by the exceptional C–H acidity of the methyl group at position 7 of 6,7,8-trimethyllumazine and the slow proton exchange associated with it, which leads to NMR-distinguishable anionic (TML−) and neutral (TMLH) protonation forms.
2 affords riboflavin by a mechanistically unique dismutation that is catalyzed by the enzyme riboflavin synthase and does not require any cosubstrates or cofactors (Plaut and Harvey, 1971).Even more surprisingly, the dismutation affording 4 from 2 can also proceed non-enzymatically in neutral or acidic aqueous solution under an inert gas atmosphere (Rowan and Wood, 1963;Kis et al., 2001).
Compound 2 is found in two members of the photolyase/cryptochrome protein family, namely cryptochrome B (CryB) from Rhodobacter (R.) sphaeroides (Geisselbrecht et al., 2012) and the photolyase-related protein B (PhrB) from Agrobacterium (A.) tumefaciens (Zhang et al., 2013). Both proteins belong to the new subclass of FeS bacterial cryptochromes and photolyases (BCP), also called CryPro. R. sphaeroides CryB controls light-dependent and singlet-oxygen-dependent gene expression of the photosynthetic apparatus (Frühwirth et al., 2012). Similar to A. tumefaciens PhrB, CryB also has repair activity for (6-4) photoproducts in photodamaged DNA (von Zadow et al., 2016). It has been speculated that 2 acts as an antenna chromophore in this protein class (Geisselbrecht et al., 2012). This additional cofactor absorbs at shorter wavelengths (λmax = 420 nm) than the essential FAD cofactor (λmax around 450 nm), which is the origin of light-induced one-electron transfer that initiates radical-pair spin chemistry in photolyases and cryptochromes (Biskup et al., 2009; Sheppard et al., 2017). The precise role of 2 in this photolyase/cryptochrome subclade needs to be evaluated, in particular in its interplay with the FAD cofactor.
6,7-Dimethyl-8-ribityllumazine and certain structural analogs, e.g., 6,7,8-trimethyllumazine (3), exhibit anomalously high C-H acidity of the methyl group at position 7. For 2 and 3, pKa values of 8.3 (Pfleiderer et al., 1966; Bown et al., 1986) and 9.9 (Pfleiderer et al., 1966; McAndless and Stewart, 1970; Bown et al., 1986) have been reported, respectively. Using 1H and 13C NMR, compound 3 has been found to form an anionic species under alkaline conditions, which has been assigned a 7α-exomethylene motif. Compound 2 additionally forms several tricyclic ether anion species under the participation of the OH groups of the ribityl side chain attached at position 8 (Bown et al., 1986). Interestingly, riboflavin synthase selectively binds the 7α-exomethylene anion of 2, which is believed to be crucial for the dismutation of 2 affording a stoichiometric mixture of riboflavin (4) and 5-amino-6-ribitylaminouracil. This reaction has been shown to proceed via a pentacyclic intermediate, which was isolated using an inactive mutant of riboflavin synthase (Illarionov et al., 2001). Various pathways have been proposed for the riboflavin synthesis from 2 (Truffault et al., 2001; Gerhardt et al., 2002; Kim et al., 2010). For the non-enzymatic reaction, a quantum mechanical simulation favors a nucleophilic addition mechanism, which was calculated to be the lowest-energy pathway yielding riboflavin (Breugst et al., 2013).
In this contribution, we report on a light-induced redox process between the neutral and the anionic forms of the 6,7,8-substituted lumazine species 3. Studies along these lines may ultimately shed light on the role of the related compound 2 in light-induced redox reactions of proteins from the CryPro subclade of photolyases and cryptochromes.
NMR and photo-CIDNP spectroscopy
NMR and photo-chemically induced dynamic nuclear polarization (photo-CIDNP) experiments were performed as described previously (Pompe et al., 2019), using a Bruker Avance III HD 600 MHz NMR spectrometer (Bruker BioSpin GmbH, Rheinstetten, Germany) operating at 14.1 T. Light excitation was achieved by coupling the output of a nanosecond-pulsed laser system, comprising an Nd:YAG laser source (Surelite I, Continuum, Santa Clara, CA, USA) in combination with a broadband optical parametric oscillator (OPO) (Continuum OPO PLUS), into an optical fiber with a diameter of 1 mm (Thorlabs, Dachau, Germany). The optical fiber was inserted into the NMR tube via a coaxial insert (Wilmad WGS-5BL). Photo-CIDNP difference spectra were recorded directly by using a pre-saturation pulse train to destroy thermal polarization prior to the laser flash (Goez et al., 2005). This avoids errors involved with the subtraction of light and dark spectra from separate experiments. A destructive phase cycle was additionally applied, in which every second scan contained light excitation, to avoid residual thermal NMR signals, especially contributions from the solvent peak (HDO) at 4.8 ppm.
Photoexcitation of the TMLH / TML− solutions for 1H photo-CIDNP experiments was performed using 6 ns pulses of an Nd:YAG-laser-pumped OPO adjusted to 425 or 470 nm (pulse energies of 8 mJ at 425 nm and 30 mJ at 470 nm). TMLH absorbs preferentially at these wavelengths because the long-wavelength absorbance of TML− (364 nm) is blueshifted with respect to that of TMLH (402 nm) (Pfleiderer et al., 1966); see Fig. 3.

The occurrence of nuclear hyperpolarization upon photoexcitation of alkaline 6,7,8-trimethyllumazine samples provides clear evidence for a photochemical reaction involving radical-pair intermediates. Since 6,7,8-trimethyllumazine is the only organic species present that absorbs light in the visible range, we suggest that disproportionation takes place under the given conditions; see Fig. 4: light-initiated electron transfer from the anionic TML− to the neutral TMLH generates the neutral radical TMLox• and initially the anionic radical TMLHred•− as short-lived products. In the dark, backward electron transfer takes place to regenerate the initial species TML− and TMLH, respectively.
To corroborate this notion, we analyzed the intensities and signs of the hyperpolarized NMR resonances. In the high-field approximation, the applied time-resolved photo-CIDNP scheme using a pulsed laser as a light source (Closs et al., 1985; Goez et al., 2005; Kuhn, 2013) renders signal intensities that are proportional to the isotropic hyperfine coupling constant A_iso of the respective nucleus (Adrian, 1971; Morozova et al., 2011). In the present case, with only two weakly coupled 1H spins per species, the relative enhancement factors can, in principle, be extracted by signal integration. Kaptein (1971) introduced a simple rule for the net polarization Γ_i of a hyperpolarized resonance i. Γ_i results from the product of four signs and yields either "+" or "−" for an absorptive or an emissive signal, respectively:

Γ_i = µ · ε · sign(Δg) · sign(A_iso,i).    (1)

The parameter µ is either "+" in case the radical pair is formed from a triplet precursor or "−" in case of a singlet precursor. The reaction route following the formation of the intermediate radical pair determines the sign of ε: either "+" for recombination/re-encounter or "−" for dissociation, the latter leading to so-called escape products. The sign of the difference of the two (isotropic) g factors of the involved radicals, Δg = g_1 − g_2, depends on which of the two radical moieties comprising the radical pair is observed (see below).
Finally, the sign of the isotropic hyperfine coupling constant (A_iso,i) of the respective nucleus i is of relevance.
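Because Kaptein's rule is pure sign bookkeeping, it can be captured in a few lines of code. A minimal sketch with the four factors exactly as defined above; the numerical example values are placeholders, not measured data:

def kaptein_net_sign(mu, eps, delta_g, a_iso):
    """Kaptein's rule, Eq. (1): Gamma = mu * eps * sign(delta_g) * sign(a_iso).
    mu  = +1 (triplet precursor) or -1 (singlet precursor)
    eps = +1 (recombination product) or -1 (escape product)
    Returns '+' (absorptive) or '-' (emissive)."""
    sign = lambda v: 1 if v > 0 else -1
    return '+' if mu * eps * sign(delta_g) * sign(a_iso) > 0 else '-'

# Placeholder example: triplet precursor, recombination, delta_g > 0, A_iso > 0.
print(kaptein_net_sign(+1, +1, 0.0003, 12.0))   # -> '+', absorptive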
To apply Kaptein's sign rule for rationalizing the polarization of a particular resonance in the photo-CIDNP spectrum of 6,7,8-trimethyllumazine, we consider the following aspects: (i) since the time delay between pulsed laser excitation and the radio-frequency pulse applied for the detection of the FID was chosen rather short (∼80 ns), the detection of recombination products is supposed to be more likely than that of escape products, thus ε → "+". (ii) Little is known about the g factors of paramagnetic lumazine species. Early EPR studies were focused on analyses of hyperfine patterns in EPR spectra of radicals of 6,7,8-trimethyllumazine (Ehrenberg et al., 1970) and derivatives thereof (Westerling et al., 1977), but they did not report their g values. A more recent high-field EPR study considered 6,7-dimethyl-8-ribityllumazine bound to lumazine protein (Paulus et al., 2014). This cofactor was photo-reduced to yield the neutral 6,7-dimethyl-8-ribityllumazine radical, from which an isotropic g factor of 2.0032 ± 0.0001 was obtained by averaging the principal values of the g matrix. The lack of respective experimental data for the specific 6,7,8-trimethyllumazine radicals involved in disproportionation (see Fig. 4) prompted us to perform quantum-chemical computations at the density-functional theory (DFT) level to calculate the necessary values. The starting point for geometry optimization was a previously published structure model of 6,7,8-trimethyllumazine along with six coordinating water molecules (Schreier et al., 2011). Calculated g_iso values were 2.0031 for the neutral radical TMLox• and 2.0034 for the anionic radical TMLHred•−. Tentatively, the g_iso value of TMLHred•− may be compared to the measured value of the protein-bound 6,7-dimethyl-8-ribityllumazine radical because both species result from one-electron reduction of their aromatic moiety; nevertheless, the substituents bound at the 8-position and the protonation states of the two radicals are different. Neglecting the unequal substitution at position 8, the neutral 6,7-dimethyl-8-ribityllumazine radical may be considered as the species obtained by protonation of TMLHred•− to yield TMLH2red• (see Fig. 4).
Extrapolated to the realm of the related flavins, which share similar hydrogen-bonding motifs with the respective lumazines (see Scheme 1), the couple TMLHred•− / TMLH2red• is expected to behave like the anionic (Fl•−) / neutral (FlH•) flavin semiquinone radicals. For the latter, slightly larger g_iso values were observed for anion radicals than for neutral radicals (Schleicher and Weber, 2012): ∼2.0035 (Barquera et al., 2003; Okafuji et al., 2008) versus ∼2.0034 (Fuchs et al., 2002; Barquera et al., 2003; Schnegg et al., 2006), respectively, but the difference is quite small. This seems to also hold for the one-electron reduced lumazine radicals: g_iso(TMLHred•−) > g_iso(neutral 6,7-dimethyl-8-ribityllumazine radical). (iii) Absolute values of most proton hyperfine couplings have been determined for a cationic (one-electron reduced) 6,7,8-trimethyllumazine radical species (Ehrenberg et al., 1970) and for the neutral 6,7-dimethyl-8-ribityllumazine radical (Paulus et al., 2014). However, the signs of the A_iso values have not been determined experimentally. Since the available hyperfine data from the literature are of limited value for the interpretation of our photo-CIDNP spectra, we also performed quantum-chemical computations of the hyperfine structure of the 6,7,8-trimethyllumazine radicals under discussion. The 1H hyperfine couplings relevant for the interpretation of the photo-CIDNP NMR data are compiled in Table 1, together with the respective g_iso values. A full set of hyperfine couplings of all other protons as well as of 13C and 15N nuclei can be found in the Supplement (Tables S2 to S5). Additionally, the respective data for two further one-electron reduced species have been included that potentially result from protonation of the anionic TMLHred•− at N(1) or N(5): TMLH2red•(H(1)) and TMLH2red•(H(5)), respectively. Importantly, for all one-electron reduced 6,7,8-trimethyllumazine radicals, DFT predicts isotropic hyperfine couplings of the H(8α) protons that are much larger than those of H(6α): A_iso(H(8α)) ≫ A_iso(H(6α)) (see Table 1); EPR data are consistent with this finding (Ehrenberg et al., 1970). For the TMLox• radical that results from TML− by withdrawal of one electron, DFT predicts A_iso(H(6α)) > A_iso(H(8α)).
Photo-excitation of an alkaline 6,7,8-trimethyllumazine solution (4 mM; TMLH : TML− ratio of 1 : 10) with 425 nm laser pulses resulted in a photo-CIDNP spectrum with both TML− resonances in emission (see Fig. 2a). The ratio of the integrals of the signals assigned to H(6α) and H(8α) is 1 : 0.439. The situation is different for the signals assigned to TMLH: whereas the H(8α) resonance exhibits enhanced absorption, that of H(6α) does not show significant hyperpolarization. Virtually the same polarization pattern is observed for a less alkaline 6,7,8-trimethyllumazine solution (4 mM) with a TMLH : TML− ratio of 1 : 1, i.e., pH ≈ pKa (see Fig. 2b). However, to obtain a discernible photo-CIDNP spectrum in this case, the excitation wavelength of our laser system had to be tuned to 470 nm. The longer wavelength was necessary because of the rather high TMLH concentration and the high absorbance associated therewith, which did not allow for sufficient photo-excitation of the active sample volume given the available output power of our laser source at 425 nm. Nevertheless, the obtained signal-to-noise ratio remained rather low, and it decreased even further upon decreasing the amount of TML− relative to that of TMLH. Observation of the photo-CIDNP effect at 470 nm is clear evidence for photo-excitation of TMLH rather than TML−. The latter has very low absorbance at 425 nm and virtually does not absorb at 470 nm (Pfleiderer et al., 1966; see Fig. 3).
By far the highest signal-to-noise ratio of the photo-CIDNP data was obtained in the alkaline range at about one pH unit above the pKa of TMLH / TML−. We could not conduct NMR experiments under more basic pH conditions because the high ionic strength of our sample precluded proper tuning of the probe head. Additionally, we varied the 6,7,8-trimethyllumazine concentration in a range between 1.0 and 4.0 mM. In all cases, photo-CIDNP revealed a hyperpolarization pattern similar to the one shown in Fig. 2a.
To rationalize our findings, we correlated the relative intensities of the hyperpolarized NMR resonances obtained by photo-CIDNP with DFT predictions of the hyperfine couplings of the various paramagnetic one-electron oxidized or reduced lumazine species involved in the suggested disproportionation scheme, following a procedure introduced by Ivanov and Yurkovskaya (Morozova et al., 2011; see Table 1, Fig. 5 and the Supplement). Upon backward electron transfer in the dark, i.e., radical-pair recombination (ε = "+"), hyperpolarization generated on the intermediate oxidized species TMLox• and the reduced species TMLHred•− (or a protonated neutral species TMLH2red• thereof) is transferred to the diamagnetic products TML− and TMLH, respectively.
Plotting the photo-CIDNP intensities with respect to the hyperfine couplings obtained using DFT (see Fig. 5) reveals nearly perfect correlation: a linear regression fit constrained to go through the origin yields a slope of 0.0672 MHz−1 and R2 = 0.9996. Observation of hyperpolarized resonances of H(6α) and H(8α) from TML−, both in emission and in an intensity ratio that correlates well with hyperfine coupling computations from DFT, provides clear evidence for the existence of the oxidized TML species TMLox•, a redox state of lumazine that up to now had not been substantiated by experiments.
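The through-origin fit and its R2 can be reproduced as follows. A sketch in which the arrays hold placeholder values standing in for the DFT couplings of Table 1 and the signal integrals of Fig. 5 (note that R2 conventions differ for zero-intercept fits):

import numpy as np

# Placeholder (A_iso [MHz], relative photo-CIDNP intensity) pairs standing in
# for the values of Table 1 and Fig. 5.
a_iso = np.array([-14.0, -6.1, 4.8, 18.1])
intensity = np.array([-0.95, -0.40, 0.33, 1.20])

# Least-squares slope of y = m * x with the intercept fixed at the origin.
slope = np.sum(a_iso * intensity) / np.sum(a_iso**2)
residuals = intensity - slope * a_iso
# One common R^2 convention; zero-intercept fits sometimes use the
# uncentred total sum of squares instead.
r2 = 1.0 - np.sum(residuals**2) / np.sum((intensity - intensity.mean())**2)
print(f"slope = {slope:.4f} MHz^-1, R^2 = {r2:.4f}")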
If we retain the signs of ε ("+", recombination) and µ ("+", triplet precursor) and reverse the sign of Δg to "+" (because Δg = g(TMLHred•−) − g(TMLox•) > 0), then, since Γ_i = "+" (absorptive resonance), we expect a positive isotropic hyperfine coupling constant of the paramagnetic precursor state for the H(8α) resonance of TMLH. A very small hyperfine coupling, near or equal to zero, is expected for H(6α), as hardly any hyperpolarization is observed for these nuclei in the photo-CIDNP spectrum (Fig. 2). This situation is reversed as compared to that of TML−, for which the H(6α) resonance experiences much stronger hyperpolarization than H(8α). Our DFT calculations confirm a large and positive value for A_iso(H(8α)) of TMLHred•− but also predict a negative hyperfine coupling of substantial absolute value for A_iso(H(6α)). This latter finding is clearly not supported by the photo-CIDNP data shown in Fig. 2. Therefore, we have extended our DFT studies of 6,7,8-trimethyllumazine radicals to protonated variants of TMLHred•−, namely the neutral species TMLH2red•(H(1)) (protonated at N(1)) and TMLH2red•(H(5)) (protonated at N(5)); see Tables 1, S3 and S4.
Protonation of TMLHred•− at N(1) to yield TMLH2red•(H(1)) does not significantly alter the isotropic hyperfine couplings of H(6α) and H(8α); g_iso is also virtually unaffected (see Tables 1 and S4). However, addition of a proton at N(5) to yield TMLH2red•(H(5)) does shift both hyperfine couplings to more positive values. A_iso(H(6α)) even changes its sign and assumes a small positive value, more than 10 times smaller than that of A_iso(H(8α)); see Tables 1 and S3. Our photo-CIDNP data thus support rapid protonation of TMLHred•− at N(5) to yield the neutral radical species TMLH2red•(H(5)). When considering a linear combination of the hyperfine data of TMLH2red•(H(5)) and TMLHred•− (see TMLH2red•(H(5))/TMLHred•− in Fig. 5) with a ratio of 0.772 : 0.228, we obtain A_iso(H(6α)) = 0 and A_iso(H(8α)) = 18.1 MHz and, consequently, perfect correlation of DFT-calculated hyperfine couplings and relative photo-CIDNP intensities with R2 = 1 and a slope of 0.0262 MHz−1. Such an approach has been successfully applied previously in photo-CIDNP studies of other systems; see, e.g., Morozova et al. (2018) and Torres et al. (2021). The obtained ratio of TMLH2red•(H(5)) to TMLHred•− should be treated with caution, as the photo-CIDNP intensities were correlated with hyperfine data predicted by DFT because experimental values are unavailable. The accuracy of DFT hyperfine predictions, however, strongly depends on the choice of functional, basis set, and molecular geometry (Kirste, 2016; Witwicki et al., 2020).
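The mixing weight w used above follows from a single linear condition on the H(6α) couplings; schematically (the individual DFT values are those compiled in Table 1, which is not reproduced in this excerpt):

w\,A^{(5)}_{\mathrm{iso}}(\mathrm{H}(6\alpha)) + (1-w)\,A^{(-)}_{\mathrm{iso}}(\mathrm{H}(6\alpha)) = 0 \;\Rightarrow\; w = \frac{-A^{(-)}_{\mathrm{iso}}(\mathrm{H}(6\alpha))}{A^{(5)}_{\mathrm{iso}}(\mathrm{H}(6\alpha)) - A^{(-)}_{\mathrm{iso}}(\mathrm{H}(6\alpha))},

where the superscripts (5) and (−) denote TMLH2red•(H(5)) and TMLHred•−, respectively. With a small positive A^{(5)} and a negative A^{(−)}, w necessarily falls between 0 and 1, here 0.772.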
The slopes of the straight lines through the origin in the correlation plots of TML− and TMLH in Fig. 5 are clearly different: 0.0672 MHz−1 versus 0.0262 MHz−1 (for TMLH2red•(H(5))/TMLHred•−), respectively. Furthermore, even though A_iso of H(8α) in TMLH2red• has a larger value than A_iso of H(6α) in TMLox•, the corresponding photo-CIDNP intensity of the resonance in the recombination product is significantly smaller than that of the most intense signal of the respective counter radical. These findings may have several reasons: (i) introduction of a proton at N(5) adds a further large hyperfine coupling (A_iso(H(5))) of substantial anisotropy (see Table S3). Protonation dynamics of H(5) could enhance relaxation, by which the hyperpolarized spin-state population decays to the population at thermal equilibrium. (ii) Hyperpolarization could also be dissipated into the solvent upon radical-pair recombination. Backward electron transfer from TMLH2red• yields the diamagnetic TMLH2+, a species that will certainly deprotonate quickly to regenerate TMLH, especially given the alkaline conditions. Hence, intermediate electron-spin redistribution leading to a buildup of hyperpolarization at H(5) will likely be transferred to the surroundings on release of this proton. (iii) Despite the fact that electron exchange has been observed in other systems leading to a decay of hyperpolarization (Closs and Sitzmann, 1981), we consider such a mechanism along the scheme #TMLHred•− + TMLH → #TMLH + TMLHred•− ("#" denotes nuclear spin polarization) less likely, given that at the elevated pH values under consideration neutral TMLH is present only at rather low concentrations.
Figure 6 shows the singly occupied molecular orbitals (SOMOs) of the neutral radicals TMLH2red•(H(5)) and TMLox•. Positive and negative signs of the frontier orbitals are depicted with light and dark grey shading, respectively. Mere inspection gives a hint as to the quite different ratios of isotropic hyperfine couplings of H(8α) and H(6α) in the two species: the amplitude of the SOMO at the 6-methyl group of TMLH2red•(H(5)) is small compared to that of TMLox•. For the 8-methyl group, the opposite trend is observed. Considerable SOMO amplitudes are observed for H(5) in TMLH2red•(H(5)), which leads to a strong and anisotropic hyperfine coupling of this exchangeable proton. This explains the dissipation of hyperpolarization into the solvent on backward electron transfer leading from TMLH2red•(H(5)) to TMLH and H+.
The SOMOs of TMLH2red•(H(5)) and TMLox• differ significantly in terms of wavefunction amplitudes and signs, in particular at the respective π system of the pyrazine ring. Very high amplitude is observed at C(7α) of TMLox•. This hints that the electronic structure of the related one-electron reduced TML− may be better represented by a C(7α) carbanion rather than a 7α-exomethylene motif at position 7 (see Fig. 1). Clearly, the two observable proton resonances per radical species and their hyperpolarization detected using photo-CIDNP NMR are insufficient to draw a precise picture of the delocalization of the unpaired electron spin density over the carbons and nitrogens of the heterocyclic cores. To learn more about the electron-spin distributions in the two radicals generated by disproportionation of 3, and to further corroborate the existence of the oxidized species TMLox•, we plan further photo-CIDNP experiments on specifically designed 13C and 15N isotopologs of 2 and 3.
Conclusions
Using photo-CIDNP NMR, we have discovered a disproportionation reaction upon photoexcitation of alkaline solutions of 6,7,8-trimethyllumazine. In its classical definition, disproportionation refers to a "reversible or irreversible transition in which species with the same oxidation state combine to yield one of higher oxidation state and one of lower oxidation state" (McNaught and Wilkinson, 1997). This includes redox reactions of the following type: 2X → Xox•+ + Xred•−. From the perspective of oxidation states, this scheme applies to 6,7,8-trimethyllumazine, which upon photo-induced electron transfer generates a pair of radicals comprising a species devoid of one electron (TMLox•) and another with an excess electron (TMLHred•−). However, our proposed mechanism deviates from the classical disproportionation scheme insofar as (i) the reaction is initiated directly by light and (ii) the redox reaction takes place between two different protonation states of 6,7,8-trimethyllumazine, i.e., between TMLH and TML−. Clearly, disproportionation starts out from photoexcitation of TMLH (notably, only TMLH has significant absorption at 470 nm). Once the triplet state of TMLH is formed by intersystem crossing from an excited singlet state of this molecule, it abstracts an electron from TML−, thereby generating a pair of interacting radicals (see Fig. 4). Interestingly, one quite unusual radical species is generated by this disproportionation that has not been reported before: TMLox•. The existence of a species in such a high redox state was speculated upon in triplet quenching of the unsubstituted lumazine (1) (Denofrio et al., 2012); one should, however, keep in mind that the aromatic moiety of 6,7,8-trimethyllumazine (3) differs from that of the unsubstituted lumazine. A similar species was proposed for triplet quenching of the related flavins (Görner, 2007), but convincing experimental evidence of the existence of a species in such a high oxidation state was lacking in both cases until now. By detecting two important hyperfine couplings, we provide strong evidence for the existence of 6,7,8-trimethyllumazine in a further high redox state. Clearly, we owe this success to two peculiarities of 6,7,8-trimethyllumazine: (i) the extraordinary acidity of its 7-methyl group, which compares to that of the ammonium ion, and (ii) its proton exchange on a timescale that is slow compared to that of NMR, which consequently leads to distinguishable anionic (TML−) and neutral (TMLH) protonation forms in terms of NMR properties. Flavins, by comparison, do not exhibit a corresponding acidity of their methyl groups. Therefore, our photo-CIDNP detection scheme is not readily extendable to the realm of flavins for a proof of the existence of the speculative FADox•+ species. Our data on 6,7,8-trimethyllumazine provide evidence for an extended range of redox states of lumazines in general. Further studies on lumazine-mediated photocatalysis will show whether species of the TMLox• type are involved; this would shed new light on the role of 6,7-dimethyl-8-ribityllumazine as a chromophore, e.g., in the recently discovered CryB cryptochromes (Geisselbrecht et al., 2012) and PhrB photolyases (Oberpichler et al., 2011; Zhang et al., 2013) of the CryPro subclade. Further studies will also be conducted on the suitability of TML as a photosensitizer.
Figure 1. C-H acidity of the methyl group attached to position 7 in 6,7,8-trimethyllumazine. The deprotonated (anionic) form TML− can be drawn in various mesomeric structures, two of which are depicted on the right-hand side.
Typically, 8 free induction decays (FIDs) were collected and averaged. Selected photo-CIDNP data are shown in Fig. 2. Three resonances exhibit substantial nuclear spin polarization: the position 6 and position 8 methyl group signals of TML− and the position 8 methyl group signal of TMLH. The protons of the position 6 methyl group of TMLH do not show appreciable hyperpolarization. As in NMR, the position 7 methyl group does not exhibit any resonances in photo-CIDNP, due to proton exchange with deuterons from the solvent.
Figure 3. UV/vis spectra of TMLH and TML− recorded in water and in 1 M NaOH, respectively. The vertical dashed lines indicate the wavelengths chosen for sample irradiation (425 and 470 nm).
Table 1. Isotropic g values and selected isotropic methyl proton hyperfine couplings for various oxidized and reduced 6,7,8-trimethyllumazine radicals. | 5,839.4 | 2021-02-24T00:00:00.000 | [
"Chemistry"
] |
Japanese subject-oriented adverbs in a scope-based theory of adverbs
While English exhibits a clausal-manner alternation that is sensitive to where adverbs occur in clausal structure (e.g., Rudely, John left vs. John left rudely), it has not been clear to what extent Japanese behaves the same way. The present study argues, in the spirit of a scope-based theory of adverb licensing, that there is evidence that the Japanese adverbial system is scope-based similarly to its English counterpart. Focusing on mental attitude adverbs, the paper argues that Ernst’s (2002) generalization holds for Japanese: that subject-oriented adverbs lose their otherwise available clausal readings when pure manner adverbs c-command them in the same clause. The paper also claims that clausal mental attitude adverbs must be clause-mates of Tense, which is not reduced to the scope-based theory.
surface position relative to the verbal head of the clause: an adverb always precedes the verbal head, if special cases like those involving right dislocation are put aside. Furthermore, adverbs' position relative to grammatical arguments also appears to be freer in Japanese than in English. Let us illustrate this with agent-oriented (AO) adverbs, another subclass of SO adverbs. When an AO adverb occurs in the low range of a clause, its clausal reading disappears in English: foolishly in (3) has only a manner reading. In Japanese, no such restriction seems to be present, at first glance at least. In (4), orokani-mo 'stupidly' is a pure clausal adverb (Kubota 2015). No noticeable difference in acceptability is found among the versions of (4). 2

(3) The senator has been talking foolishly to reporters.
(cf. Foolishly, the senator has been talking to reporters.) (Ernst 2002: 54)

(4) (Orokani-mo) Taro-wa (orokani-mo) masukomi-ni (orokani-mo) sono koto-o (orokani-mo) morasi-ta.
    stupidly Taro-TOP stupidly media-DAT stupidly that thing-ACC stupidly leak-PST
    'Taro stupidly leaked the information to the media.'

If a structure-meaning correlation like the one found in English were present in Japanese as well at a deep level, empirical evidence for the correlation must be hard to find in the Japanese primary linguistic data. This would then naturally lead to the question of how native speakers of Japanese learn that correlation. In this sense, close comparison of the adverbial systems of the two languages may shed light on rich prior, if not innate, linguistic knowledge (Chomsky 1975).
The second claim mentioned above (i.e., that clausal adverbs require a clause-mate Tense) concerns cases where adverbs are embedded in different types of complements. We will observe that when MA adverbs are adjoined to v′, they can be assigned clausal readings in root and finite complement clauses, but not in reduced tenseless complement clauses. This aspect of the adverb distribution, if correct, is something that cannot be explained by the SB theory of adverb licensing.
The paper is structured as follows. After we review the SB theory quickly in Section 2, we support the generalization, originating in Ernst (2002), that MA adverbs lose their otherwise available clausal readings when pure manner adverbs c-command them. It is also shown that this generalization is indeed a structural generalization, by observing that manner adverbs may precede clausal adverbs without c-commanding them. Section 4 discusses data pertaining to the clause-mate condition. We start with some observations on adverbs' orientation in passives and use them to determine where MA adverbs are adjoined in a tree. It is shown that v′-adjoined MA adverbs can yield clausal readings in principle but require a local Tense. We present evidence that when embedded in bare vP-complements, v′-adjoined MA adverbs only receive manner interpretations, suggesting the above-mentioned clause-mate condition. Section 5 concludes the paper.
2. Scope-based theory of adverb licensing. This section quickly reviews the SB theory of adverb licensing proposed by Ernst (2002, 2007, 2015), focusing on how the theory derives clausal-manner ambiguities. Take the AO adverb cleverly as an example (Ernst 2002: 42). The data to be explained are given in (5); the major assumptions of the SB theory can be summarized as in (6). 3

(5) a. Alice has cleverly answered the question. (Clausal; Manner)
    b. Cleverly, Alice has answered the question. (Clausal; *Manner)
    c. Alice has answered the question cleverly. (*Clausal; Manner)

(6) i. Syntactic constituents are interpreted as semantic objects called fact-event objects (FEOs), such as internal events, external events, propositions, and so forth. These FEOs are ordered in the FEO hierarchy in the following way: ... proposition > external event > internal event, where ">" means "higher than".
    ii. An FEO can freely be converted to the next higher one, but this type-conversion process cannot apply in the opposite direction (the fact-event-object calculus). Thus, a constituent representing an internal event may undergo type raising to become an external event, but it can never be converted back to an internal event.
    iii. V-projections (i.e., V′ and VP) are subject to a constraint: they always represent internal events and cannot be converted to external events or any higher types. (We assume that T, Aux, v, and V are generated in this c-command order.)
    iv. Unlike V-projections, v-projections (i.e., v′ and vP) can represent internal events and can be type-raised to external events.
    v. T and Aux take an external event as their sister (and return an external event). Therefore, the highest v-projection is required to be an external event. It could be type-raised to a proposition, but then a selection clash would take place.
    vi. Manner adverbs must take internal events as their arguments, returning internal events.
    vii. Clausal adverbs must take external events as their arguments, returning external events.
Let us start with (5b), where it is clear that cleverly is adjoined to a T-projection (we assume that have is an Aux and raises to T). No T-projection can be mapped onto an internal event. Because a manner adverb's argument needs to be an internal event, only the clausal interpretation is obtained.
Next, the fact that (5c) only has a manner reading can be explained by appealing to (6iii). See Ernst (2002: Ch. 6) for detailed discussion.
Finally, why is (5a) ambiguous? Note that the sentence is structurally ambiguous. In one parse, cleverly is adjoined to an Aux-projection while in the other parse, it is adjoined to a v-projection. The former parse always gives a clausal interpretation of the adverb since an Aux projection represents an external event. The latter parse, in contrast, may yield two interpretations. When the v-projection sister of cleverly denotes an internal event, a manner interpretation is obtained. If the sister projection has type-raised to become an external event, then a clausal interpretation is obtained.
This way, the Ernstian SB theory explains the basic paradigm in (5).
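The FEO calculus in (6) lends itself to a small executable encoding: FEO types are linearly ordered, conversion is one-way, V-projections are capped at internal events, and each adverb class type-checks its sister. A toy sketch (the encoding is ours, for illustration only, not Ernst's formalism):

from enum import IntEnum

class FEO(IntEnum):           # ordered low -> high, as in (6i)
    INTERNAL_EVENT = 0
    EXTERNAL_EVENT = 1
    PROPOSITION = 2

def conversions(feo, cap):
    """One-way FEO calculus (6ii): a constituent may keep its type or be
    raised, never lowered; `cap` encodes constraints such as (6iii)."""
    return [f for f in FEO if feo <= f <= cap]

def readings(attach_site):
    """Available readings for an adverb adjoined at `attach_site`."""
    base, cap = {
        "V-projection":     (FEO.INTERNAL_EVENT, FEO.INTERNAL_EVENT),  # (6iii)
        "v-projection":     (FEO.INTERNAL_EVENT, FEO.EXTERNAL_EVENT),  # (6iv)
        "Aux/T-projection": (FEO.EXTERNAL_EVENT, FEO.EXTERNAL_EVENT),  # (6v)
    }[attach_site]
    sister_types = conversions(base, cap)
    out = []
    if FEO.INTERNAL_EVENT in sister_types:
        out.append("manner")     # (6vi)
    if FEO.EXTERNAL_EVENT in sister_types:
        out.append("clausal")    # (6vii)
    return out

print(readings("v-projection"))      # ['manner', 'clausal'] -> (5a) ambiguous
print(readings("Aux/T-projection"))  # ['clausal']           -> (5b)
print(readings("V-projection"))      # ['manner']            -> (5c)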
3. The ban on manner adverbs scoping over clausal MA adverbs.
Having laid out the core assumptions of the SB adverb-licensing theory, let us introduce a generalization that is defended in Ernst (2002, 2007, 2015). The generalization can be formulated as in (7). 4
(7) Ernst's generalization
Subject-oriented adverbs lose their otherwise available clausal readings when manner adverbs c-command them in the same clause.
Ernst (2015) shows that Japanese AO adverbs, exemplified by orokani-mo, are subject to this generalization. We show that (7) holds for MA adverbs in Japanese as well.
(8) a. Taro-wa iyaiya tanosigeni situmon-ni kotae-ta.
       Taro-TOP reluctantly happily question-DAT answer-PST
       Lit. 'Taro reluctantly happily answered questions.'
    b. # Taro-wa oogoe-de iyaiya tanosigeni situmon-ni kotae-ta.
       Taro-TOP big.voice-with reluctantly happily question-DAT answer-PST
       Lit. 'Taro loudly reluctantly happily answered questions.'

We argue that, as the generalization in (7) predicts, (8a) allows a clausal reading of iyaiya while (8b) does not and only allows a manner reading of it. After showing that, we will quickly review how this generalization follows from the SB theory. 5 Let us examine the pair of examples in (8) in more detail. First, (8a) has a coherent reading of the following sort: Taro did not want to talk happily when answering questions, but he did (because, for example, his boss told him to do so at a meeting). Second, this coherent reading is absent in (8b): Taro would have to look happy and reluctant at the same time when answering questions, so (8b) sounds contradictory. 6 The symbol # indicates this judgment.
It is crucial to note that the coherent reading in (8a) is an instance of what Ernst calls the phenomenon of "event-layering" (Ernst 2002: 60-61, 65-66). Clausal adverbs may take larger events than the most basic event denoted by verb phrases. In (8a), iyaiya takes as its argument tanosigeni situmon-ni kotaeta 'answered questions happily', not the basic event denoted by situmon-ni kotaeta 'answered questions'. This is an indication that a clausal reading of iyaiya is available in (8a).

4 For Ernst, (7) is an instantiation of a more general constraint, which can be described as follows. Suppose that FEOi is higher than FEOj in the FEO hierarchy. Within a given clause, the generalization says, an adverb that combines with FEOj cannot c-command an adverb that combines with FEOi.

5 It is also critical to observe that scrambling of adverbs should be restricted in some way. Example (i) could be analyzed as involving short scrambling of tanosigeni over iyaiya. Importantly, however, (i) does not have what we call the coherent reading in the discussion of (8a) in the text. This strongly suggests that tanosigeni in (i) is base-generated above iyaiya.

(i) Taro-wa tanosigeni iyaiya situmon-ni kotaeta.
    Taro-TOP happily reluctantly question-DAT answered
    Lit. 'Taro happily reluctantly answered questions.'

6 In (8a), the coherent reading seems to be easier to get when pronouncing the sentence with iyaiya prosodically prominent, i.e., with a high pitch. In our judgment, however, it is harder to get a non-contradictory reading from (8b) even if we put prosodic prominence on iyaiya.
Why, then, is the coherent clausal reading absent in (8b)? This question can be answered easily if iyaiya lacks the clausal reading and only has a manner reading in this example. The contradictoriness of (8b) suggests that (in addition to oogoe-de) iyaiya and tanosige are interpreted intersectively here. (9) is a paraphrase of this interpretation.
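In neo-Davidsonian terms, the intersective construal just described can be rendered schematically as follows (our rendering for illustration; the paper's own paraphrase in (9) is not reproduced in this excerpt):

\exists e\,[\mathrm{answer}(e, \mathrm{Taro}, \mathrm{the\ questions}) \wedge \mathrm{loud}(e) \wedge \mathrm{reluctant}(e) \wedge \mathrm{happy}(e)]

Since reluctant(e) and happy(e) are predicated of one and the same event e, the formula can only be true of an event that overtly manifests both manners at once, which is the source of the felt contradiction in (8b).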
When more than one manner adverb occurs in a clause, they modify the same event intersectively (Morzycki 2016; see also Davidson 1967, Parsons 1990, Pietroski 2000). If so, and if iyaiya is interpreted as a manner adverb in (8b), then it follows that (8b) must receive the interpretation shown in (9). 7 Incidentally, it can be shown more directly that MA adverbs such as iyaiya have manner readings as well as clausal readings. The example in (10) is acceptable, if a little awkward.
(10) Taro-wa iyaiya(-nagara) iyaiya situmon-ni kotae-ta.
     Taro-TOP reluctantly(-while) reluctantly question-DAT answer-PST
     'Taro reluctantly answered questions reluctantly.'

The sentence requires a scenario like the following: Taro's boss told him to answer questions glumly, say, at an interview, and Taro was not happy about this order but followed it. If this intuition is correct, then the second instance of iyaiya should be interpreted as a manner adverb here.
Having presented empirical evidence for (7), let us turn to the task of accounting for (7), i.e., explaining why the clausal reading of iyaiya has to go away in (8b). According to the SB theory, attaching oogoe-de forces an internal-event interpretation on its sister because it is a pure manner adverb. Given the FEO calculus in (6ii), this means that the arguments of the other two adverbs are also internal events, which makes the clausal readings of these adverbs unavailable. 8 This state of affairs can be schematically represented as in (11b); (11a) is intended to represent the FEO derivation of (8a), where the internal event designated by tanosigeni situmon-ni kotaeta is converted to an external event.
This way, once a manner adverb is adjoined higher than an MA adverb, no clausal reading can be assigned to the MA adverb.
(12) is another set of examples that makes the same point. Here, the MA adverb yorokonde 'delightedly' is used. (12a) has a coherent reading: Taro was delighted to talk sadly (e.g., when his boss asked him to do so at a meeting). In (12b), where yukkurito 'slowly' is added to the left of the MA adverb, this reading becomes difficult to obtain, if not completely unavailable; (12b) sounds like a contradiction.

7 This is at odds with Kubota's (2015) position that iyaiya entirely lacks a manner use. Ernst (2002: 67) observes that "manner readings require overt manifestation but not the actual mental state." Kubota claims that for iyaiya, the actual mental state of being reluctant is required of the experiencer. If Ernst were right, this would be an issue for our position; the issue has to be left for future investigation.

8 We suspect that tanosigeni is a pure manner adverbial, unlike happily in English. The suffix -ge 'appearance' here is characteristic of pure manner adverbials in Japanese; it is not an element of a proposition and thereby mediates an objective description of an entity (Nakau 1980: 183-184). Other expressions with the same function include -soo-ni, yoosu-de, omoi-de, etc.
(12b) Taro-wa yukkurito yorokonde sabisigeni hanasi-ta.
      Taro-TOP slowly delightedly sadly talk-PST
      Lit. 'Taro slowly delightedly sadly talked.'

The next subsection shows that the notion of c-command in (7) cannot be replaced with precedence.
3.2. EVIDENCE FOR THE HIERARCHICAL NATURE OF ERNST'S GENERALIZATION.
Recall that the generalization in (7) is stated in structural terms, not in terms of linear order. It says that SO adverbs cannot have clausal readings when they are c-commanded by local manner adverbs. We then expect that if the pure manner adverb precedes but does not c-command SO adverbs, clausal readings will be available. We present an argument for this particular aspect of the generalization.
(13b) Yukkurito hanasi-sae Taro-wa yorokonde si-ta.
      slowly talk-even Taro-TOP delightedly do-PST
      'Even talk slowly, Taro delightedly did.'

It is easy to see that the MA adverbs in these sentences allow clausal readings, although a manner adverb precedes them. The availability of clausal readings can readily be explained: it suffices to say that none of the manner adverbs c-commands the MA adverb that follows it. The structure for (13a) is something like (14) below.
(14) [tree for (13a): the fronted VP oogoe-de situmon-ni kotae-sae, with Taro-wa iyaiya ti si-ta in the main clause]
Note that the stranded MA adverbs semantically take scope over the manner adverbs in these examples. This fact also follows automatically in this analysis if the fronted VP undergoes reconstruction into its original position at LF, as indicated by the arrow in (14).

3.3. SUBSECTION SUMMARY. In this subsection, we have shown that Ernst's generalization about the distribution of clausal readings of SO adverbs holds for Japanese MA adverbs. It was shown that the interpretation obtained when MA adverbs occur higher than another adverb is an event-layering effect, which is an indicator of their clausal readings. Also, an intersective modification analysis was given to the contradictory interpretation (e.g., "talking happily sadly") obtained by adding a manner adverb before the two adverbs. This analysis could not be instantiated if the MA adverb did not lose its clausal interpretation in the new configuration.
4. The clause-mate condition on clausal MA adverb placement.
This section discusses a rather different aspect of Japanese MA adverbs. The central issue is whether there is any restriction on where they can be adjoined in a tree.
4.1. PASSIVE-SENSITIVITY AS A PROBE INTO SUBJECT-ORIENTED ADVERBS' SYNTACTIC POSITION.
As is well known in the literature (McConnell-Ginet 1982, for instance), SO adverbs are sensitive to the active-passive alternation in such a way that passivization leads to ambiguity with respect to their "orientation," namely, which participant's attitudes or mental states the adverbs express. In (15), while the a-sentence does not involve any ambiguity about orientation, the b-sentence does. What we might call the surface subject reading of (15b) is understood to mean that the patient was careful while being examined by the doctor. Under what we might call the deep subject reading, (15b) is understood to mean that the doctor was careful while examining the patient, which is truth-conditionally equivalent to the sole meaning of (15a). Interestingly enough, when carefully is attached to a T projection as in (15c), the deep subject reading goes away (McConnell-Ginet 1982).
(15) a. The doctor carefully examined the patient.
     b. The patient was carefully examined by the doctor.
     c. The patient carefully was examined by the doctor.
Essentially following Ernst (2002), we can make an assumption like (16) as the generalization that captures the basic facts about passive sensitivity. 9 (See Kubota (2015), who investigates Japanese SO adverbs with this generalization.)

(16) If an adverb allows a deep subject reading in passives, it indicates that it can be adjoined to v′, i.e., the position right below the Spec,vP where the logical subject is base-generated.
(17) is a more concrete syntactic representation of passives. Here, VP contains an NP-trace created by passivization. The deep subject in Spec,vP may itself be realized as a by-phrase, or it may be phonologically null and anaphoric to a by-phrase separately occurring somewhere in the tree (Baker, Johnson, and Roberts 1989, Collins 2005). These details are not crucial here. Against this backdrop, let us now turn to our main point. (18) is a passive example containing the MA adverb iyaiya followed by the manner adverb teineini 'carefully'.
(18) Kono ronbun-wa (hissya-niyotte) iyaiya teineini kak-are-ta-yooda.
     this paper-TOP author-by reluctantly with.care write-PASS-PST-seem
     Lit. 'This paper seems to have reluctantly been written carefully (by the author).'

An 'event-layering' reading (i.e., what the author was reluctant about is writing the paper with care) is clearly available. This strongly suggests that the clausal MA adverb is allowed to hang from a v projection and be c-commanded by the (possibly implicit) deep subject argument. (19) is another example making the same point.
(19) Atarasii kooka-ga seito-tati-niyotte yorokonde heta-ni utaw-are-ta.
     new school.song-NOM student-PL-by delightedly poorly sing-PASS-PST
     'The new school song was delightedly sung poorly by the students.'

One possible scenario is: the students, who may be acting silly, enjoy singing their new school song poorly.
In the next subsection, we see cases where clausal MA adverbs fail to be licensed even though they appear to be adjoined to a v-projection.

4.2. EMBEDDABILITY. Clausal MA adverbs seem to occur only in a limited set of embedded clauses. Consider (20a), (20b), and (20c), which involve a CP complement of yurusu 'allow', a te-complement of hosii 'want' (which is often analyzed as TP; Nakatani 2013, Hayashi and), and a causative complement headed by a bare verb (which is standardly analyzed as vP; Murasugi and Hashimoto 2004, Harley 2008), respectively. It should be noted (i) that the judgments given here are those for the readings where iyaiya is oriented to the embedded subject and (ii) that the judgments are about whether non-contradictory (i.e., event-layering) readings can be obtained.
Not only does this condition successfully rule out (20c) and rule in (20a) and (20b), but it is also compatible with the grammaticality of the passives given in (18) and (19). We assume that these passive sentences are mono-clausal.
Finally, it should be stressed that this restriction on clausal MA adverb licensing cannot be reduced to the Ernstian SB theory, which says nothing more than that clausal SO adverbs select for external events.
4.3. MORE ON PASSIVE-SENSITIVITY. Before concluding the paper, let us return to passive-sensitivity. We saw above that clausal MA adverbs can be adjoined to a v-projection, making it possible for them to yield deep-subject-oriented readings. This subsection discusses an apparent counterexample to the analysis. As Kubota (2015) notes, MA adverbs do exhibit passive-sensitivity. (22a) and (22b) are adapted from Kubota (2015).

(22) b. Mary-wa John-ni iyaiya dakishime-rare-ta. (Ambiguous)
        Mary-TOP John-by reluctantly hug-PASS-PST
        'Mary was reluctantly hugged by John.'

(22b) has two readings: Mary is reluctant in one reading, and John is in the other. Under (16), the availability of the deep subject reading may be taken to mean that iyaiya can be adjoined to v′.
This conclusion, however, is unwarranted, because we now know that there is a possibility that the MA adverb receives a manner reading here (Section 3.1). To find out whether the instance of iyaiya in (22b) can have a clausal reading, we use the event-layering effect as a test; (23) is obtained when a manner adverb uresisooni 'happily' is added after iyaiya.
(23) Mary-wa John-ni iyaiya uresisooni dakishime-rare-ta.
     Mary-TOP John-by reluctantly happily hug-PASS-PST
     Lit. 'Mary was reluctantly happily hugged by John.'

The sentence is certainly complicated, and therefore we need to interpret the data with care. The following preliminary observations can nevertheless be made about the sentence.
• When Mary is taken as the attitude holder for iyaiya (and uresisooni), the contradictory reading is not difficult to access.
• A clausal reading of iyaiya is also possible under the surface subject reading. For example: Mary and John are actors, and a scene is being shot for a film. The director of the film asks Mary to be hugged happily. Having followed the director's suggestion, Mary tries to be hugged happily though she is reluctant in her mind.
• When John is taken as the attitude holder for the two adverbs, the contradictory reading is clearly accessible.
• However, the sentence is harder to understand in a way that is associated, e.g., with the following film-shooting scenario: when a scene is being shot, Mary is psychologically affected by John's way of hugging her. He has appeared happy, but she has noticed that he is actually reluctant to hug her in such a manner, which affects her emotions.
The final bullet point suggests that iyaiya lacks a clausal reading under its deep-subject-oriented interpretation. 12 This result at first appears to be at odds with the conclusion we arrived at about the passives in (18) and (19). That is only apparent, though. The so-called ni-passives found in (22) and (23) are often analyzed as bi-clausal (Hoshi 1994, 1999; see also Kuroda 1979, Inoue 1976). If this is correct and their complements lack Tense, as they appear to, (23) should be treated on a par with the causative example in (20c).
5. Conclusion.
We have argued that Japanese MA adverbs are by and large well-behaved from the perspective of an Ernstian SB theory of adverb licensing. We have shown that they interact with manner adverbs in the ways the theory predicts. Furthermore, we have argued that MA adverbs require a clause-mate Tense. We need that condition to account for the embeddability of clausal MA adverbs in different types of complements and for the somewhat complicated facts pertaining to passive sensitivity. Because the clause-mate condition does not follow from the SB theory, we have to stipulate it at this point. | 5,206.6 | 2021-03-20T00:00:00.000 | [
"Linguistics"
] |
Magnetic Field Signatures of Tropospheric and Thermospheric Lamb Modes Triggered by the 15 January 2022 Tonga Volcanic Eruption
Intense eruptions of the Tonga volcano activated prominent traveling atmospheric disturbances (TADs) at 04:05UT on 15 January 2022. Himawari-8 satellite images depict that the TADs of the tropospheric Lamb wavefront propagate with a speed of 315 m/s and arrive in Taiwan at 11:30UT. Networks of 98 barometers, 28 tide gauges, an ionosonde, and 10 magnetometers are used to study the responses of magnetic fields to the TADs in Taiwan. The horizontal components of the magnetic field changes at the Taiwan magnetometers all point toward the Tonga volcano at 11:00-12:00UT upon the tropospheric Lamb wavefront arrival, and away from it at 22:00-23:00UT when the thermospheric Lamb wavefront with a speed of 487 m/s arrives. Analyses with the ray-tracing and beamforming techniques on the horizontal components of the magnetic field changes at 69 INTERMAGNET magnetometers show that both tropospheric and thermospheric Lamb waves efficiently activate traveling ionospheric disturbances and modify ionospheric currents over the globe.
• Tropospheric and thermospheric Lamb waves of the Tonga volcanic eruption activate dynamo currents and electric fields
• Traveling atmospheric disturbances of the Tonga volcanic eruption significantly uplift the ionosphere
• Tropospheric Lamb waves of the Tonga volcanic eruption modulate ground-based air pressures and sea levels

Supporting Information: Supporting Information may be found in the online version of this article.
At 04:05UT on 15 January 2022, intense eruptions of the Tonga volcano generated prominent TADs of atmospheric shocks, pressure disturbances, and tsunamis. Himawari-8 satellite images depict prominent TADs of a Lamb wavefront (cf. Liu et al., 1982) propagating worldwide (Figure 1a), which provides a good chance to study tele-volcanic magnetic signatures induced by TADs in the ionospheric E-layer. Networks of 10 three-component magnetometers, 98 barometers, 28 tide gauges, and an ionosonde are employed to examine the atmosphere-ionosphere coupling upon the arrival of TADs/TIDs, especially Lamb waves, over Taiwan. Meanwhile, 69 global magnetometers of INTERMAGNET (https://www.intermagnet.org/index-eng.php) (Kerridge, 2001) are used to study the responses of the horizontal-component magnetic fields to TIDs over the globe.
Observations and Data Analyses
Himawari-8 is a new-generation Japanese geostationary meteorological satellite, which carries state-of-the-art optical sensors with significantly high radiometric, spectral, and spatial resolution (Bessho et al., 2016). The 6.2 μm infrared band (#8), with a spatial resolution of 2 km and a temporal resolution of 10 min, clearly observes worldwide TADs of upper-level tropospheric water vapor at 344 hPa (http://cimss.ssec.wisc.edu/goes/OCLOFactSheetPDFs/), at about 8.2 km altitude (https://www.weather.gov/epz/wxcalc_pressurealtitude) (Otsuka, 2022), during the Tonga volcanic eruption. Figure 1a displays the TADs of the tropospheric Lamb wavefront in the double difference of the Himawari-8 band #8 images, which traveled with an average speed of 315 m/s away from the Tonga volcano and arrived in Taiwan at about 11:30UT, as well as the locations of the Taiwan magnetometers, barometers, tide gauges, and ionosonde.
Figures 1b-1d display ionograms, ground-based atmospheric pressures, and tide-removed sea level fluctuations on 15 January 2022, respectively. When the tropospheric Lamb wavefront in the Himawari-8 images arrives in Taiwan at 11:30UT, the pressures start to increase and reach their maximums at about 11:50UT (Figure 1c); the sea levels begin to fluctuate, become prominent after 14:00UT, and reach their maximums by 14:30-17:30UT (Figure 1d). The maximums of the ground-based pressures and sea level fluctuations lag the tropospheric Lamb wavefront by 20 min and 3-6 hr, respectively. The short lag of 20 min could be due to ground friction acting on the tropospheric Lamb wavefront, while the long lag of 3-6 hr may be caused by atmospheric pressure-sea surface interaction and/or tsunami waves. Based on the maximums, the horizontal speed of the ground pressures is about 286 m/s (Figure 1c), while, the maximums being complex, the horizontal speed of the sea level fluctuations is difficult to estimate (Figure 1d). Figure 1b shows ionograms at the top of each hour: the F-layer appears clearly at about 220-300 km virtual height during 00:00-11:00UT, reaching a maximum altitude at about 10:00UT. After the tropospheric Lamb wavefront arrival at 11:30UT, range spread F appears at 280-300 km at 12:00UT, 375-395 km at 13:00UT, and 190-220 and 275-300 km at 14:00UT; reaches the highest altitude of about 275-520 km at 15:00-16:00UT; starts descending to 270-330 km at 17:00UT and 220-290 km at 18:00UT; becomes very faint after 19:00UT; and finally disappears at 21:00UT. A typical F-layer with non-physical fluctuation traces appears at 250-350 km during 23:00-24:00UT (Figure S1 in Supporting Information S1, Movie S1).
Figure 2a, from top to bottom, illustrates the condensed rapid-run ionograms with 6 min resolution (Movie S1); the magnetic field changes ΔBx and ΔBy, obtained by subtracting the reference from the observation; the azimuth of the magnetic horizontal changes (ΔH); the upper/lower envelopes of the sea level fluctuations; and those envelopes of the atmospheric pressures. Let northward, eastward, southward, and westward be 0, 90, 180, and 270°, respectively. The azimuth θ of the magnetic horizontal change ΔH = (ΔBx² + ΔBy²)^(1/2) can be expressed as

θ = tan⁻¹(ΔBy / ΔBx),

with the quadrant determined by the signs of ΔBx and ΔBy. Figure 2a displays that ΔBx yields a rapid decrease at Zone A, a maximum at Zone H, a minimum at Zone B, and a maximum at Zone C, while ΔBy depicts a minimum at about 03:00UT, a maximum at Zone H, and a prominent minimum at Zone C. The azimuths of ΔH of the 10 magnetometers lie between 120° and 330° on 15 January 2022. We further compute the median of the azimuths of ΔH with 3,600 (= 60 × 60) data points for each magnetometer every hour on 15 January 2022 (Figure S2 in Supporting Information S1 and Movie S2). Figure 2b shows that the 10 median azimuths point at 120.0-130.5° (290.1-307.0°) at Zone H (Zone C), which is about the direction toward (away from) the Tonga volcano at 124.8° (304.8°).
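The azimuth computation and the hourly median are straightforward to reproduce. A sketch with synthetic 1 Hz samples (the values are illustrative; the plain median should be replaced by a circular statistic when azimuths straddle the 0/360° wrap):

import numpy as np

def azimuth_deg(dBx, dBy):
    # 0 deg = northward, 90 deg = eastward; dBx is the northward component,
    # dBy the eastward component of the horizontal perturbation.
    return np.mod(np.degrees(np.arctan2(dBy, dBx)), 360.0)

# Synthetic 1 Hz samples for one hour (3,600 points); replace with real data.
rng = np.random.default_rng(1)
dBx = -5.0 + rng.normal(0.0, 0.5, 3600)   # nT
dBy = 7.0 + rng.normal(0.0, 0.5, 3600)    # nT
print(f"hourly median azimuth: {np.median(azimuth_deg(dBx, dBy)):.1f} deg")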
To see whether the toward-pointing signature occurs specifically in Taiwan or not, we select 10 out of the 69 INTERMAGNET magnetometers to examine the horizontal magnetic field changes in the eastern Asia region and over the globe upon the tropospheric Lamb wavefront arrival. The sample rate of the INTERMAGNET magnetometers is 1 Hz. Figure 3a illustrates the locations of the INTERMAGNET magnetometers and the tropospheric Lamb wavefront at 11:30UT on 15 January 2022, while Figure 3b depicts ΔBx, ΔBy, and the azimuth angles of the 10 selected magnetometers, 5 regional (KAK, KNY, MMB, CYG, and DLT) and 5 global (GNG, TUC, VIC, HER, and CNB). At Zone A, while Figures 1e and 2a depict that ΔBx at the Taiwan magnetometers suddenly decreases at about 06:30UT, Figure 3b shows similar changes at the 10 selected magnetometers, which confirms that global effects have been detected. Note that the Dst index reaches a local maximum of −43 nT at 06:00UT during the storm period. At Zone H (10:30-12:00UT), while ΔBx, ΔBy, and the azimuth angles of the magnetometers in Taiwan respectively reach their maximums and point toward the Tonga volcano (Figure 2a), those of the regional magnetometers at KAK, KNY, MMB, CYG, and DLT also yield similar maximums and point toward the volcano (Figure 3b). For the globally selected magnetometers at Zone H, despite no extrema in ΔBx or ΔBy, the horizontal components at GNG, TUC, and VIC also point toward the Tonga volcano upon the tropospheric Lamb wavefront arrival, while those at HER change rapidly. It is interesting to find that the horizontal component at CNB points away from the Tonga volcano, which might result from the site being in the southern hemisphere, where the magnetic vertical component (Bz) is upward. The horizontal components pointing toward and away from the Tonga volcano, as well as changing rapidly, show that the tropospheric Lamb wavefront plays an important role in the magnetic field perturbations. At Zone B, the ΔBx component of the 10 Taiwan magnetometers, the five regional magnetometers, and one global magnetometer (GNG, around the conjugate point of KNY) yields very similar tendencies, simultaneously reaching significant reductions at 15:40UT, which suggests that regional and conjugate effects have been observed. It is surprising that at Zone C, ΔBx and ΔBy of both the Taiwan and regional magnetometers reach a maximum and a minimum, respectively, which further results in the azimuth angles suddenly changing and pointing away from the Tonga volcano (Figures 2, 3b, and 3c).
To find the responses of the magnetic field to the tropospheric Lamb wavefront over the globe, we examine the azimuth of ΔH within 1.5 hr before and after the wavefront arrival recorded by the INTERMAGNET magnetometers.
When the azimuth points within 15° of the toward- or away-Tonga direction for more than 15 min, or changes by more than 90° within 5 min, we consider that TADs/TIDs induced by the tropospheric Lamb wavefront have been observed. Within 1.5 hr before and after the tropospheric Lamb wavefront arrival, the ΔH azimuth angles of the 10 Taiwan magnetometers all (100% = 10/10) point toward the Tonga volcano, while 25.4% (=16/63), 9.3% (=6/63), and 25.4% (=16/63) of the INTERMAGNET magnetometers yield toward, away, and fast-change signatures, respectively (Figure 3a; Figures S3 and S4 in Supporting Information S1, Movies S3 and S4). We further adopt the ray tracing and the beamforming techniques (Liu et al., 2006, 2010, 2019, 2020b) on the INTERMAGNET data associated with the Zone H and Zone C signatures (Figures 3a and 3c), and construct two global grid searches to see whether the tropospheric Lamb waves and the 487 m/s TIDs are triggered by the volcanic eruption or not. Figures 4a and 4b illustrate that when the tropospheric Lamb wavefront speed of 315 m/s (and the volcanic eruption time of 04:05UT) is given to the ray tracing (beamforming) technique, the location with the minimum standard deviation in the travel time of ±34 min and the average eruption time of 03:53UT (the minimum standard deviation of ±27 m/s with the average travel speed of 329 m/s) appears near the Tonga volcano, which confirms that the tropospheric Lamb wavefront triggered by the Tonga volcanic eruption can prominently change the horizontal magnetic field across the globe. Similarly, Figures 4c and 4d depict that when the speed of 487 m/s and the volcanic eruption time of 04:05UT are set, the locations of the minimum standard deviation in the travel time of ±37 min with the average eruption time of 04:15UT and that in the travel speed of ±22 m/s with the average travel speed of 482 m/s are near the Tonga volcano, which again confirms that the Tonga volcanic eruption can trigger the 487 m/s TIDs and prominently disturb the horizontal magnetic field over the globe.
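The grid-search logic behind these two localizations can be sketched as follows; this is an illustrative reimplementation under stated assumptions (function names, the 1° grid resolution, and the haversine helper are ours, not the authors' code). For an assumed front speed, the eruption time back-projected from each station's arrival should agree across stations only when the candidate source is near the true origin, so the grid point minimizing the spread of back-projected times is taken as the source.

```python
# Hedged sketch of the ray-tracing-style grid search described above.
import numpy as np

R_EARTH_KM = 6371.0

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in km (inputs in degrees)."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dlat, dlon = p2 - p1, np.radians(np.asarray(lon2) - lon1)
    a = np.sin(dlat / 2) ** 2 + np.cos(p1) * np.cos(p2) * np.sin(dlon / 2) ** 2
    return 2 * R_EARTH_KM * np.arcsin(np.sqrt(a))

def locate_source(station_lat, station_lon, arrival_s, speed_ms=315.0):
    """Grid search: return (lat, lon, std of back-projected onset times, s)."""
    best = (None, None, np.inf)
    for lat in np.arange(-89.0, 90.0, 1.0):
        for lon in np.arange(-180.0, 180.0, 1.0):
            d_km = great_circle_km(lat, lon, station_lat, station_lon)
            onset = arrival_s - d_km * 1000.0 / speed_ms  # inferred eruption times
            if onset.std() < best[2]:
                best = (lat, lon, onset.std())
    return best
```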
Discussion and Conclusion
Scientists find that the atmospheric response to excitations at tropospheric heights, such as by volcanic eruptions, is dominated by Lamb waves because their wave energy is mainly distributed at lower heights of the atmosphere (Francis, 1973; Jones, 1970; Lin et al., 2021; Lindzen & Blake, 1972). These modes can propagate a long distance with a horizontal speed slightly above 300 m/s and little attenuation. Figure 1a shows that the tropospheric Lamb wavefront activated by the Tonga volcanic eruption at 04:05UT travels 8,500 km with a horizontal speed of 315 m/s and arrives in Taiwan by 11:30UT, which agrees with the characteristics of Lamb waves induced by volcanic eruptions (Kubota et al., 2022; Liu et al., 1982; Zhang et al., 2022). Figures 1c and 1d show that upon the tropospheric Lamb wavefront arriving in Taiwan at 11:30UT, the ground-based pressures and sea levels start to increase and fluctuate. The pressures reach their maximum at about 11:50UT. The 20-min lag of the pressures suggests that the tropospheric Lamb waves mainly travel in the upper-level troposphere. In contrast, when the Taiwan and regional magnetometers register the passages of the 487 m/s TADs/TIDs at 08:50UT and Zone C, no fluctuations can be detected by the barometers and tide gauges (Figure 1), which suggests that the 487 m/s TADs/TIDs travel in the upper atmosphere at about 150 km altitude and are related to thermospheric Lamb waves (Forbes et al., 1999; Meyer & Forbes, 1997).
The median azimuths of the 10 Taiwan magnetometers, each obtained with 3,600 datapoints, of 120.0-130.5°, together with those of the five regional magnetometers, almost exactly point toward the Tonga volcano (124.8° azimuth in Taiwan), which strongly suggests that intense dynamo ionospheric currents flow southwestward during the tropospheric Lamb wavefront arrival at 10:30-12:00UT (Zone H). The beginning of the eastward electric field at 11:30UT lags that of the dynamo current at 10:30UT by about 1 hr. Based on Kelley (2009), the most usual form of the current equation can be expressed as J = σ(E + U × B E ), where σ, E, U, and B E are the conductivity, electric field, neutral wind, and Earth's magnetic field in the Earth-fixed coordinates, respectively. B E consists of horizontal (B H ) and vertical (B z ) components. Here, σ is a 3 × 3 tensor and a function of electron/ion density, gyro frequency, mass, and collision frequency. Around the Lamb wavefront arrival, the current can be expressed as Equation 3 (given below). When the conductivity is very high, the dynamo current, J d = σU d × B E , will be quickly canceled by the motor current of the dynamo electric field, σE d . However, if an impact occurs very suddenly, for example, a Lamb wavefront, the dynamo current could lead the dynamo electric field by minutes to hours. The median azimuths of the 10 Taiwan magnetometers and the five regional magnetometers pointing toward the Tonga volcano indicate that an intense southwestward J d occurs around the tropospheric Lamb wavefront arrival in Taiwan and the regional area at 10:30-12:00UT (Zone H in Figures 2a and 3b). The intense southwestward J d results in a northeastward dynamo electric field, E d . It is the E × B upward drift, owing to the eastward component of E d and the Earth's magnetic field, B E , that causes the prominent ionosphere ascent during 11:30-15:30UT (Figures 1b and 2a). These indicate that the dynamo current leads the dynamo electric field by about 1-2 hr in Taiwan and Japan after the tropospheric Lamb wave arrival. Figure 5 sketches that, due to the cross product of the neutral wind velocity activated by the tropospheric Lamb wavefront and the Earth's magnetic field, the dynamo currents result in the azimuth angle pointing toward and away from the Tonga volcano in the northern and southern hemisphere (e.g., CNB), respectively. Moreover, owing to the competition and time delay between dynamo currents (J d ) and motor currents of the dynamo electric field (σE d ), the azimuth angle could point toward/away from the Tonga volcano or abruptly fluctuate. In total, 100% of Taiwan and 60% of INTERMAGNET magnetometers experience the tropospheric Lamb wavefront disturbing the magnetic field in the ionosphere (Figure 3a, Figures S3 and S4 in Supporting Information S1).
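The hemispheric sign flip can be checked with a minimal numeric sketch of J d ∝ U d × B E (our illustration, not the authors' computation: a scalar conductivity and idealized unit-scale vectors stand in for the 3 × 3 tensor, in north-east-down coordinates). With the Lamb-front wind blowing away from Tonga (azimuth ≈ 304.8° at Taiwan) and a downward B z (northern hemisphere), the horizontal part of U d × B E comes out southwestward, roughly back toward the volcano; flipping B z upward (southern hemisphere, e.g., CNB) reverses it.

```python
# Minimal sketch: direction of the dynamo current J_d ~ sigma * (U_d x B_E).
# Assumptions (ours): scalar sigma, illustrative vectors, x = north, y = east,
# z = down. These are not measured fields.
import numpy as np

def horiz_azimuth_deg(v: np.ndarray) -> float:
    """Azimuth of the horizontal part of v (0 = north, 90 = east)."""
    return float(np.degrees(np.arctan2(v[1], v[0])) % 360.0)

az = np.radians(304.8)                        # wind away from Tonga at Taiwan
u_d = np.array([np.cos(az), np.sin(az), 0.0])
b_north = np.array([0.35, 0.0, 0.9])          # NH field: northward, B_z downward
b_south = np.array([0.35, 0.0, -0.9])         # SH field: B_z upward

print(horiz_azimuth_deg(np.cross(u_d, b_north)))  # ~215 deg: southwestward J_d
print(horiz_azimuth_deg(np.cross(u_d, b_south)))  # ~35 deg: reversed in the SH
```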
Each INTERMAGNET magnetometer could experience one tropospheric and two thermospheric front passages on 15 January 2022. In total, the magnetometers experience 63 tropospheric and 113 thermospheric front passages (Figures S3b and S4-S6, Table S1 in Supporting Information S1). About 60% and 58% of front passages register the tropospheric and thermospheric Lamb mode signatures, respectively. Approximately 65% and 58% (55% and 60%) of the front passages register the tropospheric (thermospheric) Lamb mode signatures in daytime and nighttime, respectively. Meanwhile, about 35% of the first pass and 86% of the second pass register the thermospheric Lamb mode signatures. In nighttime, more than 96% of the second pass registers the thermospheric Lamb mode signatures. In short, regardless of daytime/nighttime and first/second pass, more than 55% of front passages register the tropospheric and thermospheric Lamb mode signatures. Almost all the second passages in nighttime register the thermospheric Lamb mode signatures.
The nighttime conductivity at midlatitudes is generally low; however, the Es layer with foEs of 2.0-5.0 MHz at 120-150 km altitude (Movie S1) shows that the conductivity could be much greater than its typical value during 14:30-16:30UT (=22:30-00:30 LT, where LT = UT + 8 hr). Figures 1b and 2a reveal that the ionosphere monotonically descends during 14:30-18:00UT. This indicates that the ionosphere experiences prominent westward electric fields, which further activate westward motor currents and cause significant decreases in the northward magnetic field B x in the Taiwan, regional, and conjugate areas during 14:30-16:30UT (Zone B in Figures 1e, 2a, and 3b). Figures 2a and 3b depict that at Zone C, the azimuth angles point away from the Tonga volcano, which suggests that northeastward dynamo currents and southwestward dynamo electric fields have been induced by the thermospheric Lamb waves traveling toward the Tonga volcano (i.e., away from the antipode). The Es layer descends to the lowest altitude of about 90 km around 22:20-22:30UT, confirming the westward component of the southwestward dynamo electric field. We further examine the azimuth angles of the INTERMAGNET magnetometers and find that, in total, 86% (=43/50) of the magnetometers register the thermospheric Lamb front signatures of the magnetic field and currents in the ionosphere. The source origins derived by the ray tracing and beamforming techniques are near the Tonga volcano, which confirms that the eruptions trigger both tropospheric and thermospheric Lamb waves traveling worldwide. It is interesting to find that the thermospheric Lamb waves propagate almost 1.5 times faster than the tropospheric Lamb waves do. In conclusion, TADs of the tropospheric Lamb wavefront produce the ground-based atmospheric pressure peak and cause prominent sea level fluctuations. The signatures of pointing toward, away, or fast changes indicate that the TIDs triggered by the tropospheric and thermospheric Lamb waves can further induce dynamo currents and the associated dynamo electric fields over the globe.
Figure 1. (a) Locations of Overhauser magnetometers (red triangles), tide gauges (dark blue asterisks), barometers (blue diamonds), and an ionosonde (magenta square, 121.0°E, 25.0°N), as well as the Himawari-8 satellite image at 11:30UT on 15 January 2022. The magenta star denotes the Tonga volcano, and red arrows denote the variations of the horizontal component of the Earth's magnetic fields during 14:00-15:00UT on 15 January 2022. The (b) ionogram, (c) differential pressure disturbances recorded by 98 barometers, and (d) sea surface heights recorded by the 28 tide gauges and their fluctuations after the tidal removal are displayed. The time sample rates of the ionograms, atmospheric pressures, and sea levels are 6, 1, and 1 min (6-min interpolation), respectively. The red vertical line denotes the tropospheric Lamb wavefront arrival time in Taiwan at 11:30UT. Note that the trace around 480 km altitude is the "double hop" of the trace at about 220 km altitude at 11:00UT in the ionogram. (e-f) Variations of the magnetic northward (B x ), eastward (B y ), and downward (B z ) components on 15 January (red curves) as well as on the reference days of 16-18 January (thin gray curves) 2022 and their associated medians (dashed curves). The sampling rate is 1 Hz. The red vertical lines denote the tropospheric Lamb wavefront arrival in Taiwan at 11:30UT on 15 January 2022. The black, magenta, black, and blue dotted squares denote the time zones of Zones A, H, B, and C, respectively.
Figure 2. (a) From top to bottom: the virtual heights of the E-, F-, and sporadic Es-layers; deviations of the magnetic northward (ΔB x ) and eastward (ΔB y ) components and the azimuth of the deviated magnetic horizontal component (ΔH), arctan(ΔB y /ΔB x ), of each magnetometer; and the detrended sea level and pressure data with their upper/lower boundaries. ΔB x , ΔB y , and the azimuth angle are denoted by blue, green, and red curves, respectively. Sea level and pressure fluctuations with the detrended data, the upper and lower boundaries, and the envelope between the upper and lower boundaries are indicated by gray curves, red curves, magenta curves, and black curves, respectively. (b) The azimuth angles at 11:00-12:00UT (Zone H) and 22:00-23:00UT (Zone C) on 15 January 2022 are represented by red and blue arrows, respectively. The black dashed arrows indicate the directions directly toward/away from the Tonga volcano.
Figure 3. (a) The map of the 10 Taiwan magnetometers (combined into one) and 69 global magnetometers, with red, blue, green, and black squares denoting the toward, away, fast-change, and no signatures at each station, respectively. The Tonga volcano is pointed out with magenta stars, and the red curve indicates the tropospheric Lamb wavefront at 11:30UT. The 10 selected global magnetometers are specified with cyan circles. (b) The ΔB x , ΔB y , and azimuth angles are denoted by blue, green, and red lines, respectively. The vertical red lines indicate the arrival of the tropospheric Lamb wavefront at each magnetometer, and the precise location of each magnetometer is labeled vertically in the form of (Latitude (°N), Longitude (°E)) on the left, with the distance (km) to Tonga noted at the top right. The right-hand-side table denotes whether the critical variation can be observed in the corresponding time zone; the symbols "○" and "×" represent "Yes" and "No," respectively. At Zone H, the different time zones depend on the distance between the Tonga volcano and the magnetometers. The notations "T," "A," and "F" indicate the toward, away, and fast-change signatures. (c) The map with the same illustration as (a), but at 22:00UT. The blue curve denotes the thermospheric Lamb wavefront.
Figure 4. The ray tracing (a and c) and beamforming (b and d) techniques applied to Zone H and Zone C. The magenta solid star, halo star, and white cross indicate the location of the Tonga volcano, the antipode of the Tonga volcano, and the location of the minimum standard deviation, respectively. The minimum standard deviation, associated mean, and distance from the minimum to Tonga are listed on top of each panel.
J = J 0 + J t = σ(E + U × B E ) = σE + σ(U 0 × B E ) + σ(U d × B E ) (3)
which consists of the ambient J 0 and the triggered J t currents. U d , E d , and J d are the neutral wind disturbed by the Lamb wavefront, the dynamo electric field, and the dynamo current, respectively. U = U 0 + U d , where U 0 denotes the background neutral wind velocity. The dynamo current and the dynamo electric field are in opposite directions.
Figure 5. A sketch of neutral winds (U d ) activated by the tropospheric Lamb wavefront, dynamo-generated electric fields (E d ), dynamo currents (J d ), and the induced magnetic fields (ΔH, red bold arrows) in the lower ionosphere. Magenta arrows denote the Earth's magnetic field (B E ). | 5,118.4 | 2023-10-13T00:00:00.000 | [
"Physics"
] |
A Collaborative Multi-agent Reinforcement Learning Framework for Dialog Action Decomposition
Most reinforcement learning methods for dialog policy learning train a centralized agent that selects a predefined joint action concatenating domain name, intent type, and slot name. The centralized dialog agent requires a great many user-agent interactions due to the large action space. Besides, designing the concatenated actions is laborious for engineers and may struggle with edge cases. To solve these problems, we model the dialog policy learning problem with a novel multi-agent framework, in which each part of the action is led by a different agent. The framework reduces labor costs for action templates and decreases the size of the action space for each agent. Furthermore, we relieve the non-stationarity caused by the changing dynamics of the environment as agents' policies evolve by introducing a joint optimization process that enables agents to exchange their policy information. Concurrently, an independent experience replay buffer mechanism is integrated to reduce the dependence between gradients of samples and improve training efficiency. The effectiveness of the proposed framework is demonstrated in a multi-domain environment with both user simulator evaluation and human evaluation.
Introduction
Dialog policy optimization is one of the most critical tasks of task-oriented dialog modeling. Recently, reinforcement learning (RL) based methods have shown great potential for dialog policy learning (Peng et al., 2017). However, most of these methods learn a centralized agent based on a joint action space that covers predefined atomic actions (Budzianowski et al., 2018), each the concatenation of domain name, intent type, and slot name, e.g. 'restaurant-inform-address', or both atomic actions and the top-k most frequent atomic action combinations (Lee et al., 2019a). The elaborate concatenated actions may achieve acceptable performance in simple cases; however, they remain laborious for engineers and struggle with edge cases in multi-domain or complex scenarios. Another drawback of the centralized agent is the exponential growth of its observation and action spaces with the growing number of domains (Lee et al., 2019b).
To alleviate the problem of large user-agent interaction requirements caused by the large action space, a hierarchical reinforcement learning framework was proposed to learn a dialog policy that operates at different temporal scales (Peng et al., 2017). It has achieved promising results but still faces some challenges. Firstly, the setting requires a rule-based critic to provide the intrinsic reward for the low-level agent. However, creating such a critic is not easy, especially in intricate scenarios, and the man-made critic may inadvertently bias the converged optimum. Moreover, the action space composed of intent and slot for the low-level agent can still be large, especially when there are many intent types and slot names. Drawing on the structural features of dialog actions, we address the above problems with a proposed collaborative multi-agent reinforcement learning framework, where the concatenated dialog action space is decomposed into subspaces corresponding to the domain, intent type, and slot name. Furthermore, each subspace is assigned to a different agent, and the agents cooperate to make the final joint action without any human knowledge: each agent passes its selected action to the next agent, and the selected parts are concatenated into the final output. To relieve the non-stationarity (Claus and Boutilier, 1998; Hu and Wellman, 2003) caused by unexpected changes in the dynamics of the environment as the agents' policies evolve, and to reduce the dependence of the gradients due to non-independent data, we propose a new approach that allows Joint Optimization based on Independent Experience replay buffers for all agents, termed JOIE. Our experiments show that such a multi-agent framework reduces the state-action space size significantly and makes exploration more efficient. Furthermore, JOIE achieves better performance, benefiting from the proposed optimization mechanism.
To the best of our knowledge, this is the first work that strives to develop a multi-agent RL-based dialog action decomposition framework. Our main contributions are three-fold: • We formulate dialog policy learning in the mathematical framework of collaborative multi-agent reinforcement learning.
• We propose an efficient and effective multi-agent-based approach that factors the action space and learns each part with a different agent under joint optimization and independent experience replay.
• We validate the effectiveness of the proposed method in a multi-domain task with both user simulators and human users.
Related Work
Many studies have been dedicated to optimizing dialog policy with reinforcement learning, most of which learn a centralized agent that maps the observation to a joint action (Young et al., 2013; Su et al., 2016; Williams et al., 2017; Peng et al., 2018a,b; Lipton et al., 2018; Li et al., 2020a; Zhu et al., 2020; Li et al., 2020b; Wang et al., 2020). For more efficient exploration, Peng et al. (2017) factor the centralized spaces into a hierarchical reinforcement learning paradigm. Meanwhile, cooperative multi-agent reinforcement learning methods have moved from tabular methods to deep learning methods and are widely applied, especially to computer games (Sunehag et al., 2017; Rashid et al., 2018; Jhunjhunwala et al., 2020). Towards multi-agent task-oriented dialog policy, much progress has been made in modeling the interaction as a stochastic collaborative game, where the dialog agent and the user simulator are jointly optimized with their own objectives (Liu and Lane, 2017; Papangelis et al., 2019). Building a user simulator in this way is more flexible. However, different from existing frameworks, our multi-agent framework is devoted to decomposing concatenated actions in order to reduce the action space size and improve the performance of dialog agents. Figure 1: Illustration of the collaborative multi-agent framework for dialog policy learning.
Approach
Different from the previous methods that learn a centralized agent or adopt hierarchical RL paradigms, we cast policy learning as a multi-agent RL framework, as shown in Figure 1. It integrates three agents responsible for the domain a d , intent type a i , and slot name a s , respectively. They share the reward r and make decisions cooperatively based on the state s from the user. Consequently, a concatenation A a from the three agents is passed to the user.
Multi-agent Dialog Policy
Specifically, Agent1 perceives the state s and learns the domain policy π d that selects a domain category a d ∈ A d . Meanwhile, Agent2, equipped with the intent policy π i , takes as input the state s and the selected domain a d , and decides the intent type a i ∈ A i . Then, Agent3 receives s, a d , and a i , and determines the slot name a s ∈ A s based on the slot policy π s . Here, A d , A i , and A s are the sets of all possible domain names, intent types, and slot names, respectively.
Naturally, we aim to simultaneously optimize all policies to achieve the maximal shared cumulative reward. Specifically, Agent1 aims to learn the domain policy π d that maximizes the expected sum of rewards conditioned on s and a d , E π d [ Σ k≥0 γ^k r t+k | s t = s, a t = a d ], where r t denotes the reward from the user at turn t, and γ ∈ [0, 1] is a discount factor. Similarly, the intent policy π i is trained to maximize the analogous expectation conditioned additionally on a d , and the slot policy π s on both a d and a i . In practice, the domain policy estimates the optimal Q-function with a neural network parameterized by θ d that satisfies the following: Q θ d (s, a d ) = E[ r + γ max a′ d Q θ′ d (s′, a′ d ) ], where Q θ′ d (.) is the target state-action value function that is only periodically updated. Similarly, the intent policy π i estimates the optimal Q-function parameterized by θ i that satisfies the following: Q θ i (s||a d , a i ) = E[ r + γ max a′ i Q θ′ i (s′||a′ d , a′ i ) ], where Q θ′ i (.) is the target value function, and || denotes concatenation. Meanwhile, the slot policy estimates the optimal Q-function parameterized by θ s that satisfies the following: Q θ s (s||a d ||a i , a s ) = E[ r + γ max a′ s Q θ′ s (s′||a′ d ||a′ i , a′ s ) ].
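The chained decision process can be sketched in a few lines of PyTorch; this is a hedged illustration of the architecture described above, not the released implementation (class and method names are ours, the hidden sizes follow Appendix C, and greedy action selection stands in for the ε-greedy exploration used during training).

```python
# Sketch: Agent1 picks a domain from s, Agent2 picks an intent from s||a_d,
# Agent3 picks a slot from s||a_d||a_i, sharing hidden layers phi.
import torch
import torch.nn as nn

class ChainedDialogPolicy(nn.Module):
    def __init__(self, state_dim, n_domain, n_intent, n_slot, hidden=100):
        super().__init__()
        # Shared hidden layers phi, through which agents exchange information.
        self.phi = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU())
        self.q_domain = nn.Linear(hidden, n_domain)
        self.q_intent = nn.Linear(hidden + n_domain, n_intent)
        self.q_slot = nn.Linear(hidden + n_domain + n_intent, n_slot)

    def forward(self, s):
        h = self.phi(s)
        a_d = self.q_domain(h).argmax(-1)          # domain action
        d_oh = nn.functional.one_hot(a_d, self.q_domain.out_features).float()
        a_i = self.q_intent(torch.cat([h, d_oh], -1)).argmax(-1)
        i_oh = nn.functional.one_hot(a_i, self.q_intent.out_features).float()
        a_s = self.q_slot(torch.cat([h, d_oh, i_oh], -1)).argmax(-1)
        return a_d, a_i, a_s  # concatenated into the final joint dialog action
```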
JOIE for Policy Learning
To alleviate the dependence of the gradients caused by non-independent data, the agents maintain their own independent experience replay buffers, denoted D d , D i , and D s for the domain policy, the intent policy, and the slot policy, respectively. Consequently, the Q-function Q θ d for the domain policy is learned by minimizing the following loss function: L(θ d ) = E (s, a d , r, s′)∼D d [ ( r + γ max a′ d Q θ′ d (s′, a′ d ) − Q θ d (s, a d ) )² ]. (4) Similarly, the intent policy tries to minimize the following loss function: L(θ i ) = E (s||a d , a i , r, s′||a′ d )∼D i [ ( r + γ max a′ i Q θ′ i (s′||a′ d , a′ i ) − Q θ i (s||a d , a i ) )² ]. (5) Meanwhile, the loss function for the slot policy is: L(θ s ) = E (s||a d ||a i , a s , r, s′||a′ d ||a′ i )∼D s [ ( r + γ max a′ s Q θ′ s (s′||a′ d ||a′ i , a′ s ) − Q θ s (s||a d ||a i , a s ) )² ]. (6) As shown in Figure 1 and Equations 4, 5, and 6, all agents can observe the global state and the previous agents' actions during training. This setting stabilizes the training procedure by alleviating the non-stationary environment caused by unexpected changes in the dynamics as the agents' policies evolve. Besides, we propose to utilize a joint optimization process by adding up each agent's losses, L(φ, θ d , θ i , θ s ) = L(θ d ) + L(θ i ) + L(θ s ), (7) based on a shared hidden network. With the joint optimization, the agents do not experience unexpected changes in the environment because different agents can exchange policy information through the shared hidden layers φ.
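A single JOIE update step might look like the following sketch (our hedged reconstruction, not the released code: the buffer and q_values interfaces are illustrative placeholders). Each agent samples from its own buffer, the three TD losses of Equations 4-6 are summed as in Equation 7, and one backward pass updates the shared layers φ together with all three heads.

```python
# Hedged sketch of one joint JOIE update with independent replay buffers.
import torch

def joie_update(policy, target, buffers, optimizer, gamma=0.9, batch=32):
    losses = []
    for head, buf in zip(("domain", "intent", "slot"), buffers):
        s, a, r, s_next, done = buf.sample(batch)    # independent buffer D_*
        q = policy.q_values(head, s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():                        # periodically synced target
            y = r + gamma * (1 - done) * target.q_values(head, s_next).max(1).values
        losses.append(torch.nn.functional.mse_loss(q, y))
    loss = sum(losses)                               # joint loss of Equation 7
    optimizer.zero_grad()
    loss.backward()                                  # gradients reach shared phi
    optimizer.step()
    return loss.item()
```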
A detailed summary of the learning algorithm of the collaborative multi-agent reinforcement learning for dialog policy based on joint optimization and independent experience replay buffer (JOIE) is provided in Algorithm 1 in Appendix D.
Experiments
Comparison is on MultiWoz (Budzianowski et al., 2018) with a publicly available agenda-based user simulator (Zhu et al., 2020). Details of the user simulator and implementation are in Appendices B and C. We first evaluate 2-agent based models that factor the centralized spaces into two subspaces of domain and joint intent-slot on 3 different domain sizes of 2, 4, and 7 on MultiWoz. Then we compare 3-agent based models that decompose the action spaces into three subspaces of domain, intent, and slot. The dataset contains 7 domains, 13 intents, and 28 slots in total. Details of the dataset are provided in Appendix A.
Baseline Agents
We compare JOIE with DQN, Hierarchical DQN (H-DQN), and two multi-agent RL agents. Note that we do not consider any other methods that use demonstrations because our motivation is to improve learning in a large action space without human knowledge.
• H-DQN (Peng et al., 2017) is a hierarchical deep RL approach consisting of: (1) a top-level agent that selects the domain (sub-goal), and (2) a low-level agent that determines the intent-slot to complete the sub-goal. • JOIE is our proposed collaborative multi-agent framework factoring the joint action space and learning each part with a different agent under joint optimization and independent experience replay, as described in Section 3.2.
• VDN (Sunehag et al., 2017) is a multi-agent method that combines each agent's state action-value function as a simple sum for optimization with shared transitions.
• QMIX (Rashid et al., 2018) is a variant of VDN which contains a mixing network that centralizes each agent's state action-value function for optimization.
Main Results
All agents are evaluated with the success rate (Succ.) at the end of training, average turns (Turn), and average reward (Reward). The main simulation results are shown in Table 1 and Figures 2 and 3. The results show that the proposed JOIE learns much faster and performs consistently better with a statistically significant margin. Figure 2 shows the learning curves of 2-agent based models. Firstly, JOIE achieves the best Succ. (on average 0.98) with the highest learning efficiency for all domain sizes. Qmix and VDN adopt an optimization fashion that estimates concatenated action values, a design originally intended for partial observability. JOIE abandons this step to avoid the extra cost since we assume the state is fully observed by all agents. Additionally, the joint optimization that relieves non-stationarity and the independent experience replay buffers that reduce gradient dependence give JOIE better learning performance.
Results of 2-agent based Models
The improvement is slight for domain = 2, but remarkable and impressive as the number of domains increases. Besides, multi-agent-based models outperform H-DQN, indicating that the proposed collaborative multi-agent framework, in which the joint action space is decomposed and each part is led by a different agent, can alleviate the exploration obstacles brought by a large action space without human knowledge. Finally, DQN is consistently the worst, which is not surprising since it explores and learns from a flat and large action space without any guidance. Notice that the performance of DQN increases as the number of domains decreases, which shows that the growth of the action space hinders the learning speed of the RL agent. Meanwhile, as illustrated in Table 1, the comparison results of Turn and Reward are consistent with those of Succ. Figure 3 shows the learning curves of 3-agent based models. It can be seen that JOIE3 learns faster and performs significantly better with a clear margin compared with VDN3 and Qmix3, which shows that the decentralized policy with joint optimization and independent experience replay buffers is more capable of, and robust to, dialog policy learning. JOIE3 factors the concatenated intent-slot action space and assigns the parts to two agents, which further reduces the action space and balances the load for each agent. As a consequence, JOIE3 learns faster than JOIE, which is based on the joint intent-slot action space. Moreover, compared with VDN3, which applies a simple sum centralization, Qmix3 adopts a trainable network centralization and achieves better performance.
Human Evaluation
User simulators are not sufficient to fully mimic the complexity of real users (Dhingra et al., 2017); therefore, human evaluation is conducted to further assess the feasibility of JOIE in real scenarios. We deploy the 2-agent based and 3-agent based agents in Figures 2 and 3, trained on all (seven) domains for 2.0 × 10 5 simulation epochs, to interact with human users. In each evaluation session, each human user is assigned a sampled goal and instructed to communicate with a randomly selected agent to achieve the goal. Users can end the session at any time if the agent keeps repeating or they believe the dialog is going to fail. At the end of each session, users are required to give explicit feedback on whether the dialog succeeded with all the user constraints satisfied. Moreover, evaluators rate the dialog session on a quality scale from 1 to 5 (5 is the best, 1 is the worst). We collect 50 dialogues for each agent. The results are listed in Table 2, which shows that JOIE in both the 2-agent based and 3-agent based settings performs consistently better than the other baselines, consistent with what we observed in the simulation evaluation.
Conclusion and Future Work
We presented JOIE, a generally applicable collaborative multi-agent framework for policy learning. It factors the action space and learns each part with a different agent under joint optimization and independent experience replay. The simulation results show that the proposed agents are efficient and effective in multi-domain settings with large action spaces. Directions of future work include: (1) extending JOIE to multi-action policies; (2) improving JOIE with demonstrations.
A Dataset
Table 3 lists in detail all annotated dialog domains, intents, and slots for MultiWoz at different numbers of domains. Note that we did not count "General" and "Booking" as domains because they cannot define a task independently.
B User simulator
During training, the simulator initializes with a goal, takes system acts as input, and outputs user acts with a reward, which is set as −1 for each turn, a positive 2 · T for a successful dialog, or a negative −T for a failed one, where T (set as 40) is the maximum number of turns in each dialog. A dialog is considered successful only if the agent helps the user simulator accomplish the goal and satisfies all the user's search constraints (Wang et al., 2020).
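As a concrete reading of this reward scheme, the per-turn signal can be written as a tiny helper (a sketch of the rule as stated; the actual rewards come from the agenda-based simulator itself):

```python
# Sketch of the simulator's reward rule: -1 per turn, 2*T on success, -T on failure.
T_MAX = 40  # maximum number of turns per dialog

def turn_reward(done: bool, success: bool) -> int:
    if not done:
        return -1                      # per-turn penalty
    return 2 * T_MAX if success else -T_MAX
```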
C Hyperparameters and Implementation
Set m ∈ {2, 4, 9} as the number of domains. We adopt a 2-layer MLP with 100 hidden dimensions and ReLU as the activation function for all m. Taking a state input of dimension 393, DQN's output dimension is m * 364, where 364 is the number of actions concatenating intent and slot. 2-agent based models with the combined intent-slot action space, i.e. H-DQN, VDN, Qmix, and JOIE, utilize two networks with different output heads of m and 364 dimensions. Note that VDN, Qmix, and JOIE share the input and hidden layers. 3-agent based models with separated domain, intent, and slot action spaces, i.e. VDN3, Qmix3, and JOIE3, apply three different output heads of m, 13, and 28 dimensions and share the input and hidden layers. ε-greedy is utilized for policy exploration. We set the discount factor as γ = 0.9. The target networks are updated every 1000 training epochs. To mitigate warm-up issues, we apply the rule-based agent of ConvLab (Lee et al., 2019a) to provide experiences at the beginning; the warm_start epoch for all agents is 1000. The learning rate is set as 0.001 for DQN, 0.0005 for JOIE3, and 0.00005 for the other models. The decay rate and step size are 0.95 and 1000.
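Translated into a training-setup snippet, these schedule values could look like the following (a sketch: the optimizer type is not stated in the text, so Adam is an assumption, and we assume the 0.95/1000 decay applies to the learning rate):

```python
# Hedged sketch of the optimization hyperparameters listed above.
import torch

def make_optimizer(params, lr=0.0005):  # 0.0005 for JOIE3; 0.001 for DQN
    opt = torch.optim.Adam(params, lr=lr)  # optimizer type assumed, not stated
    # decay rate 0.95 with step size 1000, applied here to the learning rate
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=1000, gamma=0.95)
    return opt, sched
```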
D Algorithms
Algorithm 1 outlines the full procedure for training multi-agent-based dialogue policies based on joint optimization and independent experience replay buffers.
Algorithm 1 JOIE for dialog policy learning | 3,908.8 | 2021-01-01T00:00:00.000 | [
"Computer Science"
] |
Cascaded Random Raman Fiber Laser With Low RIN and Wide Wavelength Tunability
Cascaded random Raman fiber lasers (CRRFLs) have been used as a new platform for designing high power and wavelength-agile laser sources. Recently, a CRRFL pumped by an ytterbium-doped random fiber laser (YRFL) has shown both high power output and low relative intensity noise (RIN). Here, by using a wavelength- and bandwidth-tunable point reflector in the YRFL, we experimentally investigate the impacts of the YRFL on the spectral and RIN properties of the CRRFL. We verify that the bandwidth of the point reflector in the YRFL determines the bandwidth and temporal stability of the YRFL. It is found that with an increase in the bandwidth of the point reflector in the YRFL from 0.2 nm to 1.4 nm, a CRRFL with higher spectral purity and lower RIN can be achieved due to the better temporal stability of the YRFL pump. By broadening the point reflector's bandwidth to 1.4 nm, the lasing power, spectral purity, and RIN of the 4th-order random lasing at 1349 nm can reach 3.03 W, 96.34%, and −115.19 dB/Hz, respectively. For comparison, the spectral purity and RIN of the 4th-order random lasing with a point reflector's bandwidth of 0.2 nm are only 91.20% and −107.99 dB/Hz, respectively. Also, we realize a widely wavelength-tunable CRRFL pumped by a wavelength-tunable YRFL. This work provides a new platform for the development of ideal distributed Raman amplification pump sources based on CRRFLs with both good temporal stability and wide wavelength tunability, which is of great importance in applications of optical fiber communication and distributed sensing.
Particularly, cascaded random Raman fiber lasers (CRRFLs) can generate high power and wavelength-agile lasing beyond rare earth emission bands [2,8,29-37], which can find important applications in distributed Raman amplification of optical fiber communication and sensing systems. To improve the performance of long-distance optical fiber communication and sensing systems, low-noise distributed Raman pump sources are preferred. Pumped by a tunable conventional ytterbium-doped fiber laser, continuous wavelength tuning of a CRRFL covering 1 µm to 1.9 µm can be obtained [32]. However, with longitudinal mode beating, the temporal intensity fluctuations of such a CRRFL are severe. With the help of the high power and low-noise amplified spontaneous emission (ASE) of ytterbium-doped fiber (YDF) as the pump source, the spectral purity and temporal stability of the CRRFL can be significantly enhanced [29-33]. It is also found that with a broader filtering bandwidth of the ASE source, higher spectral purity and temporal stability of the CRRFL can be realized [34]. However, the wavelength tuning range of a filtered ASE pump is limited due to the low filtered power of the ASE source at filtering wavelengths beyond 1 080 nm, which further limits the wavelength tuning range of the CRRFL [35]. Recently, CRRFLs pumped by modeless ytterbium-doped random fiber lasers (YRFLs) have been proposed with high power and low relative intensity noise (RIN) output in a simple structure [36,37]. Compared to a conventional ytterbium-doped fiber laser and an ytterbium-doped filtered ASE source, a YRFL can act as a pump source for the CRRFL featuring both low noise and wide wavelength tunability [38]. However, the impacts of the YRFL pump characteristics on the spectral and RIN properties of CRRFLs are still under investigation.
In this paper, we experimentally demonstrate a CRRFL with a wavelength- and bandwidth-tunable YRFL pump. The influences of the point reflector's bandwidth on the bandwidth and temporal stability of the YRFL pump are first investigated, as these play an important role in the spectral and RIN properties of the CRRFL. The experimental results show that with an increase in the bandwidth of the point reflector in the YRFL from 0.2 nm to 1.4 nm, the CRRFL can have higher spectral purity and lower RIN. Furthermore, a continuously wavelength-tunable CRRFL with a tunable YRFL pump is demonstrated. This work shows that a CRRFL with both good wavelength tunability and temporal stability can be achieved with a tunable broadband random lasing pump, which provides a new kind of distributed Raman amplification pump source for long-distance optical fiber sensing.
Experimental setup
The schematic diagram of the proposed tunable YRFL pumped CRRFL is shown in Fig. 1; the tunable YRFL seed is shown in the dashed box. The power of the YRFL is further boosted in the master oscillator power amplifier (MOPA) stage, which consists of another LD and a 10-m-long YDF. The amplified YRFL is used as the pump source for the CRRFL and is injected into a wavelength division multiplexer (WDM) (pass port: 1 040 nm-1 095 nm, reflection port: 1 105 nm-1 700 nm) through another isolator (ISO). The reflection port of the WDM is connected to another 1:1 coupler based wideband fiber loop mirror, providing point feedback for cascaded random Raman lasing. The common port of the WDM is spliced with a 4-km-long dispersion shifted fiber (DSF) to provide the Raman gain and random distributed Rayleigh backscattering. To generate stable 1.3 μm 4th-order random Raman lasing and avoid unwanted nonlinear effects, the DSF with a zero dispersion wavelength of 1 499 nm is used here rather than the standard single-mode fiber with a zero dispersion wavelength of about 1 310 nm. In this way, the CRRFL is realized in a forward-pumped scheme, and the spectral and RIN properties of the CRRFL can be tailored by tuning the wavelength and bandwidth of the point reflector in the YRFL.
Lasing properties of the YRFL seed
First, the influence of the point reflector's bandwidth on the spectral property of the YRFL seed is investigated by tuning the bandwidth of the filter. Figure 2(a) displays the bandwidth-tunable spectra of the YRFL seeds according to the point reflector's bandwidths. The power of the YRFL seed is 0.63 W at an LD power of 4 W. The reflection bandwidth of the point reflector, which is determined by the bandwidth-tunable filter, in the forward-pumped structure allows selective gain only for the YRFL radiation [39,40]. Thus, the bandwidth of the ytterbium-doped random lasing can be changed according to the bandwidth of the point reflector. It can be observed that the -3 dB bandwidths of the 1 085.5 nm YRFL seeds are broadened with an increase in the point reflector's bandwidth. As shown in Fig. 2(b), the -3 dB bandwidths of the YRFL seeds with point reflector's bandwidths of 0.2 nm, 0.4 nm, 0.6 nm, 0.8 nm, 1.0 nm, and 1.4 nm are 0.23 nm, 0.28 nm, 0.35 nm, 0.50 nm, 0.53 nm, and 0.65 nm, respectively. Further increasing the point reflector's bandwidth to 2 nm would result in an unstable random lasing spectrum with a multi-peak structure. This is mainly due to the existence of significant ripples in the transmission spectrum of the filter when the filtering bandwidth is beyond 2 nm. Further work can be done to realize a broader YRFL by using a ripple-free bandpass filter with a broader filtering bandwidth. The temporal characteristics of the YRFL seeds versus different point reflector's bandwidths are measured by a photodetector with a 400 MHz bandwidth and an oscilloscope with a 2 GHz bandwidth. It can be seen in Fig. 3(a) that the temporal stability of the YRFL seed improves as the point reflector's bandwidth is broadened; a detailed study of the underlying temporal dynamics [41,42] will be carried out in the future.
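For reference, the -3 dB bandwidth reported here can be extracted from a measured spectrum with a few lines (a sketch of the standard definition, not the authors' analysis code; it assumes a single-peaked spectrum with a contiguous above-threshold region):

```python
# Sketch: -3 dB spectral width from a measured optical spectrum.
import numpy as np

def bandwidth_3db(wavelength_nm: np.ndarray, power_dbm: np.ndarray) -> float:
    """Return the -3 dB width in nm (single-peaked spectrum assumed)."""
    above = power_dbm >= power_dbm.max() - 3.0   # within 3 dB of the peak
    idx = np.flatnonzero(above)
    return wavelength_nm[idx[-1]] - wavelength_nm[idx[0]]
```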
Temporal stability evolution of the amplified YRFL
We further study the temporal stability evolution of the amplified YRFL after the isolator in the MOPA stage. The measured optical conversion efficiency from the LD power to the amplified YRFL power after the isolator is 70.9%. The bandwidth of the point reflector in the YRFL seed is fixed as 1.4 nm. As presented in Figs. 4(a) and 4(b), the STD/mean and RIN of the amplified YRFL at 18 W of the LD power are only increased by 0.81% and 0.79 dB, respectively, which confirms that the temporal stability of the amplified YRFL only deteriorates slightly in the MOPA stage.
Characterization of CRRFL pumped by random lasing
The amplified YRFL is used as the pump source for the CRRFL. Fixing the point reflector's bandwidth at 1 nm in the YRFL, the evolution of cascaded random Raman lasing is studied in this section. The spectra of the 1st- to 4th-order random lasings are illustrated in Fig. 5(a), with the amplified YRFL pump power (measured after the WDM, which has 0.80 dB insertion loss) of 2.62 W, 4.10 W, 6.07 W, and 9.03 W, respectively. With an increase in the YRFL pump power, the 1 141 nm 1st-order random lasing is generated first, with unstable behavior. Plenty of random spikes can be observed in the lasing spectrum, which could be attributed to the generation of narrow-band spectral components and the cascaded stimulated Brillouin scattering [1,2]. By further increasing the pump power, the 2nd- to 4th-order random lasings with smooth and stable spectra can be stimulated successively. The corresponding time-domain traces are shown in Fig. 5(b), corresponding to the left column of the spectra. It can be seen that the unstable 1st-order random lasing shows a pulsed scenario in the time domain as well. The time-domain traces of the 2nd- to 4th-order random lasings show quasi-continuous-wave behaviors, and the STD/mean value increases with the order of Raman lasing.
The output power evolution of the 1st- to 4th-order random lasings is depicted in Fig. 5(c). The threshold pump powers of the 1st- to 4th-order random lasings are 1.14 W, 2.62 W, 4.10 W, and 6.07 W, respectively. As a result, the maximum output power of the 4th-order random lasing reaches 3.03 W at the pump power of 9.03 W, corresponding to an optical conversion efficiency of 33.6%.
Influence of YRFL pump characteristics on 4th-order CRRFL
The influences of the YRFL pump on the 4th-order CRRFL are also investigated. Firstly, the spectral purity of the 4th-order random lasing, defined as the ratio of the output of the 4th-order random lasing to the total output, is measured as a function of the point reflector's bandwidth in the YRFL and plotted in Fig. 6(a). The spectral purity of the 4th-order random lasing pumped by the YRFL with a 0.2 nm point reflector's bandwidth is only 91.20%. By employing a relatively broadband YRFL pump with a 1.4 nm point reflector's bandwidth, the spectral purity of the 4th-order random lasing can be improved to 96.34%. The higher spectral purity of the random lasing could be associated with the lower RIN of the YRFL pump [35]. The influence of the YRFL pump on the temporal stability of the 4th-order random lasing is also measured and presented in Fig. 6(b). It can be seen that with the broader YRFL pump, the temporal stability of the 4th-order random lasing can be enhanced. This is because the temporal intensity fluctuation of the pump is transferred directly to the CRRFL by the ultra-fast response of the stimulated Raman scattering effect [43]. As a result, the time-domain STD/mean value of the 4th-order random lasing decreased from 20.74% to 10.81% when the bandwidth of the point reflector in the YRFL was increased from 0.2 nm to 1.4 nm. Meanwhile, the RIN spectra of the 4th-order random lasing in Fig. 6(c) also verify that the RIN of the 4th-order random lasing can be decreased by 7.2 dB with the broader YRFL pump. The results illustrated in Fig. 6 show that the YRFL pump bandwidth has a significant effect on the lasing performance of the CRRFL, due to the different temporal stability behaviors of the YRFLs under different bandwidths.
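The three figures of merit used here can be computed as follows (a hedged sketch, not the authors' processing chain; the default integration band is the paper's 4th-order tuning range and would be adjusted to the actual lasing line):

```python
# Sketches of spectral purity, STD/mean, and RIN from measured data.
import numpy as np
from scipy.signal import welch

def spectral_purity(wl_nm, power_mw, band=(1304.0, 1364.0)) -> float:
    """Power in the 4th-order band over total power (linear units)."""
    in_band = (wl_nm >= band[0]) & (wl_nm <= band[1])
    return power_mw[in_band].sum() / power_mw.sum()

def std_over_mean(trace: np.ndarray) -> float:
    """Temporal STD/mean of an intensity time trace."""
    return trace.std() / trace.mean()

def rin_db_per_hz(trace: np.ndarray, fs: float):
    """RIN(f) = 10*log10( S_P(f) / <P>^2 ) in dB/Hz, via Welch's PSD."""
    f, psd = welch(trace, fs=fs, nperseg=4096)
    return f, 10.0 * np.log10(psd / trace.mean() ** 2)
```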
Tunable wavelength emissions of the CRRFL
The tunability of the CRRFL is tested as well, by adjusting the central wavelength of the filter used in the YRFL. Fixing the bandwidth at 1.4 nm and changing the central wavelength of the point reflector from 1 055 nm to 1 095 nm, the lasing wavelength of the YRFL is tuned accordingly, and the normalized spectra are shown in Fig. 7(a). Compared to the tunable filtered ASE pump [35], the wavelength tuning range of the proposed YRFL is wider. Thus, the wavelength tuning ranges of each order of random lasing in the CRRFL can also be broadened. Since the 1st-order random lasing is unstable, we record the normalized tunable spectra of the 2nd- to 4th-order random lasings in Figs. 7(b), 7(c), and 7(d), indicating tuning ranges of 1 169 nm-1 219 nm, 1 233 nm-1 287 nm, and 1 304 nm-1 364 nm, respectively. The left peaks of several random lasing spectra in Fig. 7(b) are caused by another Raman gain maximum. When the YRFL wavelength is decreased below 1 055 nm, the wavelength of the 1st-order random lasing falls outside the wavelength range of the reflection port of the WDM (1 105 nm-1 700 nm). It is thus possible to further broaden the wavelength tuning range of the CRRFL and realize gap-free wavelength tuning by using a more suitable WDM [32]. Therefore, cascaded random Raman lasing with good temporal stability and a specific wavelength can be simultaneously obtained by tuning the bandwidth and the wavelength of the YRFL pump.
Conclusions
In summary, we experimentally investigate, for the first time, the impact of the YRFL on the spectral purity and temporal stability of the CRRFL by using a wavelength- and bandwidth-tunable point reflector. It is shown that a broader bandwidth of the point reflector in the YRFL leads to better temporal stability of the YRFL, thus resulting in higher spectral purity and lower RIN of the CRRFL. As a result, with a point reflector's bandwidth of 1.4 nm, a 4th-order CRRFL at 1 349 nm with an output power of 3.03 W, a spectral purity of 96.34%, and a RIN of -115.19 dB/Hz is realized, which is superior to that with a 0.2 nm point reflector's bandwidth. Moreover, the wide tunability of the proposed CRRFL is also verified experimentally by continuously tuning the YRFL pump wavelength from 1 055 nm to 1 095 nm. This work indicates that both wide wavelength tunability and good temporal stability of the CRRFL can be simultaneously achieved with a broadband random lasing pump, providing a way to further improve the performance of distributed Raman amplification pump sources by adopting CRRFLs based on a YRFL pump with broader bandwidth.
"Physics",
"Engineering"
] |
The mysterious desert dwellers: Coccidioides immitis and Coccidioides posadasii, causative fungal agents of coccidioidomycosis
ABSTRACT The genus Coccidioides consists of two species: C. immitis and C. posadasii. Prior to 2000, all disease was thought to be caused by a single species, C. immitis. The organism grows in arid to semiarid alkaline soils throughout western North America and into Central and South America. Regions in the United States with the highest prevalence of disease include California, Arizona, and Texas. The Mexican states of Baja California, Coahuila, Sonora, and Nuevo Leon currently have the highest skin test positive results. Central America contains isolated endemic areas in Guatemala and Honduras. South America has isolated regions of high endemicity including areas of Colombia, Venezuela, Argentina, Paraguay, and Brazil. Although approximately 15,000 cases per year are reported in the United States, the actual disease burden is estimated to be in the hundreds of thousands, as only California and Arizona have dedicated public health outreach and report and track disease reliably. In this review, we survey genomics, epidemiology, and ecology, and summarize aspects of disease, diagnosis, prevention, and treatment.
Introduction
The disease coccidioidomycosis, which is commonly known as valley fever (VF), was first described in the late 1800s in Argentina by Dr Alejandro Posadas [1]. The causative agent was first thought to be a protozoan that caused severe disease (thus the etymology of Coccidioides immitis: Coccidia, protozoan, and immitis, "not mild") but was later identified as a dimorphic fungus, with most disease being asymptomatic or mild [2-4]. The unusual life cycle is defined by the large pathogenic structure called a spherule (Figure 1). The initial spherule develops from an inhaled arthroconidium, the asexual propagule that develops in the environment. This environmental life stage consists of nondescript mycelia that mature into alternating arthroconidia as the fungus grows and ages. The exact conditions required for growth and maturation in the environment are unknown, but evidence suggests that keratin sources and precipitation play a role [5,6]. Once infection is established, the spherule life stage predominates in the host, with endospores developing internally and the outer cell wall rupturing to release mature endospores. Each endospore can develop into a new spherule, and endospores are likely recognized and engulfed by host immune cells [7-9]. Once the spherule matures and enlarges, it cannot be engulfed and can rupture the host cell. Thus, it appears that Coccidioides can be both an intracellular and an extracellular pathogen. The sexual stage of the life cycle has yet to be discovered, but appears to occur with high frequency [10,11].
Both organisms grow in arid to semiarid alkaline thermic soils throughout western North America and into Central and South America [12]. Endemic regions in the United States with the highest predicted prevalence of disease include the Central Valley of California, southern Arizona, and southwestern Texas, but only Arizona and California track and report disease prevalence to the Centers for Disease Control and Prevention (CDC), along with about half of the US states. The Mexican states of Baja California, Coahuila, Sonora, and Nuevo Leon currently have the highest skin test positive results, although Mexico no longer tracks the disease (pers comm Laura Rosio Castañón). Isolated areas in Guatemala (Motagua Valley) and Honduras (Comayagua Valley) have documented cases [13]. South America has several geographically isolated regions of endemicity including the northeastern area of Colombia; Lara and Falcon states in Venezuela; the Chaco region in Argentina/Paraguay; and Piaui, Maranhao, Ceara, and Bahia states of Brazil [14,15]. Disease prevalence in South America is not well characterized, possibly due to lower population densities and lower socioeconomic status in the regions of endemicity, but could also reflect the genotypes and phenotypes of both the pathogen and host found specifically in South America.
The genus Coccidioides consists of two species: C. immitis and C. posadasii. Prior to 2000, all disease was thought to be caused by a single species, C. immitis. However, genetic analysis clearly supports two distinct species [16]. Within each species, several populations have been proposed. For C. immitis, there are indications of population structure within the Central Valley, southern California/Baja California, and Mexico, and a separate population in the newly identified endemic region of eastern Washington State [17-19]. For C. posadasii, a clear separation between isolates from Arizona and isolates from Mexico/Texas/South America has been consistently observed [19]. Additionally, Arizona isolates from the Phoenix region and the higher elevation Tucson region may be distinct subpopulations [17,18]. The true population structure will remain an enigma until direct isolations from soil are made throughout the range of the organism. Despite the clear genotypic variation, no clinical differences have been defined among species or populations, although no published reports have ever assessed phenotypic variation in this context.
The disease caused by Coccidioides is highly variable among human patients. The majority (60%) of patients are asymptomatic after infection [20]. For the remainder, symptoms can be mild, including pneumonia, and these infections normally resolve without intervention. However, if the infection becomes extrapulmonary, medical intervention may be necessary. Infections can disseminate to the spleen, liver, brain, bone, and many other tissues in the body. Complications in diagnosis, treatment failure, and unusual presentations may result in severe disease progression, and even death. No vaccine for this disease is available, although efforts are underway to develop an effective vaccine. In this review, we summarize historical and recent developments in the study of Coccidioides and coccidioidomycosis.
The ecology of Coccidioides spp
Most ascomycete fungi are saprotrophic in the environment and have an association with plants, but Coccidioides spp. have evolved the ability to infect immunocompetent mammals including humans [21]. Despite a dramatic increase in patients diagnosed with coccidioidomycosis in recent years, the ecology of the organism is poorly understood. Arid and semiarid soils of the southwestern United States, Mexico, and Central and South America are the natural reservoir for the fungus [12]. The distribution of the fungus in soil is inconsistent and unpredictable even in the endemic region where there is high disease burden [22]. There is evidence of an association between Coccidioides and animals (small desert mammals) due to greater detection of the fungus in close proximity to animal activity [23-28]. There is also evidence of climatic and seasonal variables that may influence growth and dispersion of the fungus, leading to patterns of disease outbreaks [29]. Figure 1. During the saprobic phase (left) the organism grows as mycelia, which mature into arthroconidia. These asexual conidia can be inhaled by a susceptible host. If this occurs, the fungus undergoes a morphological shift to form a spherule (right). The spherule structure matures to contain endospores, which can potentially disseminate to other body sites in the host including skin, bones, or central nervous system. This section will synthesize the information known about the ecology of Coccidioides spp. and propose areas for future research.
Biotic factors: Soil
During the mycelial life cycle, Coccidioides spp. are thought to be saprotrophic soil-dwelling fungi, although a preferred nutrient source has not been described. The distribution in the soil is sporadic and irregular and may be driven by abiotic soil factors such as pH and electrical conductivity, or possibly by biotic associations with small desert mammals such as rodents [27]. Few studies have tried to identify the abiotic soil variables that may be driving the distribution of Coccidioides spp., with limited support for any one predictor variable. In the 1970s, Lacy and Swatek investigated C. immitis around California archeological sites and revealed that sandy-textured soils made up 98% of positive samples and that 96.7% of positive soils were alkaline [30]. Elconin et al. also found a positive correlation between C. immitis isolation and increased soil salinity [31]. Though these studies provided evidence for an association between C. immitis and soils with alkaline pH, the sample sizes were quite small and the studies relied on culture-based methods for fungal identification. Fisher et al. analyzed more abiotic variables, such as soil temperature, soil texture, chemical characteristics, and water quantity, which could affect the distribution and growth of Coccidioides throughout the southwestern United States [12]. The authors proposed that soils with low water content (where the water table is not near the surface) are more favorable for the fungi, because as the soil dries, microorganisms that can grow as filamentous hyphae may reach water pockets. Based on their assessment of laboratory and field studies, the optimal soil temperature range that promotes peak growth of the fungus is between 20 and 40°C, and the soil texture (proportion of sand, silts, and clays in a given soil) in which Coccidioides is most commonly found is sandy loam (low water-holding capacity) with pH ranging from 6.1 to 8 and relatively low electrical conductivity. These data suggest that pH and texture are not limiting factors for growth, but that temperature and water availability may be more important. The relatively few studies that have examined the role of abiotic factors do not provide strong evidentiary support for soil variables that can be used to predict the growth pattern and distribution of Coccidioides spp. in the environment [27].
Environmental detection
Detection of the pathogen in soil is a difficult task, as culture methods have been shown to be insensitive (thousands of soils with no/few cultured strains) and mouse inoculation is expensive and time-consuming with variable results [27,28,32-34]. With the development of new molecular technologies, it is easier and faster to detect the presence of the pathogen using PCR, DNA sequencing, and real-time qPCR methods. Several methods target regions of the ITS or rDNA and often require additional sequencing to verify the target identity [27,35,36]. A real-time PCR method targets a novel repetitive sequence that was first identified for use in a clinical diagnostic system [37,38]. The main benefit of molecular-based methods is the large number of soil samples that can be screened in a relatively short amount of time.
Wind, dust, and airborne conidia
The role of dust in the dispersion of Coccidioides spp. propagules has been posited for many years but has not been experimentally validated, because the fungus has never been isolated from ambient dust and molecular detection is difficult [39][40][41][42][43][44][45][46][47][48]. Chow et al. were able to detect airborne Coccidioides in a simulated dust storm with relative success, but detecting the fungus in actual dust storms is much more difficult [41]. Climate models show a drying trend in the areas endemic for Coccidioides that increases the likelihood of dust storms [49,50]. Tong et al. showed that dust storms in the southwestern United States increased by 240% from the 1990s to the 2000s and found a positive correlation between dust storm frequency and reported cases of coccidioidomycosis [51]. It is proposed that the increasing frequency of dust storms brings a higher risk of inhaling infectious propagules. With the increase in dust and wind activity comes a greater need for better air surveillance techniques. After the Northridge earthquake in California in 1994, there was an outbreak of coccidioidomycosis that included three deaths. This outbreak was attributed to the many landslides that generated massive dust clouds, which blew into nearby densely populated valleys [44]. Stochastic events that generate a large bolus of dust containing infectious propagules, such as earthquakes, may increase the possibility of outbreaks. There is also a risk of wind dispersing the fungus to "nonendemic" areas [40,42,44]. Weil et al. showed that dust storms can transplant entire microbial communities hundreds of kilometers (from the Saharan desert to the Italian Alps), including pathogenic "black mold," and that these communities have the potential to become established in new areas [52]. Blowing dust has the potential to disperse infectious Coccidioides propagules to nonendemic areas and should be monitored.
Animal associations
Most fungi cannot survive the higher temperatures and acidic pH of the mammalian body. Enduring the unforgiving conditions of desert soil microenvironments, such as extreme temperatures, dramatic pH shifts, and microbial competition via secondary metabolites, may have promoted pathogenesis and the ability to infect mammals via "ready-made" virulence factors [53,54].
Although an animal reservoir has not been identified for Coccidioides spp., there is strong evidence of mammalian associations for its pathogenic and nonpathogenic relatives. Paracoccidioides brasiliensis, a close pathogenic relative of Coccidioides, has been isolated from the feces of bats (Artibeus lituratus) and from the internal organs of the nine-banded armadillo [55,56]. There is also evidence of animal association for another close pathogenic cousin of Coccidioides, Blastomyces spp.: the fungus has been isolated from the feces of bats, from various other animal manures, and from beaver dams [57][58][59]. There is also a strong association with prairie dog burrows; like many other burrowing mammals, prairie dogs create designated latrine areas to store their waste, which the fungus seems to prefer [60]. There is an indication that fungal pathogens within the order Onygenales are associated with wild animals, either in vivo or in situ, and this close relationship may indicate how they evolved to become pathogenic in humans and other animals.
Coccidioides spp., like other fungi in the order Onygenales, can degrade keratin and utilize it as a source of carbon, nitrogen, phosphorus, sulfur, amino acids, and other minerals [61,62]. The fungal cellulose-binding domain gene family, which confers the ability to break down plant material, is significantly reduced in the Coccidioides genome, suggesting that Coccidioides has lost much of this capability [21]. The subtilisin N domain-containing family is highly expanded in the Coccidioides genome, as well as in its close relatives. This gene family contains the peptidase S8 family domain, which encodes several keratinolytic subtilases (keratinases), and the family is three times larger in Coccidioides than in other taxa [21]. This genomic information suggests that, unlike fungal taxa in the sister order Eurotiales, which are often associated with plants and plant materials, Coccidioides and other Onygenales utilize animal-derived substrates and may have lost the ability to thrive on a vegetarian diet.
The ability to metabolize animal-derived material may restrict where the fungus grows in the environment. The large amount of animal material, such as keratin, in desert rodent burrows suggests a suitable habitat for Coccidioides. Multiple studies have shown that most soils containing the pathogen are extracted from or in the vicinity of rodent burrows, and that the fungus can establish and grow from infected animals buried in soil [27,34,63]. In a recent study from the endemic area in Mexico, 82% of soils containing Coccidioides were taken from rodent burrows, indicating a strong association with the burrow microhabitat [64]. The abundance of desert rodents inhabiting the endemic region of the fungus suggests a possible connection. In early studies, Coccidioides was isolated from deer mice, pocket mice, ground squirrels, grasshopper mice, kangaroo rats, and pack rats [34]. Although these early studies were culture and morphology based, they provide evidence that desert rodents could be natural reservoirs that harbor and disperse the fungus in the environment. However, there is recent evidence that Coccidioides spp. are harbored in nonrodent animals such as bats and armadillos, and in some cases animals in captivity such as otters, kangaroos, and nonhuman primates have developed severe disease and had the fungus isolated from tissue, so specific reservoirs remain to be defined [65][66][67][68].
This association differs from that of an opportunistic fungal pathogen like Aspergillus fumigatus, an environmental saprobe that releases a large quantity of conidia into the air [69]. A. fumigatus is a ubiquitous environmental fungus, usually associated with plants, decaying organic material, and marine and aquatic systems, that typically infects humans only when they are immunocompromised [70]. Animals are constantly exposed to Aspergillus conidia, an estimated few hundred per day, which does not lead to disease unless the immune system is compromised [71,72].
Climate/seasonality
Changes in the environment can influence the dispersal of arthroconidia into the atmosphere, leading to fluctuations in reported cases of VF [73]. There may be an association between increased incidence of reported disease and precipitation patterns, based on the life cycle of the fungus. It is hypothesized that Coccidioides responds to soil moisture: when moisture is abundant, the fungus grows as mycelium in soil, and when the soil dries out, specific viable hyphal cells mature into arthroconidia, which are released into the air [5,22,29,[74][75][76]. This is when humans are at greater risk of inhaling infectious coccidioidal propagules. In Arizona, low precipitation in early summer correlates with higher incidence of coccidioidomycosis in the later summer (July, August, and September); but when there is increased monsoonal activity in the early summer, there is lower incidence of VF in the later summer [22]. The authors also show a positive correlation between high precipitation levels in the winter and spring months and increased cases of VF in the summer months after heavy winter rains. These saturation events may give the fungus enough moisture to proliferate and create greater fungal biomass in the soil, releasing more spores into the air when the soil dries out. A complicating factor for all climate models to date is the reliance on human case report data, which may lag the exposure event by months.
Temperature is another variable that may influence the growth of arthroconidia and lead to changed patterns of incidence in the endemic region. One hypothesis is that extreme high temperatures sterilize the surface soil, but Coccidioides survives by growing into deeper soil horizons and, after rain, grows back toward the soil surface [77]. Recent studies have shown that annual mean surface temperature is a significant driver of coccidioidomycosis cases. No counties in the endemic area with a mean surface temperature lower than 10°C have incidence rates higher than six cases per 100,000 people, whereas the counties with the highest incidence rates in California and Arizona (70 cases per 100,000 people) have mean temperatures greater than 16°C [49]. This suggests that temperature may be an important predictor of where the fungus prefers to grow in the environment.
The changing climate may create suitable habitat for Coccidioides spp. outside of the recognized endemic region, and it is already clear that areas of transient endemicity exist, suggesting that the true endemic area is larger than proposed [78]. Temperatures in the southwestern United States are expected to rise by 2°C, with the greatest increases expected during the summer and autumn months [79]. Previous work indicates that Coccidioides may prefer to grow in areas with higher surface temperatures; this warming trend might therefore shift the endemic regions farther north, into areas that previously may not have been suitable for the fungus [49]. Drought projections show an intensification of drought throughout the current endemic area, which can lead to increased dust and dust events and, in turn, increased rates of VF [49,80]. We propose that the changing climate may allow the pathogen to occupy new areas, expose more naïve hosts to the disease, and increase disease burden in already established endemic regions.
Vaccines
A vaccine for VF has been proposed by several groups [81][82][83]. It was observed that a primary infection seems to protect individuals from subsequent infection, and most (60%) infections are asymptomatic [20]. Early vaccine candidates were first assessed in guinea pigs, without success [82]. After these frustrating starts, effective vaccines were developed in mice, monkeys, and dogs [84][85][86]. These early vaccine candidates relied on inactivated whole cells and fungal components, from both the parasitic (spherule) phase and the environmental phase (conidia/mycelia) [87]. In fact, a formalin-killed spherule vaccine went as far as a Phase 3 human clinical trial. Nearly 3,000 people received the vaccination; 18 vaccinated individuals developed or were suspected to have developed mild VF, compared with 25 unvaccinated individuals [88]. Based on these results, work shifted aggressively to the development of specific antigen vaccines and attenuated strains. One of the most promising antigen-based vaccines was based on a recombinant protein of antigen 2 and a proline-rich antigen (Ag2/PRA) and the Coccidioides-specific antigen (CSA) [89][90][91][92]. However, protection was still limited, with only 50-60% of mice surviving a challenge of ~200 conidia.
Single and multiple gene deletions in Coccidioides have resulted in attenuation or abolition of virulence, and some of these avirulent strains have been proposed as vaccines. In particular, the deletion of two chitinase genes was shown to protect a very susceptible mouse model; however, a T-cell-based immune response was indicated as critical for protection, and the authors suggested that the vaccine would be less effective in HIV/AIDS patients [93]. A live attenuated vaccine is currently in development and has shown high levels of protection in a mouse model of VF [94,95].
Epidemiology
Steady increases in reported VF have been observed in the United States since regular reporting began in the 1990s [96,97]. In general, reported VF cases are highest in specific regions, primarily southern Arizona and the Central Valley of California. However, it is important to note that VF is nationally reportable in the United States by only 24 states (and the District of Columbia); surprisingly, known and suspected endemic states, including Texas, Oklahoma, Washington, Colorado, and Idaho, do not report the disease, according to the CDC Morbidity and Mortality Weekly Report (https://www.cdc.gov/mmwr/index.html). No other countries in the endemic regions report this disease nationally, so beyond the United States there are no reliable data to establish whether these increases are universal.
Delayed-type hypersensitivity skin testing was used in early epidemiological surveys to determine regions of endemicity [98,99]. It was also used to determine the rate of infection among military personnel in California; the first antigen used, called coccidioidin, was administered intradermally [48]. Antigens derived from spherules (spherulin) rather than mycelia seemed to improve the sensitivity of the reaction, but not all confirmed VF cases show a positive skin test [100]. Importantly, patients with erythema nodosum should not receive the skin test, owing to potential tissue necrosis at the site of injection. Skin testing for prior exposure may be useful to ascertain risk for certain occupations or among prison populations. Additionally, new epidemiological studies in novel endemic regions, such as eastern Washington State, are warranted.
Observed increases in disease could be attributed to improved reporting, diagnosis, and awareness [101]. Alternatively, these increases could be the result of changing climate, increased construction, and soil disturbance, as discussed above [5,36,40,73,[102][103][104]. Predictions of the effect of changing climate on the incidence of VF in the southwestern US suggest that disease incidence will increase in endemic regions under warming and changing precipitation patterns [49]. The potential for expansion of the endemic region is a concern, and greater efforts regarding awareness of the disease among clinicians and public health officials are critical for improving diagnosis and reporting. Certain patient populations have been shown to be at greater risk for severe disease: African Americans, Filipinos, pregnant women, and those with immunosuppressive conditions are well-documented groups at higher risk for dissemination [105][106][107][108][109][110][111][112][113][114][115]. Occupational exposures, such as construction, farm work, outdoor filming, solar farm work, or archeological digs, have been associated with larger outbreaks among these workers [43,[116][117][118][119][120][121][122]. Additionally, prison inmates and guards/workers in the endemic regions have high rates of exposure and disease [106,107,[123][124][125][126].
No studies have concluded that canines are more susceptible to coccidioidomycosis than humans, but infection may be more prevalent in canines due to their behavioral tendency to disturb soil [127]. As stated previously, infection is asymptomatic in 60% of human hosts, and rates in canines are comparable [128]. Early symptoms for both species include coughing, fever, weight loss, lack of appetite, and lack of energy. Canines commonly may not show symptoms of a lung infection but will, at a minimum, show signs of active disseminated disease, such as lameness and seizures, which allows early detection of valley fever. Given the ambiguity of the symptoms, diagnosis depends on specific tests (summarized below) in addition to the clinical presentation.
Diagnosis
Several methods have been developed to diagnose VF. In addition to clinical diagnosis of symptoms, direct culture or histopathological evidence of the organism, and radiographic findings, diagnostics include tube precipitin (TP), complement fixation (CF), immunodiffusion, agar gel precipitin-inhibition, latex particle agglutination (LPA), and enzyme-linked immunosorbent assays (ELISA). Recent work has also suggested the use of a peptide microarray based on immunosignatures to diagnose VF; these proposed peptide diagnostics were shown to be extremely sensitive but cross-react with other related infections [129]. Charles Smith and colleagues developed the TP and CF tests in the 1950s [130,131]. Interestingly, the authors observed that TP-positive reactions occurred within weeks of infection, whereas CF positivity occurred 2-3 months after infection, and CF titers could increase if infection was not controlled. It is now known that this reflects immunoglobulin M (IgM/TP) and immunoglobulin G (IgG/CF): IgM-positive reactions typically occur in the first few weeks of illness, whereas the IgG reaction becomes positive later in disease, and titers may increase if infection is uncontrolled [133]. Similar to TP, LPA testing detects primarily IgM [134]. In asymptomatic cases, IgM and/or IgG may be detected, but titers may become undetectable after the resolution of infection. A serological ELISA method based on the detection of both IgM and IgG shows high specificity and sensitivity (98.5 and 95.5%, respectively) and is commonly used for diagnosis [135]. In 2015, a new delayed-type hypersensitivity skin test was developed and showed promise as a noninvasive diagnostic for VF, with no cross-reactivity to other related infections such as histoplasmosis; however, it may also miss positive reactors and underestimate disease [132].
Some studies have discussed difficulties in antibody detection during early time points of infection, as well as in immunosuppressed patients [136]. An alternative approach is the detection of fungal antigens in biofluids (typically sera) via antigen enzyme immunoassay [137]. For example, antibodies against fungal galactomannan could improve detection of coccidioidomycosis [138]. Cross-reactivity with other mycoses was shown in this report, so multiple diagnostic tests may be required, and interpretation of results should consider the possibility of infection with other etiologic agents.
Molecular assays have been developed since the 1990s, based on DNA hybridization and PCR/qPCR methods; some, as mentioned above, have been used in both clinical and environmental detection schemes. Detection and genotyping commonly rely on sequencing rDNA (both 18S and ITS1-5.8S-ITS2 have been targets); the ITS2 region in particular can be targeted for species-specific detection [139]. A rapid and specific real-time qPCR assay that uses a TaqMan probe to target a unique LTR retrotransposon detects both C. posadasii and C. immitis and is commercially available [38,140]. Many fungal genomes have been sequenced to date and are available for the development of sophisticated molecular tools to detect Coccidioides biomarkers [17,141].
Antifungal drugs/treatment
Antifungal treatment recommendations for coccidioidomycosis depend on clinical severity. The duration of treatment can range from 3-12 months to lifelong. The deadliest infections involve meningitis or other dissemination to the central nervous system, and it is recommended that these cases receive lifelong antifungal medication [142]. Studies investigating the effects of ceasing azole therapy for C. immitis have further demonstrated that, at any level of infection, long-term azole or triazole therapy is suggested to prevent relapse that could lead to a more serious Coccidioides infection, especially in immunocompromised individuals [143].
Amphotericin B was introduced in the 1950s and quickly became the antifungal drug of choice due to its efficacy in clearing systemic fungal infections [144]. Although its use as an antifungal was life-saving, its nephrotoxicity was underestimated, and renal failure, mortality, and additional financial costs were not trivial [145]. The drug remained a "gold standard," but because of these side effects it was used only when other antifungal treatments failed. This led to the development of other antifungal drugs for the treatment of VF, such as fluconazole in the early 1990s, which had fewer side effects and lower toxicity [146,147].
The most common classes of antifungal treatments for VF are the azoles, which target ergosterol biosynthesis, and the polyenes, which bind ergosterol. Ergosterol is a component of the fungal cell membrane. The azoles generally arrest fungal growth rather than kill the fungus (they are fungistatic rather than fungicidal). Common drugs administered for the treatment of coccidioidomycosis are fluconazole and itraconazole (azoles) and amphotericin B (a polyene) [148]. Both species of Coccidioides have shown variable resistance to these antifungal medications [149][150][151]. Further work is needed to determine the mechanism of fluconazole resistance in Coccidioides. It is currently unclear whether emerging resistance is a clinical concern, or under which conditions antifungal resistance needs to be monitored.
Conclusions
Coccidioidomycosis is a potentially severe and understudied fungal infection. The regions of endemicity are often associated with lower socioeconomic status, and infections may be exacerbated by health disparities and other comorbidities. Both species are found in association with animals in the desert environment, but a lack of specific knowledge of the organism's ecology and of climate effects makes it difficult to predict how disease risk will increase with climate change. Diagnostics are imprecise and often complicated if the host is immunosuppressed. No vaccine exists, and treatment relies on standard antifungal drugs.
Disclosure statement
No potential conflict of interest was reported by the authors.
"Biology"
] |
RESEARCH ARTICLE | Role of Adipose Tissue Nutrient/Vitamin Metabolism in Physiological and Altered Metabolic Settings

Glucose-dependent insulinotropic polypeptide promotes lipid deposition in subcutaneous adipocytes in obese type 2 diabetes patients: a maladaptive response
Glucose-dependent insulinotropic polypeptide promotes lipid deposition in subcutaneous adipocytes in obese type 2 diabetes patients: a maladaptive response. Am J Physiol Endocrinol Metab doi:10.1152/ajpendo.00347.2016.—Glucose-dependent insulinotropic polypeptide (GIP), beyond its insulinotropic effects, may regulate postprandial lipid metabolism. Whereas the insulinotropic action of GIP is known to be impaired in type 2 diabetes mellitus (T2DM), its adipogenic effect is unknown. We hypothesized that GIP is anabolic in human subcutaneous adipose tissue (SAT), promoting triacylglycerol (TAG) deposition through reesterification of nonesterified fatty acids (NEFA), and that this effect may differ according to obesity status or glucose tolerance. Twenty-three subjects categorized into
GIP has other important extrapancreatic metabolic functions, with receptors expressed in such tissues as bone, brain, stomach, and adipose tissue, where it may modulate postprandial lipid metabolism (7). In animal models of obesity-induced insulin resistance, genetic and chemical disruption of GIP signaling protects against the deleterious effects of high-fat feeding by preventing lipid deposition, adipocyte hypertrophy, and expansion of adipose tissue mass and reducing triglyceride deposition in liver and skeletal muscle, maintaining insulin sensitivity (25,31). Thus, if GIP has a potential proadipogenic effect, selective GIP antagonists may be beneficial in treating obesity and type 2 diabetes mellitus (T2DM) (17).
There is evidence that plasma GIP concentrations are increased in obesity. Given that dietary fat consumption chronically stimulates the production and secretion of GIP, inducing K cell hyperplasia (8,36), higher GIP concentrations may reflect consumption of an energy-dense, high-fat diet. Early rodent studies demonstrated that a GIP infusion during an intraduodenal lipid infusion decreased plasma triglyceride levels (14), and GIP has been shown to enhance insulin-induced fatty acid incorporation in rat adipose tissue (9). Thus GIP, acting through the adipocyte GIP receptor, is anabolic in adipose tissue, promoting fat deposition.
It is important to distinguish between direct effects of GIP on fatty acid metabolism and indirect effects based on its insulinotropic action. Acute GIP infusion in lean healthy males (with hyperinsulinemia and hyperglycemia) increases adipose tissue blood flow, triacylglycerol (TAG) hydrolysis, and nonesterified fatty acid (NEFA) reesterification, thus promoting triglyceride deposition (5,6). In healthy obese men, acute GIP infusion reduced the expression and activity of 11β-hydroxysteroid dehydrogenase type 1, a fat-specific glucocorticoid metabolism enzyme that may enhance lipolysis in subcutaneous adipose tissue (SAT) (20). In addition, it has been suggested that GIP contributes to the induction of adipocyte and SAT inflammation (and thus insulin resistance) by increasing the production of proinflammatory adipokines such as monocyte chemoattractant protein-1 (MCP-1) (21), IL-6, IL-1, and osteopontin (1,37). Thus, from the available animal model and human data, GIP appears to have a key regulatory role in lipid metabolism and adipose tissue.
To date, very few studies have investigated the effects of GIP on human adipose tissue, and none have involved subjects with T2DM, although the reported presence of functional GIP receptors on adipocytes strongly suggests that GIP modulates human adipose tissue metabolism (41). GIP has also been proposed to modulate other adipose tissue depots, and excessive GIP secretion may underlie excessive visceral and liver fat deposition (33,34). In support of this, results from a cross-sectional study of Danish men demonstrated an association between higher GIP levels (during a glucose tolerance test) and a metabolically unfavorable phenotype (a higher visceral-to-subcutaneous fat ratio and a higher waist/hip ratio) (32).
We hypothesized that GIP would have an anabolic action in SAT, promoting NEFA reesterification, which we speculated may be mediated either by enhancing lipoprotein lipase (LPL) expression/activity (a lipogenic enzyme) (15,26) or by reducing adipose triglyceride lipase (ATGL) and hormone-sensitive lipase (HSL) expression/activity, two key lipolytic enzymes. We postulated that this effect may differ according to obesity status or glucose tolerance. Thus, we set out to determine the acute in vivo effects of intravenous GIP on 1) plasma/serum insulin and NEFA concentrations and 2) TAG content and gene expression of the key lipid-regulating genes LPL, ATGL, and HSL in SAT in obese individuals with different categories of glucose regulation [normoglycemic, impaired glucose regulation (IGR), and T2DM] vs. lean normoglycemic controls.
Lean and obese were defined as a BMI of ≤25 and ≥30 kg/m², respectively. Allocation to glucose regulation categories was based on recent medical records combined with a fasting plasma glucose concentration. Obese subjects were allocated to the obese IGR group if they had one or more of the following: fasting hyperglycemia, impaired glucose tolerance on a 75-g oral glucose tolerance test (OGTT), or Hb A1c in the prediabetes range (6-6.5% or 42-47 mmol/mol). Obese subjects with T2DM (according to World Health Organization diagnostic criteria) (40) who were not on pharmacological treatment for diabetes were allocated to the obese T2DM group. Homeostatic model assessment (HOMA)-2 was used to estimate whole body insulin resistance (23); adipose tissue insulin resistance (Adipo-IR) was calculated from fasting NEFA (mmol/l) and insulin (pmol/l) concentrations (19). Baseline demographic, anthropometric, and biochemical parameters of all participants are shown in Table 1.
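For reproducibility, the adipose tissue insulin resistance index can be computed directly; the sketch below is a minimal illustration assuming the product form of Adipo-IR (fasting NEFA × fasting insulin) described in reference 19, with hypothetical values.

```python
def adipo_ir(fasting_nefa_mmol_l: float, fasting_insulin_pmol_l: float) -> float:
    """Adipose tissue insulin resistance index: fasting NEFA (mmol/l)
    multiplied by fasting insulin (pmol/l), per the product form in ref. 19."""
    return fasting_nefa_mmol_l * fasting_insulin_pmol_l

# Hypothetical example: NEFA 0.6 mmol/l, insulin 60 pmol/l -> Adipo-IR = 36.
print(adipo_ir(0.6, 60.0))
```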
Ethical Approval
Ethical approval for this project was obtained from the Northwest Research Ethics Committee (UK; REC ref. no. 08/H1001/20). All subjects were studied after informed and written consent was obtained.
Study Protocol
Each subject was studied on two separate occasions 1-3 wk apart. After overnight fasting, subjects were infused with either GIP (2 pmol·kg⁻¹·min⁻¹ in 0.9% saline) or placebo (0.9% saline alone). GIP was dosed based on the rates infused in previous studies (16,35,38). Subjects were randomly assigned to either GIP or placebo infusion on their initial visit and received the alternate infusion subsequently. Anthropometric assessments were recorded during each visit. Percent body fat was estimated by whole body bioelectrical impedance analysis (Tanita, Tokyo, Japan).

[Table 1 footnote: Values are means ± SD. T2DM, type 2 diabetes mellitus; BMI, body mass index; NEFAs, nonesterified fatty acids; HOMA-IR, homeostasis model assessment-insulin resistance; Adipo-IR, adipose tissue insulin resistance. a P < 0.05, b P < 0.01, c P < 0.001, and d P < 0.0001, statistically significant difference vs. lean group; e P < 0.05, significant difference vs. obese group.]
GIP infusions, hyperglycemic clamp, and blood sampling. Intravenous cannulae were inserted into both antecubital fossae for blood sampling and infusions (GIP or placebo). GIP (Polypeptide Laboratories, Strasbourg, France) was sterile-filtered and dispensed by Stockport Pharmaceuticals (Stepping Hill Hospital, Stockport, UK). A blood glucose concentration of ~8.0 mmol/l was maintained during a hyperglycemic clamp using a priming bolus of 20% glucose (based on weight and fasting glucose) given in the first 5 min, followed by a variable-rate infusion of 20% glucose adjusted according to whole blood glucose levels measured every 5 min on a YSI blood glucose analyzer (YSI UK). Intravenous infusion of GIP/placebo was continued from 30 min after initiation of the hyperglycemic clamp until 240 min. Ten-milliliter blood samples were taken at baseline (before the hyperglycemic clamp) and at 15, 30, 60, 120, 180, and 240 min after the initiation of the GIP/placebo infusion. To minimize protein degradation, aprotinin was added to the tubes before sample collection. Samples were centrifuged immediately, and serum was stored at −80°C until further analysis.
SAT biopsies. Subcutaneous adipose tissue (SAT) biopsies were obtained at baseline and after 240 min of the GIP/placebo infusion on the contralateral site. Under local anesthesia (1% lidocaine, adrenaline 1:200,000), a small incision was made through the skin and fascia 10 cm lateral to the umbilicus. Adipose tissue samples (50-150 mg wet wt) were collected, snap-frozen in liquid nitrogen, and stored at −80°C until further analysis.
Laboratory Analysis: Biochemical Analysis
Plasma glucose concentration, lipid profile, liver function parameters, and Hb A1c were measured using a Cobas 8000 modular analyzer (Roche Diagnostics). Blood glucose concentrations during the hyperglycemic clamp were measured using a YSI 2300 STAT glucose analyzer (YSI UK, Fleet, Hampshire, UK). Serum insulin was measured by ELISA (Invitrogen, Fisher Scientific, Loughborough, UK). Nonesterified fatty acids (NEFAs) were measured from plasma with a Randox kit on a Biostat BSD 570 analyzer (Randox Laboratories, London, UK). Intact GIP was measured at the University of Copenhagen (Copenhagen, Denmark); the assay is specific for the intact NH2 terminus of GIP (the biologically active peptide) (13).
SAT Analysis
SAT lipid content. Lysates were prepared by homogenization of fat biopsies in a buffer containing 50 mM Tris·HCl (pH 7.5), 150 mM NaCl, 1% Triton X-100, and a standard protease inhibitor cocktail (Complete Mini protease inhibitor cocktail; Roche Diagnostics). Triacylglycerol (TAG) was quantified by measuring free glycerol output following overnight lipase treatment at 37°C (Sigma). Values were normalized to protein content.
SAT gene expression. Gene expression of LPL, ATGL, and HSL was quantified by RNA extraction and real-time quantitative PCR. Total RNA was isolated using the RNeasy Lipid Tissue Mini Kit (Qiagen). Real-time quantitative PCR was conducted in triplicate on a Bio-Rad CFX Connect real-time PCR instrument (Bio-Rad Laboratories) using prevalidated TaqMan probes (Life Technologies) as follows: endogenous control β-actin (Hs99999903_m1) and target genes lipoprotein lipase (LPL; Hs00173425_m1), ATGL (PNPLA2; Hs00386101_m1), and hormone-sensitive lipase (LIPE; Hs00193510_m1). Relative quantification was carried out using the ΔΔCT method with β-actin gene expression as the internal control.
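As a worked illustration of the relative quantification step, the following minimal Python sketch applies the ΔΔCT calculation with β-actin as the internal control; the CT values are hypothetical, and the 2^(−ΔΔCT) form assumes 100% amplification efficiency.

```python
def ddct_fold_change(ct_target_sample: float, ct_ref_sample: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    """Relative expression by the delta-delta-CT method (ref gene: beta-actin).
    Assumes 100% amplification efficiency (doubling per cycle)."""
    dct_sample = ct_target_sample - ct_ref_sample    # normalize to reference gene
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_sample - dct_control                  # compare with control condition
    return 2.0 ** (-ddct)

# Hypothetical CT values for LPL vs beta-actin, post- vs pre-infusion biopsy:
print(ddct_fold_change(24.0, 18.0, 24.5, 18.2))  # ~1.23-fold
```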
Statistical Analysis
Participant demographics, baseline biochemical parameters, and blood glucose concentrations during the hyperglycemic clamp are expressed as means ± SD; all other results are expressed as means ± SE. One-way analysis of variance (ANOVA) and Tukey's t-tests were performed to compare participant demographics and baseline biochemical parameters among the four groups. Areas under the curve for insulin and NEFA concentrations over the 4-h infusion period (AUC 0-4 h) were calculated by the trapezoidal rule using GraphPad Prism software. Paired t-tests were performed on changes in gene expression and lipid content (SAT-TAG) parameters to explore whether the change over the two time points differed between GIP and placebo. P < 0.05 (two-tailed) was considered significant. A Pearson product-moment correlation coefficient was computed to assess the relationship between the degree of NEFA reduction and other variables [fasting plasma glucose and adipose tissue insulin resistance (Adipo-IR)].
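The AUC 0-4 h calculation is the standard trapezoidal rule applied over the sampling time points; the sketch below implements it directly in Python (with hypothetical insulin values) rather than via GraphPad Prism.

```python
import numpy as np

# Sampling times (min) and hypothetical serum insulin concentrations.
t = np.array([0, 15, 30, 60, 120, 180, 240], dtype=float)
insulin = np.array([8.0, 20.0, 35.0, 50.0, 55.0, 52.0, 48.0])

# Trapezoidal rule: sum of interval widths times the mean of endpoint values.
auc_0_4h = float(np.sum(np.diff(t) * (insulin[1:] + insulin[:-1]) / 2.0))
print(auc_0_4h)  # units: concentration x min
```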
A linear mixed-effects model was also used to model insulin secretion and NEFA concentrations at three time points (baseline, 120 min, and 240 min). Main effects for the four groups were included, along with a two-way interaction between treatment and group. This allows the overall effect of GIP infusion, in comparison with the placebo infusion, to be assessed individually for each group. Results are expressed as estimated average unit changes in insulin and NEFAs during GIP vs. placebo infusion.
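A model of the kind described could be specified as in the following sketch, which uses statsmodels with hypothetical column names and toy data; it illustrates a random subject intercept and the treatment-by-group interaction, not the authors' actual code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy long-format data with hypothetical column names; one row per
# subject / treatment visit / time point.
df = pd.DataFrame({
    "subject":   ["s1"] * 6 + ["s2"] * 6,
    "group":     ["lean"] * 6 + ["obese_T2DM"] * 6,
    "treatment": (["GIP"] * 3 + ["placebo"] * 3) * 2,
    "time":      [0, 120, 240] * 4,
    "nefa":      [0.55, 0.30, 0.20, 0.56, 0.40, 0.35,
                  0.70, 0.45, 0.30, 0.72, 0.60, 0.55],
})

# Random intercept per subject; treatment-by-group interaction as in the text.
model = smf.mixedlm("nefa ~ treatment * group + time", data=df, groups=df["subject"])
print(model.fit().summary())
```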
Baseline Characteristics: Patient Demographics
Twenty-three individuals completed the study protocol in four subgroups: lean (n = 6), obese (n = 6), obese IGR (n = 6), and obese T2DM (n = 5). Waist circumference and percent body fat were significantly higher in the obese, obese IGR, and obese T2DM groups compared with the lean group. The duration of diabetes in the obese T2DM group was 7 ± 5.5 mo (means ± SD), with a mean Hb A1c of 54 ± 8.5 mmol/mol (7.1 ± 0.8%), and all participants were naive to oral or injectable diabetes medications.
Baseline Biochemistry: Plasma Glucose and Insulin Concentrations
As expected, mean fasting glucose was higher in the obese IGR and obese T2DM groups than in the two other groups. Fasting insulin and HOMA-IR were significantly higher in the obese, obese IGR, and obese T2DM groups vs. the lean group. Adipo-IR was significantly higher in the obese T2DM group vs. the lean and obese groups but not vs. the obese IGR group (Table 1).
Metabolic Parameters
All subjects in the obese IGR and obese T2DM groups had metabolic syndrome based on International Diabetes Federation 2006 criteria (2), and most were consequently treated for hypertension and dyslipidemia: ACE inhibitors or angiotensin receptor blockers (3 subjects in the obese IGR group, 5 in the obese T2DM group), β-blockers (2 obese IGR, 2 obese T2DM), and a calcium channel blocker (1 obese T2DM). Three subjects in each of these two groups were on statins. Two subjects in the obese group had metabolic syndrome (one on an ACE inhibitor and one on a fibrate).
Biochemistry Changes During Infusions
Blood glucose. Blood glucose concentrations were maintained at ~8.0 mmol/l during the hyperglycemic clamp with both GIP and placebo infusions in all four groups (Fig. 1, A-D). [...] (Fig. 2E). The change in insulin concentration over 240 min, compared with baseline values, differed by 63, 70, and 121 IU/ml with GIP infusion vs. placebo in the lean, obese, and obese IGR groups, respectively. In the obese T2DM group, there was only a 9 IU/ml increase in insulin concentration with GIP vs. placebo infusion (Fig. 2F).
Serum triacylglycerol concentration.
There were no significant alterations in serum triacylglycerol (TAG) concentrations with either GIP or placebo in any of the four groups (data not shown).
SAT Changes
SAT-TAG content. The changes in lipid content after 240 min of GIP vs. placebo infusion, relative to the respective baselines on each visit, are shown in Fig. 5. In the obese T2DM group, SAT-TAG content increased 1.78 ± 0.4-fold (means ± SE) from baseline with GIP infusion, compared with 0.86 ± 0.1-fold with placebo (95% CI: 0.1, 1.8; P = 0.043). The changes in TAG content in the other three groups were not statistically significant (Fig. 5).
Gene expression of enzymes involved in lipid metabolism. The changes in mRNA expression (LPL, ATGL, and HSL) in SAT after 240 min of GIP vs. placebo infusion relative to respective baselines on each visit are shown in Fig. 6.
LPL. LPL mRNA expression in the T2DM group was 1.25-fold higher from baseline with GIP infusion compared with the 0.94-fold change with placebo, but this was not statistically significant (P = 0.27). In the other three groups, the changes in LPL mRNA expression with GIP and placebo were comparable (Fig. 6A).
ATGL. In the T2DM group, ATGL mRNA expression was higher with GIP infusion compared with placebo (1.5- vs. 1.1-fold, P = 0.12), but this was not statistically significant. In the other three groups, the changes in ATGL gene expression with GIP vs. placebo were comparable (Fig. 6B).

HSL. The changes in HSL gene expression with GIP did not differ significantly from placebo in any of the four groups (Fig. 6C). Fold-change data for the three enzymes in all four groups are shown in Fig. 6D.
DISCUSSION
We demonstrate that acute GIP infusion during fasting under hyperglycemic conditions reduced plasma NEFAs while concomitantly increasing SAT triacylglycerol (TAG) content in obese patients with T2DM. This anabolic effect was not observed in lean patients, obese patients, or obese patients with IGR. Conversely, whereas GIP stimulated insulin secretion in lean patients, obese patients, and obese patients with IGR, its insulinotropic action was not observed in obese patients with T2DM. Thus, in obese patients with T2DM, there is a dissociation of the effects of GIP on β-cells and adipocytes, with blunted insulinotropic but preserved lipogenic actions, respectively.
Expression of the GIP receptor (GIPR) appears to be glucose dependent, being downregulated in response to hyperglycemia (24). In patients with T2DM, the blunted incretin effect (involving both incretin hormones, glucagon-like peptide-1 and GIP) may be due in part to reduced islet cell expression of GIPR secondary to chronic hyperglycemia (16,29,35,39). The physiological role of GIP in adipose tissue in T2DM remains unclear, although adipose GIPR expression may be similarly downregulated in insulin-resistant human subjects and may represent a compensatory mechanism to reduce fat storage in insulin resistance, considering the interference of NEFAs with insulin signal transduction (10,22). However, energy-dense, high-fat diets in obese individuals with T2DM could result in exaggerated fat storage (through exaggerated GIP release), even in the absence of adequate insulin secretion (Fig. 7). Although we did not measure GIPR, the lipogenic action of GIP at the adipocyte appears to be more pronounced in T2DM (Fig. 5). Studies in patients with nonalcoholic fatty liver disease suggest that elevated GIP secretion is also associated with intrahepatocellular lipid deposition (33).
Several factors may explain the differential ability of GIP to increase NEFA reesterification in SAT in obese T2DM subjects vs. the other groups. In lean individuals, obese individuals, and obese individuals with IGR, where insulin secretion is potently stimulated and adipose tissue insulin sensitivity is preserved (lower Adipo-IR), insulin independently suppresses lipolysis, lowering NEFAs and perhaps rendering GIP's effects trivial. However, in T2DM, when insulin secretion is impaired and adipose tissue is insulin resistant (high Adipo-IR), the effect of GIP assumes greater importance, promoting lipid accumulation in adipocytes. This is consistent with animal data. GIP does not promote fat accumulation in adipocytes with normal insulin sensitivity, with GIPR−/− mice showing adiposity similar to wild-type mice on a control diet (31). However, under conditions of diminished insulin action using insulin receptor substrate-1 (IRS-1)-deficient mice, when the effects of GIP were examined by disrupting GIP signaling (GIPR−/− vs. GIPR+/+), GIP was shown to promote SAT and VAT expansion and decrease fat oxidation, with greater SAT and VAT mass and lower fat oxidation in IRS-1−/−/GIPR−/− vs. IRS-1−/−/GIPR+/+ mice (42). A few human studies have examined the metabolic effect of an acute GIP infusion in lean and obese individuals, but none have been reported in people with T2DM. In studies to date, the effects of GIP have been examined under experimental conditions different from ours, for example, during concomitant intralipid infusion and/or under hyperinsulinemic hyperglycemic clamp conditions with measurement of arteriovenous concentrations of metabolites. These data demonstrated that in lean people, GIP in combination with hyperinsulinemia and hyperglycemia increased adipose tissue blood flow, glucose uptake, and NEFA reesterification, resulting in increased abdominal SAT-TAG deposition (4-6). The same group showed that, in obese and IGR subjects, GIP infusion did not have the same effect on adipose tissue blood flow or TAG deposition in adipose tissue (3). However, the independent contributions of insulin vs. GIP to these metabolic effects are difficult to dissect, although GIP per se appeared to have little effect on human subcutaneous adipose tissue in lean insulin-sensitive subjects, with an effect apparent only when GIP was coadministered with insulin during hyperglycemia. Thus, there appear to be both direct and indirect effects of GIP. During nutrient excess, lipogenesis is stimulated via lipoprotein lipase (LPL), which hydrolyzes circulating lipoprotein-derived triglycerides and promotes NEFA esterification into TAG and storage within lipid droplets of adipose tissue. During periods of fasting, mobilization of NEFAs from fat depots relies on the activity of key hydrolases, including hormone-sensitive lipase (HSL) and adipose triglyceride lipase (ATGL). In SAT, insulin stimulates NEFA esterification by enhancing LPL and inhibits the lipolytic process (18).

[Fig. 7: In healthy people, GIP acts on its receptors on β-cells and adipocytes to promote insulin secretion (insulinotropic action) and lipid deposition (adipogenic action) (left). In obesity, with consumption of an energy-dense, higher-fat diet, there is enhanced insulin secretion (which may help overcome peripheral insulin resistance) and increased lipid deposition (which further enhances fat storage) (middle). In T2DM, the effects of GIP on the β-cell are impaired, with reduced insulin secretion; the effects on the adipocyte seem to be preserved, further promoting lipid deposition (right).]

The majority of animal studies have shown that GIP potentiates the role of insulin in the regulation of LPL and NEFA incorporation into adipose tissue (9,15,27,31). GIP enhances LPL gene expression in cultured human subcutaneous adipocytes through pathways involving protein kinase B and AMP-activated protein kinase (26,28). To determine the molecular mechanism by which SAT-TAG content changed, we measured SAT mRNA expression of LPL, ATGL, and HSL; surprisingly, we observed no significant changes in expression to account for the altered serum NEFAs or SAT-TAG content. This may represent a time-course phenomenon (changes in gene expression with GIP in human adipose tissue may occur over a longer interval), a speculation consistent with the slow temporal onset of molecular responses in adipose tissue in animal studies. GIP infusion may also affect enzyme activity rather than gene expression, and results might therefore differ if activity/phosphorylation were measured.
To better appreciate the physiological effects of GIP administration on human SAT, stable isotope studies with serial tissue biopsies to determine dynamic changes in fat metabolism are required. All studies were performed under hyperglycemic clamp conditions to achieve comparable hyperglycemia and to mimic postprandial increases in GIP and insulin. The peak GIP concentrations achieved during our GIP infusions were comparable with levels achieved elsewhere (3). We believe the changes in NEFAs and SAT lipid content in our obese T2DM group are more likely due to the effect of GIP, particularly in the absence of excess insulin secretion. Reductions in NEFAs correlated positively with fasting glucose and Adipo-IR across all subjects in the four groups, suggesting that the effects of GIP are more pronounced in hyperglycemic and insulin-resistant states. We recognize that a higher ΔNEFA would be expected in subjects with higher fasting NEFA levels; however, the correlation with Adipo-IR was seen only with GIP and not with placebo infusion (Fig. 4).
Studying four distinct groups (with differing BMI and glucose tolerance) facilitates evaluation of the differential effects of GIP in insulin-sensitive and insulin-resistant individuals. However, we acknowledge limitations, including small group sizes and the degree of obesity; there were limited pilot data in humans before the initiation of this study, and subsequently published human studies of GIP infusion had small numbers of subjects (3-5). Findings from our study may differ in less severely obese individuals. Lean subjects were younger than the others and may have an increased insulinotropic response to GIP (30), but there was no significant difference in insulin AUC between the groups, except in obese T2DM. Unrecognized interactions between antihypertensive or lipid-modifying medications and the effects of GIP cannot be excluded.
In conclusion, we demonstrate that in obese patients with T2DM, acute GIP infusion in the fasting state during hyperglycemia lowers serum NEFAs and increases SAT lipid content despite reduced insulinotropic activity. In lean, obese, and obese IGR subjects, despite an intact insulinotropic response to GIP, no lipogenic effect was observed. This anabolic effect of GIP further exacerbates obesity and insulin resistance.
"Medicine",
"Biology"
] |
Leveraging uncertainty quantification to optimise CRISPR guide RNA selection
CRISPR-based genome editing relies on guide RNA sequences to target specific regions of interest. A large number of methods have been developed to predict how efficient different guides are at inducing indels. As more experimental data become available, methods based on machine learning have become more prominent. Here, we explore whether quantifying the uncertainty around these predictions can be used to design better guide selection strategies. We demonstrate that a deep ensemble approach achieves better performance than a single model while also providing uncertainty quantification. This allows us to design, for the first time, strategies that consider uncertainty in guide RNA selection. These strategies achieve precision over 91% and can identify suitable guides for more than 93% of genes in the mouse genome. Our deep ensemble model is available at https://github.com/bmdslab/CRISPR_DeepEnsemble.
Introduction
CRISPR-based methodologies have established themselves as a very important instrument for genomic manipulation (1). Fundamentally, these technologies employ a CRISPR-associated (Cas) endonuclease alongside a sequence-specific RNA component that directs the nuclease towards a predetermined genomic locus. This guide RNA (gRNA) is engineered to target specific genomic sequences for editing. Specifically, in the context of Cas9, a potential CRISPR target site requires the presence of a protospacer adjacent motif (PAM) characterized by an NGG sequence, with the adjacent 20-nucleotide sequence upstream serving as a template for constructing the gRNA. Over the preceding decade, the scientific community has leveraged CRISPR technology for a wide array of purposes, ranging from foundational research to practical applications. These include the development of animal models for disease research (2), the genetic study of endangered species (3), enhancement of agricultural crop resilience (4), and the pioneering of novel therapeutic approaches (5). However, despite the broad spectrum of applications underscoring the versatility of CRISPR-based genome editing, the process of gRNA design remains a non-trivial and intricate task, demanding careful consideration and expertise. One of the objectives when designing gRNAs is to maximise the on-target efficiency, which can be understood as the rate at which the desired edit is obtained. Liu et al. (6) highlight how a wide range of factors have been investigated to determine their effects on gRNA efficiency. To assist in the process, many tools have been developed (7,8). While most of the tools can identify some efficient guides, the overlap between them is often limited (7). While this behaviour can be exploited to develop consensus approaches that outperform individual tools (9,10), there remains considerable room for further improvement. As more experimental data have become available, improvements have increasingly been sought through machine learning approaches, with a particular focus on deep learning (11)(12)(13)(14). One limitation of complex machine learning methods is the risk of treating them as black boxes and putting too much trust in their output. This is particularly true for deep learning, where explainability is a challenge. In this paper, we explore the notion of uncertainty. Can we deploy simple and scalable strategies to estimate the uncertainty in the predicted efficiency? If this is achievable, can we develop new guide RNA design strategies that incorporate that uncertainty, and will that improve the quality of the guides being selected?
Materials and Methods
A. CRISPRon. Xiang et al. (14) experimentally generated indel frequencies for 10,592 Cas9 gRNAs with minimal overlap with existing datasets that also report indel frequencies. Using these data, they developed a deep learning model called CRISPRon. The initial CRISPRon model was trained using one-hot encodings of 30 bp sequences (4 bp upstream + 20 bp spacer + 3 bp PAM + 3 bp downstream) and other features such as melting point and RNA-DNA binding energy. The initial results showed a strong correlation with the existing dataset from (12). This led to the merging of the two datasets into a new dataset of 23,902 guides, which was used to train the final version of CRISPRon. CRISPRon was reported to have a better correlation on both an internal independent test set and an external test set, outperforming popular models such as Azimuth (11), DeepSpCas9 (12), and DeepHF (13). The CRISPRon model is essentially a convolutional neural network taking the 30 bp sequence as input, modified to take additional features as input to its later layers, which resemble those of a regular feedforward neural network. See Xiang et al. (14) for details.
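To make the input representation concrete, here is a minimal Python sketch of one-hot encoding a 30 bp target context; the layout (positions by bases) is an illustrative assumption, and CRISPRon's exact encoding conventions may differ.

```python
import numpy as np

BASES = "ACGT"

def one_hot_30bp(seq: str) -> np.ndarray:
    """One-hot encode a 30 bp target context (4 bp upstream + 20 bp spacer
    + 3 bp PAM + 3 bp downstream) into a 30x4 matrix."""
    assert len(seq) == 30, "expected a 30 bp sequence"
    enc = np.zeros((30, 4), dtype=np.float32)
    for i, base in enumerate(seq.upper()):
        enc[i, BASES.index(base)] = 1.0
    return enc

x = one_hot_30bp("ACGT" * 7 + "AC")  # dummy 30 bp sequence
print(x.shape)  # (30, 4)
```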
B. Uncertainty quantification and deep ensemble approach.
We modify the CRISPRon model in two ways, to capture both the aleatoric (data-based) and epistemic (model-based) uncertainties in the predictions. For the former, rather than outputting a single prediction, the inherent variability in the data is accounted for by modelling the response variable as coming from a Beta distribution (which takes values between zero and one, the range of possible efficiency values), whose parameters are given by the output of the neural network. That is, the neural network outputs a distribution as a prediction, implicitly, via the two parameters of a Beta distribution (which will be different for each input). The model is trained via maximum likelihood, that is, by choosing the parameters that maximise the average likelihood of the observations under the output Beta distributions. A prediction can be obtained by outputting the expected value of the respective Beta distribution. The uncertainties for a single model can be obtained by simulating many times from the corresponding Beta distribution. Whilst the above approach is capable of modelling the uncertainty in the response variable for a given input, it does not capture our inherent uncertainty in the model itself that should be used to make such predictions. To overcome the latter issue, we use a simple deep ensemble approach. The essence of the approach is simple: one trains not one model but instead some large number of models, each with a different initialisation for the training procedure. This results in a collection of different models, due to the existence of many local minima in the training objective function (different initialisations will result in the training ending up in different minima). Despite its apparent simplicity, such an approach is effective, as deep ensembling can be viewed as a crude approximation to sampling from the posterior distribution of a Bayesian model (16), which is a standard approach for accounting for model-based uncertainty (but is typically computationally intractable in deep learning settings). An additional advantage of deep ensembles is that they tend to produce better results in terms of performance (generalisation to unseen data) (17). As mentioned, our initial modelling uses a probabilistic (Beta distribution based) extension of (14). However, instead of using a single model, we fit an ensemble of 25 models. Each ensemble member was trained for 50 epochs. The input for our deep-learning model uses the same 30 bp sequence (4 bp + spacer + PAM + 3 bp) and uses the sequence melting point as a feature. The ensemble is configured as an unweighted model average, where the predictions of all ensemble members are averaged to give a final prediction. Uncertainty bands for prescribed quantiles can be produced by simulating many times from each individual ensemble member's Beta distribution, aggregating the resulting samples, and computing the empirical quantiles of this simulated response.
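As a concrete (and deliberately simplified) illustration of this approach, the sketch below uses PyTorch to define a network whose output parameterises a Beta distribution, the associated negative log-likelihood loss, ensemble training over different random initialisations, and IQR estimation by simulation. It is a minimal stand-in: a small fully connected network replaces the CRISPRon convolutional architecture, and the layer sizes, learning rate, and simulation counts are illustrative assumptions rather than the authors' settings.

```python
import torch
import torch.nn as nn

class BetaNet(nn.Module):
    """Simplified stand-in for the CRISPRon-style network: maps input
    features to the two parameters (alpha, beta) of a Beta distribution
    over editing efficiency."""
    def __init__(self, n_features: int):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                  nn.Linear(64, 2))

    def forward(self, x):
        # softplus keeps both Beta parameters strictly positive
        return nn.functional.softplus(self.body(x)) + 1e-3

def beta_nll(params, y):
    """Negative log-likelihood of observed efficiencies y under Beta(alpha, beta)."""
    alpha, beta = params[:, 0], params[:, 1]
    y = y.clamp(1e-4, 1 - 1e-4)  # keep targets strictly inside (0, 1)
    return -torch.distributions.Beta(alpha, beta).log_prob(y).mean()

def train_ensemble(x, y, n_members=25, epochs=50):
    """Deep ensemble: each member differs only by its random initialisation."""
    members = []
    for seed in range(n_members):
        torch.manual_seed(seed)  # different init -> different local minimum
        net = BetaNet(x.shape[1])
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        for _ in range(epochs):
            opt.zero_grad()
            loss = beta_nll(net(x), y)
            loss.backward()
            opt.step()
        members.append(net)
    return members

@torch.no_grad()
def predict_with_iqr(members, x, n_sim=1000):
    """Average the Beta means across members; estimate the IQR by pooling
    simulated draws from every member's predicted Beta distribution."""
    means, draws = [], []
    for net in members:
        p = net(x)
        alpha, beta = p[:, 0], p[:, 1]
        means.append(alpha / (alpha + beta))  # mean of a Beta distribution
        draws.append(torch.distributions.Beta(alpha, beta).sample((n_sim,)))
    pooled = torch.cat(draws, dim=0)          # (n_members * n_sim, n_guides)
    iqr = pooled.quantile(0.75, dim=0) - pooled.quantile(0.25, dim=0)
    return torch.stack(means).mean(dim=0), iqr

# Toy usage with random features standing in for one-hot sequence + melting point:
x = torch.randn(128, 121)                 # 30*4 one-hot positions + 1 feature
y = torch.rand(128)                       # synthetic "efficiencies" in (0, 1)
ensemble = train_ensemble(x, y, n_members=3, epochs=5)  # small for illustration
scores, iqrs = predict_with_iqr(ensemble, x)
```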
C. Data. We combined the CRISPRon (14) dataset with other datasets containing experimental results on indel frequency (12,13,18). All datasets were filtered to remove duplicated entries and NaN results. The datasets from (13,14) were expanded to 30 bp sequences by aligning the provided sequences to the reference genome using Bowtie2 (19) and extracting the extended sequences with SAMtools (20). Any sequence with multiple perfect alignments from Bowtie2 was excluded. After this preprocessing, 13,359 guides remained from (12), 49,523 from (13), 8,372 from (18), and 9,886 from (14), yielding 80,408 guides in total. The processed datasets were then merged into a final dataset; guides present in multiple datasets were merged and their scores averaged. The 30 bp sequences were individually one-hot encoded and their respective melting points were calculated. Finally, our processed dataset was divided into training and testing portions of 60,408 and 20,000 guides, respectively.

D. Metrics and thresholds. Three methods are used to evaluate the performance of approaches involving our ensembled model and its associated uncertainty ranges. The first method compares the predicted score from the model with the observed indel frequency, i.e., the actual score. This considers the correlation between the scores, as in (14), and the absolute prediction error. To explore guide design strategies, it is convenient to transform the scores into binary classes: is this guide efficient or not? Is it accepted by the model or not? The second level of evaluation then considers performance on this binary decision problem. For any gene, there are often dozens of usable guides, so it is more important that the selected guides are efficient than that all efficient guides are selected. As a result, precision is the key metric of interest; for completeness, recall is also reported. The third level of evaluation tests the assumption that a high recall is not essential: all potential CRISPR sites in the mouse genome are extracted and evaluated, and we count the number of selected guides for each gene. The selection of a threshold to turn the observed indel frequencies into binary classes is somewhat arbitrary. Based on the distribution of the training data and on practical constraints of genome editing experiments, we chose 0.7 as the default. To explore the impact of that choice, we also repeated all tests for 0.6 and 0.8. To make the binary decision of selecting a guide or not, we can consider the score alone, or the score in conjunction with some notion of uncertainty (the difference between the lower and upper bounds of the predictions, or the interquartile range). In what follows, we denote by τ_s the threshold on the score, τ_u the threshold on the difference between the bounds, and τ_q the threshold on the interquartile range. For τ_s, the threshold is defined directly on the predicted score (i.e., the single score if using a single model, or the average score if using an ensemble). For τ_u and τ_q, the threshold is defined based on the range of values observed in the training data. For instance, τ_q = 20% means we pick τ_q so that 20% of the guides used for training had an interquartile range in predicted scores below τ_q (which corresponds to a threshold of 0.13). To select guides, we only choose guides that have a predicted score above τ_s and an uncertainty measure (if used) below τ_u (or τ_q). These three levels of evaluation are used to assess our ensemble approach.
our ensemble approach. To understand the contribution of the ensembling and uncertainty quantification, we also use a single prediction from the first model of the ensemble. Note that this single model should not be seen as an exact replication of CRISPRon, due to the use of the Beta distribution described above.
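To make the selection rule concrete, the following minimal sketch (in Python with NumPy; all names are illustrative and not taken from the authors' code) applies a score threshold τ_s together with an IQR-based uncertainty threshold τ_q derived from the training distribution, as described above.

import numpy as np

def select_guides(preds, tau_s=0.7, tau_q_pct=20.0, train_preds=None):
    """Select guides using the ensemble mean score plus an IQR-based
    uncertainty filter, in the spirit of the tau_s / tau_q thresholds.

    preds: array of shape (n_models, n_guides) with per-model scores.
    tau_q_pct: percentile of the training IQRs used to derive the cutoff
               (e.g. 20% corresponded to an IQR of ~0.13 in the text).
    """
    mean_score = preds.mean(axis=0)
    q75, q25 = np.percentile(preds, [75, 25], axis=0)
    iqr = q75 - q25

    # Derive the absolute IQR cutoff from the training-set distribution.
    ref = train_preds if train_preds is not None else preds
    ref_q75, ref_q25 = np.percentile(ref, [75, 25], axis=0)
    tau_q = np.percentile(ref_q75 - ref_q25, tau_q_pct)

    # Keep guides with a high score AND a low uncertainty.
    return (mean_score >= tau_s) & (iqr <= tau_q)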
Results
E. Scoring performance. After training the ensemble, its performance was assessed on the testing set of 20,000 guides. The Spearman and Pearson correlations were 0.839 and 0.838, respectively. To understand the contribution of the ensembling, the first member was tested in isolation, resulting in Spearman and Pearson correlations of 0.706. The ensemble model therefore provides superior performance. Next, we looked at the absolute error in the predicted score. For the ensemble model, we observe a mean absolute error of 0.0947 (standard deviation 0.0813). In contrast, the single model had a mean absolute error of 0.1395 (standard deviation 0.1268). Again, there is a clear advantage to the deep ensemble. Figure 1 shows the error as a function of uncertainty (measured using the IQR): the lower the uncertainty, the lower the prediction error. This further highlights the benefits of the ensemble approach. It also motivates using that uncertainty in guide selection: if the uncertainty is low, the predicted score is more likely to be accurate, and filtering for highly-scored guides should return efficient ones.
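As an illustration of this first level of evaluation, here is a short sketch of how the reported metrics could be computed (a hypothetical helper assuming SciPy; not the authors' evaluation code):

import numpy as np
from scipy.stats import spearmanr, pearsonr

def score_metrics(predicted, observed):
    """Correlations and absolute error between predicted scores and
    observed indel frequencies (first evaluation level)."""
    rho, _ = spearmanr(predicted, observed)
    r, _ = pearsonr(predicted, observed)
    abs_err = np.abs(np.asarray(predicted) - np.asarray(observed))
    return {"spearman": rho, "pearson": r,
            "mae": abs_err.mean(), "mae_std": abs_err.std()}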
F. Guide selection performance. Next, we explored different guide selection strategies to understand how the ensembled model and the uncertainty quantification can be exploited to select more efficient guides. We varied the threshold τ_s on the predicted score, and the thresholds τ_u and τ_q on the uncertainty metrics. Table 1 shows some of the top-performing threshold combinations, sorted by precision. It is no surprise to see a majority of configurations using τ_s = 0.7, given how we binarised the actual scores into efficient/inefficient classes: in the default configuration, we used a threshold of 0.7 on the actual score. We explore this in Section H. What is more interesting is that we obtain very high precision, with all these configurations scoring above 93%. The selected guides therefore have a very high chance of being efficient in practice.
It is also interesting to note that whether the threshold is applied to the whole uncertainty range (τ_u) or to the interquartile range (τ_q) makes little difference; the threshold value matters more than its scope. As expected, the recall values are lower. This is a direct consequence of prioritising precision. The assumption is that these recall values (especially those above 15%) are high enough to select sufficient guides for most genes. We explore this in Section I.
When the uncertainty threshold is too low, τ_s becomes redundant. We do not see predictions where the uncertainty is very low and the predicted score is low, so a very tight constraint on the uncertainty means that only high-score predictions are selected. Similarly, if the score threshold is extremely high, the uncertainty is always low (because the prediction is an average and is bounded by 1, so it can only be very high if all individual scores are high) and the uncertainty threshold becomes redundant. This leads to identical sets of results, so Table 1 does not show duplicates. These extreme configurations, such as τ_s = 0.7, τ_q = 5% or τ_s = 0.95, τ_q = 30%, are not considered practical.
In Table 2, we fix the score threshold at τ_s = 0.7 and explore the impact of the deep ensemble and of uncertainty quantification. Using the ensemble alone offers a 9-percentage-point increase in precision (75.17% to 84.27%). Taking the uncertainty into account adds another 6 to 13 percentage points of precision. The cost is, of course, a lower recall, but some configurations, such as τ_s = 0.7, τ_q = 30%, still provide a recall above 55%.

G. Previous datasets. To further our understanding of the model performance, we used the dataset generated by (21). We took all guides from this dataset, removed those that appeared in our training data, and ran different configurations of our model on the remaining guides. The results are shown in Table 3.
It is important to note that for this dataset the efficient/inefficient classes are based on the log2 fold change, not on indel frequency. Some indels may not lead to a change in expression, so this is a more difficult task. For all configurations, we obtained a high precision, ranging from 94.44% to 100%. This is higher than the results obtained by Crackling, at the cost of a lower recall. On the complete dataset, Crackling was reported to outperform all the tools it was tested against (10), so these results are encouraging.
H. Impact of threshold choice.
As discussed in Section D, the threshold used to transform the indel frequency into binary classes is somewhat arbitrary. To ensure that the results described in the previous Section are not an artefact of that choice, we repeated the same evaluation with a lower and a higher threshold. Overall, the results are consistent:
• Precision is very high. Recall is lower, but for many threshold configurations it is sufficiently high.
• Extreme threshold values for either score or uncertainty make the other threshold redundant and produce a very narrow set of guides (with high precision but a very low recall).
• A score threshold close to the threshold used to define efficiency generally produces good performance, but other configurations can also work.
Table 4 shows the impact of the efficiency threshold on a range of configurations. Our model reaches a high precision for all configurations if the efficiency boundary is at an indel frequency of 0.6 or 0.7. A boundary at 0.8 is more challenging, but several configurations still reach a precision above 91%. These results confirm that the model generalises well.
I. Whole-genome performance. Finally, we evaluated the performance of our model across the entire mouse genome.
Here, we do not have a ground truth for all potential CRISPR sites.Instead, we use some of the best-performing methods from Section F, for which we know the precision, and assess whether their recall is sufficient to identify efficient guides across a large number of genes.
The results are shown in Figure 2. While some settings are too extreme (as previously discussed), the τ_s = 0.7, τ_q = 30% configuration performs very well. It identifies at least one guide for 93.50% of the genes, and our earlier evaluation placed its precision above 91%. For 80.99% of the genes, it identifies more than 3 guides. This enables multi-targeting, which has been shown to dramatically increase knockout efficiency (22). Overall, this confirms that the deep ensemble approach can be used to identify guides for most genes that are all but guaranteed to be efficient.
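A sketch of how the per-gene counting in this third level of evaluation might look (illustrative names; it assumes a precomputed guide-to-gene mapping, which is not specified in the text):

from collections import Counter

def gene_coverage(selected_guides, guide_to_gene):
    """Count selected guides per gene and report the fraction of genes
    with at least one, and with more than three, selected guides."""
    counts = Counter(guide_to_gene[g] for g in selected_guides)
    genes = set(guide_to_gene.values())
    at_least_one = sum(1 for g in genes if counts[g] >= 1) / len(genes)
    multi_target = sum(1 for g in genes if counts[g] > 3) / len(genes)
    return at_least_one, multi_target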
Discussion
J. Uncertainty can be quantified, and it can be used to improve guide selection. We proposed and tested a deep ensemble approach, and showed that it outperforms a single model using the same architecture. Crucially, our deep ensemble also provides a method to quantify the uncertainty in the score prediction; we believe it is the first method to do so in the context of CRISPR guide RNA design. This uncertainty allows us to design novel guide selection strategies that rely not only on the predicted score but also on how confident we can be in that score. We showed that these novel strategies achieve very high precision, and we confirmed our hypothesis that, while the recall is lowered, it remains high enough to identify guides for most genes. The deep ensemble generalises well, including to datasets where guide efficiency is reported using different metrics.
K. Future directions. This paper represents a first attempt at leveraging uncertainty quantification to design guide RNAs. While the results are very promising, there are a number of directions to explore.
Here, we used a Beta distribution to capture aleatoric uncertainty, but the approach can accommodate other distributions. It would also be interesting to evaluate the impact of the size of the ensemble. Methods based on the consensus between multiple approaches can produce good results (10); combining this consensus philosophy with the ability to estimate uncertainty could lead to new solutions with improved recall. As more experimental data continues to become available, the ensemble can be retrained and improved, or extended to other Cas proteins.
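As a rough illustration of how a Beta-distributed output could be attached to a network, consider the following sketch (our own assumptions about the head and loss; not the authors' exact PyTorch implementation):

import torch
import torch.nn as nn

class BetaHead(nn.Module):
    """Illustrative output head: maps features to the (alpha, beta)
    parameters of a Beta distribution over the efficiency score."""
    def __init__(self, in_features):
        super().__init__()
        self.fc = nn.Linear(in_features, 2)

    def forward(self, x):
        # softplus + 1 keeps both concentration parameters above 1,
        # which makes the Beta density unimodal.
        ab = nn.functional.softplus(self.fc(x)) + 1.0
        return torch.distributions.Beta(ab[..., 0], ab[..., 1])

def nll_loss(dist, target, eps=1e-4):
    # Clamp targets away from {0, 1}, where the Beta log-density diverges.
    t = target.clamp(eps, 1 - eps)
    return -dist.log_prob(t).mean()

An ensemble would then train several such networks independently; the spread of their predicted means provides the epistemic uncertainty, while the Beta spread captures the aleatoric part.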
Conclusions
In this paper, we investigated the use of deep ensembles to improve the prediction of the on-target efficiency of CRISPR guide RNAs. We showed that this approach can capture both the aleatoric and epistemic uncertainties in the predictions. We also showed that the ensemble provides a more accurate score and that, by combining it with the uncertainty estimates, we can design guide selection strategies with very high precision. This comes at the cost of a lower recall, but we also showed that the recall remains high enough to identify suitable guide RNAs for most genes.
See Figure 2a for a diagrammatic representation of the precise neural network architecture. Given the strong performance of the model, we implemented it in PyTorch (15) as a base model, which we modified to allow for uncertainty quantification.
Table 1. Some of the best configurations based on the 20K holdout testing set. Results are ordered by precision and duplicates are removed.
Table 2. Impact of the uncertainty threshold. All configurations use a score threshold of 0.7. Results are ordered by precision.
Table 3. Selected threshold configurations tested on the filtered Wang dataset, compared with Crackling.
Table 4. Impact of the threshold used to define efficiency.
This represents a first attempt to leverage uncertainty quantification in CRISPR guide RNA design, and opens interesting directions for future research.
"Computer Science",
"Biology"
] |
Non-relativistic and potential non-relativistic effective field theories for scalar mediators
Yukawa-type interactions between heavy Dirac fermions and a scalar field are a common ingredient in various extensions of the Standard Model. Despite that, the non-relativistic limit of the scalar Yukawa theory has not yet been studied in full generality in a rigorous and model-independent way. In this paper we intend to fill this gap by initiating a series of investigations that make use of modern effective field theory (EFT) techniques. In particular, we aim at constructing suitable non-relativistic and potential non-relativistic EFTs of Yukawa interactions (denoted as NRY and pNRY, respectively) in close analogy to the well-known and phenomenologically successful non-relativistic QCD (NRQCD) and potential non-relativistic QCD (pNRQCD). The phenomenological motivation for our study lies in the possibility of explaining the existing cosmological observations by introducing heavy fermionic dark matter particles that interact with each other by exchanging a light scalar mediator. A systematic study of this compelling scenario in the framework of non-relativistic EFTs (NREFTs) constitutes the main novelty of our approach as compared to the existing studies.
On top of being desirable from the phenomenological and observational points of view, the possibility of a richer dark sector, comprising more than one particle, is fairly common in many DM models, cf. e.g. [21][22][23]. The dark particles can enjoy their own hidden forces, which are far less constrained than the interactions between DM and Standard Model (SM) degrees of freedom. Furthermore, the existence of light mediators (i.e. with masses much smaller than that of the actual DM particles) may affect the DM dynamics in multiple ways. Most notably, whenever DM particles are slowly moving with non-relativistic velocities, light mediators can induce bound states in the dark sector in the early universe and/or in the dense environment of present-day haloes [18,[24][25][26]]. As for the above-threshold states, the effect of repeated mediator exchange manifests itself in the so-called Sommerfeld enhancement for an attractive potential [27,28]. In this context the role of a light mediator can also be played by SM particles; for sufficiently heavy DM these may even be the weak gauge bosons [24,25,[29][30][31][32]] or the Higgs boson [30,33]. This latter option is becoming increasingly relevant as null searches for new physics at the LHC push the scale of possible novel particles, including many thermally produced DM candidates, into the multi-TeV region. 1 Depending on the model at hand, one may find unstable bound states, which usually appear in symmetric DM models, as well as stable bound states (the latter are part of the present-day DM energy density). Typically, the annihilating particle-antiparticle pairs feel an attractive potential that can not only drastically change the annihilation cross section via Sommerfeld enhancement but also induce bound-state formation [18,26]. Once bound states are formed, and not effectively dissociated in the thermal plasma, they provide an additional channel for the depletion of DM particles in the early universe. The relic density determination has to be adjusted accordingly, since substantial annihilations may still occur after chemical decoupling. This typically results in (i) mapping out different combinations of DM masses and couplings that reproduce the observed DM cosmological abundance Ω_DM h² = 0.1200 ± 0.0012 [37]; (ii) a reinvestigation of DM phenomenology due to the interplay between the model parameters that fix the relic density and guide the experimental strategies. The stable bound states that often arise in asymmetric DM models affect the detection strategy and experimental searches, both indirect [38][39][40][41][42] and direct. [...] Here we adopt the language and methods of NREFTs. Apart from the NRY we also consider the pNREFT version of the scalar Yukawa theory, which we call potential non-relativistic scalar Yukawa theory (pNRY). It is worth noting that the effect of adding interactions between heavy fermions and the Higgs to the conventional pNRQCD (which naturally leads to Yukawa potentials) has been considered e.g. in [73,74] when studying tt production near threshold.
At variance with the previous works, here we are interested in the pure Yukawa theory, which lacks any interactions with gauge bosons such as photons, gluons, W or Z. Furthermore, we abstain from introducing any additional symmetries beyond what is already present in the scalar Yukawa theory. In our view, this approach allows us to investigate and highlight the essential features of non-relativistic scalar Yukawa interactions in a clear and transparent fashion, without making any assumptions on the nature of the underlying higher-energy theory. The aim of the present work is to revisit the construction of the NRY by extending the treatment of [68,69], and to explore the consequences of the resulting NREFT and pNREFT for DM phenomenology, where we are interested in describing the interactions of heavy Dirac fermions X with a much lighter scalar field φ. To the best of our knowledge, the pNRY, as a pNRQED-like theory that contains solely Yukawa interactions, is presented in this work for the first time. In both cases we explore possible hierarchies of scales and discuss the appropriate power-counting rules. In this paper we focus on the zero-temperature case, and only marginally comment on the finite-temperature generalization.
It is worth noting that the DM model under consideration has some intriguing properties that are unique to heavy fermions exchanging a scalar. First, as opposed to the vector-mediator case, the annihilation of heavy particle-antiparticle pairs at leading order in the velocity and 1/M expansion proceeds via a P-wave process. More explicitly, one finds that the matching coefficients of the 4-fermion dimension-6 operators vanish at O(α²v⁰), whereas the first non-vanishing contributions show up in the velocity-suppressed dimension-8 operators. Second, the pNRY exhibits, already at the Lagrangian level, the absence of electric-dipole transitions and the presence of monopole and quadrupole interactions between a heavy pair and the scalar mediator. In the context of pNREFTs, monopole interactions were discussed for supersymmetric Yang-Mills theories at weak coupling in [75]. Finally, in the case of vector mediators, pNREFTs have already been fully, or at least to some extent, exploited in the context of DM with and without co-annihilating partners [49,51,52,[76][77][78]].
The structure of the paper is as follows. In section 2 we briefly introduce the simplified model that we take as our high-energy (in the EFT sense) theory. Then, in section 3 we address the construction of the low-energy NREFT (denoted as NRY) for non-relativistic fermions and antifermions exchanging a scalar. Here we shall give the set of operators as an expansion in 1/M , v and coupling constants, and discuss the symmetries and power counting rules of the low-energy theory. In section 4 we apply the NRY formalism to describe DM interactions and provide the results for the matching coefficients. As far as the fermion bilinears are concerned, we shall be content with tree-level matching coefficients. The matching for 4-fermion dimension-6 and dimension-8 operators will be carried out at O(α 2 ). These operators encode the hard contribution to the annihilation cross section for the process XX → φφ. In section 5 we proceed to the derivation of the pNREFT (denoted as pNRY), whose degrees of freedom are bound states, scattering states with kinetic energy of order M v 2 and ultrasoft scalar particles. We perform the potential matching at O(M α 4 ) and then provide an application of pNRY to the derivation of the discrete spectrum and the calculation of the bound-state formation cross section. Conclusions and outlook are offered in section 6.
Dark matter model
In this section we briefly introduce the DM model under consideration and discuss the relevant degrees of freedom. We assume DM to be a Dirac fermion, singlet under the SM gauge group, that is coupled to a scalar particle with a Yukawa-type interaction. The Lagrangian density of the model reads [79,80] where X is the DM Dirac field and φ is a real scalar field. The scalar self-coupling and the Yukawa coupling between the fermion and the scalar fields are denoted as λ and g, respectively. The mass of the scalar mediator m is assumed to be much smaller than the DM particle mass M, i.e. m ≪ M. Here we adopt a simplified model realization, where the question of the fermion mass generation and of the gauge group governing the dark sector are ignored. 2 Our aim is to consider the Lagrangian given in eq. (2.1) as one of the simplest representatives of a family of minimal DM models [22,80] with a light scalar mediating interactions between DM particles. It goes without saying that such a scenario admits different realizations that can be much more involved than a single Yukawa interaction (cf. e.g. [55,59,83]).
Next, L_portal accounts for the interactions between the scalar φ and other degrees of freedom, which can be either in the dark sector (e.g. all particles lighter than φ) and/or in the SM sector. The most common realization of such a portal involves interactions with the SM Higgs boson. In general, portal interactions are needed because the light scalar particles φ are abundant in the early universe and a substantial population is still present after the freeze-out of the dark fermion. Hence, there has to be a mechanism that allows the φ particles to decay and deplete their population, so that the scalar does not come to dominate the energy density of the Universe [80,84]. The minimal model of eq. (2.1) is moderately under tension if one considers the interactions of the scalar φ with the Higgs boson, and hence with SM fermions. In particular, the interactions with quarks severely constrain the model via direct detection experiments [80,85]. However, these tensions can be removed in a number of different ways [83,86]. Since a detailed phenomenological analysis is beyond the scope of this work, we do not specify L_portal further and merely focus on the complementary terms in eq. (2.1) to derive the low-energy field theories relevant for the bound-state dynamics. This sets the stage for our NREFT and pNREFT formulations and paves the way for more thorough investigations (also with respect to DM phenomenology) in future works.
Non-relativistic Yukawa theory
In the following we discuss the procedure of constructing a tower of non-relativistic EFTs for a heavy Dirac fermion X that interacts with a light scalar field φ via a scalar Yukawa interaction. Our main motivation is to investigate the properties of XX bound states, such as spectra, production and decays, in a rigorous and model-independent way. In order to proceed systematically, it is useful to disentangle the low-energy modes relevant for bound-state formation from the high-energy modes that are naturally present in the UV-complete theory described by eq. (2.1). Nevertheless, the contributions from large energies and momenta are not simply discarded: their effects are incorporated into Wilson coefficients multiplying the operators that appear in the EFT Lagrangians. The process of determining these coefficients by comparing Green's functions of the two theories at low energies is called matching. The EFT description can be systematically improved by including higher-order operators compatible with the symmetries of the underlying theory. The effects of these operators can be quantified using EFT power-counting rules, so that at each order in the relevant expansion parameters only a finite number of operators must be taken into account. This leads to a comprehensive description of the low-energy physics that allows us to make predictions for the physical observables of interest (e.g. cross sections or decay rates) in a simple and straightforward fashion.
Obviously, we need to assume a certain hierarchy between the scales relevant for the non-relativistic bound states (see figure 1). The largest of these scales is the heavy fermion mass M. An important scale below M is the typical size of the relative momentum between the fermions in a bound state, |p| ∼ Mv, where v is the relative velocity of the particles. Notice that this scale is also related to the typical bound-state size r, where 1/r ∼ Mv (one can use the Bohr radius a₀ for Coulombic bound states for the size estimate). As our fermions are heavy and non-relativistic, we have v ≪ c. We assume that v is sufficiently small, with at least v² ≤ 0.3. In nature, v² ∼ 0.3 is found e.g. in heavy quarkonia made of a charm and an anticharm quark. The non-relativistic description is still applicable to such systems, but the velocity expansion converges rather slowly. On the other hand, for XX bound states with v² ∼ 0.1 (as in bb quarkonia), corrections of O(v²) should be sufficient for a reliable phenomenological analysis. The typical bound-state energy of an XX system scales as Mv². In the following we denote the scales M, Mv and Mv² as hard, soft and ultrasoft, respectively.
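In compact form, the assumed hierarchy can be summarized as follows (a restatement of the scales just discussed):

% Scale hierarchy assumed for the non-relativistic bound states:
\[
  M \;\gg\; Mv \sim |\mathbf{p}| \sim \frac{1}{r}
    \;\gg\; Mv^2 \sim E_{\text{bind}},
  \qquad v \ll 1 .
\]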
For simplicity, we consider the situation where the mass of the scalar m is of the same order as, or smaller than, the ultrasoft scale, m ≲ Mv². In practice, this corresponds to considering Coulombic states induced by the scalar mediator, which is the regime typically studied in the existing literature (see, however, e.g. [57,74] for numerical studies with finite m). Furthermore, should the full theory feature a scale Λ below which perturbation theory ceases to be applicable (such as Λ_QCD in strong interactions), this scale should be much smaller 3 than Mv². This ensures that the scales M and Mv can be integrated out perturbatively.
Integrating out all degrees of freedom with energies and momenta of order M and above, we obtain an EFT known as the Non-relativistic Yukawa Theory (NRY) [68,69]. The degrees of freedom of NRY are Pauli spinor fields ψ and χ, describing a particle and an antiparticle respectively, 4 as well as soft and ultrasoft scalar fields φ. The Lagrangian of NRY is a double expansion in 1/M and v. While M explicitly appears as a parameter in L_NRY, this is not the case for the velocity v. Therefore, to determine the velocity scaling of a given operator it is necessary to work out power-counting rules that assign powers of velocity to the typical operator building blocks, i.e. couplings, fields and derivatives.
The fact that the energies and momenta of the φ fields can be soft or ultrasoft leads to additional complications in the power counting. In particular, the scaling of scalar mediators involved in potential exchanges between the heavy fermions will, in general, differ from that of the on-shell φ fields in the external states. In other words, the power counting of NRY is not homogeneous, as already discussed in [68,69]. This is less of a problem for production and decay calculations, but it turns out to be rather inconvenient when looking at bound-state properties. In section 5 we show how to circumvent this problem by devising yet another EFT (pNRY) that works at energies much smaller than Mv. By construction, NRY is valid only at scales of order Mv and below. In this energy region it must reproduce the full theory, so that both theories have identical infrared (IR) behavior. Formally, the Lagrangian of NRY contains infinitely many operators suppressed by increasing powers of M; this corresponds to the statement that both theories coincide in the limit M → ∞. However, one can always employ the velocity scaling rules to determine which operators contribute at a given order in v. This is why in practice we will only need to consider a small set of relevant operators.

3 In principle, it would be sufficient to demand only Λ ≪ M, which would allow us to integrate out the scale M perturbatively. The procedure of integrating out the scale Mv without relying on the perturbative expansion in a small coupling has been discussed in [87,88]. However, to keep the present discussion as simple as possible, we assume perturbativity at least up to scales much smaller than the bound-state energy.
4 To be more precise, ψ annihilates a fermion, while χ creates an antifermion. This property is most easily seen in the operator approach, where the free-field Fourier decomposition of ψ contains only a single-particle annihilation operator â(p, s), while that of χ is proportional to the antiparticle creation operator b̂†(p, s).
Symmetries and NRY Lagrangian
A crucial property of an EFT is that it must encompass the symmetries of the underlying full theory. Therefore, to construct the Lagrangian of NRY we must write down all possible operators compatible with the symmetries present in the scalar Yukawa theory. For example, each operator must be invariant under charge conjugation, parity and time reversal. Lorentz symmetry is still present in NRY, but it is not manifest. 5 One of the implications thereof is the invariance under rotations in the 3-dimensional space. In addition to that, we will also encounter some symmetries that manifest themselves only when particles and antiparticles are treated as separate degrees of freedom and are not obvious when looking at the relativistic full theory Lagrangian.
The procedure of enumerating all operators that may appear in a given NREFT order by order in 1/M can be found e.g. in [93,94]. This problem can also be approached using the Hilbert series framework adapted to non-relativistic theories [95]. A more explicit way to obtain the fermion-bilinear piece of L_NRY is to subject the full theory Lagrangian given in eq. (2.1) to a sequence of Foldy-Wouthuysen-Tani (FWT) transformations [96,97], or to use the equations-of-motion (EOM) method as in the Heavy Quark Effective Theory (HQET) [98][99][100][101] (cf. [102] for a pedagogical introduction to EOM). Both approaches can be iterated order by order in 1/M and lead to effective Lagrangians that incorporate the relevant operators together with their tree-level matching coefficients. At this point it is important to stress that these techniques should not be employed mindlessly, for a number of reasons. First of all, it is well known (cf. e.g. [103]) that FWT and EOM by construction miss all operators that are allowed by symmetries but happen to have vanishing tree-level matching coefficients. 6 This should not come as a surprise, since both procedures essentially correspond to tree-level matching. Second, the so-obtained operator basis is not guaranteed to be the most useful one and may contain redundancies. Field redefinitions can be used either to completely eliminate some of the appearing operators or to trade them for other operators. For example, in the case of NRQCD, NRQED or NRY, one can get rid of operators with a time derivative acting on the heavy fermions by introducing suitable redefinitions of these fields. Notice that field redefinitions leave only on-shell Green's functions unchanged but alter the off-shell ones. This is why the matching between the full theory and the NRY should be performed for on-shell Green's functions. Nonetheless, as long as one keeps in mind the above facts, FWT and EOM can be regarded as a useful aid when working out a new NREFT containing heavy fermions. We demonstrate an explicit application of these tools to the scalar Yukawa theory up to O(1/M²) in appendix A.

5 A thorough discussion of the Poincaré invariance in NREFTs such as NRQCD and pNRQCD can be found in [89][90][91][92].
6 It is clear that such operators can still become relevant at higher loop orders and hence must be included in the Lagrangian.
At O(1/M²) the most general Lagrangian compatible with the symmetries of the scalar Yukawa theory can be written as where ψ (χ) is the Pauli field that annihilates (creates) a heavy fermion, while φ is the light scalar mediator. The anticommutators are defined as {a, b} = ab + ba. Furthermore, σ stands for the Pauli matrices and we have ∂_i = ∇_i. Notice that the derivatives in the bilinear fermion and antifermion sector act on all the fields (scalar and spinors) to the right. The c_i and c̄_i are the matching coefficients of the fermion and antifermion bilinears, respectively, while the d_i belong to the scalar sector. The L_4-fermions part of the Lagrangian contains 4-fermion contact interactions that describe annihilations/decays of XX pairs. 7 These operators are necessary, since a heavy-fermion annihilation process such as XX → φφ cannot be described via the fermion-bilinear part of the NRY Lagrangian: in this case the scalar fields must carry energies of O(M), yet these modes have been integrated out when constructing the NRY. This is why such processes must be described via 4-fermion interactions, where the effects of the high-energy modes are incorporated in the imaginary parts of the Wilson coefficients multiplying these operators [62,104].
The NRY Lagrangian enjoys a heavy fermion spin symmetry (HFSS) up to corrections of O(1/M²), where the first spin-flipping operator shows up. It is interesting to observe that in the case of NRQCD or HQET the heavy quark spin symmetry is broken already at O(1/M). However, since NRY has no gauge symmetry and an operator proportional to ψ†φ∇ · σψ is forbidden by parity, the spin flip may occur only through an operator involving at least two spatial derivatives. The validity of the HFSS up to O(1/M²) implies particularly small splittings in the spin-symmetry multiplets of XX bound states, which is an intriguing feature of the NRY phenomenology. Another symmetry of L_NRY that should be familiar to NREFT practitioners is the heavy fermion phase symmetry

ψ → e^{iα} ψ, χ → e^{iβ} χ, α, β ∈ R, (3.2)

which implies the separate conservation of the number of particles and antiparticles.

7 XX production can be described by vacuum expectation values of 4-fermion operators containing XX Fock states between the fermion bilinears. Such objects are therefore not included in L_4-fermions but will enter the corresponding production cross sections.
Power counting
To derive the power-counting rules of the theory we can make use of the standard arguments 8 used in NRQCD [62]. To this end it is useful to adopt a quantum mechanical perspective before second quantization, where we can interpret ψ as a wave function interacting with an external potential φ. The wave-function normalization condition, together with our previous estimate of the typical bound-state radius r ∼ 1/(Mv), readily suggests that d³x ∼ 1/(Mv)³ and therefore ψ ∼ (Mv)^{3/2}. A spatial derivative acting on ψ probes its typical 3-momentum, so that ∇ψ ∼ Mv ψ. The equation satisfied by ψ at the lowest order in the 1/M expansion reads where we have anticipated the tree-level results for the matching coefficients c₁ and c₂ (cf. eq. (4.1)).
Here gφ plays the role of the leading-order contribution to the interacting part of the quantum mechanical Hamiltonian. Using the virial theorem for bound states we can estimate that ∂₀ψ ∼ Mv² ψ and gφ ∼ Mv². The same argument applies also to the scaling of the χ field. In a similar manner, we may also pass to a picture in which the wave function φ satisfies the following Klein-Gordon-Schrödinger equation (again at lowest order in 1/M), where the last term scales as gM³v³. If we assume that the typical momentum of φ scales as Mv, then the virial theorem implies that φ ∼ gMv. Hence, g² ∼ v for a Coulombic state and φ ∼ Mv^{3/2}. Notice also that λφ³ ∼ M³v^{7/2} may seem much less suppressed than (gφ)³ ∼ M³v⁶. However, since λ does not appear in the fermion-bilinear part of the NRY Lagrangian, a diagram involving λ must also contain at least one insertion of gφ that couples directly to the fermion current. This accounts for an extra suppression of processes involving the scalar self-coupling λ.
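For convenience, the scaling rules derived above can be collected in one line:

% Velocity scaling of the NRY building blocks (Coulombic regime):
\[
  \psi,\chi \sim (Mv)^{3/2}, \quad
  \nabla\psi \sim Mv\,\psi, \quad
  \partial_0\psi \sim Mv^2\,\psi, \quad
  g\phi \sim Mv^2, \quad
  \phi \sim Mv^{3/2}, \quad
  g^2 \sim v .
\]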
Notice also that if the energy and momentum of φ scale as Mv², we find M²v⁴ φ ∼ gM³v³ and consequently g² ∼ v³, upon using gφ ∼ Mv². In this case we would actually need fewer operators to describe the same observable at a given order in v as compared to the previous counting. Yet, to be on the safe side, in the following we adopt the more conservative counting with ∇φ ∼ Mv φ. We summarize the scaling rules in figure 1.
Applications of NRY to dark matter
In this section we adapt the general discussion of section 3 to the DM phenomenology and derive the matching coefficients of the low-energy version of the model Lagrangian eq. (2.1), namely the parameters of the NRY (3.1). The effective Lagrangian comprises unknown coefficients that have to be fixed by the matching procedure. In practice, one computes on-shell Green's functions in the full theory of eq. (2.1) and in the effective theory, and demands their equality at a matching scale µ_match with m, Mv², Mv ≪ µ_match ≪ M. The relative size of the smaller scales is irrelevant here. Through the matching coefficients, which can be obtained at arbitrary loop order, the low-energy theory is also organized as an expansion in the couplings g and λ. As is common in DM models, we assume the scalar self-coupling to satisfy λ ∼ g², which facilitates the organization of the perturbative series. Furthermore, we define α ≡ g²/(4π) to organize the power counting of the low-energy theories. In this work λ will barely play any role.
A non-relativistic regime for dark particles is relevant both for annihilations during the thermal freeze-out and in present-day galactic halos. In the latter case, typical DM velocities are of order 10⁻⁴-10⁻³ in units of c, cf. e.g. [105,106]. In the former case, DM particles are kept in chemical equilibrium through interactions with the thermal bath down to temperatures T ≲ M, and gradually freeze out at T ∼ M/25. 9 Annihilations continue even during later stages, where the DM particles are still in kinetic equilibrium. In this situation most of the energy of a DM particle is sourced by its mass and, for non-relativistic species, the typical momentum is |p| = √(TM) = M√(T/M). One usually identifies an average velocity v ≈ √(T/M), which is smaller than unity in the regime of interest. For above-threshold particle-antiparticle pairs feeling a Coulomb-like potential, the regime v ∼ α signals that the potential energy E_pot ∼ α/r ∼ Mαv is of the same order as the kinetic energy Mv² [46]. For an XX pair in a bound state, the velocity estimate of the relative motion is fixed at v ∼ α. In a perturbative regime, this again gives a velocity smaller than unity. Since the temperature of the plasma is T ≪ M, the temperature scale is treated on the same footing as the other smaller scales, and affects neither the matching nor the form of the NRY (cf. discussions in [107,108] for NRQED and NRQCD at finite temperature and [109] for an explicit derivation of an NREFT for Majorana fermions in a thermal bath).
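In formulas, the thermal estimates quoted above read:

% Typical momentum and velocity of non-relativistic DM in the plasma:
\[
  |\mathbf{p}| \sim \sqrt{TM} = M\sqrt{T/M}, \qquad
  v \approx \sqrt{T/M} \ll 1 ,
\]
% while for Coulombic bound states the relative velocity is fixed at v ~ alpha.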
In summary, at energies much smaller than M, the degrees of freedom are non-relativistic Dirac fermions and antifermions, including bound states and near-threshold states, and scalars with energies and momenta much smaller than M. The NRY presented in eq. (3.1) is then a suitable field theory to describe non-relativistic DM particles and their dynamics. The first two lines of eq. (3.1) encode interactions between the non-relativistic fermion (and antifermion) and the light scalar mediator. An important difference with respect to NRQED/NRQCD is the lack of gauge symmetry. Hence, the effective Lagrangian eq. (3.1) contains no covariant derivatives, and the form of the effective operators containing scalars and fermions is not constrained accordingly. We discuss the matching of the bilinear sector in section 4.1.1. The last line of eq. (3.1) comprises 4-fermion operators, which account for DM pair annihilations in the low-energy theory. The corresponding cross section for XX → φφ is a key ingredient for the determination of the relic density governed by the freeze-out, as well as for present-day annihilations in the Milky Way. In this work, we do not consider pair annihilations induced by interactions in L_portal. We address the 4-fermion operators in section 4.1.2.
NRY Matching
We now discuss the derivation of the matching coefficients of the low-energy theory given in eq. (3.1). As already anticipated, this procedure amounts to enforcing the equality of on-shell scattering amplitudes in the full theory (2.1) with on-shell scattering amplitudes constructed with the general expressions of the NRY in terms of ψ, χ and φ and the unknown matching coefficients. The matching scale provides a UV cut-off for the low-energy theory, above which NRY is not reliable. Clearly, the fundamental theory and the NREFT have a different UV behavior, whereas the infrared (IR) properties are the very same. Only high-energy modes of order M (integrated out in NRY) contribute to the matching coefficients of eq. (3.1). In other words, when computing scattering amplitudes there can be residual IR contributions that are not included in the matching coefficients, because they appear on both sides of the matching condition.
Most of the calculations done in the course of this work (e.g. determination of the matching coefficients, derivation of the Feynman rules, manipulations of the EFT Lagrangians, etc.) were carried out not only by pen and paper but also using software tools for automatic calculations. For the latter we employed the Mathematica packages FeynArts [110], FeynRules [111] and FeynCalc [112][113][114]. The automation of non-relativistic calculations was significantly simplified by making use of FeynOnium; in addition, an interface to the QGRAF [116] diagram generator was added to the development version of the FeynHelpers [117] extension. This allowed us to generate Feynman diagrams for non-relativistic EFTs in a straightforward fashion. All new functions that were developed while working on this project should be made publicly available and properly documented in the upcoming versions of FeynCalc, FeynOnium and FeynHelpers.
Fermion bilinear and scalar sector
Let us discuss the matching coefficients of the bilinear fermion and antifermion sectors, i.e. the first two lines in eq. (3.1). This amounts to comparing scattering amplitudes with one incoming and one outgoing fermion and scalar mediators (one, two or three of the latter). The diagrammatic representation of the matching for a fermion interacting with a single scalar field is given in figure 2. In this work, we perform the matching of the NREFT Lagrangian at tree level as far as the fermion (antifermion) bilinear is concerned. For the trilinear coupling this means that it is sufficient to work at order g. However, we remark that this procedure is general and applicable to the matching at any loop order. We collect some details in appendix A, whereas here we list the results for the matching coefficients, which read For c₃, c₄, c̄₃ and c̄₄, we have considered the matching of diagrams with two and three external scalars, respectively. Consistently with the findings from the FWT and EOM methods (cf. appendix A), these matching coefficients are found to vanish at tree level. The matching coefficients c₁, c_D, c_S may receive O(g²), O(λ) corrections (not addressed in this work 10 ), whereas c₃, c₄ may start receiving non-trivial contributions at one-loop level. The coefficients of the kinetic terms c₂ and c̄₂ are fixed to unity to all orders in perturbation theory owing to reparametrization invariance.

Figure 3: Diagrammatic matching between the relativistic theory (diagrams on the l.h.s.) and the corresponding four-particle local interactions in the NREFT (diagrams on the r.h.s.). The latter correspond to the dimension-6 and dimension-8 operators of the four-fermion sector (3.1), respectively.
There is an important aspect we want to highlight. As one may read off from eq. (4.1), there is a relative sign difference between the particle and antiparticle interactions with the scalar field. At order O(1/M⁰) this is in contrast to the situation in NRQED and NRQCD, where the signs are the same. This very difference is the reason behind the appearance of monopole and quadrupole interactions in the lower-energy EFT that we derive in section 5, instead of the typical dipole interactions of pNRQED and pNRQCD.
As for the scalar sector, described in the third line of eq. (3.1), we equally perform the matching at tree level only. Our guidance here is again the power counting of the pNRY that will be given in section 5. Postponing the one-loop matching of the NRY to a future work on the subject, one simply obtains the tree-level matching coefficients to be
Four-fermion operators and annihilation cross section
As anticipated, the NRY can readily describe heavy-pair annihilations in terms of the local 4-fermion operators in eq. (3.1). The inclusive annihilation rate can be recast in terms of an amplitude that conserves the number of heavy particles by means of the optical theorem: the imaginary part of the loop amplitude with four external heavy-fermion legs is related to the cross section of the process XX → φφ [62,104], cf. figure 3. For the model at hand, it is known that the annihilation cross section is velocity suppressed [79]. This is reflected in a vanishing contribution from the velocity- (or derivative-) independent operators, which are of dimension 6. They read [62] The spectroscopy notation is borrowed from NRQED/NRQCD, so that one can classify the annihilations in terms of the total spin S of the pair, the relative orbital angular momentum L and the total angular momentum J, by writing 2S+1 L_J. We then consider dimension-8 operators, which comprise higher powers in 1/M. These are compensated by derivatives acting on the fermion (antifermion) fields, which induce velocity-suppressed contributions due to ∇ψ ∼ Mv (cf. section 3). As we shall see, they provide the leading contribution to the annihilation process XX → φφ. Of course, higher-dimensional operators (further suppressed in the velocity expansion) are allowed as well, but will not be considered in this work. The explicit structure of the dimension-8 operators in eq. (3.1) reads [62] where the operators explicitly included are where ↔∂ is the difference between the derivative acting on the spinor to the right and on the spinor to the left, namely χ† ↔∂ ψ ≡ χ†(∂ψ) − (∂χ)†ψ. The notation T^(ij) for a rank-2 tensor stands for its traceless symmetric component, T^(ij) = (T^ij + T^ji)/2 − T^kk δ^ij/3. As pointed out in [62], one may also have operators with the derivative acting on the product of the spinor fields ψ† and χ, or χ† and ψ. The matrix elements of such operators are proportional to the total momentum of the XX pair, which vanishes in the rest frame of the particle-antiparticle pair.
The detailed derivation of the matching coefficients can be found in appendix A; here we merely list the results. Both matching coefficients of the dimension-6 operators in eq. (4.3) are zero at the order we are working at. Accordingly, they do not contribute to the pair annihilations. We observe that the annihilating fermions are always in the spin-triplet configuration, with orbital angular momentum L = 1 and definite total angular momentum J = 0, 2. The allowed combinations of L, S, and hence J, are constrained by the symmetries of the fundamental Lagrangian eq. (2.1), which are also inherited by the NRY eq. (3.1). Since the scalar does not carry any spin, the conservation of parity and of the total angular momentum forbids S = 0 and imposes ∆L = 0 or even. We conclude this section by reproducing the non-relativistic annihilation cross section for the process XX → φφ. In order to compare with the results in the literature, we average over the spin polarizations of the incoming fermion and antifermion. The cross section then reads where we used the non-relativistic flux factor with the relative velocity in the center-of-mass frame, so that v_rel = |v_ψ − v_χ| = 2v. The imaginary part of the non-relativistic amplitude M_NR can be readily computed using the Lagrangian from eq. (4.4) and the matching coefficients in eq. (4.15); upon setting v̄ = v in eq. (A.52) (since we consider the scattering amplitude of ψχ → ψχ), we obtain The result agrees with the literature, cf. e.g. [55,57].
As was pointed out in [56], and as can be inferred from the benchmark point used in [57], large values of α in this model can be of particular phenomenological interest. It is within the reach of the NRY, and the subject of another work [118], to derive such higher-order corrections in the matching coefficients of the bilinear and 4-fermion sectors and to inspect their impact on, e.g., the annihilation cross section.
Potential non-relativistic Yukawa theory
In the previous section, we integrated out the hard degrees of freedom with energies of order M, as well as fermion/antifermion fluctuations of the same order. Here we want to integrate out the soft degrees of freedom with energies of the order of the relative momentum of the pair, p ∼ Mv. 11 The corresponding effective theory takes the form of a pNREFT, and we can rely on the techniques employed in the derivation of pNRQED as long as we assume that the mass of the scalar satisfies m ≪ Mv. This condition implements a Coulomb-like regime and implies the scaling v ∼ α for the velocity. Moreover, we are allowed to treat the scalar mediator as effectively massless in the matching between the NRY and the pNRY, upon relying on the scale hierarchy Mα ≫ Mα², m, irrespective of the relative size of the smaller scales. We comment on the case m ∼ Mα later in this section.
Let us come to the construction of the pNRY Lagrangian. First of all, as the two-point functions are not sensitive to the relative momentum of the pair, the fermion bilinears of the NRY from eq. (3.1) and the pNRY will look the same. That said, one has to keep in mind that only scalar fields with ultrasoft momenta are kept in the latter EFT. Conversely, diagrams with four-fermion external legs are sensitive to the relative momentum and non-trivial contributions will be generated: they are the potential terms in the pNREFT Lagrangian [121,122]. The important point to be stressed here is that the appearance of the potential terms can be seen as the effect of integrating out soft scalars, and hence the potential can be extracted by matching the NRY to the pNRY.
In order to elucidate the distinction between soft and ultrasoft scalars, and to introduce the degrees of freedom of pNRY, we project the NRY onto the particle-antiparticle sector as follows where i, j are spin indices, while the state |φ_US⟩ contains no heavy particles/antiparticles and an arbitrary number of scalars with energies much smaller than Mα. Here, ϕ_ij(t, x₁, x₂) is a wave function representing the XX system; after the projection it will eventually be promoted to a bi-local field. As a next step, one recognizes that the relative distance of the pair, r = x₁ − x₂, has a typical size of the inverse of the soft scale Mα, or the inverse Bohr radius of a Coulombic state, cf. eq. (5.9). This is a small scale compared to the typical wavelength of the ultrasoft scalars, which is of the order of the inverse of Mα². According to the projection (5.1), the scalar fields now appear at the points x₁ and x₂, and we can ensure that they are ultrasoft by expanding them about the center-of-mass coordinate R = (x₁ + x₂)/2. One has to evaluate the leading-order interaction between the particle-antiparticle pair and the ultrasoft scalar field, namely the combination g(φ(x₁) + φ(x₂)), expanded in the relative coordinate up to O(r²), as follows. As a consequence, the dipole terms, namely the ones linear in r, cancel exactly. This is a peculiar feature of the Yukawa-type theory eq. (2.1) and its low-energy version eq. (5.4), which distinguishes them from pNRQED (and pNRQCD), where dipole transitions naturally arise. In summary, we write the Lagrangian in terms of the wave-function field ϕ(r, R, t) and the ultrasoft scalars φ(R, t) as follows where the square brackets in the second line of (5.4) indicate that the spatial derivatives act on the scalar field only, which has to be understood as multipole expanded in the last line of eq. (5.4) as well. To avoid cluttering the notation we suppress the spin indices of the bi-local fields, which are contracted with each other. At variance with the NRY (3.1), each term in the pNRY has a well-defined scaling: ∂₀ ∼ Mα²; the inverse relative distance and the corresponding derivative scale as r⁻¹, ∇_r ∼ Mα; whereas the scalar field coupling and the center-of-mass derivative scale as gφ, ∇_R ∼ Mα². The potential is understood as a matching coefficient and is organized as an expansion in α(M) and λ(M), as well as in 1/M (inherited from the NRY), the coupling α(1/r) and the relative distance r (as proper of the pNRY). In the following, we parametrize it as V = V^(0) + δV. In the case m ≪ Mα the leading-order potential reads V^(0) = −α(1/r)/r, which is a Coulomb potential. It is important to specify the scale at which α is evaluated, since it helps to keep track of the matching between the full theory of eq. (2.1) and the NRY of eq. (3.1), and between the latter and the pNRY given in eq. (5.4). The corrections to the potential δV are needed to compute observables, such as the energy spectrum, at next-to-leading order.

11 A distinction between potential photons, i.e. photons with k₀ ∼ Mα² and k ∼ Mα, and soft photons, i.e. photons with k₀ ∼ k ∼ Mα, can also be considered [119,120], and this could apply to the scalar mediator as well. This distinction is not that relevant in the formulation we are following since, as done in the QED case, both potential and soft photons are integrated out at the same time when matching NRQED to pNRQED.
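For illustration, Taylor-expanding the two scalar fields around R with x₁,₂ = R ± r/2 (a standard multipole expansion; our reconstruction, consistent with the cancellation of the dipole term noted above) gives:

% Odd powers of r cancel between the two terms, so no dipole interaction:
\[
  g\left[\phi(x_1) + \phi(x_2)\right]
  = 2g\,\phi(R,t)
  + \frac{g}{4}\, r^i r^j\, \partial_i \partial_j \phi(R,t)
  + \mathcal{O}(r^4) .
\]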
Here, we aim at extracting the potential at the first non-trivial order, Mα⁴, as an application of pNRY and the corresponding power counting. We discuss the potential matching in section 5.1. As for the kinetic terms, we have included contributions up to order Mα⁴ as well.
Next, in the second line of eq. (5.4), we see the appearance of monopole and quadrupole interactions, as well as interactions involving the derivative with respect to the relative distance. Such structures are found by performing the so-called multipole expansion of the ultrasoft fields, here the scalar mediator, and terms up to order Mα⁴ are retained. The absence of dipole transitions in this model was already pointed out in [55,57] when dealing with the calculation of bound-state formation. In our approach, the absence of such terms is already manifest at the level of the Lagrangian, where the degrees of freedom and their interactions are spelled out at the energy scale of interest for bound-state calculations. These are the wave-function field of the particle-antiparticle pair and ultrasoft scalars, which interact via monopole and quadrupole interactions. The term with the derivative ∇_r in the second line of eq. (5.4) arises from the 1/M² spin-independent operator in eq. (3.1), and is of the same order as the quadrupole term, namely Mα⁴. Indeed, the contributions of spin-dependent operators are subleading in the power counting. The presence of the ultrasoft scalar-derivative coupling conforms with the findings of [55]. 12
Potential matching
In this section we address the matching between the NREFT and the pNREFT. This procedure leads to the systematic derivation of the particle-antiparticle potential, which can be understood as a matching coefficient of the pNRY. The procedure is rather general, and it has been generalized to finite temperature in [107,123] for pNRQED, as well as for pNRQCD [108]: any scale larger than Mα² may contribute to the potential. For the moment, let us assume that the scalar mass is much smaller than the soft scale Mα and can, therefore, be treated on the same footing as the ultrasoft scale Mα² in the matching. At variance with the on-shell Green's functions exploited in the matching between the relativistic theory eq. (2.1) and the NRY eq. (3.1), here we enforce the equality between off-shell four-fermion Green's functions. The reason is that this is the typical situation for particles in a bound state or near threshold. We give a diagrammatic representation of the potential matching in figure 4. The external momenta of the fermions are soft, ∼ Mα, whereas the energies are at the ultrasoft scale Mα², so that one can expand in the latter. For the sake of the matching performed here, the ultrasoft scale can be put to zero. This amounts to simplifying the scalar propagator to −i/k² and taking the fermion propagators in loop diagrams on the NREFT side of the matching as static [121,124]. Loop diagrams on the pNREFT side vanish in dimensional regularization, because they are scaleless when expanded in the ultrasoft scale.
Since the NRY is organized as an expansion in α and 1/M, we can readily understand which terms we need in the potential V up to some order Mα^n [121,122]: one has to carry out the matching by combining inverse powers of the hard scale and the couplings as (1/M)^a α^b with a + b ≤ n − 1. In this work we aim at potential corrections up to order Mα⁴. In practice, for a given diagram, one counts the powers of α and the inverse powers of M, and then multiplies by the soft scale Mα in order to obtain the dimension of an energy. One has to consider tree-level, one- and two-loop diagrams for the matching, similarly to what has been carried out in the pNRQED case [122]. As explained above, in the diagrams involving heavy fermions in loops we employ static fermion propagators, which simplifies the derivation [121,124]. We address here the tree-level diagrams only, whereas a more detailed discussion of the loop diagrams is deferred to appendix B.1. We collect the relevant tree-level diagrams in figure 5. In the upper row, the leftmost diagram provides the leading contribution to the potential, of order Mα². It is easy to see that the middle and rightmost diagrams instead give a contribution of order Mα⁴: the two inverse powers of M in the vertex are compensated by the soft scale Mα, namely α × 1/M² × (Mα)³. A quantum correction to the matching coefficients c_D and c_S would give a contribution of order Mα⁵. Next, let us discuss the diagrams in the lower row of figure 5, as examples of diagrams that are suppressed in the power counting. The leftmost diagram accounts for the insertion of the corrected scalar propagator, and its contribution scales as α × d₄/M² × (Mα)³ ∼ d₄ Mα⁴. Since the matching coefficient is d₄ = O(α, λ), i.e. it vanishes at tree level, this diagram is beyond our desired accuracy. By applying similar power-counting arguments, one sees that the 4-fermion dimension-6 operators would contribute at order f(¹S₀)Mα³ and f(³S₁)Mα³. However, as shown in section A.4, such matching coefficients vanish at O(α) and, therefore, can contribute only beyond the required accuracy. Finally, dimension-8 operators need not be considered, as they are further suppressed. We find that one- and two-loop diagrams do not contribute at order Mα⁴, but only beyond it (see the discussion in appendix B.1). The potential matching then receives contributions only from the upper diagrams in figure 5, and from the corresponding ones with the vertices c_D and c_S on the antiparticle line (not displayed). The resulting potential reads where S = diag(σ₁, σ₂)/2 is the total spin matrix, with σ₁ (σ₂) being the spin matrix of the two-component particle (antiparticle) field in the fermion bilinear, and we used c̄_D = −c_D and c̄_S = −c_S. As expected, the first term is the same as in pNRQED. This finding is consistent with the assumption m ≪ Mα, and the contribution corresponds to a Coulomb potential upon performing the Fourier transform.

12 The authors of [55] do not use EFT methods and hence provide no power-counting rules. The coupling gφ(R, t)∇²_R/M² has been considered in their non-relativistic Hamiltonian, which is however α² suppressed with respect to gφ(R, t)∇²_r/M².

Figure 4: Green's functions in the NREFT and pNREFT, where the momentum assignment is explicitly shown. The momentum transfer is k = p − q, where k is the momentum carried by the scalar mediator. The potential depends on p and k and on the spins of the particle and antiparticle.
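Explicitly, the leading term corresponds to the familiar Fourier transform (our illustration of the statement above):

% Leading momentum-space potential and its position-space form:
\[
  V^{(0)}(r) = -\int \frac{d^3k}{(2\pi)^3}\, e^{i\mathbf{k}\cdot\mathbf{r}}\,
  \frac{4\pi\alpha}{\mathbf{k}^2} = -\frac{\alpha}{r} .
\]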
The third term (up to a relative sign difference) equally appears in the pNRQED potential. Yet the second term is characteristic of the pNRY and differs from what one finds in pNRQED. This difference can be traced back to the different momentum combinations entering the non-relativistic Lagrangian, eq. (3.1), as compared to the NRQED case. Upon performing the Fourier transform (cf. e.g. [125] for a collection of useful formulas), we obtain the position-space potential of eq. (5.6), with p = −i∇_r and L = r × p.
Solving this equation yields the Coulombic energy levels E_n and the Bohr radius a₀. Let us also remark that, since eq. (5.8) describes a fermion-antifermion bound state, it features a factor of 2 in front of the −gφ term and a potential V^(0), as compared to eq. (3.4) for a single fermion. Before moving to the applications of the pNRY, one more comment is in order. It is well known that the Yukawa potential induced by a scalar mediator is universally attractive, so that not only particle-antiparticle pairs but also identical fermions can form bound states. This is very different from e.g. QED, where e⁺e⁺ or e⁻e⁻ interactions are repulsive. We have explicitly verified that the pNRY for a particle-particle (antiparticle-antiparticle) pair takes the very same form as eq. (5.4) upon replacing the antiparticle field with a second particle field (and analogously for antiparticles). However, since identical Dirac fermions cannot annihilate into scalars, the bound states XX and X̄X̄ are completely stable in the context of the scalar Yukawa theory.
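For reference, a minimal sketch of the expected form of these quantities, assuming the standard two-body Coulomb problem with potential −α/r and reduced mass M/2 (the paper's expressions may differ by conventions):

\[
E_n = -\frac{M\alpha^2}{4 n^2}\,, \qquad a_0 = \frac{2}{M\alpha}\,, \qquad n = 1, 2, \dots
\]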
Scalar mass of order M v
The Yukawa potential is usually understood as a screened potential of the form −α e^{−mr}/r. The mass of the mediator leads to a finite-range interaction with r ∼ 1/m, as opposed to the Coulomb case. Our calculation of the potential in eq. (5.5) assumed the scalar mass to be much smaller than the momentum transfer, of order Mv (we restore v instead of α in order to be generic and not necessarily in the Coulomb regime v ∼ α). Hence, we consistently neglected the mediator mass in the matching, and it did not appear in the corresponding potential in eq. (5.6).
Let us briefly discuss how the matching calculation changes when m ∼ Mv, so that the scale m cannot be neglected. The main difference resides in the scalar propagator that enters the tree-level matching: the mediator mass must now be kept, and upon Fourier transformation one recognizes the leading term of the resulting potential, eq. (5.10), to be a Yukawa screened potential. Moreover, in this case the pole of the bound-state propagator receives a finite mass shift [108,126].¹³ The corresponding one-loop diagrams are shown in figure 6, and we find the mass shift δM quoted in eq. (5.11). Notice that eq. (5.10) reduces to eq. (5.6) in the limit m → 0. On the other hand, one can also study corrections to the Coulombic regime by expanding eq. (5.10) in mr ≪ 1. Let us also remark that, in the case of a vanishing scalar mass, the loop integrals in figure 6 are scaleless and vanish accordingly in dimensional regularization.
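As a simple illustration of the last remark, expanding the leading screened term for mr ≪ 1 gives

\[
-\frac{\alpha\, e^{-m r}}{r} = -\frac{\alpha}{r} + \alpha m - \frac{\alpha m^2 r}{2} + \mathcal{O}(\alpha m^3 r^2)\,,
\]

where the constant shift of order αm is the term that, as discussed below, combines with the mass shift δM.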
Bound-state spectrum at order M α 4
As a first non-trivial application of the pNRY, we carry out the derivation of the discrete spectrum at order Mα⁴. We follow the setting of [121,122], put forward for pNRQED, and consider the two-point function of the field ϕ. The potential and kinetic contributions are simple insertions into the ϕ propagator (see figure 7, leftmost and middle diagrams). This brings us to the evaluation of quantum-mechanical expectation values projected onto bound-state wave functions. In the limit m ≪ Mα, we can approximate the state as Coulombic and compute the expectation values on such unperturbed states. Additionally, one has to consider the ultrasoft contributions to the binding energy, namely those originating from the loop corrections to the propagator of an ultrasoft scalar. Similarly to the case of pNRQED, the leading non-vanishing ultrasoft contributions arise from one-loop self-energy diagrams. We assume that the scalar mass can be as large as the binding energy Mα². Applying the power counting of the pNRY, we see that only the diagram with two monopole vertices, displayed in figure 7, contributes within our accuracy, and it scales as Mα³. The ultrasoft contribution is finite, and some details of the calculation are provided in appendix B.1. If the scalar mass is much smaller than the energy scale Mα², the scalar propagator can be expanded in m/(Mα²) and the corresponding loop integral vanishes in dimensional regularization. Accordingly, the one-loop self-energy diagram then yields no contribution to the spectrum.

13 Despite the fact that these references discuss finite-temperature calculations, the zero-temperature case follows the same pattern. The mediator mass m is integrated out together with the inverse distance between the pair, 1/r ∼ Mv. The potential and mass shifts are understood as matching coefficients of the pNREFT.
Owing to the presence of the spin-orbit coupling in the Hamiltonian, the latter does not commute with L and S separately. However, the combination L·S can be rewritten in terms of the squared operators J², L² and S², with all of which the Hamiltonian commutes; see e.g. [127]. In this case it is common to label the states with |n ℓ j⟩, since n, ℓ, and j are good quantum numbers. The corrections to the spectrum from the ultrasoft exchange δE_US, the kinetic energy δE_kin, and the potential terms δE_δV are given in eqs. (5.12)-(5.14). A comment is in order on the form of the ultrasoft contribution δE_US. One can check its appearance in a complementary way. It has already been stated that this term corresponds to the propagation of an ultrasoft scalar with momentum/energy of order Mα². Assuming its mass to be of the same order, as done here, this amounts to expanding the potential in eq. (5.10) for mr ≪ 1. The contribution at order mα from the potential expansion adds up with the one from δM in eq. (5.11), giving δE_US, whereas the contribution at order αm³ cancels against the one in δM. In the case of a massive vector boson as a force mediator, the monopole contribution from the potential completely cancels against the mass correction (see e.g. [108] for the QCD case), so that there is no analogue of δE_US at order mα.
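For completeness, the standard quantum-mechanical identity underlying this relabeling reads, on states |n ℓ s j⟩,

\[
\mathbf{L}\cdot\mathbf{S} \;=\; \tfrac{1}{2}\left[\,j(j+1) - \ell(\ell+1) - s(s+1)\,\right],
\]

with s = 1 for the spin-triplet combination of the pair.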
Bound-state formation cross section
Let us come to an application of the pNRY that establishes a connection to recent developments in DM phenomenology. As noted in the original works [18,26], the formation of unstable bound states can open another channel for DM annihilations and consequently affect the estimates of the present-day relic density. In general, bound-state formation and decay are not only relevant for the early universe, but can also provide enhanced signals in the annihilations of DM in galactic halos and affect the corresponding experimental signatures.
Here we deal with bound-state formation via a radiative transition, where an above-threshold scattering state emits a scalar particle and turns into a bound state. In terms of the model degrees of freedom one has (XX̄)_open → (XX̄)_bound + φ. This process occurs at the ultrasoft scale, so that the energy difference between the initial and the final state is of order Mα². Such interactions can be naturally accommodated in our pNRY, where the field ϕ accounts for both scattering states (with positive energies) and bound states (with negative energies). Formally, one can think of it as splitting ϕ ≡ ϕ_s + ϕ_b in the particle-antiparticle Fock space.
As was done for the hard annihilations into scalars within the NRY, here we again make use of the optical theorem. The process of interest can be computed from the self-energy of the pair in a scattering state by extracting its imaginary part. We show example diagrams in figure 8. Loop diagrams involve scales that are still dynamical in the pNRY, namely the energy scale Mα² and the mass of the scalar m, which we have assumed to be much smaller than Mα for the derivation of the pNREFT. No specific relation between their relative sizes was needed in the matching between the NRY, eq. (3.1), and the pNRY, eq. (5.4). However, at this stage it is important to clarify their relative size. In the following derivation we consider the case m ≲ Mα², so that m must be retained in the scalar propagator. The method that we describe in the following is suitable for both in-vacuum and finite-temperature calculations, provided that the thermal scales are smaller than the typical relative momentum of the pair. In this case, one can take the pNREFT Lagrangian of eq. (5.4) as a starting point [107,108,123,128] and incorporate T ≠ 0, which would enter as a dynamical scale together with the in-vacuum parameters m and Mα².¹⁴ We leave a comprehensive construction of EFTs for scalar mediators at finite temperature for future work on the subject. Let us come to the self-energy diagrams relevant for the derivation of the cross section. We show the ones that involve two monopole or two quadrupole vertices in figure 8. The calculation is done in dimensional regularization with D = 4 − 2ε. Let us start with the left diagram in figure 8, whose self-energy involves E_φ = √(k² + m²), the energy of the scalar mediator. In order for the self-energy to acquire its full meaning, it has to be projected onto the external scattering states, labeled by the relative-momentum quantum number p = M v_rel/2, so that P₀ = E_p = p²/M. Next, we also insert a complete set of bound states, so that the internal propagator in the loop diagram describes indeed the propagation of the discrete states of the spectrum. One can then easily extract the imaginary part of the self-energy using the Cutkosky cutting rules at zero temperature [129], which impose the kinematic condition 0 < k⁰ < ΔE_pn. The resulting inclusive cross section for producing all possible bound states, eq. (5.18), has the correct dimension of inverse energy squared.¹⁵ However, one can readily see that the cross section in eq. (5.18) vanishes because of the orthogonality between the scattering and bound-state wave functions Ψ_p(r) and Ψ_n(r) that appear in the expectation value ⟨p|n⟩ = ∫ d³r Ψ*_p(r) Ψ_n(r) = 0. For the same reason, mixed monopole-quadrupole diagrams give no contribution to the total cross section. These findings nicely agree with the results of [55,57], where the authors consider interactions of Dirac-fermion DM with a scalar mediator. The same pattern is observed also in the case of DM being a non-relativistic scalar particle coupled to a scalar mediator [56,130].
14 The situation is of course different if the temperature and other thermal scales, for example thermal masses, are of comparable size or larger than Mα. Then the pNREFT has to be derived accordingly, and it would be qualitatively different from eq. (5.4). See [107,123] for QED and [108,128] for QCD.
15 One can simply see this by recalling the energy dimensions of the bound- and scattering-state kets.

Let us now consider the quadrupole contributions induced by the right diagram in figure 8. In the corresponding self-energy, eq. (5.19), one may notice the appearance of powers of the scalar three-momentum, induced by the action of the derivative operator ∇_R on the scalar propagator. This diagram can be evaluated in the same fashion as the monopole contribution. The resulting cross section, eq. (5.20), also has the correct mass dimension and features non-trivial expectation values. The form of the prefactor highlights the effect of the mediator mass being of the same order as the ultrasoft scale: this setting obviously features a strong suppression of the formation rate. On the other hand, in the case m ≪ Mα² one can simply expand the result in eq. (5.20) accordingly. The total cross section also receives contributions from the spin-independent relativistic-correction operator −gφ∇²_r/M² in eq. (5.4). This amounts to additional diagrams involving two insertions of this operator, as well as diagrams with one such insertion and a quadrupole vertex, whereas the insertion of a monopole vertex yields a vanishing amplitude. Putting everything together, our final result for the total cross section is given in eq. (5.21). When considering the ground-state formation cross section, namely the state |n⟩ = |100⟩, and neglecting the mediator mass, we obtain the closed-form result of eq. (5.22) using Coulomb bound-state and scattering wave functions. Our result in eq. (5.22) agrees with the earlier findings in the literature [55,57] in the limit ζ ≡ α/v_rel ≫ 1. The first and the second term in eq. (5.22) stem from the ℓ = 0 and ℓ = 2 partial waves in the scattering wave function, respectively.¹⁶ Their relative size as compared to the total cross section can be inferred from figure 9, where they are depicted as dashed orange and dot-dashed red curves. The brown curves correspond to the total cross section: the solid line is our result, eq. (5.22), whereas the dotted one is the large-ζ limit [55,57].
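To make the kinematic condition concrete, here is a small numerical sketch. It assumes the Coulombic binding energies E_n = −Mα²/(4n²) of our hedged reconstruction above and E_p = p²/M from the text; names and units are illustrative only.

import math

def emitted_scalar_momentum(M, alpha, p, n, m):
    """Momentum |k| of the scalar emitted in (X Xbar)_open -> (X Xbar)_bound + phi.

    Energy conservation gives k0 = Delta E_{pn} = E_p - E_n; the on-shell
    condition k0 = sqrt(|k|^2 + m^2) then fixes |k|. Returns None if the
    emission is kinematically closed (k0 < m). Natural units throughout.
    """
    E_p = p**2 / M                    # scattering-state energy above threshold
    E_n = -M * alpha**2 / (4 * n**2)  # Coulombic binding energy (assumed form)
    k0 = E_p - E_n                    # energy carried away by the scalar
    if k0 < m:
        return None                   # no phase space: mediator too heavy
    return math.sqrt(k0**2 - m**2)

# Example: ground-state (n = 1) formation, M = 100, alpha = 0.1, massless scalar.
print(emitted_scalar_momentum(M=100.0, alpha=0.1, p=1.0, n=1, m=0.0))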
The recasting of the bound-state formation cross section in the language of the pNRY offers a clear organization in terms of quantum-mechanical expectation values. In particular, as for the ground-state formation, the ℓ = 2 contribution only comes from the ⟨p|r_i r_j|n⟩ matrix element (it develops a non-trivial angular dependence), whereas all four matrix elements in eq. (5.21) contribute to the ℓ = 0 term.

16 As done in ref. [57], by choosing the coordinate system in such a way that p points into the z-direction, the scattering wave function can be expanded into partial waves as Ψ_p(r) = Σ_{ℓ=0}^∞ Ψ_p^ℓ(r), with Ψ_p^ℓ(r) = ⟨r|p, ℓ⟩.
Conclusions
Self-interacting dark matter is most welcome in the attempt to reproduce the observed galactic structures, and it appears to work better than collisionless dark matter. Typically, self-interactions between non-relativistic dark-matter particles are induced by the exchange of a light mediator. In addition to the desired velocity-dependent interactions that accommodate the halos of different-sized objects, it may well be that such self-interactions induce DM bound states. Most notably, depending on the model at hand in terms of its field content, masses and couplings, the impact of bound-state formation can play a rather important role in the determination of the present-day DM energy density. This may lead to sizable changes in the parameter space compatible with the cosmological abundance, making it necessary to revisit the relevant experimental bounds. In this work we studied a model that represents a family of minimal DM models, where a light scalar mediator induces self-interactions between Dirac-fermion DM via a Yukawa-type interaction. Making use of the assumed hierarchy of well-separated dynamical scales M ≫ Mv ≫ Mv², we employed EFT techniques to study the resulting bound-state dynamics. In particular, we carried out a rigorous derivation of the NRY and pNRY for a scalar force carrier, in the spirit of NRQED and pNRQED and their QCD counterparts. These EFTs are known to be very useful and successful tools for investigating and calculating observables relevant to bound and near-threshold states. As for the NRY, we extended and generalized the formulation already available in the literature [68,69], whereas, to the best of our knowledge, the explicit construction of the pNRY was carried out in this paper for the first time.
We started with the derivation of the NRY from first principles, where we identified the relevant degrees of freedom (non-relativistic Pauli fields and a scalar) and worked out the power counting. We explicitly included 1/M²-operators in the bilinear sector and 1/M⁴-operators in the four-fermion sector, which allowed us to reproduce the first non-trivial contribution to the annihilation cross section XX̄ → φφ at leading order in the fermion-scalar coupling. In the bilinear sector the matching was performed at tree level.
Then, we resolved the power-counting ambiguity of the NRY, in which the soft and ultrasoft scales are still intertwined, by constructing the corresponding pNRY. The degrees of freedom of this low-energy theory were found to be particle-antiparticle pairs (represented by a bilocal field) interacting with ultrasoft scalars. The scalars were enforced to be ultrasoft by performing a multipole expansion in the relative coordinate r. In this way, the presence of characteristic monopole and quadrupole interactions at the level of the pNRY Lagrangian was made manifest. The same is true for the spin-independent relativistic correction with derivatives acting on the heavy-pair field. Dipole interactions, which typically appear in pNRQED, turned out to be absent in the pNRY.
We explicitly computed the DM fermion-antifermion potential, which naturally arises at the level of the pNRY Lagrangian as a matching coefficient. This paves the way for a systematic inclusion of quantum and relativistic corrections in future works on the topic. In the Coulombic regime of the pNRY, m ≪ Mα, the scalar-induced potential turned out to share some similarities with the one in pNRQED. However, we also found that the spin-orbit term comes with an overall opposite sign as compared to the pNRQED case, while the contribution induced by an operator proportional to r · (−i∇_r) has no correspondence in the electromagnetic potential. We also performed the potential matching for the setting where the scalar mass is of order Mv, thus recovering a Yukawa screened potential at leading order. Furthermore, we explained that the pNRY can describe not only particle-antiparticle interactions but also bound states formed by identical Dirac fermions, such as XX and X̄X̄.
As a first application of the pNRY, we computed the bound-state spectrum at next-to-leading order, namely O(Mα⁴), which constitutes a new result presented in this work. In particular, we stressed the advantages of using a pNREFT for such bound-state calculations as compared to non-EFT approaches. Our calculation was done in the Coulombic approximation, namely m ≪ Mα, which still allows the mediator mass to be as large as the ultrasoft scale Mα². The ultrasoft contribution to the spectrum, in the case of the scalar mass being not much smaller than the binding energy, was found to provide the leading contribution, of order Mα³.
A further application of the pNRY presented in this paper is the derivation of the bound-state formation cross section, obtained by taking the imaginary part of the heavy-pair field self-energy. In particular, the contributions from monopole-induced diagrams turned out to vanish due to the orthogonality of the wave functions of the discrete and continuous spectrum, in full agreement with previous findings in the literature. On the other hand, we identified the leading contribution to the cross section to be induced by quadrupole interactions and relativistic corrections. The final expression was written in terms of quantum-mechanical expectation values that naturally arise in pNREFT calculations. We also performed an explicit analytic evaluation of these quantities in the Coulombic regime, agreeing with the earlier findings [55,57] in the limit ζ ≫ 1. The fact that we were able to obtain the previously unknown full analytic result for arbitrary values of ζ using the pNRY can be regarded as another highlight of this work.
To conclude, we would also like to provide a brief overview of future research directions in this field in conjunction with the NRY and pNRY. The minimal model addressed in our work can be varied in different ways, such as Majorana rather than Dirac DM, or a more general interaction with a pseudo-scalar force carrier and cubic self-interactions of the scalar/pseudo-scalar mediator. These equally compelling realizations can be handled within the EFT approach presented here. Moreover, an accurate derivation of the relic density requires the calculation of various processes (e.g. bound-state formation, dissociation and Sommerfeld enhancement) at finite temperature. We believe that the EFTs presented in this work can be regarded as a starting point for such finite-temperature calculations, as was the case with the corresponding generalizations of NRQED/NRQCD and pNRQED/pNRQCD. Especially in the heavy-ion phenomenology related to heavy quarkonia, pNRQCD has proven to be an extremely useful tool to scrutinize different hierarchies between in-vacuum and thermal scales, and to calculate relevant observables in a controlled and systematic way. It goes without saying that the presence of thermodynamical scales can significantly modify the relevant cross sections also in DM phenomenology. Therefore, the derivation of the finite-temperature versions of the NRY/pNRY constitutes a worthwhile and phenomenologically relevant task that we hope to address in subsequent publications.
Acknowledgments
The work of S.B. is supported by the Swiss National Science Foundation under the Ambizione grant PZ00P2 185783. V. S. acknowledges the support from the DFG under grant 396021762 -TRR 257 "Particle Physics Phenomenology after the Higgs Discovery." The authors are grateful to Jacopo Ghiglieri for reading the manuscript and providing useful comments, and to Miguel Escobedo for stimulating discussions at early stages of our work. They would also like to thank Matthias Steinhauser for making them aware of [73,74] and Joan Soto for [88].
A Matching coefficients of NRY
In this appendix, we provide a detailed derivation of the matching coefficients that enter the NRY Lagrangian, eq. (3.1). As for the bilinear fermion (antifermion) sector, we work at leading order and discuss the derivation in section A.1. Then, in section A.2 we derive the NRY by using the equations-of-motion method, which allows us to (i) perform a non-trivial check of the so-obtained matching coefficients at tree level, and (ii) write the NRY in a covariant fashion, implement reparametrization invariance (here at order 1/M), and consequently fix c₂ = 1 at all orders. In section A.3 we present an alternative derivation based on the Foldy-Wouthuysen-Tani transformation. Finally, in section A.4 we provide the derivation of the matching coefficients of the dimension-6 and dimension-8 operators.
A.1 Matching of the fermion bilinear with scattering amplitude
The derivation of the matching coefficients for the fermion bilinear at order 1/M² can be conducted in the following way. We write down the scattering amplitude for the process ψ(p) → ψ(q) + φ(k) in the fundamental theory, eq. (2.1), and expand the resulting expression in powers of p/M and q/M. To this aim, we need to rewrite the Dirac spinors in terms of two-component Pauli spinors using the non-relativistic normalization of eq. (A.1) [62], with E_p = √(p² + M²). Furthermore, we take the γ matrices in the Dirac basis and decompose them into Pauli matrices. By momentum conservation at the vertex we have k = p − q, which is the momentum carried by the scalar. The results for the diagram in figure 2 (left) are then given by eq. (A.2) for the particle interaction with φ and by eq. (A.3) for the antiparticle. Upon identifying the 2-spinors ξ and η with the ψ and χ fields respectively, we can compare the so-obtained expressions to the amplitudes induced by the NRY, eq. (3.1), and read off the values of the matching coefficients listed in eq. (4.1). It is interesting to remark that the Pauli structures appearing in eqs. (A.2) and (A.3) differ by an overall minus sign, which is different from the situation one finds in NRQED (no sign difference). This can be traced back to the vector structure of the electromagnetic interaction, which features an additional γ matrix in the fermionic current, so that the couplings of the particle and the antiparticle to the temporal component of the photon field A⁰(k) have the same sign.
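A minimal sketch of the kind of spinor decomposition meant here (a standard non-relativistic normalization; the paper's eq. (A.1) may differ by overall factors):

\[
u(p) = \sqrt{\frac{E_p + M}{2E_p}}
\begin{pmatrix} \xi \\[4pt] \dfrac{\boldsymbol{\sigma}\cdot\mathbf{p}}{E_p + M}\,\xi \end{pmatrix},
\qquad
v(p) = \sqrt{\frac{E_p + M}{2E_p}}
\begin{pmatrix} \dfrac{\boldsymbol{\sigma}\cdot\mathbf{p}}{E_p + M}\,\eta \\[4pt] \eta \end{pmatrix},
\]

with E_p = √(p² + M²); expanding these in p/M generates the Pauli structures of eqs. (A.2) and (A.3).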
A.2 Equations-of-motion method
This method exploits the equations of motion of the high- and low-energy excitations of the relativistic field X [98-102] to derive the corresponding non-relativistic EFT. The following derivation closely follows [102], where the same exercise is carried out for QCD. In practice, one starts from the full relativistic Lagrangian, of which we are interested solely in the fermion-bilinear piece. We then decompose the relativistic four-component spinor X into fields h_v and H_v by means of the velocity-dependent projectors (1 ± v̸)/2, with v̸ ≡ v_μ γ^μ. In the rest frame of the pair, where v^μ = (1, 0), these operators reduce to (1 ± γ⁰)/2 and project onto the particle and antiparticle components of the Dirac field X.
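For orientation, the standard HQET-style decomposition that fits the surrounding text is (a sketch; the phase convention is an assumption)

\[
X(x) = e^{-iM v\cdot x}\left[h_v(x) + H_v(x)\right],
\qquad
h_v = e^{iM v\cdot x}\,\frac{1+\slashed{v}}{2}\,X,
\qquad
H_v = e^{iM v\cdot x}\,\frac{1-\slashed{v}}{2}\,X\,.
\]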
Making use of the properties of the velocity projectors, we arrive at the Lagrangian of eq. (A.6), from which it is clear that H_v comprises the large energy modes of order M that we want to integrate out. To this aim, it is useful to introduce the transverse derivative ∂^μ_⊥ = ∂^μ − (v·∂)v^μ and to write down the equation of motion for the field H_v, eq. (A.7). Substituting the expression for H_v as given in (A.7) into (A.6), we find eq. (A.8), which is still exact. Now we can directly expand L_heavy in 1/M; up to order 1/M², the Lagrangian is given in eq. (A.9). In order to eliminate all terms containing (v·∂)h_v beyond O(1/M⁰), we need to introduce a suitable field redefinition [124], eq. (A.10). Let us stress that the Lagrangian in eq. (A.11) may describe not only non-relativistic systems made of heavy Dirac fermions of the same mass, but also bound states formed out of a heavy and a light fermion, which might be another interesting DM scenario worth exploring in more detail using our EFT framework. This statement is completely analogous to the well-known fact [124] that the HQET Lagrangian is equally suitable for studying the properties of heavy-light mesons and heavy quarkonia: both theories share the same Lagrangian but differ in their power counting.
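As a hedged sketch of what such an expansion looks like for a Yukawa coupling, one may replace M → M + gφ in the standard HQET reduction; schematically (the precise coefficients, signs and operator ordering are fixed by the matching and may differ in the paper's eqs. (A.9) and (A.11)),

\[
\mathcal{L}_{\rm heavy} \simeq \bar h_v\left(i v\cdot\partial - g\phi\right)h_v
+ c_2\,\bar h_v\,\frac{(i\partial_\perp)^2}{2M}\,h_v
+ \mathcal{O}\!\left(\frac{g\phi\,(i\partial_\perp)^2}{M^2}\right),
\]

with c₂ = 1 as enforced by reparametrization invariance.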
To complete our derivation for the case of the NRY, we switch to the rest frame with v^μ = (1, 0), employ the relation σ^i σ^j = δ^{ij} + iε^{ijk} σ^k, and identify h_v with the particle component ψ of the X field. We thus obtain a Lagrangian that agrees with the particle sector of eq. (3.1).
A.3 Foldy-Wouthuysen-Tani method
The main idea behind the Foldy-Wouthuysen-Tani (FWT) method [96,97] is to introduce a sequence of unitary transformations that decouple the upper and lower components of the Dirac spinor order by order in 1/M. Consequently, in the non-relativistic limit the Dirac equation splits into two separate equations for Pauli fields describing particles and antiparticles, respectively. The procedure of applying FWT transformations to QED can be found in various QFT textbooks (cf. e.g. [131-133], which we partially follow here) and is often taught in advanced quantum mechanics courses. Therefore, we do not claim any originality for most of the material presented below. Once the technicalities behind the QED case are understood, it is a simple exercise to repeat the same procedure for the scalar Yukawa theory. The results for the pseudoscalar case can be found in [72]. First of all, let us introduce the concept of even and odd operators. Even operators are those that do not interchange the upper and lower components of the Dirac spinor X, so that particles and antiparticles remain decoupled. Odd operators, on the contrary, are responsible for the mixing between particles and antiparticles. Schematically, we can write the Hamiltonian as Ĥ = Ê + Ô, where Ê is an even and Ô an odd operator. In the context of the Dirac Hamiltonian we have α^i = γ⁰γ^i and β = γ⁰, where the former is odd, while the latter is even. The Dirac spinor field X satisfies

i∂_t X = Ĥ X ,    (A.14)

which yields the familiar Dirac equation in the case of a non-interacting Hamiltonian.
For the sake of clarity, let us first discuss the generic case, without making explicit reference to a particular theory. Our starting point for applying the FWT procedure is the unitary transformation

X → X′ = Û X ,    (A.15)

with Û = e^{iŜ}, where Ŝ is some operator. Then the time evolution of the transformed field becomes

i∂_t X′ = e^{iŜ} (Ĥ − i∂̃_t) e^{−iŜ} X′ .    (A.16)

Here ∂̃_t means that the partial derivative acts only on e^{−iŜ} but not on X′. Therefore, the transformed field satisfies i∂_t X′ = Ĥ′ X′ with Ĥ′ = e^{iŜ}(Ĥ − i∂̃_t)e^{−iŜ}, where we used that ∂̃_t is non-vanishing only when it acts on a time-dependent function.
In the case of an interacting theory (e.g. QED) it is usually not possible to choose an Ŝ such that the upper and lower components of X decouple to all orders in the 1/M expansion. Instead, one proceeds by starting with an ansatz that removes all odd terms at O(1/M⁰), and then calculates Ĥ′ to the desired order in 1/M, say 1/M².
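For concreteness, the textbook ansatz alluded to here, for a Hamiltonian Ĥ = βM + Ê + Ô, is (a sketch, not necessarily the paper's conventions)

\[
\hat S = -\frac{i\,\beta\,\hat O}{2M}
\quad\Longrightarrow\quad
\hat H' = \beta M + \hat E + \frac{\beta\,\hat O^2}{2M} + \mathcal{O}\!\left(\frac{1}{M^2}\right),
\]

which removes the leading odd operator; iterating the transformation pushes the remaining odd terms to higher orders in 1/M.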
A.4 Matching of the dimension-6 and dimension-8 operators
Here we closely follow the tree-level matching between QCD and NRQCD in the 4-fermion sector described in [62]. We derive the contribution to the amplitude XX̄ → XX̄ in the center-of-mass reference frame. Hence, we take the incoming X and X̄ to have momenta p and −p, whereas the outgoing X and X̄ have momenta p′ and −p′ respectively. Due to energy conservation and the equal masses of the DM states, one has |p| = |p′| ≡ p.¹⁷ The form of the non-relativistic Dirac spinors has already been given in eq. (A.1). The matching is performed by enforcing the equality of on-shell four-fermion Green's functions in the full theory, eq. (2.1), and in the NRY, eq. (3.1). For completeness, let us briefly describe the matching of the four-fermion operators at order α. The relevant tree-level diagrams are shown in figure 10. It is clear that no imaginary part can arise at this order. Moreover, the diagram on the left can be precisely reproduced in the NRY, because the scalar can carry a soft momentum (it is indeed the diagram appearing in the potential matching in figure 5). Only the diagram on the right contributes to the matching at this order, and it provides a contribution to the real part of the matching coefficients. We find the only non-vanishing coefficient at order α to be f(³P₀) = 3πα, while the matching coefficients of all dimension-6 operators vanish. Going to order α², we find two one-loop diagrams that contribute to the process XX̄ → φφ, shown in figure 3. We are interested in their imaginary parts, which can be extracted by using the standard cutting rules, namely putting the internal scalar fields on shell. Expanding up to second order in the velocities v = p/E and v′ = p′/E, and writing the final result as in [62], we find the annihilation contribution to the scattering amplitude, where the subscript t + u indicates the sum of the t- and u-channel diagrams of figure 3. We remark that there is no term of order v⁰, implying that in the Yukawa theory, eq. (2.1), annihilations are velocity suppressed and start at order v². The matching coefficients of the dimension-6 operators are zero also at order α² (as expected from symmetry arguments); on the contrary, the dimension-8 four-fermion operators acquire non-vanishing (imaginary) matching coefficients at this order.

B.1 Loop diagrams in the potential matching

In this section we would like to discuss the one- and two-loop diagrams that need to be analyzed for the potential matching of the pNRY. The systematic analysis is partly based on the pNRQED matching in the Feynman gauge [122], where the temporal component of the photon field has to be considered in loop diagrams (at variance with the Coulomb gauge). Then we have to consider (i) one-loop diagrams as given in figure 11 (possible contribution at O(Mα³)); (ii) the same diagrams with a kinetic insertion p²/(2M) in one of the fermion lines at a time (possible contribution at O(Mα⁴)); (iii) again the same diagrams with external energy insertions arising from the expansion of the propagators around zero external energy (possible contribution at O(Mα⁴)); (iv) two-loop diagrams involving scalar propagators without kinetic/external-energy insertions (possible contribution at O(Mα⁴)). We have checked explicitly that the same arguments put forward for the QED case hold here. The sum of the two diagrams in figure 11 indeed vanishes. Then, the same diagrams with a kinetic or an external energy insertion vanish individually, due to the odd number of static propagators involved. The last set (iv) equally vanishes, since those diagrams are an iteration of the one-loop diagrams (i), as shown in [134].
In addition to the previous class of diagrams, we have to consider possible contributions arising from other topologies, namely those induced by the interactions between fermions (antifermions) and two or three scalars. Before discussing the diagrams in some detail, let us recall that the coefficients c₃ (c̄₃) and c₄ (c̄₄) vanish at tree level, so that c₃, c₄ = O(α), O(λ) at least. Actually, as far as c₃ (c̄₃) is concerned, one finds that the matching coefficient goes like O(α²), because the tree-level topology is reproduced in the NRY, and then there is no contribution to c₃ (c̄₃) at order α. The one-loop diagrams involving the vertices with two scalar fields are collected in figure 12 (upper row). They all contribute at order Mα⁵ or higher. Finally, two example diagrams with a three-scalar-field vertex are given in figure 12 (lower row). By applying the power counting, one sees that they all go beyond the accuracy of this work, contributing at order Mα^{9/2} or beyond. All other diagrams involving c₄ and c̄₃ are further suppressed.
B.2 Master integrals
Here we provide explicit analytic results for some of the one-loop integrals that we encountered in the course of the calculations done in this work.
A Holographic Bound on Cosmic Magnetic Fields
Magnetic fields large enough to be observable are ubiquitous in astrophysics, even at extremely large length scales. This has led to the suggestion that such fields are seeded at very early (inflationary) times, and subsequently amplified by various processes involving, for example, dynamo effects. Many such mechanisms give rise to extremely large magnetic fields at the end of inflationary reheating, and therefore also during the quark-gluon plasma epoch of the early universe. Such plasmas have a well-known holographic description in terms of a thermal asymptotically AdS black hole. We show that holography imposes an upper bound on the intensity of magnetic fields (≈ 3.6 × 10^18 gauss at the hadronization temperature) in these circumstances; this is above, but not far above, the values expected in some models of cosmic magnetogenesis.
The Importance of Magnetic Fields in Cosmology
One of the most pressing issues in astrophysics is the question of the origin of large-scale magnetic fields [1,2,3]. These fields have been observed, using radiation at gamma-ray and other wavelengths, in a great variety of locales, including in intergalactic space [4]. This seems to find its most natural interpretation in terms of the hypothesis that magnetic fields are "seeded" by quantum fluctuations during inflation: that is, that the observed fields are cosmic in origin. A fully satisfactory theory of cosmic magnetism remains, however, to be completed.
The importance of settling this question can hardly be over-stated. To take but two examples: the existence of cosmic magnetic fields may have a profound effect on the interpretation of recent claims that primordial gravitational B-modes have been observed in the cosmic microwave background [5]; and such fields may help to explain the reionization of the intergalactic medium [6].
There are however some serious difficulties facing this idea, of which the following is perhaps the most severe. Because Maxwell's equations in four dimensions are conformally invariant, and because all FRW spacetimes are conformally flat, one can construct an extremely general and robust argument to the effect that magnetic fields must decay adiabatically, that is, quite rapidly, with cosmic expansion. This makes it very difficult to obtain magnetic fields, at the present time, of the observed magnitude.¹
One ingenious attempt [7,8,9] to circumvent this "conformal" argument exploits the fact that marginally open FRW spacetimes are conformally flat in a slightly different sense to the spatially flat case. It is argued that, as a consequence, fluctuations of the magnetic field on scales larger than the spatial curvature scale ("supercurvature modes") may lead to an anomalously slow decay ("superadiabatic amplification") of the field. This slow decay might indeed be sufficient to solve the problems discussed earlier. If this were correct, then the existence of cosmic magnetism might be interpreted as direct evidence that the spatial sections of the Universe are negatively curved, a remarkable conclusion indeed.
Unfortunately, there are several serious objections to this claim: it seems that it may not be possible actually to excite supercurvature modes [10], and that, even if it is possible, such modes do not in practice give rise to a significant amount of superadiabatic amplification of magnetic fields [11].
More conservative explanations, that is, ones that accept the conventional evolution of the field, have been proposed: for example, a small magnetic field surviving through the inflationary era might be enormously amplified by dynamo-like effects during reheating -though a complete theory of such a dynamo remains to be constructed. (See the discussion around Figure 16 in [2].) In short, the claim here is that magnetic fields are observable at the present time not because they decay in some anomalous way, but simply because they were so large at the end of reheating. This hypothesis will be investigated in this work.
We begin with the observation that this approach entails the existence of enormous magnetic fields during the plasma epoch of cosmology. One might wonder whether such extreme fields² can really be sustained. Indeed, problems have been pointed out [12] with specific models, but what we seek here is a general upper bound on magnetic fields in such plasmas - general in the sense of being derived from some basic physical idea. We will argue here that some such bound must exist; the argument is based on holography.
The basic observation here is that the plasma in question is a quark-gluon plasma. Such plasmas have a well-developed holographic description [13,14,15,16,17] in terms of a dual thermal black hole spacetime. Magnetic fields in the plasma correspond, in a way familiar from applications of holography to condensed matter physics, to a magnetic charge (per unit horizon area) on the black hole. Large values of that charge will give rise to a major deformation of the bulk spacetime, which in turn may have a strong effect on objects, such as branes, which inhabit the bulk and are sensitive to its geometry. It is not clear that such effects will always be benign, and we shall see that they are not. In this way, holography gives us a way of constraining the intensity of magnetic fields during the plasma epoch.
To be specific, we find that, if the magnetic field B is sufficiently strong relative to the (squared) temperature, then the bulk black hole itself begins to generate branes, so that a static black hole picture is no longer consistent. The critical field strength is 2π^{3/2} T²; this turns out to be roughly an order of magnitude larger than the value required to generate the observed intergalactic magnetic fields. In view of the uncertainties attending holography generally, this can be interpreted to mean that the magnetic fields during the plasma epoch do satisfy the holographic bound, but only by a slim margin. One might speculate that holography somehow sets the scale of cosmic magnetism.
We begin by constructing a very simple holographic model of cosmic magnetic fields.
Holography of Magnetic Fields in FRW Spacetimes
Four-dimensional FRW spacetimes have an extremely restricted and simple geometry, which can be described in a variety of ways. Two of these ways are important here. First, all FRW spacetimes are locally conformally flat; in the case where the spatial sections are flat, they are globally conformally flat (that is, the entire spacetime can be mapped to a flat spacetime by a single global conformal transformation; these are the FRW spacetimes of most interest, and we confine attention to them henceforth).
Secondly, all four-dimensional FRW spacetimes have the following property: at each point p in three-dimensional space, each two-dimensional plane in the tangent space at p can be mapped by a local isometry to any other such plane. This is just a way of formulating the condition that the space should be isotropic around every point (since, in three dimensions, there is a one-to-one correspondence between planes and their normal directions); but this way of stating the property is actually the more fundamental one. To see this, note that this condition, rather than isotropy per se, is the one that allows us to deduce that the three-dimensional spacelike slices are spaces of constant curvature, this being a distinguishing feature of FRW cosmologies. This just means that planes in different locations can also be mapped to each other isometrically.³ In other words, FRW spacetimes can be thought of as the spacetimes in which spatial planes behave as simply as possible: if one understands the physics associated with any one plane, then one understands the physics of all of them.
This point of view is particularly appropriate for discussing magnetic fields, because it is natural to associate a magnetic field with the corresponding flux through a planar surface. In fact, it will be useful to take the point of view that the flux is fundamental, and the magnetic field is just a quantity deduced from the flux per unit area through some plane. This is appropriate from a holographic point of view because the flux (leaving aside superadiabatic amplification, which cannot occur in the spatially flat case) is actually conformally invariant. To see this, notice that by solving the magnetic wave equation for the modes corresponding to a cosmic magnetic field B, one concludes [1] that B must decrease according to a(t) −2 , where a(t) is the usual FRW scale factor; but then the flux through a plane S remains constant, because the area of S increases according to a(t) 2 .
All this suggests that we set up our holographic model of cosmic magnetism as follows. We take a FRW spacetime with flat spatial sections, containing a plasma and a magnetic flux associated with a fixed but arbitrary plane S. A conformal transformation takes us to a flat spacetime, in which S retains its geometry but no longer evolves with time.
Adjoining the time direction, we can use S to define a three-dimensional flat spacetime permeated by a magnetic field. We then study the holographic dual of this spacetime. The magnetic field and temperature in this spacetime are constant, but the time dependence can be restored when convenient by reverting the conformal transformation. In more detail: as we have just seen, the adiabatic dilution of B, to which we are adhering here, means that it decays according to a(t) −2 ; on the other hand, the temperature of the plasma will decline according to a(t) −1 . Therefore the ratio B/T 2 is a conformal invariant in FRW geometry, and it can be evaluated in the conformally transformed boundary geometry; therefore it can be studied through the latter's holographic dual. This is our plan for using holography to study cosmic magnetism.
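A trivial numerical check of this conformal invariance, assuming only the adiabatic scalings B ∝ a⁻² and T ∝ a⁻¹ stated above (values are illustrative):

# Check that B/T^2 is constant under adiabatic FRW evolution.
a_values = [1.0, 2.0, 5.0, 10.0]      # scale factor at successive times
B0, T0 = 1.0e18, 1.0                  # initial field (gauss) and temperature (arbitrary units)
for a in a_values:
    B = B0 * a**-2                    # adiabatic dilution of the magnetic field
    T = T0 * a**-1                    # redshifting of the plasma temperature
    print(f"a = {a:5.1f}   B/T^2 = {B / T**2:.6e}")  # constant by construction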
The plasma is a thermal system, so we need an asymptotically AdS black hole with a non-zero Hawking temperature in the bulk. The transverse sections (perpendicular to the radial direction, parallel to the event horizon of the black hole) will be copies of S, so they must be planar. That is, we need a planar black hole, of the kind that exists in the asymptotically AdS case [19] because the negative cosmological constant violates the relevant energy condition.
We assume that the bulk black hole is "dyonic" (see for example [20]), that is, both electrically and magnetically charged; we then obtain the metric and electromagnetic potential from the conventional Einstein-Maxwell equations. The geometry is described by a "Charged Planar AdS Black Hole" metric, equation (1). Here ψ and ζ are dimensionless planar coordinates, L is the asymptotic AdS curvature radius, and M*, Q*, and P* are geometric parameters with no direct physical meaning, but whose significance we now explain. In accordance with our emphasis on the role of planar geometry, we claim that the physical parameters for such a black hole are its mass per unit horizon area, which we denote by M, and the electric and magnetic charges per unit horizon area, Q and P. (In fact, the actual mass and charges of such a black hole are formally infinite, so it can only be described by using constructs of this kind.) M, Q, and P are related to M*, Q*, and P* as follows. First note that M*, Q*, and P* determine (for a fixed value of L) the value of r at the event horizon, r = r_h; the desired relations, equation (2), then follow simply; see [21] for a detailed discussion of similar formulae. Conversely, given Q, P, and the Hawking temperature of the black hole (see below), one can readily find r_h, and then use these relations to compute M*, Q*, and P* in terms of M, Q, and P. The potential form for the electromagnetic field outside the black hole, equation (3), contains a constant term in the coefficient of dt, inserted so that the Euclidean version is well-defined at the origin. The field strength form is given in equation (4). The quark chemical potential of the dual system is related holographically to the asymptotic value of the time component of the potential form, that is, to Q*/(r_h L). The plasma with which we are concerned here has an extremely high temperature, so that particles and antiparticles are present in essentially equal quantities; for us, therefore, the quark chemical potential is zero to an excellent approximation. We therefore set Q* = Q = 0 for the remainder of this work.
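For orientation, a hedged sketch of the kind of metric meant in equation (1) (a standard dyonic planar AdS-Reissner-Nordstrom form; the paper's normalizations of M*, Q*, and P* may differ by numerical factors):

\[
ds^2 = -f(r)\,dt^2 + \frac{dr^2}{f(r)} + \frac{r^2}{L^2}\left(d\psi^2 + d\zeta^2\right),
\qquad
f(r) = \frac{r^2}{L^2} - \frac{2M_*}{r} + \frac{Q_*^2 + P_*^2}{r^2},
\]

with the event horizon at the largest root of f(r_h) = 0.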
We see that, on the other hand, the magnetic field persists to infinity, thus opening the way to a holographic interpretation of magnetic fields on the boundary.⁴ This observation [23] is familiar, and of fundamental importance, in applications of holography to condensed matter physics.
The metric at infinity, normalised so that t represents proper time for a stationary observer, is such that the norms of dψ and dζ are both equal to 1/L, so the corresponding unit forms are Ldψ and Ldζ; thus, if B is the magnetic field, equation (4) yields equation (5), the holographic relation between the magnetic parameter of the black hole (which determines its magnetic charge per unit horizon area in the manner explained earlier) and the magnetic field of the dual system on the boundary. (Notice that, in the units we use here, P* has the same units as r, so B has units of inverse length squared, as it should.) The temperature of this black hole can be found in the usual manner, by requiring that the Euclidean version of the geometry be regular: it is given in equation (6), where we have used the definition of r_h, equation (7), namely f(r_h) = 0, to eliminate the explicit dependence on M*, which we do not need here. The Hawking temperature in (6) will be interpreted holographically, in the usual manner, as the temperature of the plasma. Equations (5), (6), and (7) allow us to translate between the geometric parameters M* and P* (which determine r_h, and subsequently fix the physical parameters M and P) and their field-theory counterparts T and B.
It is of interest to note that if we impose the very reasonable condition that T ≥ 0 (which corresponds to requiring cosmic censorship on the black hole side of the duality), then we obtain an upper bound, inequality (8), on the magnetic charge per unit horizon area, in terms of the asymptotic AdS curvature radius.⁵ This will help us to understand how censorship violation is avoided in our subsequent work. This black hole is the basis of our holographic study of cosmic magnetic fields. However, we observe immediately that equation (6) seems to indicate that one should not expect to be able to establish any straightforward holographic relation between T² and B, as we are hoping to do in this work. In particular, the fact that both quantities have the same units is quite irrelevant, because there are two other parameters with units of length in the problem, namely r_h and L; indeed, because of this, (6) seems to relate T to B² (see (5)) rather than T² to B. We will nevertheless see that holography can surmount these difficulties, in a remarkably elementary way.
In order to proceed, we must assume that this simple gravitational system is an approximate description of a full string-theoretic system in the bulk, with all of its attendant fields and objects, such as branes. The description will be a good one provided that the appropriate parameters (the string coupling, the ratio of the string length scale to the AdS curvature scale L) are sufficiently small, and provided that the additional objects can be consistently ignored. It can be difficult to ensure this last condition, as we shall discuss. To aid that discussion, we make a brief excursion into the geometry of the general class of spacetimes we are considering here, namely those which are asymptotically AdS and can be foliated by planar transverse sections.
Asymptotically AdS Spacetimes with Planar Transverse Sections
As is well known, asymptotically AdS spacetimes, or submanifolds of them, can often be foliated in a variety of different ways. For example, a suitable submanifold of AdS₄ itself can be foliated by flat 3-dimensional subspaces transverse to a radial direction, so that the "planar AdS" metric takes the form of equation (9); the coordinates here are as in equation (1), from which (9) is obtained by setting M* = P* = 0. Now consider a transverse section (including time, as above) of the form r = constant. The "volume" form of such a section (for later convenience, let us think of it instead as an area) is given in equation (10). Upon performing the integrals over a compact domain in the (t, ψ, ζ) directions,⁶ we see that areas in this spacetime grow in the radial direction as r³. However, if we wished to compute the spacetime volume "contained" in this section (we shall see later how to be more precise about this concept), then we would consider the integral of equation (11). Performing the integrals, we find that this volume is also proportional to r³. Thus we arrive at the conclusion that areas of planar transverse sections, and the corresponding volumes, grow at essentially the same rate towards infinity in spaces of constant negative curvature. This fact is of course well known in the case of transverse spherical surfaces.
When we consider spacetimes which have planar transverse sections but which are only asymptotically AdS, the situation becomes more complicated. It is still true that areas and volumes grow at the same rate towards infinity at leading order in an expansion in r; but this need not be true at higher orders. This gives us a useful way of expressing how far a given asymptotically AdS spacetime has been deformed away from pure AdS.
We can express this idea in a concrete way by defining, on any such (four-dimensional) spacetime with asymptotic curvature radius L, the following quantity. Let A_r be the area of the (t, ψ, ζ) surface r = constant, and let V_r be the volume contained. Then we set

S(r) = A_r − (3/L) V_r ,    (12)

with the understanding that this quantity is defined only up to an overall positive constant (which depends on the detailed choice of the domain of integration, and which we shall choose so that S(r) is dimensionless). The factor of 3/L is chosen partly for dimensional reasons, but mainly to ensure that the leading terms cancel, so that S(r) does indeed probe the higher-order terms.
For the planar foliation of AdS₄ itself, we therefore have

S_PAdS(r) = 0    (13)

for all r (where r extends down to r = 0), so S(r) probes the difference between a given four-dimensional asymptotically AdS spacetime and AdS₄ itself. In other words, if we define AdS to be the spacetime in which areas and volumes associated with transverse planes are (by construction) the same, then, in asymptotically AdS spacetimes foliated by planes such that S(r) does not vanish, the interpretation is that either the area of a transverse section is larger than its volume, or the reverse. (Notice that S(r) might be positive for some values of r, and negative for others, so both kinds of behaviour can arise within the same spacetime.)

For all black hole spacetimes, we define the "volume" discussed above as the volume outside the event horizon, so that r only extends down to r = r_h. This is reasonable, because the area of the section r = r_h (which is not what is usually called the "area of the event horizon", since it includes the time direction) is zero, and so the volume should likewise vanish. (Thus, S(r_h) = 0 for all black holes.)

In the case of asymptotically AdS spacetimes foliated by transverse sections of the form ℝ × S², such as the exterior AdS-Schwarzschild spacetime, we can define a quantity analogous to S(r), in precisely the same way as above. It turns out that, for AdS black holes with such transverse sections, this quantity is always positive both near to the event horizon and far away from it; in fact, it is probably [24] positive for all values of r, for all black holes with event horizons having spherical topology. That is, the area is larger than the volume in all those cases.
The same statement holds true in many cases when the transverse sections are planar. For example, consider the planar AdS black hole with neither electric nor magnetic charge (thus with metric obtained from equation (1) by setting P* = 0). A straightforward computation then yields equation (14),⁷ up to an overall positive constant factor as always, and it is clear that the resulting S(r) is never negative. One might wonder whether S(r) can ever be negative for any black hole with a planar event horizon. As we shall now explain, this question actually has a deep physical meaning.
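A hedged sketch of such a computation, assuming the metric form of our earlier sketch with Q* = P* = 0 (so f(r) = r²/L² − 2M*/r and √(−g) = r²/L²); the paper's equation (14) may differ by the overall constant:

\[
S(r) \;\propto\; \frac{r^2}{L^2}\sqrt{f(r)} - \frac{r^3 - r_h^3}{L^3}
\;\xrightarrow{\;r\to\infty\;}\; \frac{r_h^3}{L^3} - \frac{M_*}{L} = \frac{r_h^3}{2L^3} \;>\; 0\,,
\]

where the last equality uses f(r_h) = 0, i.e. 2M* = r_h³/L².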
S(r) as a Brane Action
It turns out that S(r) is of direct physical, as well as geometrical, significance. As was pointed out by Seiberg and Witten [25] (see also the investigations of Witten and Yau [26]), this quantity is, up to a positive constant factor (related to the tension of the brane) precisely the action of a BPS 2-brane wrapping the section r = constant. Since, for any black hole, S(r h ) = 0, we see that the question as to whether the area always exceeds the volume now takes on considerable physical significance: for if the volume can outgrow the area for some range of values of r, this means that the action is lower in that region than it is near to the event horizon. A brane-antibrane pair nucleating in that region will therefore have no tendency to contract or tunnel back into the black hole, and so the system will be unable to remain static. In short, it will no longer be consistent to ignore, as in Section 2 above, the presence of specifically string-theoretic objects in the bulk: even if they are not present initially, they will become steadily more important as the black hole itself generates them.
As we saw earlier, this is not a problem for the planar black hole spacetime when P * = 0, that is, in the absence of a magnetic field on the boundary; and, as one might expect, we shall see that it is likewise not a problem when the magnetic field is small. But it is far from clear that there are no difficulties here when the field is large, which, as we have seen, is certainly possible in the cosmological context. In short, it is quite possible that cosmic magnetic fields might correspond to a bulk geometry which is so distorted (away from the P * = 0 case) that the black hole becomes subject to the instability we have just been discussing. It is clearly very important to establish whether this actually happens. We now proceed to establish the precise condition for that.
We can evaluate S(r) for the metric in equation (1): with Q* = 0 it is given by equation (15), which, after some simplifications, can be written as equation (16). A typical graph of this function, for P* relatively small,⁸ is shown in Figure 1.
One sees that S_CPAdSBH(r) is zero at the event horizon, and positive near to it; however, after reaching a maximum it steadily decreases. It remains permanently positive for small P*, but for larger P* it might eventually⁹ become negative; that can be avoided if and only if the limiting value as r → ∞ is non-negative: that is, we need inequality (17) to hold. It is not obvious that this will always be satisfied, and in fact, for sufficiently large (but still sub-extremal) magnetic charge per unit horizon area, it is not: see Figure 2. Thus, by requiring that the holographic dual of the system on the boundary should be well-behaved, we obtain a non-trivial restriction on the magnetic field. As we shall now show, that restriction is remarkably simple.
A Bound on the Magnetic Field
Our objective is to turn the inequality (17) into a relation involving B and T, since those are the physical variables on the boundary. The difficulty is to do this while excluding r_h, which we do not want (and without resorting to solving a quartic for it, as in equation (6)). With care, this can be done quite straightforwardly. First we eliminate M* by using equation (7); this converts (17) to the form of inequality (18). Applying this to the second term on the right side of equation (6), we find inequality (19). Combining these two inequalities with equation (5), we can actually eliminate both r_h and L and obtain an inequality, (20), involving B and T alone. This is the upper bound we seek. Obviously this condition enforces a positive temperature if the magnetic field is non-zero; that is, it enforces cosmic censorship on the bulk side of the duality. To see how this works precisely, note that from the inequality (18) one can read off the maximal magnetic charge per unit horizon area permitted if Seiberg-Witten instability is to be avoided, P_SW, given in equation (21); comparison with the censorship value P₊ (see inequality (8)) yields the ratio in equation (22). Thus Seiberg-Witten instability sets in when the magnetic charge per unit horizon area is still below (though not far below) the value at extremality.¹⁰ As we have discussed earlier, bounding the magnetic field by the square of the temperature is in fact very natural in cosmology: if the bound holds at any time during the plasma epoch, it will automatically hold throughout, since the adiabatic dilution of B means that B/T² is constant ("conformally invariant"). Equally, this will not be true if the magnetic field evolves significantly more slowly, as must be the case if such evolution is to solve the problem of excessively weak magnetic fields at the present time. One would expect that, in that case, (20) will eventually be violated for generic initial conditions; so our inequality might be regarded as yet another argument against such proposals.
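Given the critical field strength 2π^{3/2}T² quoted in the introduction, inequality (20) should read (our reconstruction from that statement):

\[
B \;\leq\; 2\,\pi^{3/2}\, T^{2}\,.
\]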
The magnetic fields we need here (see [2], Figure 16) satisfy equipartition: the energy density of the field is of the same order as the energy density of the plasma. The Stefan-Boltzmann law then fixes the equipartition field in terms of the temperature, and, for any given plasma temperature, the result is about an order of magnitude below the holographic bound. In view of the many uncertainties arising in such applications of holography, we prefer to state the case more tentatively: there is very likely to be a holographic bound on cosmic magnetic fields, and the fields actually occurring during the plasma epoch of the early Universe may well come close to reaching that bound.
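To make the equipartition statement concrete, one can write it in the standard Stefan-Boltzmann form; this is our reconstruction (with g_* the effective number of relativistic degrees of freedom, a quantity not specified in the text above, and natural units):

$$ \frac{B_{\mathrm{eq}}^{2}}{2} \sim \rho_{\mathrm{plasma}} = \frac{\pi^{2}}{30}\, g_{*}\, T^{4} \quad\Longrightarrow\quad B_{\mathrm{eq}} \sim \sqrt{\frac{\pi^{2} g_{*}}{15}}\; T^{2}, $$

which scales as T 2 , exactly like the holographic bound, so the comparison between the two reduces to comparing numerical prefactors.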
Conclusion: The Uses of Holography in Cosmology
We have seen that cosmic magnetic fields can be usefully constrained by holography, even when we use the simplest possible structure for the bulk: we were able to find an upper bound on the magnetic field in the plasma epoch, in terms of a definite multiple of the squared temperature. Much more sophisticated techniques have of course been developed in holographic studies of magnetic fields in condensed matter contexts [27], and it would be interesting to adapt some of those methods to the cosmic case.
Even with the simplest model, the results are suggestive. It is remarkable that the holographic estimate of the largest possible field is so close to the values needed to obtain sufficiently large fields at the present time. Perhaps a deeper investigation will reveal more concrete evidence that holography not only constrains, but actually determines, the scale of magnetic fields during the plasma epoch of the early Universe. It might be useful to pursue this idea in the context of string-theoretic attempts to account for the seeding of cosmic magnetic fields; see for example [28].
"Physics"
] |
Predicting Drug-Target Interactions via Within-Score and Between-Score
Network inference and local classification models have been shown to be useful in predicting new potential drug-target interactions (DTIs) to assist drug discovery or drug repositioning. The idea is to represent drugs, targets, and their interactions as a bipartite network or an adjacency matrix. However, existing methods have not yet appropriately addressed several issues: powerless inference in the case of isolated subnetworks, biased classifiers derived from insufficient positive samples, the need to train a large number of local classifiers, and the inability to relate known DTIs to unapproved drug-target pairs (DTPs). Designing more effective approaches to address these issues is always desirable. In this paper, after presenting better drug similarities and target similarities, we characterize each DTP as a feature vector of within-scores and between-scores, which offers the following advantages: (1) a uniform vector representation for all types of DTPs, (2) a single global classifier with less bias, benefiting from adequate positive samples, and (3) more importantly, a visualized relationship between known DTIs and unapproved DTPs. The effectiveness of our approach is finally demonstrated by comparison with other popular methods under cross validation and by predicting potential interactions for DTPs, validated against existing databases.
Introduction
Since experimental determination of compound-protein interactions or potential drug-target interactions remains very challenging (e.g., requiring a huge amount of money and taking a very long time) [1], there is a need to develop computational methods to assist those experiments. Nowadays, the number of available drug-target interactions (DTIs) in public databases, including KEGG [2], PubChem [3], DrugBank [4], and ChEMBL [5], is increasing, which brings out two observations. The first is that one drug can interact with one or more proteins. The second, symmetrically, is that one protein can be targeted by one or more drugs. These two observations led to the formation of the DTI network [6] and made it possible to utilize DTIs (approved drug-target pairs) to predict potential interactions among unapproved drug-target pairs (DTPs). The task of validating those predicted potential interactions is called drug repositioning or drug repurposing [7].
In terms of the DTI network, predicting a new potential DTI is equivalent to predicting a new edge in the network. Researchers developed the network-based inference model (NBI) to deduce potential interactions among unapproved DTPs in given DTI networks and further confirmed them in in vitro assays [7]. However, NBI cannot make a prediction for any DTP whose drug and target are not connected by a reachable path (a set of consecutively connected edges) in the network. In fact, a DTI network usually contains several isolated subnetworks. A difficult case for NBI is, for example, predicting the interaction between a drug in one subnetwork and a target in another. Besides, when predicting interactions for a drug node d, the resulting targets are usually biased toward target nodes of higher degree or target nodes near drug d.
With a different idea of regarding similarity matrices of drugs and targets as kernel matrices, kernel-based classification techniques, such as the bipartite local model (BLM) [8][9][10], are also popularly applied to DTI prediction. As a local classification model, for each target, BLM assigns known DTIs and unapproved DTPs between drugs and the concerned target as positive and negative samples, respectively. Then a kernel-based classifier is built on drug similarity matrices and applied to assign confidence scores to unlabeled samples (the concerned unapproved DTPs). Similarly, for each drug, another kernel-based classifier can be built. For each drug-target pair, we therefore need to build two classifiers, whose output scores are further aggregated into the final score [8,9]. BLM, however, generates biased predictions when few positive samples (known DTIs) are available. Also, it cannot predict the interaction between a new drug (without a link to any known target) and a new target (without a link to any known drug) because no positive samples are available to train its classifier model. BLM-NII, an extension of BLM, recently integrated a weighted strategy into BLM to tackle the case of no positive sample being available [10]. However, the biased prediction remains when few positive samples are available. More importantly, since drug-target pairs are separately put into different classifier spaces, neither BLM nor BLM-NII is able to investigate the relationship between them. Such a relationship is helpful for further predicting potential interactions in both drug discovery and drug repositioning.
To summarize, three issues in existing predictive models are not yet solved. (1) Predicting interactions between drugs and targets occurring in isolated subnetworks of the DTI network is difficult. (2) Inadequate positive samples usually cause biased local classifiers, and the local classification approach requires a large number of classifiers. (3) The global relationship between approved DTIs and unapproved DTPs cannot be investigated in a consistent space.
Besides the predictive model, similarity measurement is another crucial factor in DTI prediction, because similar drugs tend to interact with similar targets [11]. To better capture pairwise similarities between drugs or targets, a topological similarity based on the DTI network, the Gaussian interaction profile (GIP) [9], was proposed and linearly combined with chemical structure-based similarity between drugs or protein sequence-based similarity between targets under the framework of BLM. Nevertheless, a simple linear combination may not work optimally because the topological similarity is always related to the drug/target node degrees, which follow a power-law distribution [12]. In addition, for any two drugs/targets, GIP only considers the targets/drugs not interacting with them and takes no account of the targets/drugs shared by them. So GIP may lose some information derived from those common targets/drugs. Besides, since all possible values of the topological similarity proposed in GIP fall into (0, 1], GIP is an incomplete similarity metric, which may not adequately characterize the dissimilarity between very different drugs/targets.
In this paper, we argue that the difference between the similarities of drugs/targets sharing targets/drugs and the similarities of drugs/targets sharing no target/drug in the DTI network should be statistically significant. To address the abovementioned issues, we first characterized each drug-target pair from the views of both drugs and targets. Under the widely accepted assumption that similar drugs tend to target similar protein receptors [11], two within-scores were presented to capture the similarities between drugs/targets sharing common targets/drugs. Based on our observation that similar drugs, in part, do not tend to target dissimilar proteins, two between-scores were also presented to capture the similarities between drugs/targets sharing no targets/drugs. Subsequently, we represented each drug-target pair as a feature vector that uniformly consists of these four scores, regardless of whether a path between the drug and the target is available. Each drug-target pair was labeled as a positive or negative sample, depending on whether it is an approved DTI or an unapproved DTP. The use of all DTIs guarantees that enough positive samples are available to train the single global classifier. After performing principal component analysis on the feature vectors, we generated a drug-target pair space that provides a visualized way to investigate the relationship between known DTIs and unapproved DTPs.
In addition, to better combine topological similarity with chemical/sequence similarity, we proposed an adaptive combination rule instead of the former linear combination, and we introduced a complete metric of topological similarity of drugs/targets that considers both the targets/drugs shared by two drugs/targets and the targets/drugs interacting with neither of them.
Finally, based on four benchmark datasets, we demonstrated the effectiveness of our approach by comparing with NBI, BLM, and BLM's extensions under cross validation and by predicting potential interactions among unapproved DTPs, checked against existing databases.
Materials and Methods
2.1. Datasets. In this paper, the adopted datasets, involving targets of ENZYME, ION CHANNEL, GPCR, and NUCLEAR RECEPTOR, were originally from [13] and further used in subsequent works [8][9][10]. All drug-target interactions in the original datasets were collected from the KEGG database. In short, we denote the four DTI datasets as EN, IC, GPCR, and NR, respectively. Brief information on the four datasets is listed in Table 1. Notably, NR (the sparsest DTI network among the given datasets) contains the largest proportion of isolated subnetworks and is the most difficult case for predicting potential DTIs [10], because it has the largest proportion of unreachable paths between drugs and between targets. More details can be found in the original work [13].
2.2. Drug Similarity and Target Similarity.
The metrics of drug similarity and target similarity popularly adopted in former methods are chemical structure-based similarity and protein sequence-based similarity, respectively [8][9][10]. By representing a chemical structure as a graph, the chemical structure similarity between two drugs d_u and d_v is defined as S_chem(d_u, d_v) = |d_u ∩ d_v| / |d_u ∪ d_v|, where |·| denotes the number of nodes in a graph, d_u ∩ d_v is the maximal common subgraph between d_u and d_v, and d_u ∪ d_v is their union [14]. The protein sequence similarity between two targets t_u and t_v is calculated by sequence alignment and is defined as S_seq(t_u, t_v) = align(t_u, t_v) / sqrt(align(t_u, t_u) · align(t_v, t_v)), where align(t_u, t_v) is the Smith-Waterman alignment score [15] between t_u and t_v. In order to better capture the real similarity between drugs/targets sharing common targets/drugs, former methods tried to propose new similarities and integrate them into the abovementioned ones. Under the framework of BLM, the Gaussian interaction profile (GIP) was introduced to measure topological similarity between drugs/targets by considering the DTI matrix as the adjacency matrix of the DTI network [9]. However, for any two drugs/targets, GIP only considers the targets/drugs not interacting with them, so it may lose some information derived from their common targets/drugs. In addition, GIP is not a mathematically complete similarity, since its similarity values fall into (0, 1]. So it may not be enough to characterize the dissimilarity between very different drugs/targets. Therefore, we applied a complete metric to measure the similarities between nodes of both drugs and targets according to the DTI network. The topological similarity, named the matching index (MI) [16], is defined for two drug nodes (and, analogously, for two target nodes) in terms of the node degrees |·| and the number of neighbors shared by the two nodes. For drugs, S^d_topo(d_i, d_j) considers the proportion of their shared target nodes as well as the target nodes interacting with neither of them. For targets, S^t_topo(t_i, t_j) holds the analogous consideration. Moreover, all possible values of MI fall into [0, 1].
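A minimal sketch of one plausible reading of the matching index described above (the exact formula in [16] may normalize differently; this version counts shared targets plus targets interacting with neither drug, as a fraction of all targets, which lies in [0, 1] as stated):

import numpy as np

def matching_index(Y):
    # Y: binary n_drugs x n_targets interaction matrix.
    # Returns an n_drugs x n_drugs matrix of topological similarities.
    n_targets = Y.shape[1]
    shared = Y @ Y.T                  # targets interacting with both drugs
    comp = 1 - Y
    shared_absent = comp @ comp.T     # targets interacting with neither drug
    return (shared + shared_absent) / n_targets

# The target-target MI can be obtained the same way from the transpose:
# matching_index(Y.T)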
In former work [9], the final similarities of drugs and targets are usually generated by linearly combining S^d_topo and S^t_topo with S_chem and S_seq, respectively. Nevertheless, such a linear combination may not work optimally because the topological similarity is always related to the node degrees, which follow a power-law distribution [12].
We observed that the topological similarity tends to work better for drugs linking to a target node of small degree, whereas the chemical similarity tends to work better for drugs linking to a target node of large degree. Consequently, we designed an adaptive combination rule, expected to achieve better prediction with MI. For a target t and the drugs d_u and d_v linking to it, the combined similarity between d_u and d_v is selected between the topological and the chemical similarity according to the degree of t. The similarity between targets t_u and t_v can be defined in the similar way.
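As a concrete illustration of the adaptive rule (the exact switching criterion is not reproduced in this text, so the degree threshold below is purely hypothetical):

def adaptive_similarity(s_topo, s_chem, target_degree, degree_cutoff=2):
    # Trust the topological (MI) similarity for drugs linked to a low-degree
    # target, and the chemical similarity for drugs linked to a high-degree
    # target, following the observation stated above.
    return s_topo if target_degree <= degree_cutoff else s_chem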
2.3. Within-Score and Between-Score of a Drug-Target Pair.
A widely accepted assumption is that similar drugs tend to target similar protein receptors [11]. Based on this assumption, by considering the similarities between drugs/targets sharing common targets/drugs, we present two within-scores to capture them. Based on our additional observation that similar drugs, in part, do not tend to target dissimilar proteins, we also propose two between-scores to capture the similarities between drugs sharing no target and the similarities between targets sharing no drug, respectively. The calculation of the within-scores and between-scores is depicted in the following paragraphs. Given n drugs, m targets, and their known interactions, our task is to predict potential but unapproved interactions between drugs and targets. All drug-target pairs are usually organized as an n × m interaction matrix Y, in which y_ij = 1 when there is a known interaction between drug d_i and target t_j, and y_ij = 0 otherwise.
For a drug d interacting with some targets, let t and t̃ denote a target interacting and not interacting with d, respectively. In order to characterize the potential interaction (d, t_q) between drug d and a queried target t_q, we define, from the drug view, a within-score w_d(d, t_q) built from the similarities between t_q and the targets t, and a between-score b_d(d, t_q) built from the similarities between t_q and the targets t̃. For a target t interacting with some drugs, let v and ṽ denote a drug interacting and not interacting with t, respectively. Symmetrically, from the target view, we define a within-score w_t(d_q, t) and a between-score b_t(d_q, t).
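A minimal sketch of how the four scores can be assembled for a pair (d_i, t_j), assuming max-similarity aggregation (the paper's exact aggregation function is not reproduced in this text); it also applies the bottom-line strategy for null entries described in Section 2.4 below:

import numpy as np

def pair_scores(i, j, Y, S_d, S_t):
    # Y: binary drug-target matrix; S_d, S_t: drug-drug and target-target
    # similarity matrices.
    t_in = np.where(Y[i] == 1)[0]      # targets interacting with drug i
    t_out = np.where(Y[i] == 0)[0]     # targets not interacting with drug i
    d_in = np.where(Y[:, j] == 1)[0]   # drugs interacting with target j
    d_out = np.where(Y[:, j] == 0)[0]  # drugs not interacting with target j
    # For an approved pair one would exclude j from t_in and i from d_in.
    scores = [
        S_t[j, t_in].max() if t_in.size else None,    # w_d: within, drug view
        S_t[j, t_out].max() if t_out.size else None,  # b_d: between, drug view
        S_d[i, d_in].max() if d_in.size else None,    # w_t: within, target view
        S_d[i, d_out].max() if d_out.size else None,  # b_t: between, target view
    ]
    # Bottom-line strategy: null scores are assigned ones.
    return [1.0 if s is None else s for s in scores]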
2.4. Types of Interactions.
In total, we group all interactions into four types according to the DTI network (Figure 1): multiple, drug-centered, target-centered, and single interacting motifs. The summary of their counts in the four adopted datasets can be found in Table S1 in the Supplementary Material available online at http://dx.doi.org/10.1155/2015/350983. Either the target or the drug of a multiple interaction has more than one link to drugs or targets, respectively. The target of a drug-centered interaction has only one link, to a drug interacting with more than one target. The drug of a target-centered interaction has only one link, to a target interacting with more than one drug. Both the target and the drug of a single interaction link only to each other. A single interaction is usually newly approved [6]. The drug-target pairs in the multiple motif are scored exactly as in formula (5) in the previous section. The drug-target pairs involved in drug-centered, target-centered, and single motifs are special cases of the multiple motif in which some scores are null, meaning that they cannot be calculated directly. We adopted a bottom-line strategy to cope with the null cases by assigning ones to null entries.
With the feature vector representation, we can map all drug-target pairs, including the pairs between new drugs and new targets, into the same space, regardless of whether the drug and the target are in the same subnetwork or not.
2.5. Drug-Target Pair Space.
To check whether known interactions and unapproved pairs can be classified well in certain dimensions, we plotted histograms of the distributions of the w_d, b_d, w_t, and b_t scores in the feature vectors for the four types of DTIs. As an illustration, the score distributions of the four motifs of the GPCR dataset [13] are shown in Figure 2. The distributions for all datasets can be found in Figures S1, S2, S3, and S4.
Known DTIs and unapproved DTPs show separations in the distributions of the four scores. That is to say, they can be classified in certain dimensions (scores). In detail, (1) for multiple motifs (Figure 2(a)), known interactions (purple) and unapproved DTPs (cyan) can be separated significantly by w_d, moderately by either w_t or b_d, and are almost mixed together in terms of b_t. (2) For drug-centered motifs, whose w_t is unavailable (Figure 2(b)), w_d, b_d, and b_t show the best, moderate, and worst separations, respectively. (3) Likewise, for target-centered motifs, whose w_d is unavailable (Figure 2(c)), w_t shows the best separation, while neither b_d nor b_t provides an acceptable separation. (4) Single motifs only show b_d and b_t, which both provide moderate separations (Figure 2(d)).
In terms of w_d and w_t, the separability of the distributions between known interactions and unapproved DTPs indicates how well their distribution meets the popular assumption that similar targets/drugs tend to interact with similar drugs/targets. Our results show that both w_d and w_t follow the assumption well, and that the former does better than the latter.
On the other hand, neither b_d nor b_t provides good separability between known interactions and unapproved pairs. However, they follow our observation that similar drugs, in part, do not tend to target dissimilar proteins. More importantly, where our observation holds, b_d and b_t may help the prediction when they are combined with w_d and w_t. Therefore, integrating all four scores together, for example by principal component analysis (PCA), can hopefully generate a better separation, because known DTIs and unapproved DTPs can already be classified along individual dimensions. After performing PCA on these four scores, we obtained a space of drug-target pairs spanned by the first three principal components (Figure 3). In this space, a greatly significant separation between known interactions and unapproved drug-target pairs is observed.
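A sketch of the projection step (we assume scikit-learn for the PCA; `features` is the (number of pairs) × 4 matrix of [w_d, b_d, w_t, b_t] vectors built as above):

import numpy as np
from sklearn.decomposition import PCA

def dtp_space(features):
    # Project the four scores of every drug-target pair onto the first three
    # principal components, giving the visualized space of Figure 3.
    return PCA(n_components=3).fit_transform(np.asarray(features))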
Results and Discussion
In this section, we first demonstrate the effectiveness of our topological similarity metric and our adaptive combination of similarities; we then compare our approach with other popular methods, including NBI [7] and BLM [8] and its extensions BLM-GIP [9] and BLM-NII [10]; we next build a drug-target pair space by PCA to elucidate the relationship between known DTIs and unapproved DTPs; and we finally utilize the space to predict potential interactions for DTPs. After applying PCA to the feature vectors of all drug-target pairs, we used the distances of both known interactions and unapproved pairs to the origin as confidence scores, both for validating the performance of our approach and for predicting potential drug-target interactions (more details in Section 3.3). Besides, the popular measurements of area under the ROC curve (AUC) and area under the precision-recall curve (AUPR) [17] were used to assess the computational effectiveness of the approaches.
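A sketch of this evaluation protocol under our assumptions (distance to the origin of the PCA space as the confidence score; scikit-learn's average_precision_score standing in for the area under the precision-recall curve):

import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def evaluate(components, labels):
    # components: PCA coordinates of all pairs; labels: 1 for known DTIs,
    # 0 for unapproved DTPs.
    scores = np.linalg.norm(components, axis=1)   # distance to the origin
    return roc_auc_score(labels, scores), average_precision_score(labels, scores)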
3.1. The Effectiveness of the New Similarity and the New Combination.
To illustrate why our approach achieves better results, we first compared the GIP similarity and our MI similarity, under the BLM framework and under our approach, respectively. Using the topological similarities only, we selected the sparsest DTI network (the NR dataset) from the work [8] to perform the comparison (Table 2). The results demonstrate that our new topological similarity is better than the GIP similarity.
Then, we also applied a linearly weighted combination to integrate MI with the chemical structure similarity and the sequence similarity in our approach, respectively. In terms of AUC and AUPR, the linear combination achieved 0.977 and 0.826, while the adaptive combination achieved 0.982 and 0.949. Again, our adaptive combination is better than the linear combination.
3.2. Comparison with Other Methods.
To validate the effectiveness of our approach, we made a comparison with other approaches [7][8][9][10] that adopted the same datasets [13] (see also Section 2.1), the same testing strategy (leave-one-out cross validation, LOOCV), and the same assessment measures (AUC and AUPR) [17]. First, we ran predictions with only the chemical similarities of drugs and the sequence similarities of targets and compared the results with those of BLM and BLM-GIP. Then, after integrating the topological similarities, we compared our approach with NBI [7], BLM-GIP, and BLM-NII. All results on the four datasets are listed in Table 3. In terms of AUC, our approach outperforms the others on all datasets. In terms of AUPR, our approach shows an increase of about 7%-10% on EN, GPCR, and NR, though it shows a decrease of about 5% on IC compared with BLM-NII. Overall, the proposed approach has the better predictive performance.
Moreover, our approach has other advantages. First, it holds a sufficient number of positive samples (all known DTIs) even if the number of negative samples is large, while BLM may suffer from biased classifier models, since each of its local models is trained with few positive samples (sometimes even 0 or 1). Second, our approach needs to train only one classifier, whereas BLM and its extensions need to build many classifiers, accounting for all targets and all drugs. Last but most importantly, with the feature vector representation, we are able to put all drug-target pairs, including pairs between new drugs and new targets, into the same space, regardless of whether the drug and the target in the concerned pair are in the same subnetwork or not. Consequently, our approach is generally superior to the former approaches.
3.3. Drug-Target Pair Space and Its Application to Finding Potential Interactions.
After performing PCA on the feature vectors, we represented all DTPs as points shown by their first three principal components (Figure 3). Approved DTPs (DTIs) and unapproved DTPs form two separate groups. The unapproved DTPs (cyan crosses) gather around the origin in a sphere-like shape, while the known DTIs lie apart from them. In particular, in Figure 3(a), three clusters of interaction motifs are found. The cluster on the left contains drug-centered motifs (red circles) and multiple motifs (purple squares); the lower cluster on the right comprises target-centered motifs and multiple motifs; and the upper cluster on the right is composed of all four types of motifs.
The significant separation of DTPs in the space allows us to visually investigate the relationship between known DTIs and unapproved DTPs. Therefore, after calculating the distances of all pairs to the origin, we are not only able to build classifiers by training a specific threshold on the distances when testing the performance of our proposed method (refer to Sections 3.1 and 3.2), but are also able to adopt the distances as confidence scores for being potential interactions when predicting potential interactions among unapproved DTPs. According to the distribution in the DTP space, the farther a pair is from the origin, the more likely it is to be a potential interaction. Thus, we focused only on the unapproved drug-target pairs remarkably far from the origin. In order to validate them, we selected the top five of them as interaction candidates, in terms of their distance to the origin, for each dataset and checked them in popular drug/compound databases: ChEMBL (C), DrugBank (D), and KEGG (K). Since ChEMBL provides predicted (not yet approved) interactions, we only selected the most confident interactions, with a score of 1 under the cutoff of 1 μM [5]. Comparing with ChEMBL, DrugBank, and KEGG, we show our consistent predictions of potential interactions of unapproved drug-target pairs for the adopted datasets in Table 4.

Figure 3: Drug-target pair space. Unapproved DTPs are marked by cyan crosses. Approved DTPs of drug-centered, target-centered, single, and multiple motifs are marked by red circles, green triangles, yellow diamonds, and purple squares, respectively. The axes denote the first three principal components.
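A sketch of the candidate selection just described (the pair_index argument, mapping row numbers to (drug, target) identifiers, is our own bookkeeping device):

import numpy as np

def top_candidates(components, labels, pair_index, k=5):
    # Rank unapproved DTPs by their distance to the origin and return the k
    # farthest ones as candidates to check in ChEMBL, DrugBank, and KEGG.
    scores = np.linalg.norm(components, axis=1)
    unapproved = np.where(labels == 0)[0]
    order = unapproved[np.argsort(-scores[unapproved])]
    return [pair_index[i] for i in order[:k]]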
Conclusions
In this paper, we have addressed crucial issues in predicting drug-target interactions which have not yet been solved well by former methods. These issues include powerless inference in the case of isolated subnetworks, biased classifiers derived from few positive samples, the need to train a large number of classifiers, and the unavailable relationship between known DTIs and unapproved DTPs.
By characterizing each drug-target pair as a feature vector of within-scores and between-scores, our approach has the following advantages: (1) all types of drug-target pairs are treated in the same form, regardless of whether a path between the drug and the target is available; (2) enough positive samples are available to reduce the bias of the trained model, and only one classifier needs to be trained; (3) more importantly, the relationship between known DTIs and unapproved DTPs can be investigated in the same visualized space. In addition, to capture similarity better, we have introduced a complete metric of topological similarity of drugs/targets, considering both the targets/drugs shared by two drugs/targets and the targets/drugs interacting with neither of them. We have also proposed an adaptive combination rule, instead of the former linear combination of topological similarity with chemical/sequence similarity, motivated by the fact that the drug/target node degrees follow a power-law distribution. (Note to Table 3: # marks the results of combining the topological similarity (MI) with the chemical similarity and the sequence similarity, respectively; NBI provides only AUC values and was tested under 5-fold cross validation (5CV), which is statistically equivalent to LOOCV when the number of samples is large enough.)
Finally, the effectiveness of our approach was demonstrated by comparing with existing popular methods under cross validation and by predicting potential interactions for DTPs, validated against existing databases.
"Computer Science",
"Medicine"
] |
An Uneventful Horizon in Two Dimensions
We investigate the possibility of firewalls in the Einstein-dilaton gravity model of CGHS. We use the results of the numerical simulation carried out by Ashtekar et al. to demonstrate that firewalls are absent and the horizon is drama-free. We show that the lack of a firewall is consistent because the model does not satisfy one of the postulates of black hole complementarity. In particular, we show that the Hawking radiation is not pure, and is completely entangled with a long-lived remnant beyond the last ray.
Introduction
Almost forty years after Hawking's discovery that black holes radiate [1], our understanding of the resulting black hole information paradox [2] remains in a state of confusion. Recent arguments by AMPS [3] have given increased credence to the idea that the local semiclassical approximation breaks down at the horizon, resulting in a 'firewall' and a failure of the equivalence principle. (Related arguments have also been made in [4][5][6][7][8].) The AMPS firewall was motivated by the inconsistency of a set of long-held assumptions about the behaviour of black holes. These assumptions have been encapsulated as a set of postulates for black hole complementarity [9]. The particular postulates in conflict are: (i) unitarity of the S-matrix relating infalling matter to outgoing Hawking radiation, (ii) validity of semiclassical field theory outside of the stretched horizon, and (iii) validity of semiclassical field theory in the infalling observer's local reference frame, i.e. no drama. While the authors of [3] maintain that a firewall is the most conservative solution (as defended most recently in [10]), much debate continues as to how the postulates might otherwise be modified to escape a contradiction.
A simplified setting in which to attempt to better understand the information paradox and the possibility of firewalls is 1 + 1-dimensional gravity. While one does not expect 1 + 1-dimensional theories of gravity to be good models of black hole complementarity in higher dimensions, the above postulates can be applied more broadly. If they all hold true in two dimensions, then by the reasoning of [3] they necessarily imply the existence of firewalls.
We choose, in particular, to study the CGHS model [67] of dilaton-gravity with a large number of scalar matter fields in a background where a left-moving null shell of matter classically creates a black hole. This model (as well as other two-dimensional models) affords important simplifications not present in higher dimensions. Firstly, the metric and dilaton are not dynamical, but are determined in terms of the matter degrees of freedom. In fact, solutions to the classical equations of motion can be written in closed form. Secondly, the scalar field action is chosen such that the scalar fields only couple to the two-dimensional metric and thus decouple into left- and right-moving sectors that propagate freely. The asymptotic boundary also has more components than in higher dimensions, with both left and right portions of future null infinity I + and past null infinity I − (see Figure 1). This nicely separates the black hole information paradox into two separate questions [68]. The recovery of information sent into the black hole is a question about the unitarity of evolution of the state on I − R to I + L . We will not address this question in this paper. Unitarity of Hawking radiation and the existence of firewalls is a question about the evolution of the state on I − L to I + R . This is the question we will address here. In the classical CGHS black hole solution, the last ray of the black hole singularity is at infinite affine parameter along I + R , but at finite parameter along I − L . Thus I + R causally contains only a proper subset of I − L and we are left with a mixed state on I + R . However, the mean field analysis of [68][69][70] found that corrections to the Hawking radiation at late times brought the last ray of the singularity to a finite affine parameter along I + R . This suggests that, instead, I + R could be unitarily equivalent to I − L . A complete understanding of the quantum evolution past the singularity would then give a pure state on I + R . Moreover, it was found that there is no singularity in the metric at the apparent horizon. The smooth horizon, together with purity of the state on I + R , suggests a tension with the firewall argument. In [68], it was argued that the purity of the state on I + R implies that there are no remnants in the CGHS model. By this, they meant that the state on the entirety of I + R is not entangled with another part of the future boundary. By contrast, we compute the entanglement entropy of an interval of I + R containing the majority of the Hawking radiation and find it to be highly entangled with the remaining state to the future of the last ray of the black hole singularity, still on I + R . The remaining state beyond the last ray has a relatively small, universal (independent of the initial mass of the black hole) Bondi mass. This high-entropy, low-energy object is exactly a remnant.
The large entanglement of the remnant evades the argument for firewalls because the Hawking radiation is not in itself pure. While we expect a unitary S-matrix that evolves the state at I − L to I + R , the Hawking radiation is highly entangled with degrees of freedom behind the last ray of the black hole singularity. One is, of course, free to include the remaining degrees of freedom in the definition of Hawking radiation, but what is important is the large entanglement remaining when the Bondi mass is small. This is better called a remnant. This conclusion is consistent with much early work on the CGHS model [67,73,74].
Moving mirror models of Hawking radiation manifestly do not have firewalls, irrespective of measurement issues raised in [75], because the state is prepared precisely to be in the vacuum. Mirror trajectories that produce Hawking radiation always behave as some form of remnant [76].
Work in a very similar spirit to this paper has been carried out previously by [77] in the RST model. Previous numerical studies of the CGHS model have been carried out by [78][79][80][81]. Our work is able to draw stronger conclusions because of the numerical advances in [68][69][70] and their discovery of universal properties for suitably macroscopic black holes. In recent work [41], it has been suggested that there could be a firewall outside the apparent horizon in related 2-dimensional models. We believe that there is sufficient control over the numerical simulations outside the apparent horizon that this conclusion is not warranted. The arguments presented in this paper demonstrate instead how the firewall paradox is avoided.
An outline of the paper is as follows: in Section 2, we review the CGHS model and discuss the mean field theory results of [68][69][70]. In Section 3, we review the entanglement entropy of an interval in a 1 + 1-dimensional CFT and give a simple formula for the entanglement of the Hawking radiation at I + R in the CGHS model. We also discuss the equivalence of related calculations for the entanglement entropy of radiation from a moving mirror. In Section 4, we show that the entanglement entropy of the Hawking radiation in the CGHS model is large and scales like the ADM mass M ADM . We identify the modes in the interval that carry the excess entanglement, as well as the degrees of freedom that purify them across the last ray of the singularity. In Section 5, we discuss the uplift of these solutions to higher dimensions and the connections between remnants in 2D and higher dimensions.

Figure 1: The geometry of an evaporating black hole in the mean field approximation of the CGHS model, as found by [68].
A Review of the CGHS Model
In this section we review the two-dimensional dilaton-gravity model of [67]. The geometry of this model is specified by a metric and a dilaton, given by g ab and φ respectively. This system closely resembles the one obtained from dimensional reduction of the s-wave sector of four-dimensional gravity, but differs in the form of the dilaton potential. The benefit of studying such a model is that it is classically soluble in closed form, and is expected to provide qualitative insights into the s-wave sector of four-dimensional gravity. The action of the geometric sector of this model is of the standard CGHS form, S grav = (1/2G) ∫ d²x √(−g) e^(−2φ) [R + 4(∇φ)² + 4κ²] (up to convention-dependent normalization), where G is the two-dimensional gravitational constant and κ² is a cosmological constant term. This action was originally obtained as the low energy effective action for string compactifications, where it describes the near horizon physics of extremal dilatonic black holes [82].
Matter in this system is composed of N scalar fields, f i , with the standard minimally coupled action S matter = −(1/2) Σ i ∫ d²x √(−g) (∇f i )². Note that the dilaton is absent in this action, and thus the scalar fields can be viewed as living purely on the two-dimensional spacetime given by the metric g ab . This simplification ensures that the scalar fields couple to the geometry only via the constraints.
The Classical Solution
We will be adopting the conventions of Ashtekar et al. used in [68][69][70]. We work in conformal gauge, where the inverse metric takes the form g^ab = Ω η^ab, so that all the metric information is encoded in Ω. We consider the fields as living on a fiducial Minkowski manifold M 0 with flat metric η ab and coordinates z+ and z−. This fiducial spacetime (M 0 , η) has null boundaries I ± L,R , where the future (past) of I − L,R (I + L,R) covers all of M 0 . After redefining the dilaton as Φ ≡ e^(−2φ), and further defining Θ ≡ Ω^(−1) Φ, the classical equations of motion and constraints take a simple form in terms of Θ and Φ. Conformal gauge still leaves unfixed the conformal subgroup of diffeomorphisms. This is fixed by choosing a particular solution in which Θ is determined by f i + , the left-moving part of the solution f i = f i +(z+) + f i −(z−) of the scalar field equation. Setting f i = 0, this solution describes a flat metric with a linear dilaton; this is the so-called throat limit of the extremal dilatonic black hole geometry in [82].
As shown in [67], sending in a left-moving scalar field shockwave creates a spacelike black hole singularity at the locus where the dilaton vanishes. Moreover, both I − L and I + R are complete with respect to g ab , but the past of I + R does not cover the entire spacetime, admitting a horizon for right-moving modes. Thus classically, I − L is not contained in the past of I + R . This is the basis of the information problem: the final state on I + R is obtained from evolving the state on I − L and tracing over the degrees of freedom that don't make it to I + R . Since we are working in two dimensions, left- and right-movers completely decouple and we can ask the question of unitarity for each one independently.
The Mean Field Approximation
In [68][69][70], Ashtekar et al. numerically studied the CGHS black hole solution at large N, including the back-reaction of the Hawking radiation on the geometry. In the large-N, mean-field approach they disregard quantum fluctuations of the geometry, but not those of matter. This approach was considered in earlier studies of both the CGHS model and close variants thereof [67,74,78,79,83]. The quantum state chosen for the scalar field is the vacuum on I − L and a coherent state with the classical profile f 0 + on I − R , and thus f i + = f 0 + . In keeping with matter fluctuations, the conformal anomaly [84] in two dimensions results in a non-traceless stress tensor which sources the equations of motion. The modified equations involve N̄ ≡ N/24, with the constraints specified on I − . The shockwave is introduced as the coherent state coming in from I − R whose profile is fixed by M ADM , the ADM mass of the resulting black hole.
The mean field equations can be shown to provide a quantum-corrected singularity [74,83]. A quick manipulation of (6) and (7) yields an equation, (10), for R g , the Ricci scalar of the metric g ab = Φ −1 Θ η ab . This implies a critical value of the dilaton, Φ cr = 2N̄ ℏG, at which either the left-hand side of (10) vanishes or R g diverges. In the latter case we have a quantum singularity which occurs when the dilaton is nonzero. While we will assume that the full quantum evolution smoothly resolves this singularity, we believe our main conclusions would be unchanged should it be necessary to replace the singularity with a local boundary condition at Φ cr . The size of an evaporating black hole can be tracked by the location of its apparent horizon. Since the dilaton measures the area of the reduced 2-sphere, it can be used to define trapped points in two dimensions. Throughout the evaporation, the apparent horizon is located at the future marginally trapped points, where ∂ + Φ = 0 and ∂ − Φ < 0. Ashtekar et al. define the horizon area a H at the future marginally trapped points with a shift, proportional to N̄ ℏG, induced by the mean field equations as a quantum correction to the singularity, which now occurs when Φ = 2N̄ ℏG. This shift guarantees that a H shrinks to zero size at the singularity.
Along I + R , the mean field theory equations imply a balance equation relating a quantum-corrected Bondi mass M ATV Bondi to a quantum-corrected Bondi flux F ATV [68]; the corrected mass involves B, the subleading term in the expansion of Φ at large y + . It is clear from the manifestly positive form of F ATV that the Bondi mass uniformly decreases along I + R . While numerical computation showed [68] that the traditional Bondi mass, considered in [67,[79][80][81][85][86][87]], can acquire a negative value at late times, the corrected mass was found to remain positive.
The corrected Bondi flux is positive by definition, but this is not true of the traditional Bondi flux, from which it differs by a quantum correction term. It was found [70,88] that the mean field theory equations admit a scaling symmetry that changes what are usually thought of as physically distinct parameters: under it, Θ, Φ, and the other fields are rescaled in such a way that the new fields still satisfy the mean field theory equations, and the physical quantities change correspondingly. This implies that the dynamics of the geometry depends only on scale-invariant combinations of the physical quantities, such as masses measured in units of N̄. Whether a black hole is macroscopic or not depends on how its physical properties relate to the Planck scale. We adopt the conventions in [70]. There is an ambiguity in defining the Planck mass and time in two dimensions, so we use the four-dimensional definitions instead, which are M² Pl = ℏ/G₄ and τ² Pl = ℏG₄. From dimensional reduction, we have G = G₄ κ², which fixes the Planck mass and time in two-dimensional units.
Numerical Results
The numerical simulation of [70] bears a number of interesting results, some of them unanticipated. We focus on a few that are of most interest to the discussion at hand. Firstly, it was found that the dynamics was universal after a brief initial transient period.
Physical quantities, invariant under the rescaling discussed above, were found to match a universal curve until the end of the evaporation process. It was found that, for large enough M ADM , M ATV Bondi approaches a universal value of 0.864 N̄ in Planck units. This is a small mass, in that we expect it to evaporate in a time of order τ Pl . Further, there is no 'thunderbolt' curvature singularity along the last ray of the singularity [81]: the Ricci scalar of the mean field theory metric is regular on the last ray and goes to zero as I + R is approached. Further, in the mean field analysis the affine parameter along I + R was found to be finite on the last ray, and thus I + R is incomplete. This, along with the finiteness of the Ricci scalar on the last ray, implies that I + R might very well be extendible past the last ray such that it is unitarily equivalent to I − L . Upon close inspection of the Ricci scalar profile plots given in [70], it is clear that the Ricci scalar diverges only at the singularity and is finite everywhere else, including the regions near the horizon. Thus firewalls are completely absent in this model. This at first seems to be at odds with the paradox of [3], which states that an infalling observer encounters high energy particles in the vicinity of the event horizon of a black hole. The rest of this paper is dedicated to showing that the dynamics of this model does not satisfy one of the postulates of black hole complementarity. The Hawking radiation produced is in a mixed state and is in fact entangled with the region beyond the last ray. We describe this as a remnant scenario.
Entanglement Entropy in 2D CFTs
A very useful concept for understanding the state of the late-time Hawking radiation will be the entanglement entropy (or geometric entropy) of an interval [76,89,90]. This entropy is simply the von Neumann entropy S = −Trρ Σ ln ρ Σ of the density matrix ρ Σ for the state having traced out all degrees of freedom localized outside an interval Σ.
For a 1 + 1-dimensional CFT, it was shown [76,89] that the entanglement entropy of an interval of proper length Σ in the vacuum state is given, per chiral sector, by S = (c/12) ln(Σ²/ε₁ε₂), where the ε i are spatial UV cutoffs at either end and c is the central charge (c = N for our N free scalars). (An IR cutoff Λ is also necessary, but here we work in the limit Σ << Λ where the entropy is independent of Λ.)
This entropy diverges as we remove the UV cutoff at either end. What we are really interested in is the renormalized entanglement entropy, which measures the excess entanglement relative to the vacuum: which then is given by
Entanglement Entropy of Hawking Radiation in the CGHS Model
In the case at hand, we are interested in the entanglement entropy of the right-moving modes across a region of I + R , given in the affine coordinate y − along I + R as the interval [y − 1 , y − 2 ]. The interval is chosen to contain the vast majority of the Hawking flux. Let the conformal transformation that takes us to the vacuum be given by ỹ − = f (y − ) along I + R . To compute the entanglement entropy of this region, we consider a spacelike interval bounded by y − 1 and y − 2 that is asymptotically close to I + R , so that the cutoffs transform as ε̃ i = f ′ (y − i ) ε i , giving S Σ,ren = (N/12) ln(Σ̃² ε₁ ε₂ / Σ² ε̃₁ ε̃₂).
The transformation that takes us to the vacuum at I + R is exactly the transformation that makes the affine parameter along I + R agree with that along I − L : z − = f (y − ). Moreover, for large negative y − there is no radiation emitted, so that the affine parameters along I + R and I − L agree there. Thus, if we take a long interval such that y − 1 is sufficiently negative, then we can conclude that Σ̃ ≈ Σ and f ′ (y − 1 ) ≈ 1 to arbitrarily high precision. We conclude that the renormalized entanglement entropy takes the simple form S Σ,ren = −(N/12) ln f ′ (y − 2 ) = (N/12) ln(dy − /dz − )| y − 2 . The independence of the renormalized entropy from moving y − 1 further to the past is sensible. We expect the Hawking radiation to be largely independent of IR details of how the black hole was formed. Moreover, the invariance of the entropy as we extend the interval farther into the past implies that the excess entanglement is with degrees of freedom past the last ray, not at large negative y − . Note, however, that there is a more complicated dependence of the renormalized entropy on large relative changes in distances to the past and future boundaries for certain classes of remnants. We discuss this briefly in Section 4.1.
When y − 1 is sufficiently negative, the entropy also depends entirely on the change in the cutoff on I + R relative to I − L . Since we are considering the entropy of free right-moving fields, the entanglement of degrees of freedom inside and outside the interval cannot change from I + R to I − L . In terms of modes on I − L , we find excess entanglement above the vacuum because, by fixing the proper length of the cutoff, we are now counting the entanglement of modes on I + R that were above the cutoff on I − L [77]. However, when mapped forward to I + R , the identification of the modes carrying the excess entanglement is different. We elaborate on this point in Section 4.1.
Entanglement Entropy of Radiation from Moving Mirrors
The above concepts are nicely illuminated by examining the closely analogous example of radiation from a moving mirror in 1 + 1 dimensions [72]. We describe here an equivalence between calculating the entanglement entropy in the CGHS model, where the future and past affine parameters are related by z − = f (y − ), and doing the same computation for free scalar fields reflecting off a mirror with trajectory given by y + = f (y − ). Most importantly, moving mirrors allow us to more tangibly discuss the generic types of behaviour we can expect after the last ray in the CGHS model.
Consider a mirror that moves in a flat background, ds² = −dy − dy + , along the trajectory y + = f (y − ). The mirror provides perfectly reflecting boundary conditions for free scalar fields set in the vacuum along I − R . Again we are interested in the entropy of a spacelike interval spanning the null interval [y − 1 , y − 2 ]. The entropy can again be calculated by using a conformal transformation, ỹ − = f (y − ), to map the configuration to the vacuum, where the mirror trajectory is given by ỹ + = ỹ − . As in the CGHS model considered above, the entropy relative to the vacuum is determined by the rescaling of the interval and cutoffs under the conformal transformation. Exactly as before, we find S Σ,ren = (N/12) ln(Σ̃² ε₁ ε₂ / Σ² ε̃₁ ε̃₂).
We are interested in a mirror trajectory such that, for some Y − 1 , we have f (y − ) = y − for all y − < Y − 1 . Then, given an interval [y − 1 , y − 2 ] such that y − 1 << Y − 1 , the entropy again reduces to the same form as in the CGHS model. Now suppose further that our mirror accelerates from rest until reaching a constant left-moving velocity, i.e. f ′ (y − ) < 1. It then has a greater entanglement entropy than the vacuum. As this entropy is unchanged when we take y − 1 → −∞, the excess entanglement is with modes across the boundary at y − 2 . There is an interval of radiation along I + R while the mirror is accelerating, followed by what locally appears to be the vacuum. The large amount of entanglement between the radiating interval and the vacuum interval indicates that we should understand this trajectory as the analog of a remnant [72].
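For the constant-velocity case, the renormalized entropy can be evaluated in one line; this sketch uses our toy parametrization f ′ (y − ) = 1 before Y − 1 and f ′ = 1/k afterwards:

import numpy as np

def S_ren(N, k):
    # S_ren = -(N/12) ln f'(y2) = (N/12) ln k for an interval whose past end
    # lies deep in the static region and whose future end lies in the
    # constant-velocity region, matching the formula derived above.
    return (N / 12.0) * np.log(k)

print(S_ren(N=24.0, k=np.exp(16.0)))   # 32.0, cf. the CGHS estimate in Section 4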
As in the CGHS model, to better understand where the excess entanglement comes from, we can simply map all questions about the entanglement of different states on I + R to the entanglement of different intervals on I − R . We consider a fixed interval on I + R for our moving mirror and for the non-moving mirror vacuum. The two cases are illustrated in Figure 2. Because the majority of the length of the interval is contained in the region where the mirror is not moving, the length of the interval on I − R is the same at leading order for both the vacuum and the moving mirror. However, the cutoff at y − 2 is rescaled by a factor of f ′ (y − 2 ), relative to the vacuum, when traced back to I − R . The excess entropy for the moving mirror can thus be understood, in terms of modes on I − R , as modes that have been pulled down from above the UV cutoff.
Note that this example displays interesting behaviour of the renormalized entropy when we make large changes in the location of the boundaries. As we increase y − 2 , at leading order there is no change in S Σ,ren . But as we continue to move y − 2 farther along the region of constant f ′ (y − ), eventually the length of the portion of our interval in this region becomes comparable to that in the region where the mirror is not moving, and our approximation breaks down. In this regime, S Σ,ren is shrinking. If we were to increase y − 2 further, we would eventually reach a regime where Σ̃/Σ ≈ f ′ (y − 2 ), and S Σ,ren ≈ (N/12) ln f ′ (y − 2 ) < 0: the interval is now actually even less entangled than it would be in the vacuum state. The entropy can be completely rejuvenated by simply moving y − 1 sufficiently further to the past. While this isn't surprising when considering the modes on I − R , it is less intuitive in terms of the state on I + R . We explain this long-range behaviour in Section 4.1. As a separate example, we can also consider a mirror trajectory that, after accelerating to a constant velocity and producing Hawking radiation, quickly decelerates back to zero velocity in a short affine interval. We take y − 2 after this deceleration, and note that the length of the corresponding interval on I − R is the same as in the vacuum at leading order. Likewise, the cutoffs are also now the same as in the vacuum. Thus S Σ,ren ≈ 0. In this case the excess entanglement in the accelerating region is clearly purified by degrees of freedom that live in the short decelerating interval. This example does not exhibit the interesting long-range entanglement of the previous case.
CGHS Remnants
We now show that the renormalized entanglement entropy of the black hole radiation on I + R is large and scales like M ADM . This excess entanglement (above the vacuum) is between the radiation and the state in the causal future of the black hole. This remaining object has a large entropy and a small Bondi mass; it is a remnant.
We can rewrite the mean-field corrected Bondi flux in terms of the function s(y − ) ≡ ln(dy − /dz − ). When M * Bondi >> M Pl , we expect that the semiclassical description remains approximately valid and the black hole radiates at a constant rate set by the temperature T = ℏκ/2π. This is borne out by the numerical simulations [68], where it is found that, while M * Bondi >> M Pl and after the formation of the apparent horizon, ds/dy − ≈ κ. To leading order, for a large black hole, we can then write the balance equation (13) with a constant flux and integrate it from y − = −∞ to the last ray. Because the future and past affine parameters agree at large negative y − , and recalling that [68] found M ATV Bondi | Last ray to be universal and small compared to our choice of M ADM (M ATV Bondi | Last ray ≈ 0.84 N̄ M Pl ), this gives at leading order s(y − sing ) ≈ 2M ADM /(κ N̄ ℏG), i.e., (N/12) s(y − sing ) ≈ 4M ADM /(κ ℏG). We immediately recognize (N/12) s(y − sing ) as the renormalized entanglement entropy of a large segment of I + R ending at y − sing . It contains almost all of the black hole radiation. The entropy scales with M ADM and so is large in comparison to N̄ M Pl . A similar result was previously argued for using thermodynamic arguments (and assuming necessary corrections to the late time Hawking flux) [85].
Note that from [68], we can extract this result from their numerical computation of dy − /dz − for the infinite family of black holes with M ADM = 8N̄ M Pl . One finds a value of s at the last ray in good agreement with our approximate analytic calculation, which in this case gives S = (4/3)N (the numerical simulation uses units where κ = ℏ = G = 1). The fact that our analytic result is slightly larger is expected, as our approximation underestimates the rate of flux at late times as compared to the numerical simulation.
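The numbers quoted here can be checked directly from our reconstructed leading-order relation (all in the simulation's units κ = ℏ = G = 1, with N̄ = N/24):

# s(y_sing) ~ 2*M_ADM/(kappa*Nbar*hbar*G); with M_ADM = 8*Nbar this gives
# s = 16, so S_ren = (N/12)*s = 2*Nbar*16 = 32*Nbar = (4/3)*N, as in the text.
Nbar = 1.0
N = 24 * Nbar
M_ADM = 8 * Nbar
s_sing = 2 * M_ADM / Nbar
S_ren = (N / 12) * s_sing
assert abs(S_ren - (4.0 / 3.0) * N) < 1e-12
print(S_ren / N)   # 1.333...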
As was argued earlier in Section 3, this entropy is insensitive to our choice for the boundary of the interval at some large negative y − . Thus, it is clear that the excess entanglement is with the state to the future along I + R . Moreover, the corrected Bondi mass is small at y − sing and is always decreasing. We identify such a small mass state that has a large entanglement entropy as a remnant. While the separation of degrees of freedom on I + R into Hawking radiation and a remnant is somewhat artificial, the salient point is that the entropy is increasing as we include more of the Hawking radiation in our interval and that the entropy is still large when the remaining Bondi mass is small.
It is not possible to describe what the degrees of freedom that are entangled with the Hawking radiation look like on I + R without knowledge of dy − /dz − . Two general possibilities were described in the analogous example of the moving mirror, where the mirror either continued at constant velocity or decelerated to zero velocity. Indeed, if we had a positive mass theorem for the modified Bondi mass M ATV Bondi , then the modified flux would necessarily have to approach zero. This is equivalent to requiring that dy − /dz − approach a constant k.
Still assuming a positive mass theorem, we can go somewhat further: if the mirror is to return to zero velocity in the original frame, then it must do so slowly. Thus, the remnant is necessarily large on I + R . To see this, consider the integrated flux after the last ray. A positive mass theorem constrains this integrated flux and, since s(y − sing ) ≈ 2M ADM /(κ N̄ ℏG), while both s(y − ) at the last ray and the ratio M ADM /M ATV Bondi | Last ray are large, the affine interval over which the mirror returns to rest must be large. We thus rule out the class of mirror trajectories where the mirror quickly returns to rest in the original frame.

Figure 3: The radiated modes in the Hawking-like region are exactly the localized modes described in [73]. They are entangled with partner modes across the last ray of the black hole singularity.
Nevertheless, it is worth noting that, despite the fact that the remnant is large in terms of the affine parameter y − on I + R , it can still occupy a much shorter interval in terms of z − , because at the last ray dy − /dz − is exponentially large (dy − /dz − ≈ e^(M*)).
Identifying Entangled Modes
We wish to understand what the degrees of freedom are in the remnant region that purify the excess entanglement contained in the Hawking radiation. Recall that in the semi-classical treatment of the CGHS model [73], we can divide I−_L at the last ray into two semi-infinite intervals. We take the last ray here to be at z− = 0. As the last ray is at infinite affine parameter along I+_R, a basis of modes to the past of the last ray is given by modes that are positive frequency for z− near 0 (the region of Hawking radiation), and a corresponding choice of modes can be made to the future of the last ray. We can also construct localized wavepackets peaked at z− = −exp(−2πn/ε), with width ε^−1 in ln(−z−) and frequency ω_j = jε. One then finds [73] that in this basis, for small ε, the in-vacuum takes the thermal form, where the n_jn are occupation numbers for the localized wavepackets. Now, in the mean-field approximation, we can consider such wavepackets localized in the predominant region where the radiation is Hawking-like. These modes necessarily have the same entanglement as above and are purified by their partner modes equidistant in z− across the last ray, i.e., by the remnant. This is illustrated in Figure 3. (In terms of the affine coordinate y−, in the case equivalent to a constant-velocity mirror, y− = kz−, the Hawking radiation is contained in an interval of length ln(k) and is purified by modes within a distance k past the last ray. In the case at hand, the length of the purifying region scales as e^{M*}.) Moreover, because we restrict our consideration to the Hawking-like region, these modes are still the thermal radiation excited above the vacuum at I+_R. Thus we can identify the entanglement of the excited Hawking radiation above the vacuum at I+_R as contributing to the large entanglement entropy of the interval.
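To make the thermal form concrete, the following minimal Python sketch (our illustration, not code from the paper) evaluates the occupation number and the entanglement entropy carried by a single localized Hawking mode and its partner, assuming the standard two-mode squeezed (thermal) structure at temperature κ/2π; the frequency values chosen are arbitrary.

```python
import numpy as np

# Minimal sketch (not from the paper): each localized Hawking mode of
# frequency omega and its partner across the last ray form a two-mode
# squeezed state with thermal occupation n_bar; the reduced entropy is
# S = (n_bar + 1) ln(n_bar + 1) - n_bar ln(n_bar).

def thermal_occupation(omega, kappa=1.0, hbar=1.0):
    """Bose-Einstein occupation for a mode of frequency omega at T = kappa/2pi."""
    return 1.0 / (np.exp(2.0 * np.pi * omega * hbar / kappa) - 1.0)

def mode_entropy(n_bar):
    """Von Neumann entropy of a single-mode thermal reduced state."""
    return (n_bar + 1.0) * np.log(n_bar + 1.0) - n_bar * np.log(n_bar)

for omega in (0.1, 0.5, 1.0, 2.0):
    n = thermal_occupation(omega)
    print(f"omega = {omega:4.1f}  n_bar = {n:10.4e}  S = {mode_entropy(n):8.4e}")
```

Low-frequency modes dominate: their occupation, and hence their per-mode entanglement entropy, is largest, consistent with most of the interval entropy residing in the soft Hawking-like quanta.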
Long-Range Behaviour
While we have identified the modes that purify the Hawking radiation, this does not account for all of the excess entanglement in the interval, nor for the change in entropy as we vary the endpoints on large scales. Our calculations in Section 3.2 showed that the renormalized entropy falls as we expand Σ past the last ray, but it does so on a scale set by the overall length of Σ. Moreover, we could re-entangle the interval simply by moving the past boundary sufficiently far away that our original approximation was once again valid.
Because these large variations occur in a regime where both boundaries of the interval are far from the radiating region, the long-distance behaviour of the entanglement entropy is due to modes outside of the radiation region; these modes are locally in their vacuum configuration. It may seem puzzling that excess entropy above the vacuum can be due to modes that are locally in their vacuum state. We now show, though, that the difference in entropy with respect to the vacuum is due to the mismatch in the entanglement of these modes across the radiating region. Moreover, unlike our analysis of the state on I−, the analysis of the entanglement on I+ is insensitive to the UV cutoff.

Figure 4: (a) When the interval is much longer to the past of the kink than to the future, there are many modes whose entangled partner is within the future boundary in the vacuum, but outside the boundary in the kinked state. (b) When the interval is much longer to the future of the kink than to the past, there are many modes whose entangled partner is outside the past boundary in the vacuum, but inside the boundary in the kinked state.
The separation of entanglement entropy from Hawking-like modes and from long-range entanglement of the vacuum is clearest in the limit where a mirror instantaneously changes from rest to a constant velocity in the original frame. Then on I+_R, the state is everywhere in vacuum except at the kink in the mirror trajectory, where there is a mismatch in phase in gluing the Rindler modes of the two regions of vacuum together.
The hatted Rindler modes to the future of the kink are shifted relative to the unhatted ones, with z− = kz+ after the kink. In terms of the localized modes described above, this simply induces a shift such that v_jn is peaked near z− = exp(−2πn/ε + ln(k)) instead of near z− = exp(−2πn/ε). We see that, relative to the true vacuum, the kinked vacuum has modes prior to the kink entangled with modes localized further to the future.
This gives a nice picture of the entanglement entropy due to the mismatched vacuum regions, as illustrated in Figure 4. When the interval is much longer to the future of the kink than it is to the past, there are many modes whose entangled partner across z− = 0 is still within the interval in the vacuum state, but has been stretched outside the interval in the kinked vacuum state. This generates the large renormalized entropy.
Conversely, when the interval is much longer to the past of the kink than it is to the future, there are many modes to the future of z− = 0 whose entangled partners are outside of the interval in the vacuum state, but are inside of the interval in the kinked vacuum. This generates the negative renormalized entropy found earlier.
Lifting to Higher Dimensions
It was shown in [82] that the CGHS action, (1), (2), describes the physics of the near-horizon region of extremal dilatonic black holes in both four and five dimensions. There are several inequivalent ways of taking the extremal limit, which push either the horizon or the mouth infinitely far away. The classical CGHS action, which gives rise to the black hole solution and the flat dilaton solution, describes the physics of a geometry with a fixed-size throat, and thus corresponds to the limit where the mouth is pushed to infinity. In the horizon-limit metric and dilaton, d is the dimension of the two- or three-sphere on which we reduce to obtain the 2-dimensional CGHS action, and φ_0 is the value of the dilaton at infinity, which is held fixed when taking the extremal limit. The overall constant κ², which appeared as a cosmological constant term in the CGHS action, takes on the values 1/(4Q²) and 1/Q in the four- and five-dimensional cases respectively, where Q is the magnetic charge associated with a gauge field that points in the S^d directions. Taking the appropriate limit in x, we obtain the metric and dilaton of the throat limit. The problem we are studying corresponds to throwing a pulse into the flat linear dilaton region. In the classical case, the solution corresponds to patching the two solutions (48) and (49) along the infalling null shockwave (with an appropriate shift of the coordinates). The geometry develops a horizon, which on the null shockwave is located at x = (1/2) ln(GM_ADM/κ). For this to carry through to the uplifted case requires ln(GM_ADM/κ) ≫ 1, since the black hole in the classical CGHS case forms from throwing a pulse into the flat linear dilaton region. The classical uplifted picture then corresponds to the horizon of the dilatonic black hole being pulled up the throat by the infalling matter.
In the mean-field approximation scenario of [70], the infalling excitation forms an apparent horizon, which emanates from the null shockwave at x ∼ (1/2) ln(GM_ADM/κ). To discuss the uplift we have to make an assumption about the nature of the geometry past the last ray. Before the last ray, it was shown that the geometry near I+_R is asymptotically flat and that dy−/dz− approaches a constant at the last ray. Assuming that quantum gravity effects are confined to the vicinity of the singularity, we consider the case where the geometry is extended all the way past the last ray near I+_R. Thus at late times the geometry transitions back to the throat limit of the extremal dilatonic black hole, although in rescaled coordinates.
As we have shown, the horizon formed by the infalling excitation evaporates predominantly in the manner of Hawking, producing radiation purely entangled with the region behind the last ray. In the uplifted picture, the radiation is entangled with excitations deep down the throat, and thus the throat geometry itself can be viewed as a remnant. This suggests the same remnant scenario considered previously under the name of cornucopions [74]. Our conclusion is somewhat unsatisfactory, due to the much-discussed issues with remnants in higher dimensions, including the possibility of infinite remnant pair-production (see, for example, [91-95]). Perhaps more importantly, remnants are incompatible with AdS/CFT [3]. We stress, though, that remnants in the two-dimensional model do not necessarily imply remnants in the uplift, as reduction and quantization do not always commute.
Conclusions
We have demonstrated that a black hole in the CGHS model does not result in a firewall, but rather, as previously expected, decays to a highly entangled remnant: there is a large entanglement between an interval containing the majority of the Hawking flux and the matter state to the future of the last ray. The entanglement scales with the ADM mass of the black hole, while the remnant has a small, universal corrected Bondi mass.
The lift of the CGHS solution to higher dimensions gives a well-known picture of a near-extremal black hole Hawking radiating back down to extremality. The model then also suggests a higher-dimensional remnant, which lives in the throat outside the extremal horizon. Remnants are problematic due to issues with pair-production and are incompatible with AdS/CFT. While our result may be taken as mildly cautionary to the firewall proposal, we emphasize instead that this is strongly suggestive that the CGHS model, as presently understood, misses essential features of higher-dimensional gravity. Indeed, the model is a renormalizable local field theory and manifestly is not holographic.
It would be much more interesting to study non-local modifications of the CGHS model or altogether different two-dimensional models that better capture the postulates of complementarity. This would be a more useful toy model in which to search for the dynamical (non-)formation of firewalls.
| 10,124.8 | 2013-07-30T00:00:00.000 | ["Geology", "Physics"] |
Anomalous decay rate of quasinormal modes in Schwarzschild-dS and Schwarzschild-AdS black holes
The quasinormal modes of a massive scalar field in Schwarzschild black hole backgrounds present an anomalous decay rate, which was recently reported. In this work, we extend the study to other asymptotic geometries, such as Schwarzschild-de Sitter and Schwarzschild-AdS black holes. We find that such behaviour is present in the Schwarzschild-de Sitter background, i.e., the absolute values of the imaginary part of the quasinormal frequencies decay when the angular harmonic numbers increase if the mass of the scalar field is smaller than the critical mass, and they grow when the angular harmonic numbers increase if the mass of the scalar field is larger than the critical mass. The value of the critical mass increases as the cosmological constant and the overtone number increase. On the other hand, the anomalous behaviour is not present in Schwarzschild-AdS black hole backgrounds.
I. INTRODUCTION
The quasinormal modes (QNMs) and quasinormal frequencies (QNFs) [1-5] have recently acquired great interest due to the detection of gravitational waves [6]. Although the detected signal is consistent with Einstein gravity [7], there are possibilities for alternative theories of gravity due to the large uncertainties in the mass and angular momentum of the ringing black hole [8]. The QNMs and QNFs give information about the stability of matter fields that evolve perturbatively in the exterior region of a black hole without backreacting on the metric. Also, the QNMs are characterized by a spectrum that is independent of the initial conditions of the perturbation and depends only on the black hole parameters, the probe field parameters, and the fundamental constants of the system. The infinite discrete QNM spectrum consists of complex frequencies, ω = ω_R + iω_I, in which the real part ω_R determines the oscillation timescale of the modes, while the imaginary part ω_I determines their exponential decay timescale (for a review of QNMs see [3,9]).
The QNFs have been calculated by means of numerical and analytical techniques; some well-known numerical methods are the Mashhoon method, the Chandrasekhar-Detweiler method, the WKB method, the Frobenius method, the method of continued fractions, the Nollert method, and the asymptotic iteration method (AIM) and improved AIM, among others. In the case of a massless probe scalar field, it was found that for the Schwarzschild and Kerr black hole backgrounds the longest-lived modes are always the ones with lower angular number ℓ. This is expected in a physical system, because the more energetic modes with high angular number should have faster decay rates. In the case of a massive probe scalar field it was found [10-13], at least for the overtone n = 0, that for a light scalar field the longest-lived quasinormal modes are those with a high angular number ℓ, whereas for a heavy scalar field the longest-lived modes are those with a low angular number ℓ. This behaviour can be understood because, for a massive scalar field, even a small mass allows the field's fluctuations to sustain longer-lived quasinormal modes even when the angular number is large. This anomalous behaviour depends on whether or not the mass of the scalar field exceeds a critical value. The anomalous decay rate for small scalar field masses was recently discussed in [14].
Extensive studies of the QNMs of black holes in asymptotically flat spacetimes have been performed over the last few decades, mainly due to their potential astrophysical interest. Considering the case where the black hole is immersed in an expanding universe, the QNMs of black holes in de Sitter (dS) space have been investigated [15,16]. The AdS/CFT correspondence [17,18] stimulated interest in calculating the QNMs and QNFs of black holes in anti-de Sitter (AdS) spacetimes. It was shown in [19] that this principle leads to a correspondence between the QNMs of the gravity bulk and the decay of perturbations in the dual conformal field theory.
The aim of this work is to study the propagation of scalar fields in Schwarzschild-dS and Schwarzschild-AdS black hole backgrounds in order to see if there is an anomalous decay rate of the quasinormal modes. We carry out this study by using the pseudospectral Chebyshev method [20], which is an effective method for finding high overtone modes [21-25]. The gravitational QNMs of the Schwarzschild-de Sitter black hole were studied in [26-28]. The QNMs for this geometry were calculated in [29] by using the sixth-order WKB formula and the approximation by the Pöschl-Teller potential. It was also shown that the frequencies all have a negative imaginary part, which means that the propagation of a scalar field is stable in this background. The presence of the cosmological constant leads to a decrease of the real oscillation frequency and to a slower decay; high overtones were studied in Ref. [30]. Also, a novel infinite set of purely imaginary modes was found [31], which, depending on the black hole mass, may even be the dominant mode.
In the case of a massless scalar field in the background of a Schwarzschild-dS black hole we find two types of QNMs: the complex modes and the purely imaginary ones. These modes behave differently as the cosmological constant changes. For the complex modes, all the frequencies have a negative imaginary part, which means that the propagation of the scalar field is stable in this background; however, a larger cosmological constant leads to a decrease of the real oscillation frequency and to a slower decay. On the contrary, for the purely imaginary modes we find that an increasing cosmological constant leads to a faster decay, contrary to the complex QNFs.
In the case of a massive scalar field in the background of a Schwarzschild-dS black hole we find that the imaginary part of the frequencies has an anomalous behaviour, i.e., the QNFs either grow or decay when the angular harmonic numbers increase, depending on whether the mass of the scalar field is smaller or larger than a critical mass. We also find that as the value of the cosmological constant increases, the value of the critical mass also increases. As we discuss in the following, the mass of the scalar field redefines the cosmological constant to Λ_eff, and at the critical value of the scalar field mass at which the anomalous behaviour appears, Λ_eff goes to zero. In the case of the Schwarzschild-AdS black hole background we find that there is no anomalous behaviour of the QNMs, i.e., we find a faster decay when the mass of the scalar field increases and when the angular harmonic numbers decrease. We also show that in the case of a massive scalar field the purely imaginary quasinormal frequencies acquire a real part which depends on the scalar field's mass.
The manuscript is organized as follows: In Sec. II, we study the scalar field stability by calculating the QNFs of massless and massive scalar perturbations numerically in the background of Schwarzschild-dS and Schwarzschild-AdS black holes by using the pseudospectral Chebyshev method. We conclude in Sec. III.
II. SCALAR PERTURBATIONS
The Schwarzschild-(A)dS black holes are maximally symmetric solutions of the equations of motion that arise from the Einstein-Hilbert action, where G is the Newton constant, R is the Ricci scalar, and Λ the cosmological constant. The Schwarzschild-dS and Schwarzschild-AdS black holes are described by the metric (2) with f(r) = 1 − 2M/r − Λr²/3, where M is the black hole mass; Λ > 0 in the metric represents the Schwarzschild-dS black hole, while Λ < 0 represents the Schwarzschild-AdS black hole. In Fig. 1 we plot the behavior of f(r), where we observe that for Schwarzschild-dS black holes (left figure) the difference between the event horizon r_H and the cosmological horizon r_Λ decreases when the cosmological constant increases, while for the Schwarzschild-AdS black holes (right figure) there is one event horizon, which decreases when the absolute value of the cosmological constant increases. The QNMs of scalar perturbations in the background of the metric (2) are given by the scalar field solutions of the Klein-Gordon equation with suitable boundary conditions for a black hole geometry; here m is the mass of the scalar field ϕ. By means of a separation ansatz, the Klein-Gordon equation reduces to a radial equation, where we define κ = −ℓ(ℓ + 1), with ℓ = 0, 1, 2, ..., which represents the eigenvalue of the Laplacian on the two-sphere, ℓ being the multipole number. Now, defining R(r) = F(r)/r and using the tortoise coordinate r* given by dr* = dr/f(r), the Klein-Gordon equation can be written as a one-dimensional Schrödinger-like equation with an effective potential V_eff(r), which, parametrically, is V_eff(r*). In Fig. 2 we plot the effective potential for massless scalar fields in the background of Schwarzschild-dS black holes, and in Fig. 3 we plot a small zone for Λ = 0.11, where we can observe that the effective potential is positive in the zone between the event horizon and the cosmological horizon. In Fig. 4 we plot the effective potential for massive scalar fields, and in Fig. 6 we plot the effective potential for massless scalar fields in the background of a Schwarzschild-AdS black hole, which is positive outside the event horizon. Now, in order to compute the QNFs, we solve the differential equation (5) numerically by using the pseudospectral Chebyshev method; see for instance [20]. First, it is convenient to perform a change of variable in order to limit the values of the radial coordinate to the range [0, 1]. Thus, we define the change of variable y = (r − r_H)/(r_Λ − r_H), so that the event horizon is located at y = 0 and the cosmological horizon at y = 1, and rewrite the radial equation (5) accordingly. In the vicinity of the event horizon (y → 0) the function R(y) behaves as a combination of two terms: the first represents an ingoing wave and the second an outgoing wave near the black hole horizon. So, imposing the requirement of only ingoing waves at the horizon, we fix C_2 = 0. On the other hand, at the cosmological horizon the function R(y) again behaves as a combination of two terms: the first represents an outgoing wave and the second an ingoing wave near the cosmological horizon, and the boundary condition there requires D_1 = 0. Taking into account the behaviour of the scalar field at the event and cosmological horizons, we define an ansatz for R(y); by inserting this ansatz into Eq. (15), it is possible to obtain an equation for the function F(y).
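As an illustration of the setup (our sketch, not code from the paper), the snippet below evaluates f(r) for the Schwarzschild-dS case, locates the event and cosmological horizons by root finding, and checks that the effective potential is positive between them for a massless ℓ = 1 field. It assumes the standard form V_eff(r) = f(r)[ℓ(ℓ+1)/r² + f′(r)/r + m²] and units G = c = 1; the values M = 1 and Λ = 0.04 follow the text.

```python
import numpy as np
from scipy.optimize import brentq

M, Lam = 1.0, 0.04

def f(r):
    return 1.0 - 2.0 * M / r - Lam * r**2 / 3.0

def fprime(r):
    return 2.0 * M / r**2 - 2.0 * Lam * r / 3.0

def V_eff(r, ell, m):
    # Assumed standard massive-scalar effective potential on this background.
    return f(r) * (ell * (ell + 1) / r**2 + fprime(r) / r + m**2)

# Event and cosmological horizons: the two positive roots of f(r) = 0.
r_H = brentq(f, 2.0, 3.5)     # near the Schwarzschild value 2M
r_L = brentq(f, 3.5, 20.0)    # cosmological horizon
print(f"r_H = {r_H:.4f}, r_Lambda = {r_L:.4f}")

# V_eff is positive between the horizons for a massless l = 1 field.
r = np.linspace(r_H * 1.001, r_L * 0.999, 5)
print(V_eff(r, ell=1, m=0.0))
```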
The solution for the function F(y) is assumed to be a finite linear combination of Chebyshev polynomials, which is inserted into the differential equation for F(y). The interval [0, 1] is discretized at the Chebyshev collocation points, and the differential equation is evaluated at each collocation point. A system of algebraic equations is thus obtained, corresponding to a generalized eigenvalue problem, which is solved numerically to obtain the QNFs ω. In Table I we show some fundamental QNFs, in order to check the correctness and accuracy of the numerical technique used. We also show the relative error, defined in terms of ω_1, the result from [29], and ω_0, our result. The complex QNFs for this geometry were determined in Ref. [29] by using the WKB and Pöschl-Teller methods. We observe that the error does not exceed 0.37% when we compare our results with the WKB method and 2.198% with the Pöschl-Teller method. As observed, the frequencies all have a negative imaginary part, which means that the propagation of the scalar field is stable in this background. Also, we observe that the presence of a bigger cosmological constant leads to a decrease of the real oscillation frequency and to a slower decay [29]. On the other hand, in [31] another branch of purely imaginary QNFs was found for this geometry by using the pseudospectral Chebyshev method, with the metric expressed in Eddington-Finkelstein coordinates. Here, we consider the coordinates given by the metric, Eq. (2), along with the change of variables y = (r − r_H)/(r_Λ − r_H). Below, we show that these quasinormal frequencies acquire a real part which depends on the scalar field mass. In order to check the correctness and accuracy of the numerical techniques used, we show the purely imaginary fundamental QNFs in Table II, where the relative error vanishes. As observed, the frequencies are all negative, which means that the propagation of the scalar field is stable in this background. However, the presence of the cosmological constant leads to a faster decay when it increases, contrary to the complex QNFs. Also, it was shown that, depending on the black hole mass, these may even be the dominant modes [31].

Table II: Purely imaginary quasinormal frequencies (n = 0) for massless scalar fields with ℓ = 1 in the background of Schwarzschild-de Sitter black holes with M = 1. The values of ω_Im appear in Ref. [31].
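The mechanics of the pseudospectral Chebyshev method can be illustrated on a toy eigenvalue problem (u″ = −λ²u with Dirichlet conditions on [0, 1], exact eigenvalues λ = kπ) rather than the actual radial equation: build the Chebyshev differentiation matrix on the collocation points, evaluate the operator there, and solve the resulting (generalized) eigenvalue problem. In the QNF computation the right-hand-side matrix would carry the ω-dependent terms instead of the identity. This is our sketch, following Trefethen's standard construction, not the authors' code.

```python
import numpy as np
from scipy.linalg import eig

def cheb(N):
    """Chebyshev differentiation matrix on N+1 Gauss-Lobatto points in [-1, 1]
    (Trefethen, 'Spectral Methods in MATLAB')."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

N = 32
D, x = cheb(N)
D2 = (2.0 * D) @ (2.0 * D)   # chain rule for the map y = (x + 1)/2 to [0, 1]

# Impose u(0) = u(1) = 0 by deleting boundary rows/columns, then solve
# A u = lam B u; here B is the identity, while for the QNM radial equation
# B would contain the omega-dependent terms.
A = -D2[1:-1, 1:-1]
B = np.eye(N - 1)
lam, _ = eig(A, B)
lam = np.sort(np.sqrt(lam.real[lam.real > 0]))
print(lam[:4] / np.pi)       # ~ [1, 2, 3, 4] to spectral accuracy
```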
Now, in order to show the existence of an anomalous decay rate of the quasinormal modes, we plot in Fig. 7 the behaviour of the complex fundamental QNFs for different values of the parameter ℓ and different values of the mass m of the scalar field. The numerical values are in Appendix A, Table III. It is possible to observe that the imaginary part of these frequencies has an anomalous behaviour, i.e., the QNFs either grow or decay when the angular harmonic numbers increase, depending on whether the mass of the scalar field is smaller or larger than the critical mass, at which Im(ω)_ℓ = Im(ω)_{ℓ+1}. Also, the behaviour of the real and imaginary parts of the QNFs is smooth, and there is a slower decay of the mode when the mass of the scalar field increases. In order to show the same anomalous behaviour for other overtone numbers, we plot in Fig. 8 the imaginary and real parts of the complex QNFs. Note that the critical mass value increases when the overtone number n increases, for ℓ ≥ n. The numerical values are in Appendix A, Table V. The behaviour of the other branch is shown in Fig. 9. We can observe that this branch acquires a real part depending on the scalar field mass, see Table IV. Thus, as observed, all the frequencies have negative imaginary parts, which means that the propagation of the scalar field is stable in this background. Moreover, we observe a faster decay when the parameter ℓ increases, and a faster decay when the scalar field mass increases until the QNFs acquire a real part, after which the decay is stabilized. Also, the real part increases when the scalar field mass increases. However, there are other behaviours when we consider higher overtone numbers, see Fig. 10, where we plot the behaviour of the imaginary part of the QNFs as a function of the scalar field mass for different overtone numbers and ℓ = 0; Fig. 11 shows the real part. In these figures we can recognize two branches: for zero mass, a branch of complex QNFs given by the black curves, and a purely imaginary branch given by the blue dashed curves. We also observe the behavior of the branches when the scalar field mass increases. The black curves remain complex for all the values of m considered. Interestingly, the purely imaginary QNFs for zero mass can combine, yielding complex QNFs given by the continuous colored curves, when the mass increases, and then they split into purely imaginary QNFs which combine into new complex QNFs. As we have mentioned, it was shown that, depending on the black hole mass, the purely imaginary branch may even be the dominant mode [31]. Here, we observe that for a fixed value of the black hole mass, the purely imaginary QNFs can be dominant depending on the scalar field mass and the angular harmonic numbers. Note that for the fundamental QNFs, n = 0, and a small scalar field mass, the dominant branch is the purely imaginary one; however, for a scalar field mass m ≥ 0.15 the dominant branch is the complex one. Now, in order to see the influence of the cosmological constant on the critical mass, we plot in Fig. 12 the behaviour of the complex fundamental QNFs for different values of the parameter ℓ and of the mass m of the scalar field, but for a larger cosmological constant than in the previous case, Λ = 0.11. The numerical values are in Appendix B, Table VI. We can observe that for a greater cosmological constant the value of the critical mass increases.
It is convenient to compare our results with those of [19], so we express the mass M as a function of the event horizon radius r_H, where the cosmological constant is taken as Λ = −3/R², with R being the AdS radius. Now, under the change of variable y = 1 − r_H/r, the radial equation (5) is rewritten, where the prime denotes the derivative with respect to y. In the new coordinate the event horizon is located at y = 0 and spatial infinity at y = 1. In the neighborhood of the horizon (y → 0) the function R(y) behaves as a combination of two terms, where the first represents an ingoing wave and the second an outgoing wave near the black hole horizon. Imposing the requirement of only ingoing waves at the horizon, we fix C_2 = 0. Also, at infinity the function R(y) behaves as a combination of two terms, and imposing that the scalar field vanish at infinity requires D_2 = 0. Therefore, by considering the behaviour of the scalar field at the event horizon and at infinity, it is possible to define an ansatz for R(y). Then, by inserting this last expression into Eq. (15), we obtain an equation for the function F(y), which we solve numerically employing the pseudospectral Chebyshev method, as in the previous case. Now, in order to check our results, we verify that for r_H = 10/R and massless scalar fields we recover the QNFs found in Ref. [19], see Appendix C, Table VII; these QNFs were also studied in Ref. [32]. Also, in order to see if there is an anomalous decay rate of the quasinormal modes, we plot in Fig. 13 the behaviour of the fundamental QNFs for different values of the parameter ℓ and different values of the mass m of the scalar field. The numerical values are in Appendix C, Table VII. We observe that the anomalous behaviour of the QNMs is not present in Schwarzschild-AdS black holes for the cases considered. Also, the modes present a faster decay when the scalar field mass increases and when the angular harmonic numbers decrease. The frequency of the oscillations increases slightly when the scalar field mass increases and also when the angular harmonic numbers decrease.
III. CONCLUSIONS
In this work, we considered the Schwarzschild-dS and Schwarzschild-AdS black holes as backgrounds and studied the propagation of massive scalar fields through their QNFs, using the pseudospectral Chebyshev method, in order to determine whether there is an anomalous decay behaviour of the QNMs, as was observed in the asymptotically flat Schwarzschild black hole background.
The QNMs in the background of a Schwarzschild-dS black hole are characterized by one branch of complex QNFs and another consisting of purely imaginary QNFs. The purely imaginary QNFs arise for small scalar field masses, and eventually this branch acquires a real part; it is worth mentioning that, to our knowledge, this is the first time this behaviour has been reported. All the frequencies have a negative imaginary part, which means that the propagation of the scalar field is stable in this background. For the complex branch, the presence of the cosmological constant leads to a decrease of the real oscillation frequency and to a slower decay [29]. We showed that for the fundamental QNFs there is a slower decay rate when the mass of the scalar field increases, for a fixed angular harmonic number ℓ.
Furthermore, we showed the existence of an anomalous decay rate of the QNMs, i.e., the absolute values of the imaginary part of the QNFs decay when the angular harmonic numbers increase if the mass of the scalar field is smaller than a critical mass; on the contrary, they grow when the angular harmonic numbers increase if the mass of the scalar field is larger than the critical mass, and they also increase with the overtone number n, for ℓ ≥ n. We also showed that the effect of the cosmological constant is to shift the values of the critical masses, i.e., when the cosmological constant increases, the value of the critical mass also increases. It is worth mentioning here that the critical mass is an interesting quantity, because it shows that it is possible to have a scalar field whose decay rate does not depend on the angular harmonic number ℓ; its oscillation frequency, however, does depend on ℓ, increasing when ℓ increases.
It is interesting to note that, although the spacetime is asymptotically dS and the boundary conditions are imposed at the event horizon and at the cosmological horizon, the effective potential tends to −Λ(3m² − 2Λ)r²/9 at infinity for ℓ = 0, and it can diverge positively, diverge negatively, or vanish; specifically, it vanishes for m_c = ±√(2Λ/3). So, for Λ = 0.04 and a scalar field with mass m = m_c ≈ 0.163, and also for Λ = 0.11 and a scalar field with mass m = m_c ≈ 0.278, the effective potential vanishes at infinity. These are the critical masses we have considered with n = 0. So, for a scalar field with the critical mass and ℓ = 0, the effective potential at infinity is not divergent. Also, for ℓ ≠ 0 and m = m_c, the effective potential tends at infinity to a negative constant given by −ℓ(ℓ + 1)Λ/3, and the scalar field does not probe any divergence.
Another simple way to understand the appearance of the anomalous behaviour of the QNMs in the Schwarzschild-dS black hole background is to define an effective cosmological constant through the relation Λ_eff = Λ(3m² − 2Λ). Then, as already discussed, the critical mass of the scalar field for the anomalous behaviour and the corresponding value of the cosmological constant satisfy the relation m_c = √(2Λ/3), at which the effective cosmological constant becomes Λ_eff = 0, leading to the anomalous behaviour of the QNMs at that critical mass. The physical picture behind this is that there is a specific critical scale of the scalar field that cancels out the scale introduced by the cosmological constant.
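A quick numerical cross-check of these relations (our sketch; it assumes m_c = √(2Λ/3) exactly, whereas the critical masses quoted in the text are determined numerically and may differ slightly):

```python
import math

# Assumption: m_c = sqrt(2*Lambda/3), from the vanishing of the asymptotic
# effective potential and of Lambda_eff = Lambda*(3*m^2 - 2*Lambda).
for Lam in (0.04, 0.11):
    m_c = math.sqrt(2.0 * Lam / 3.0)
    Lam_eff = Lam * (3.0 * m_c**2 - 2.0 * Lam)   # vanishes at m = m_c
    print(f"Lambda = {Lam:.2f}: m_c = {m_c:.3f}, Lambda_eff(m_c) = {Lam_eff:.1e}")
```

This gives m_c ≈ 0.163 for Λ = 0.04, matching the text, and m_c ≈ 0.271 for Λ = 0.11, close to the numerically determined value quoted above.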
For the other branch, it was shown that, depending on the black hole mass, it may even be the dominant mode [31]. We found that for a fixed value of the black hole mass, the purely imaginary QNFs can be dominant depending on the scalar field mass and the angular harmonic numbers. Also, a faster decay is observed when the parameter ℓ increases, as well as when the scalar field mass increases, until the QNFs acquire a real part, after which the decay stabilizes and the frequency of the oscillations increases with the scalar field mass. Furthermore, we showed that this branch does not present an anomalous behaviour of the QNFs for the range of scalar field masses analyzed.
In the case of a Schwarzschild-AdS black hole background we have shown that the QNMs of massive scalar fields do not present an anomalous behaviour. In this case, according to the previous analysis, the effective potential always diverges at infinity, because the cosmological constant is negative, and consequently the scalar field probes the divergence of the effective potential at infinity. Also, the modes present a faster decay when the scalar field mass increases and when the angular harmonic numbers decrease, while the frequency of the oscillations increases slightly when the scalar field mass increases. Altogether, the anomalous behaviour depends on the curvature at infinity, i.e., it is possible in asymptotically flat and in asymptotically dS spacetimes and is not present in asymptotically AdS spacetimes. The anomalous behaviour could also depend on whether the scalar field probes the divergence of the effective potential at infinity, even though the boundary conditions can be imposed at a different point. It is worth mentioning that for a Schwarzschild black hole the effective potential tends to m² at infinity, so the scalar field does not probe any divergence, and consequently the anomalous behaviour of the QNMs can be observed.
It would be interesting to extend this work to the case of a charged background black hole and to study the behaviour of the QNMs in that background and in different asymptotic spacetimes. If, for example, there is an anomalous QNM decay for massive scalar perturbations in the background of a Reissner-Nordström black hole, then this behaviour of the QNMs may have important consequences for Strong Cosmic Censorship [33,34].
| 5,961.2 | 2020-04-20T00:00:00.000 | ["Physics"] |
INTEGRAL discovery of a high-energy tail in the microquasar Cygnus X-3
The X-ray spectra of X-ray binaries are dominated by emission of either soft or hard X-rays, which defines their soft and hard spectral states. Cygnus X-3 is among the X-ray binaries that show quite complex behavior, with various distinct spectral states. Because of its softness and intrinsically low flux above typically 50 keV, very little is known about the hard X-ray/soft gamma-ray (100-1000 keV) emission of Cygnus X-3. Using the whole INTEGRAL data base, we aim to explore the 3-1000 keV spectra of Cygnus X-3. This allows us to probe this region with the highest sensitivity ever, and to search for the potential signature of a high-energy non-thermal component as sometimes seen in other sources. Our work is based on the state classification carried out in previous studies with data from the Rossi X-Ray Timing Explorer. We extend this classification to the whole INTEGRAL data set and perform a long-term state-resolved spectral analysis. Six stacked spectra were obtained using 16 years of data from JEM-X, ISGRI, and SPI. We extract stacked images in three different energy bands, and detect the source up to 200 keV. In the hardest states, our phenomenological approach reveals the presence of an additional component above 50 keV, on top of the component usually interpreted as thermal Comptonization. We apply a more physical model of a hybrid thermal/nonthermal corona to characterize this component and compare our results with those of previous studies. Our modeling indicates a more efficient acceleration of electrons in the states where major ejections are observed. We find a dependence of the photon index of the power law on the strong orbital modulation of the source in the Flaring InterMediate (FIM) state. This dependence could be due to a higher absorption when Cygnus X-3 is behind its companion; however, the uncertainties on the column density prevent us from drawing conclusions.
Introduction
During their outbursts, X-ray binaries (XRBs) can pass through different accretion states associated with intrinsic emitting properties that are drastically different. Transient XRBs spend the majority of their life in a quiescent state, before entering a period of outburst in the so-called hard state. Here, the spectrum is dominated by emission in the hard (∼10-100 keV) X-rays: the commonly accepted interpretation is that of inverse Compton scattering of soft photons emitted by a cold (≤ 0.1 keV) accretion disk by hot electrons (50-100 keV) forming a hot "corona" (Haardt & Maraschi 1991). This state is also associated with a compact jet detected in the radio domain (e.g., Fender 2001; Stirling et al. 2001; Fuchs et al. 2003; Corbel et al. 2013). On the contrary, after the hard state, XRBs are found in a soft state with a spectrum dominated by thermal emission in the soft (∼1 keV) X-rays. The disk is thought to be closer to the compact object, the jet is quenched (e.g., Fender et al. 1999; Corbel et al. 2000), and the Comptonized emission is much weaker, possibly indicating the disappearance of the corona itself (Rodriguez et al. 2008b). The transition from the hard to the soft state is made through the so-called intermediate states (Belloni et al. 2005), with discrete and sometimes superluminal radio ejections marking the hard-soft frontier (Tingay et al. 1995).
Beyond a few hundred keV the picture is much more blurred. For decades, the weak flux of the sources and the lack of sensitive instruments have prevented efforts to detect them, or to probe the eventual connections of the >100 keV emission with the X-ray states. Observations with the Compton Gamma-Ray Observatory (CGRO, Grove et al. 1998; McConnell et al. 2000; Gierliński & Done 2003) and the INTernational Gamma-Ray Astrophysics Laboratory (INTEGRAL, Joinet et al. 2007; Laurent et al. 2011; Tarana et al. 2011; Jourdain et al. 2012, 2014; Rodriguez et al. 2015b) have nevertheless shown that hard X-ray excesses beyond 100 keV, so-called high-energy tails, are common in black hole XRBs. Even for the prototypical black hole XRB, Cygnus X-1 (Cyg X-1), for which a high-energy tail was confirmed very early in the INTEGRAL lifetime (Bouchet et al. 2003; Cadolle Bel et al. 2006), the origin of the tail is not yet well understood; it could come from synchrotron emission from the base of the jets (Laurent et al. 2011; Rodriguez et al. 2015b), or from a hybrid thermal/nonthermal electron distribution in the corona (e.g., Del Santo et al. 2013; Romero et al. 2014), or both, depending on the state (Cangemi et al., submitted to A&A). By studying the presence and behavior of hard tails in other similar sources, we hope to understand the commonalities and pin down the origin of these features. In this paper, we investigate the case of the very bright source Cygnus X-3 (Cyg X-3) and study the potential presence of a hitherto undetected high-energy tail. Cyg X-3 is one of the first discovered XRBs (Giacconi et al. 1967). Its nature still remains a mystery, because for this compact object in particular it is extremely difficult to obtain the mass function of the system (e.g., Hanson et al. 2000; Vilhu et al. 2009). However, its global behavior, the various spectra, and their properties seem to indicate a black hole rather than a neutron star (e.g., Zdziarski et al. 2013; Koljonen & Maccarone 2017; Hjalmarsdotter et al. 2009, hereafter H09). In this system the compact object is extremely close to its Wolf-Rayet companion (van Kerkwijk et al. 1992; Koljonen & Maccarone 2017), rendering the system peculiar in many ways when compared to other XRBs with a low companion mass, such as GX 339-4, or even to the high-mass system Cyg X-1. It is situated at a distance of 7.4 ± 1.1 kpc (McCollough et al. 2016) with an orbital period of 4.8 h (Parsignault et al. 1972). Cyg X-3 is a microquasar owing to the presence of strong radio flares, and is the brightest radio source of this kind (McCollough et al. 1999). These radio properties agree with those expected from compact jets (e.g., Schalinski et al. 1995; Molnar et al. 1988; Mioduszewski et al. 2001; Miller-Jones et al. 2004; Tudose et al. 2007; Egron et al. 2017), but Cyg X-3 also shows discrete ejections during flares. These jets are very variable, as indicated by the fast variations in radio (Tudose et al. 2007), and for this reason the source has its own states defined by their radio flux: quiescent, minor flares, and major flares, which occur after a period of quenched emission (Waltman et al. 1996). At higher energies, Cyg X-3 has been detected by Fermi in the γ-ray range; its flux is positively correlated with the radio flux and shows variations correlated with the orbital phase (Fermi LAT Collaboration et al. 2009).
During flaring states, the γ-ray spectrum seems to be well modeled by Compton scattering of the soft photons from the companion by relativistic electrons from the jets (Dubus et al. 2010; Cerutti et al. 2011; Zdziarski et al. 2018). Cyg X-3 shows a wider variety of states than the two canonical ones defined above. While the overall shape of its spectra is similar to those of other black hole XRBs, the values of the spectral parameters can be markedly different: the exponential cutoff is at a lower energy of ∼20 keV in the hardest states, whereas the disk is very strong in the softest states. Cyg X-3 also shows a strong iron line and very strong absorption. This complexity and its correlation with the radio behavior led to the definition of five X-ray states when considering the spectral shapes and flux levels (Szostek et al. 2008, hereafter S08), plus a 'hypersoft' one when using the hardness intensity diagram (HID, Koljonen et al. 2010, hereafter K10). The latter state is modeled with a pure 1.5 keV black-body spectrum and a Γ ∼ 2.5 power law to represent the hard X-ray emission. Here we give a brief description of these states, from the hardest to the softest (see K10, Sect. 4.2, for more details). Quiescent state. This state is characterized by a high flux in the hard X-rays and a low flux in the soft X-rays. The radio flux is about 60-200 mJy (K10) and anticorrelates with the hard X-rays (20-100 keV) but correlates with the soft X-rays (3-5 keV, S08). The spectrum appears to be well fitted by Comptonization models with an exponential cutoff around 20 keV and a strong iron line at 6.4 keV (e.g., K10).
Transition state. In this state, the radio and soft X-ray fluxes start to increase. The source starts to move to the left part (soft) of the HID. However, the hard X-rays have a spectral shape that is quite similar to that of the quiescent state. This state and the quiescent state would correspond to the hard state in a standard black hole XRB, such as GX 339-4 (Belloni et al. 2005). The quiescent and transition states correspond to the radio quiescent state, with a typical radio flux of ∼ 130 mJy (e.g., Szostek et al. 2008, K10).
Flaring hard X-ray (FHXR) state. The shape of the spectrum starts to soften significantly in this state. Minor flaring is observed in the radio. This state would correspond to the intermediate state of a standard black hole XRB, and corresponds to the radio minor flaring state (with a mean radio flux of ∼250 mJy; e.g., Szostek et al. 2008, K10). Flaring intermediate (FIM) state. Major flares are observed, the spectrum is softer than in the FHXR state, and the presence of the disk starts to clearly appear in the spectrum. This state would correspond to a soft/intermediate state in a standard black hole XRB.
Flaring soft X-ray (FSXR) state. The FSXR and hypersoft states seem similar in terms of spectra; however, we observe a higher radio flux in the FSXR state, whereas it is very low in the hypersoft state. These two states are separated by the "jet line" (K10) and, unlike in other black hole XRBs, a major flare occurs when the source goes from the hypersoft state to the FSXR state. The FSXR and FIM states correspond to the radio major flaring state (300 mJy to ∼10 Jy; e.g., Szostek et al. 2008, K10).
Hypersoft state. The radio flux is almost quenched in this state (∼ 10 mJy, e.g., Szostek et al. 2008, K10). We see a strong presence of the disk in the spectrum while no emission above 80 keV has been reported so far (H09, K10). The hypersoft state corresponds to the quenched radio state.
Despite a huge number of observations both in soft X-rays (1-10 keV) and hard X-rays (10-150 keV), notably with RXTE, and contrary to many other bright XRBs (e.g., Grove et al. 1998; Joinet et al. 2007; Rodriguez et al. 2008a; Laurent et al. 2011; Del Santo et al. 2016; Cangemi et al., in prep.), only one detection between 100 and 200 keV has been reported so far for Cyg X-3 (H09). Here we make use of INTEGRAL to probe the properties of this peculiar source over the full 3-1000 keV range covered by the observatory. We use the spectral classification of K10 to separate the data into the six spectral states defined therein and to extract state-resolved stacked spectra. The description of the observations and the data-reduction methods are reported in Sect. 2. Section 3 is dedicated to the state classification of the INTEGRAL data. We then present a phenomenological approach to the spectral fitting in Sect. 4, before considering more physical models in Sect. 5. The results are discussed in Sect. 6.
Data selection
We consider all INTEGRAL individual pointings, or science windows (scws), on Cyg X-3 since the launch of INTEGRAL in 2002. We restrict our selection to scws where the source is in the field of view of the Joint European X-ray Monitors (JEM-X, Lund et al. 2003), that is, where the source is less than 5° off-axis, in order to be able to use the soft X-ray classification with JEM-X data. As JEM-X is composed of two units that have not always been observing at the same time, this selection results in 1518 scws for JEM-X 1 and 185 scws for JEM-X 2. For scws with both JEM-X 1 and JEM-X 2 on, we only select the JEM-X 1 spectrum in order to avoid accumulating duplicate scws. In this paper, all the following scientific products were extracted on a scw basis, before being stacked in a state-dependent fashion. We exclude INTEGRAL revolutions 1554, 1555, 1556, 1557, and 1558 to avoid potential artifacts caused by the extremely bright (up to ∼50 Crab, e.g., Rodriguez et al. 2015a) flares of V404 Cygni during its 2015 June outburst.
INTEGRAL/JEM-X spectral extraction
After the selection of the scws according to the criteria defined in Sect. 2.1, the data from JEM-X are reduced with version 11 of the INTEGRAL Off-line Scientific Analysis (OSA) software. We follow the standard steps described in the JEM-X user manual. Although, as mentioned above, most of the data are obtained with JEM-X unit 1, we nevertheless extract all data products from any of the units that were turned on during a given scw. Spectra are extracted in each scw where the source is automatically detected by the software at the image creation and fitting step. Spectra are computed over 32 spectral channels using the standard binning definition.
The individual spectra are then combined with the OSA spe_pick tool according to the classification scheme described hereafter in Sect. 3.2. In some cases, JEM-X background calibration lines seem to affect the spectra (C.-A. Oxborrow and J. Chenevez, private communication). This effect is particularly obvious in bright, off-axis sources, and is thus amplified when dealing with large chunks of accumulated data. To avoid this problem, when particularly obvious, we omitted the JEM-X spectral channels at these energies. This is particularly evident in the FIM, FSXR, and hypersoft data, where the source fluxes at low energies are the highest. The appropriate ancillary response files (arfs) are produced during the spectral extraction and combined with spe_pick, while the redistribution matrix file (rmf) is rebinned from the instrument's standard rmf with j_rebin_rmf. We add a 3% systematic error to all spectral channels for each of the stacked spectra, as recommended in the JEM-X user manual. We determine the net count rates in the 3-6, 10-15, and 3-25 keV ranges, normalized to the on-axis value, for each individual spectrum. Table 1 indicates the number of scws for each state for both JEM-X 1 and 2. The upper panel of Fig. 1 shows the daily Swift/BAT light curve, whereas the JEM-X 3-25 keV light curve and the hardness ratio are shown in the middle and lower panels, respectively. The hardness ratio shows the spectral variability of the source, and we observe a transition to the softest states observed by INTEGRAL around MJD 54000, when the JEM-X count rate reaches its maximum. Another transition to a very low hardness ratio is seen with INTEGRAL around MJD 58530 (Trushkin et al. 2019).
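For illustration, here is a minimal sketch of the hardness-ratio construction from per-scw net count rates, with simple Gaussian error propagation; the rates below are made-up numbers, not OSA output.

```python
import numpy as np

# Illustrative per-scw net count rates (cts/s); not real JEM-X data.
rate_soft = np.array([12.1, 10.4, 25.3, 30.2])    # 3-6 keV band
err_soft  = np.array([0.4, 0.4, 0.6, 0.7])
rate_hard = np.array([8.2, 7.9, 3.1, 1.2])        # 10-15 keV band
err_hard  = np.array([0.3, 0.3, 0.2, 0.1])

# Hardness ratio and first-order error propagation for a ratio.
hr = rate_hard / rate_soft
hr_err = hr * np.sqrt((err_hard / rate_hard)**2 + (err_soft / rate_soft)**2)

for i, (h, e) in enumerate(zip(hr, hr_err)):
    print(f"scw {i}: HR = {h:.3f} +/- {e:.3f}")
```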
INTEGRAL/IBIS/ISGRI spectral extraction
To probe the behavior of the source in the hard X-rays we make use of data from the first detector layer of the Imager on Board the INTEGRAL Satellite (IBIS), the INTEGRAL Soft Gamma-ray Imager (ISGRI), which is sensitive between ∼20 and ∼600 keV (Lebrun et al. 2003). As OSA version 11.0 is valid for ISGRI data taken since January 1, 2016 (MJD 57388), we divide our analysis into two parts. The first part, analyzed with OSA 10.2, extends from MJD 52799 (INTEGRAL revolution 80) to MJD 57361 (rev 1618), while the second part, analyzed with OSA 11.0, extends from MJD 57536 (rev 1684) to MJD 58639 (rev 2098).
Light curves and spectra are extracted following standard procedures. For each scw, we create the sky model and reconstruct the sky image and the source count rates by deconvolving the shadowgrams projected onto the detector plane.
For the data analyzed with OSA 11.0, we extract spectra with 60 logarithmically spaced channels between 13 keV and 1000 keV. For the OSA 10.2 extraction, we create a response matrix with a binning that matches the one automatically generated by the OSA 11.0 spectral extraction as closely as possible; response matrix channels differ by at most 0.25 keV between OSA 10.2 and OSA 11.0. We then use the OSA 10.2 spe_pick tool to create stacked spectra for each spectral state according to our state classification (see Sect. 3.2 below). We add a 1.5% systematic error to both the OSA 10.2 and OSA 11.0 stacked spectra. The ISGRI spectra obtained with OSA 10.2 are analyzed in the 25-300 keV energy range, while those obtained with OSA 11.0 are analyzed over the 30-300 keV energy range.
INTEGRAL/SPI spectral extraction
To reduce the data from the SPectrometer aboard INTEGRAL (SPI; Vedrenne et al. 2003) we use the SPI Data Analysis Interface to extract averaged spectra. The sky model we create contains Cyg X-1 and Cyg X-3. We typically set the source variability timescale to ten scws for Cyg X-3 and five scws for Cyg X-1 (e.g., Bouchet et al. 2003). We then create the background model by setting the variability timescale of the normalization of the background pattern to ten scws. The background is generally stable, but solar flares, radiation-belt entries, and other nonthermal incidents can lead to unreliable results. In order to avoid these effects, we remove scws for which the reconstructed counts compared to the detector counts give a poor χ² (χ²_red > 1.5). This selection reduces the total number of scws by ∼10%. The shadowgrams are deconvolved to obtain the source flux, and spectra are then extracted between 20 keV and 400 keV using 30 logarithmically spaced channels.
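The χ² quality cut can be sketched as follows (illustrative arrays, not real SPI products; the scw identifiers are placeholders):

```python
import numpy as np

# Hypothetical scw identifiers and reduced chi^2 of the background-model fit.
scw_ids  = np.array(["0800-003", "0800-004", "0800-005", "0801-001"])
chi2_red = np.array([1.12, 1.73, 0.98, 1.51])

# Keep only scws whose reconstruction gives chi2_red <= 1.5.
keep = chi2_red <= 1.5
print("kept scws:", scw_ids[keep])
print(f"removed fraction: {100 * (~keep).mean():.0f} %")
```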
Proportional Counter Array data reduction and classification
We consider all Rossi X-ray Timing Explorer (RXTE, Bradt et al. 1993) Proportional Counter Array (PCA, Glasser et al. 1994) Standard-2 observations of Cyg X-3, that is to say 262 observations from 1996 to 2011. This adds three years of new observations compared to the work done by K10. The data are reduced with version 6.24 of HEASOFT. We followed a procedure very similar to the one used by Rodriguez et al. (2008a,b) to filter out the data from bad time intervals and to obtain PCA light curves and spectra for GRS 1915+105, another peculiar and very variable microquasar. We consider the data from all layers of Proportional Counter Unit #2, which is the best calibrated and the one that is always turned on. Background maps are estimated with pcabackest using the bright-source model. Source and background spectra are obtained with saextrct, and response matrices by running the pcarsp tool. For all observations, we extract net source count rates using the show rates command in xspec in three energy ranges: 3-6 keV, 10-15 keV, and 3-25 keV. We use 3-6 keV and 10-15 keV in order to be as consistent as possible with the approach of K10, permitting us to probe the different spectral regions in a model-independent manner. The last energy band extends to 25 keV in order to be consistent with the JEM-X spectral band that we use for our classification of the INTEGRAL data. Figure 2 (left) shows the HID obtained from the PCA data. Each point corresponds to one observation. The colored dots are the observations analyzed by K10, while the black ones are the new additions from the present work. PCA count rates are normalized to the maximum count-rate value. The state of each observation is attributed according to Table A.1 of K10. We define the best state division based on this classification and divide the HID into six different zones.
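A schematic version of this classification step is sketched below. The hardness boundaries are placeholders chosen for illustration only; the actual divisions follow Table A.1 of K10.

```python
import numpy as np

# Hypothetical HR boundaries (upper bound, state), softest to hardest.
STATE_BOUNDS = [
    (0.10, "hypersoft"),
    (0.20, "FSXR"),
    (0.35, "FIM"),
    (0.55, "FHXR"),
    (0.75, "transition"),
    (np.inf, "quiescent"),
]

def classify(hr):
    """Assign a spectral state from the hardness ratio."""
    for bound, state in STATE_BOUNDS:
        if hr < bound:
            return state

rates_3_25 = np.array([120.0, 340.0, 80.0])   # illustrative 3-25 keV rates
hr = np.array([0.62, 0.08, 0.41])             # illustrative hardness ratios
norm = rates_3_25 / rates_3_25.max()          # normalize to the maximum rate
for r, h in zip(norm, hr):
    print(f"rate = {r:.2f}, HR = {h:.2f} -> {classify(h)}")
```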
Extension of the RXTE/PCA state classification to INTEGRAL
When evaluating the hardness ratio, the observed fluxes are convolved with the instrument response matrices and are thus instrument dependent. Therefore, we cannot simply use the PCA state boundaries to classify the JEM-X data. The differences are illustrated in Table A.1 with 16 quasi-simultaneous JEM-X/PCA observations (i.e., within 0.1 MJD). These examples show that there is no one-to-one correspondence between the JEM-X and PCA values of the hardness ratio. We therefore need to convert the PCA boundary values to those of JEM-X.
To do so, and thus to extend the classification of K10 to the JEM-X data, we proceed as follows. For each division (quiescent/transition, transition/FHXR, FHXR/FIM, FIM/FSXR, FSXR/hypersoft), we select the closest PCA observations on both sides of the state division line, and then search for the model that best fits the spectrum. Subsequently, we simulate JEM-X data using that model and the appropriate redistribution matrix, which allows us to calculate the count rate in the same energy ranges as used in K10.

Figure 2: HID of Cyg X-3. Left: HID of Cyg X-3 using PCA data (black dots). Each colored dot corresponds to an observation already classified by K10. Vertical black dashed lines correspond to our division into six states: quiescent (light blue), transition (dark blue), FHXR (purple), FIM (pink), FSXR (red), and hypersoft (orange). Right: HID with PCA and JEM-X data superposed. Gray dots correspond to the same data as in the left plot and colored dots correspond to classified JEM-X scws, using the same color code as in the left figure. Vertical gray bands correspond to the JEM-X state divisions obtained with the method described in the text.
We use xspec version 12.9.1p for all data modeling (Arnaud 1996). Thanks to the show rates command in xspec, we can find the corresponding hardness ratio for JEM-X. This allows us to draw new state divisions that can be used for JEM-X. Figure 2 (right) shows the JEM-X HID; the vertical gray bands correspond to the new JEM-X state divisions obtained with the method described above. As for PCA, the JEM-X count rates are normalized to the maximum value. For our accumulated JEM-X spectra, we only select scws that lie outside the state division bands (i.e., the colored dots in the plot) in order to be sure that our accumulated spectra are not polluted by scws with an ambiguous spectral classification. After this selection, a total of 1501 classified scws remain.
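Conceptually, the boundary conversion amounts to folding one and the same best-fit photon model through each instrument's redistribution matrix and recomputing the band ratios. The toy sketch below uses a random, column-normalized stand-in matrix and a generic cutoff power law; in practice this is done inside xspec with the real JEM-X rmf and the show rates command.

```python
import numpy as np

# Toy input photon grid over the 3-25 keV classification band.
e_lo = np.linspace(3.0, 25.0, 45)
e_hi = e_lo + (e_lo[1] - e_lo[0])
e_mid = 0.5 * (e_lo + e_hi)

# Stand-in boundary model: a cutoff power law (arbitrary parameters).
photons = e_mid ** -2.5 * np.exp(-e_mid / 20.0)

# Stand-in redistribution matrix: near-diagonal, column-normalized.
rng = np.random.default_rng(0)
R = np.eye(len(e_mid)) + 0.05 * rng.random((len(e_mid), len(e_mid)))
R /= R.sum(axis=0)

counts = R @ photons        # predicted count spectrum for this "instrument"
soft = counts[(e_mid >= 3.0) & (e_mid < 6.0)].sum()
hard = counts[(e_mid >= 10.0) & (e_mid < 15.0)].sum()
print(f"instrument HR(10-15 / 3-6) = {hard / soft:.3f}")
```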
As the SPI background determination is based on the dithering pattern (Vedrenne et al. 2003), we have to select continuous sets of scws (∼15) in order to obtain good precision on the flux evaluation. The JEM-X field of view is smaller than that of SPI or IBIS, and it appears that for some observations the source is outside the JEM-X field of view while being observed by SPI and/or IBIS. When considering a set of >15 scws, some of them therefore remain unclassified, as we build our classification from JEM-X observations. To overcome this problem and exploit the SPI data, we create dedicated lists of scws for the SPI analysis. To obtain lists of 15 consecutive classified scws, we make the following approximation: we consider that a given SPI scw has the same state as the previous (or next) JEM-X-classified one if (1) the classification of the source is known for scws less than 10 scws away (i.e., the variability threshold of the source) and (2) no obvious state change is seen in the global light curve.
As the list of classified scws in the FSXR state does not satisfy these conditions, we do not extract a SPI spectrum for this state. Table 1 summarizes the number of scws for the different extractions.
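The two selection rules can be summarized with a plain-Python sketch; the thresholds (10 scws for the propagation, runs of 15 for the SPI lists) are the values quoted above, and the propagation leaves scws with no classified neighbor untouched.

def propagate_labels(labels, max_gap=10):
    """labels: list of state names or None, in scw order.
    Copy the nearest JEM-X classification to unclassified scws fewer than
    max_gap scws away (the source variability threshold)."""
    out = list(labels)
    for i, lab in enumerate(labels):
        if lab is None:
            for d in range(1, max_gap + 1):
                left = labels[i - d] if i - d >= 0 else None
                right = labels[i + d] if i + d < len(labels) else None
                if left is not None:
                    out[i] = left
                    break
                if right is not None:
                    out[i] = right
                    break
    return out

def runs_of_state(labels, min_len=15):
    """Yield (start, end, state) for runs of >= min_len identical labels,
    i.e., candidate consecutive-scw lists for the SPI extraction."""
    start = 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            if labels[start] is not None and i - start >= min_len:
                yield start, i, labels[start]
            start = i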
Images
We
Spectral fitting: phenomenological approach
Because of the poor statistics of the JEM-X 2 and ISGRI/OSA 11.0 spectra in the FIM and hypersoft states, we do not use them for our spectral fitting. Therefore, we use five spectra (JEM-X 1, JEM-X 2, ISGRI/OSA 10.2, ISGRI/OSA 11.0, and SPI) for the quiescent and transition states, four (JEM-X 1, JEM-X 2, ISGRI/OSA 10.2, and SPI) for the FHXR state, three (JEM-X 1, ISGRI/OSA 10.2, and SPI) for the FIM and hypersoft states, and two (JEM-X 1 and ISGRI/OSA 10.2) for the FSXR state. Figure 4 shows the resulting stacked spectra, which are fitted simultaneously.
We first use a phenomenological approach to investigate the spectral behavior of the source at high energy and to test for the possible presence of a high-energy tail above ∼100 keV.
Method
We first model our spectra in xspec with an absorbed (tbabs, Wilms et al. 2000) power law plus an iron line. The iron line centroid is fixed at 6.4 keV and its width is limited to below 0.4 keV. We use angr solar abundances (Anders & Ebihara 1982). A simple power law does not provide acceptable fits to the quiescent, transition, or FHXR spectra: large residuals around 20 keV indicate the presence of a break or a cutoff, in agreement with previous findings (K10, H09). We then use a power law with a high-energy cutoff (cutoffpl) instead. To obtain statistically good fits, a reflection component (reflect, Magdziarz & Zdziarski 1995) is also added in the quiescent, transition, FHXR, and FIM states; the reflection factor is limited to below 2. We also add a multicolor black-body component (diskbb) for the FIM, FSXR, and hypersoft states.
Without the addition of a power law in the quiescent and transition states, we obtain significant residuals at high energy; the reduced χ² values are 5.0 (119 dof) and 8.3 (121 dof), respectively. We test the significance of this additional component by performing an F-test and find F-test probabilities (i.e., the probability that the statistical improvement due to the additional power-law component is due to chance) of 7.9 × 10⁻³³ and 2.4 × 10⁻⁴⁷ for the quiescent and transition states, respectively. In summary, we use constant*tbabs*reflect(cutoffpl + powerlaw + gaussian) for the quiescent and transition states, constant*tbabs*reflect(cutoffpl + gaussian) for the FHXR state, and constant*tbabs(powerlaw + diskbb + gaussian) for the FIM, FSXR, and hypersoft states. The constant component allows us to take into account calibration issues between the different instruments and the different samples of scws used to build the instrument spectra. We fix the inclination for the reflection at i = 30° (Vilhu et al. 2009; Zdziarski et al. 2012, 2013). The 3-300 keV spectral parameters obtained for each spectral state are reported in Table 2.
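For concreteness, a minimal PyXspec sketch of the quiescent/transition model setup follows. Only the frozen line energy, the width limit, the inclination, and the reflection limit are taken from the text; the starting values are placeholders.

from xspec import Model

# Quiescent/transition model; the FHXR state drops the extra power law and
# the soft states use constant*tbabs(powerlaw + diskbb + gaussian).
m = Model("constant*tbabs*reflect(cutoffpl + powerlaw + gaussian)")

m.gaussian.LineE.values = 6.4                       # centroid fixed at 6.4 keV
m.gaussian.LineE.frozen = True
m.gaussian.Sigma.values = "0.2,,0.0,0.0,0.4,0.4"    # width limited to < 0.4 keV
m.reflect.cosIncl.values = 0.866                    # cos(30 deg)
m.reflect.cosIncl.frozen = True
m.reflect.rel_refl.values = "1.0,,0.0,0.0,2.0,2.0"  # reflection factor < 2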
Results
The best parameters obtained in the 3-400 keV band are reported in Table 2, and Fig. 4 shows the spectra and best-fit models for each state. The value of the column density varies between 5 and 10 × 10²² cm⁻², with a mean value of ∼7.6 × 10²² cm⁻². For the quiescent and transition states, the cutoff energy is around 15 keV, in agreement with previous results (K10, H09). For the FHXR state, the cutoff value is quite high (E cut = 434 keV). We therefore try a simple power law to represent this spectrum, but this model does not converge to an acceptable fit (χ² of 738.51 for 93 dof), showing the need for the cutoff, albeit with poorly constrained parameters. The increase in cutoff energy is correlated with the increase in the photon index of the cutoff power law, which evolves from Γ cut ∼ 1.5 in the transition state to Γ cut ∼ 3 in the FHXR state. The disk temperature kT disk found in the FIM, FSXR, and hypersoft states is consistent with the results of K10 and H09.
Concerning the simple power law (additional or not), we observe a photon index of Γ po ∼ 2.5 with a broadly similar value in three out of the six states (transition, FSXR, hypersoft). The FHXR state does not show any power-law component. In the quiescent state, Γ po is marginally compatible with these, reaching 2.21 +0.11 −0.14, its lowest value of all states. In the FIM state, Γ po reaches its highest value (see also Table 3).
In order to verify that our results, and in particular the need for an extra component, do not depend too strongly on the curved low-energy (< 50 keV) spectra, we investigate the 30-200 keV range alone. While in five out of the six states a simple power law fits the spectra well, a clear broken power law with an energy break at 73 ± 8 keV is needed for the transition state (the one with the highest statistics). We find Γ 1 = 3.52 ± 0.03 and Γ 2 = 3.20 ± 0.10. This confirms the existence of a component additional to the simple power law above typically a few tens of keV.
Influence of the orbital modulation?
Cyg X-3 shows strong orbital modulation in its X-ray flux (e.g., Zdziarski et al. 2012). We therefore investigate whether the presence of the tails is correlated with the orbital modulation. In order to do this, we use the ephemeris of Singh et al. (2002) and create a phase-binned light curve. We define three different phase bins: the first corresponds to the inferior conjunction, that is, 1/3 < φ < 2/3 (the compact object being in front of the star), the second corresponds to the superior conjunction, that is, 0 < φ < 1/6 and 5/6 < φ < 1 (the compact object being behind the star), and the third bin corresponds to the transition between the two others. Figure 5 shows the folded light curve; each point represents one scw, and each scw is classified according to its phase bin. The three different bins are represented in blue (bin 1: inferior conjunction, 1/3 < φ < 2/3), orange (bin 2: superior conjunction, 0 < φ < 1/6 and 5/6 < φ < 1), and purple (bin 0: between the two conjunctions).
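A minimal sketch of this phase-binning step is given below; the linear ephemeris values stand in for those of Singh et al. (2002) and are placeholders only, while the bin edges are the ones defined above.

import numpy as np

P_DAYS = 0.19969   # orbital period in days (placeholder for the ephemeris)
T0_MJD = 50000.0   # reference epoch in MJD (placeholder)

def phase_bins(t_mjd):
    """Return the orbital phase and bin index (0, 1, or 2) for each scw."""
    phi = np.mod((np.asarray(t_mjd, dtype=float) - T0_MJD) / P_DAYS, 1.0)
    bins = np.zeros(phi.shape, dtype=int)    # bin 0: between conjunctions
    bins[(phi > 1/3) & (phi < 2/3)] = 1      # bin 1: inferior conjunction
    bins[(phi < 1/6) | (phi > 5/6)] = 2      # bin 2: superior conjunction
    return phi, bins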
To check on the potential influence of the orbit, we compare the two extreme positions and create stacked orbital-phase- and state-dependent spectra. Figure 6 shows the count-rate ratio between spectra extracted from bin 1 (inferior conjunction) and spectra extracted from bin 2 (superior conjunction). While the ratio remains at a roughly constant value of 2 from 3 to 20 keV, we observe a slight decrease from 20 keV to 300 keV for the transition, FHXR, and FIM states; this decrease is not statistically significant in the other states. Dashed lines represent the best fits with a constant in the energy range 3-20 keV; the results are indicated in the legend of Fig. 6.
We investigate the possible change in the slope of the spectrum as a function of the phase bin. In order to do this, we use the same phenomenological model as described in Sect. 4.1. The values of the photon index are reported in Table 3 for each state and for each bin.
The values are compatible within their 90% confidence ranges, except in the FIM state, where we observe a 7.5% increase in the photon index value when Cyg X-3 is in inferior conjunction; that is, the spectrum is softer when the source is in front of the star. This behavior is discussed in Sect. 6.3.
Physical approach: hybrid thermal/nonthermal model: eqpair
In order to better constrain the properties of the nonthermal component observed in our phenomenological approach, we now apply a more physical model of hybrid thermal/nonthermal coronae to our spectra.
Fig. 6. Count-rate ratio between the spectra of bin 1 and bin 2 for each state, according to the same color code as in Fig. 2.
The eqpair model
The complete model is described in Coppi (1999); here we give a brief summary. In this model, the total luminosity of the source L rad is re-expressed as a dimensionless parameter, the "compactness" l rad = L rad σ T / (R m e c³), where R is the characteristic radius of the corona (assuming it is spherical), σ T is the Thomson cross-section, m e the electron mass, and c the speed of light. The luminosity from soft photons from the disk is parametrized by another compactness parameter l s , and the spectral shape of these soft photons is assumed to be a black body with a temperature kT bb . The amount of heating is expressed by the ratio of the compactness of the Comptonized medium to the compactness of the seed photons, l h /l s . In this model, electrons from a cool background plasma with an optical depth τ p are accelerated to form the observed nonthermal tail; the Lorentz factors of the accelerated nonthermal electrons are assumed to be distributed according to a power law within the range γ = 1.3-1000. The luminosity of these nonthermal electrons, L nth , is once again described by a dimensionless compactness, l nth . The nonthermal processes through which particles are allowed to cool are Compton scattering, synchrotron radiation, and bremsstrahlung emission. To balance these nonthermal processes, the compactness l th represents the dimensionless luminosity from thermal interactions between particles, i.e., Coulomb interactions. The reflection model implemented is ireflect (Magdziarz & Zdziarski 1995).
For our fitting with this hybrid model, we add absorption and an iron line, and therefore the model is computed in xspec as: constant*tbabs(eqpair + gaussian).
For the FIM state only, we need to add an ionized iron edge to correctly describe the spectrum. The parameters allowed to vary freely in xspec are the ratio l h /l s , which is related to the slope of the Comptonized spectrum, l nth /l h , the temperature of the black body kT bb , the optical depth τ p , the index of the injected electron distribution Γ inj , the fraction of the scattering region intercepted by reflecting material Ω/2π, and the width of the iron line (restricted to a maximum value of 0.4 keV).
The luminosity of the seed photons is not well constrained, but the χ² minimum oscillates between l s = 40 and 140; we therefore fix it at the value l s = 100, which corresponds to a small radius of the corona for a high luminosity (Zdziarski et al. 2005). Table 4 summarizes the parameters obtained with the hybrid model, and Fig. 4 shows the residuals for each state. We observe that the quiescent and transition states are characterized by high values of l nth /l h (> 60%) and l h /l s (> 1), expressing a spectrum dominated by Comptonization processes. The energy cutoff around 15-20 keV and the shape of the nonthermal component, with an electron injection index of ∼3.5, are well reproduced by the model. The reflection parameter is rather small compared to our phenomenological approach.
Results
In the FHXR state, the value of l h /l s = 0.75 decreases significantly. We observe a clear rise in the seed photon temperature, kT s = 390 eV. We find a reflection value of Ω/2π = 0.94 +0.13 −0.10, compatible with the one found in the quiescent and transition states, and the electron injection index Γ inj = 3.57 +0.41 −0.15 remains close to the values found previously.
Concerning the FIM state, the spectrum is characterized by an important disk photon compactness (l h /l s = 0.51 ± 0.03, i.e., ∼50% of the total luminosity is supplied by the disk emission). The fraction l nth /l h = 0.50 +0.11 −0.10 and the electron injection index Γ inj = 2.78 ± 0.14 are also smaller than in the FHXR state. This state is very close to the very high state of H09.
Finally, the FSXR and hypersoft states are described by an important contribution of the disk photons (l h /l s ∼ 0.1, i.e., ∼90% of the total luminosity is supplied by the disk emission). They also show important nonthermal emission (l nth /l h > 50%, i.e., the total heating is dominated by nonthermal processes) with an electron injection index of Γ inj ∼ 3. The values of τ p are not well constrained in these states, and we note in particular that they are not consistent with the results for the nonthermal and ultrasoft states found by H09. Moreover, the electron injection index in their ultrasoft state is smaller, resulting in a much harder spectrum at higher energy.
Discussion
We use the whole INTEGRAL database of Cyg X-3 in order to extract stacked spectra for each state previously defined by K10. Although this static approach is not suited to studying the source variability, it permits us to obtain results at high energies (> 100 keV) that are statistically more robust than in all previous studies, allowing us to probe the properties of the nonthermal hard-X-ray emission with the highest sensitivity.
Origin of the high-energy tail
It is interesting to note that a nonthermal power-law-like component is present in all the states of Cyg X-3. The differences in the photon indices of this detected tail can be explained by one of two main scenarios: (1) the mechanism that gives rise to the tail is the same in all the states and undergoes some changes during state transitions, or (2) the mechanism is different depending on the state.
(1) All of our spectra are statistically well modeled by the thermal/nonthermal corona model eqpair. With this model, the power-law component observed comes from a nonthermal distribution of electrons. The differences observed in the electron injection indices, and especially between those of the quiescent/transition/FHXR and FIM/FSXR states seem to point to a modification of the mechanism responsible for the electron acceleration through state transition.
We know that in the quiescent and transition states the observed radio flux is 60-300 mJy (K10) and the radio spectrum is flat, implying the presence of compact jets in those states. On the other hand, we also know that powerful ejections with a radio flux of about 10 Jy (Corbel et al. 2012) take place in the FIM state, immediately after a period in the hypersoft state where the radio flux is quenched. This could indicate that the mechanism responsible for the electron acceleration in the corona is linked to the behavior of the jet (compact jet vs. discrete ejections). This link has also been observed by Corbel et al. (2012), who see a correlation between the radio flux and the hard-X-ray flux (30-80 keV) during a major outburst. In this scenario, the nonthermal Comptonization component varies less than the thermal one, which is in turn responsible for the large variations of the high-energy flux. This would be compatible with the interpretation of the thermal corona being the base of the compact jet (Markoff et al. 2005), because the latter is also seen to vary greatly (disappear) as the source transits from the hardest states to the softest.
(2) Even if a single hybrid Comptonization model represents all the data well, it is also possible that the high-energy tail has a different origin depending on the state, especially in states where the high energies are dominated by thermal Comptonization while a compact jet is seen in radio (K10). The direct influence of the synchrotron emission from a hard-state jet has been proposed in the case of Cygnus X-1 (Laurent et al. 2011; Jourdain et al. 2014; Rodriguez et al. 2015b), another high-mass microquasar, while a hybrid corona could be at the origin of the high-energy emission in softer states (Cangemi et al. submitted). To carry out a basic test of this possibility in Cyg X-3, we gather infrared data from Fender et al. (1996) and a radio spectrum from Zdziarski et al. (2016). The infrared data were collected using the United Kingdom Infra-Red Telescope (UKIRT) on August 7, 1984, when the source was in its quiescent state. Fluxes are dereddened using the λ^−1.7 extinction law of Mathis (1990), and, following Fender et al. (1996), we use A J = 6.0 as the extinction value. The radio spectrum from Zdziarski et al. (2016) was obtained by averaging the "hard state" data from three measurements at 2.25, 8.3 (Green Bank Interferometer monitoring between November 1996 and October 2000), and 15 GHz (Ryle telescope monitoring between September 1993 and June 2006). Figure 7 shows the Cyg X-3 spectral energy distribution (SED) from the radio to 1000 keV. We represent a 50 000 K black-body emission on the SED (the typical temperature of a Wolf-Rayet star) in red. This component is totally consistent with the dereddened infrared points, showing that all the measured infrared emission comes from the companion star. This allows us to place a rough constraint, or limit, on the contribution of the jet-synchrotron emission in the infrared (shown with a green arrow), which is necessarily negligible compared to the emission from the star. If one considers the range of the infrared synchrotron break observed in other black hole binaries, for example GX 339-4 (Gandhi et al. 2011; constrained to 4.6 +3.5 −2.0 × 10¹³ Hz, shown in gray in Fig. 7), we can extrapolate the supposed synchrotron emission to the X-rays (green dotted line in Fig. 7). To reach the high energies, the synchrotron power-law index would then need to be Γ = 1.8, slightly harder than the one we obtain from the spectral fits (Γ = 2.2 ± 0.1). Alternatively, extrapolating the high-energy tail down to the infrared domain (blue dotted line in Fig. 7) results in a much higher infrared flux than measured by Fender et al. (1996). However, Fig. 7 shows that the possible synchrotron extension in light green could contribute to the high-energy emission we observe in the X-rays, implying that synchrotron emission could also be a plausible scenario.
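The slope argument above is simple arithmetic: for a photon index Γ, the νF_ν spectrum scales as E^(2−Γ), so a Γ = 2.2 tail rises toward the infrared while a Γ = 1.8 spectrum falls. A small numerical sketch follows; the reference energies and the K-band conversion are illustrative only, not values from the fits.

def nufnu_ratio(e_target_kev, e_ref_kev, gamma):
    """Ratio of E^2 N(E) between two energies for N(E) ~ E^-gamma."""
    return (e_target_kev / e_ref_kev) ** (2.0 - gamma)

E_IR = 5.6e-4   # ~2.2 micron (K band) expressed in keV
E_X = 100.0     # hard X-ray reference energy in keV

print(nufnu_ratio(E_IR, E_X, gamma=2.2))  # ~11: the tail overshoots the IR point
print(nufnu_ratio(E_IR, E_X, gamma=1.8))  # ~0.09: falls safely below the IR point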
In a recent work, Pahari et al. (2018) used AstroSat to measure a rather flat power-law component with a photon index of 1.49 +0.04 −0.03 dominating at 20-50 keV. This component appears during an episode of major ejection and is interpreted as synchrotron emission from the jets. We do not find such a hard photon index in our FIM state. By performing the same extrapolation of the power law to low energies as these latter authors did, we find a much higher flux (by more than ten orders of magnitude) than expected in this state (K10). Nevertheless, the very peculiar event observed by Pahari et al. (2018) may have been smoothed out by our approach of stacking spectra. On the other hand, we do observe a hardening of the electron injection index in states where a major ejection is observed, and thus a connection between hard X-rays and radio emission, as previously mentioned.
Comparison with previous work
The global behavior we find with eqpair is similar to that found by H09: the lower the value of l h /l s , the softer the state. Although the parameters obtained are globally consistent with the work of H09, we note some differences.
First, we find a different electron injection index in the quiescent and transition states (Γ Q inj = 3.60 +0.14 −0.05 and Γ T inj = 3.31 +0.14 −0.09) than in the H09 hard state (Γ hard inj = 3.9 ± 0.1). These differences may come from a better definition at higher energies with INTEGRAL, yielding a more precise estimate of Γ inj than with the RXTE/HEXTE data used by H09.
Table 4. Parameters for the eqpair fitting. Fixed parameters are indicated with an "f". The electron temperature kT e is calculated from the energy equilibrium, i.e., it is neither a free fit parameter nor a fixed one.
Secondly, and more importantly, our reflection values are weaker than in H09 (particularly in the quiescent and transition states) and we observe a higher value of the ratio l nth /l h for the quiescent and transition states, leading to a different interpretation of the spectra. Indeed, the bump observed around 20 keV in the quiescent and transition states is in our case due to Comptonization, and not to reflection as in H09. We note that in a previous work on the so-called Cyg X-3 hard state, Hjalmarsdotter et al. (2008) came to three slightly different interpretations of their analysis. The results of the present study allow us to break the degeneracy of their interpretation and lead us to favor their nonthermal interpretation (see Hjalmarsdotter et al. 2008, for details). In this case, the high value of the ratio l nth /l h implies that the spectrum is dominated by nonthermal electrons and the peak around 20 keV is determined by the energy kT e at which electrons are injected. This temperature, lower than observed in other XRBs, is around 4 keV, which means that the peak does not arise from the highest temperature of the electron distribution. Such nonthermal emission could come from shocks due to the dense wind environment, resulting in particle acceleration. Another possibility is that the corona is also the base of the emitting-jet region; in such a geometry, the mechanism responsible for triggering the ejections would also be responsible for the particle acceleration. Whatever the mechanism responsible for this nonthermal emission, it has to be efficient enough to prevent the thermal heating of the plasma electrons.
We also note differences with the set of parameters obtained by Corbel et al. (2012), who used RXTE/PCA data and the eqpair model in order to provide some insight into the global evolution of the 3-50 keV spectrum during a major radio flare. These latter authors in particular find much softer values for the injected electron index and low seed photon temperatures. Nevertheless, the goal of their work was not a detailed spectral analysis, as they obtain several degeneracies within the parameters, and we should not over-interpret these differences. Despite them, the global trend of their modeling also shows an increase in l h /l s and a decrease in l nth /l h as the source goes from hard to soft states.
Dependence on orbital modulation
As Cyg X-3 shows strong orbital modulation, we investigated the potential dependence of the nonthermal emission on the orbital position of the source. In the FIM state, we find a slight difference in the photon index value between inferior and superior conjunction: Γ inf = 2.94 ± 0.04, whereas Γ sup = 2.72 ± 0.04. Previously, Zdziarski et al. (2012) observed this kind of behavior by carrying out a phase-resolved spectral analysis with PCA and HEXTE. Their state number "4" (from the Szostek et al. 2008 classification), which corresponds to our FIM state, is also softer when the source is in superior conjunction. These authors explain this variation by an overly short exposure in this state compared to the others. Here, this argument is no longer valid; our IBIS exposure time in the FIM state is the highest after the transition state (15380 s and 13015 s in inferior and superior conjunction, respectively, compared to 43530 s and 36150 s in the transition state). Another interpretation is that this is an effect caused by higher absorption when Cyg X-3 is behind its companion. With absorption affecting the soft X-rays, higher energy photons would not be absorbed and the ratio between the emissions from the two different conjunction phases would be 1; this would bend the spectrum at low energy, resulting in a harder power law. In order to verify this assumption, we extract the column density value for each state and for each phase bin. However, the uncertainties on this parameter are too large, preventing us from coming to any firm conclusion.
Fig. 7. Broad-band spectrum of Cyg X-3 in its quiescent state. The average radio spectrum (Zdziarski et al. 2016), dereddened infrared data, and the quiescent X-ray spectrum (this work) are represented with a dark-green dotted line, red dots, and blue dots, respectively. We also show the extrapolation of the high-energy tail to lower energies with a dotted blue line, and a black-body emission for a temperature of 50 000 K with a red line. The green arrow shows the rough constraint on the jet synchrotron emission in the infrared imposed by the detected infrared emission associated with stellar emission, whereas the light-green dotted line shows the high-energy tail with a photon index of Γ = 1.8. The gray zone indicates the energy of the synchrotron cutoff for GX 339-4 (Gandhi et al. 2011).
Link with the γ-ray emission
At higher energies, in the γ-ray domain, the extrapolation of the power law in the FIM and FSXR states, where γ-ray emission is detected (Piano et al. 2012; Zdziarski et al. 2018), leads to a weaker flux than detected, and the hard-X-ray emission does not seem directly connected to the γ-ray emission. However, the latter has already been interpreted in the context of leptonic (Dubus et al. 2010; Zdziarski et al. 2012, 2018) or hadronic scenarios (Romero et al. 2003; Sahakyan et al. 2014). In the leptonic scenario, this emission comes from Compton scattering of stellar radiation by relativistic electrons from the jets (Cerutti et al. 2011; Piano et al. 2012; Zdziarski et al. 2012, 2018). The hadronic scenario, on the other hand, predicts γ-ray emission from the decay of neutral pions produced by proton-proton collisions. In the future, the Cherenkov Telescope Array may bring new constraints on the processes that occur at these energies.
"Physics"
] |
Fabrication and Characterization of Transparent and Scratch-Proof Yttrium/Sialon Thin Films
Transparent and amorphous yttrium (Y)/Sialon thin films were successfully fabricated using pulsed laser deposition (PLD). The thin films were fabricated in three steps. First, a Y/Sialon target was synthesized using the spark plasma sintering technique at 1500 °C in an inert atmosphere. Second, the surface of the fabricated target was ground and polished to remove any contamination, such as graphite, and then characterized. Finally, thin films were grown using PLD in an inert atmosphere at various substrate temperatures (RT to 500 °C). While X-ray diffraction (XRD) analysis revealed that the Y/Sialon target has the β phase, the XRD patterns of the fabricated films showed no diffraction peaks, confirming their amorphous nature. The transparency of the films, measured by UV-vis spectroscopy, decreased with increasing substrate temperature, which was attributed to a change in film thickness with deposition temperature. X-ray photoelectron spectroscopy (XPS) results suggested that the synthesized Y/Sialon thin films are nearly homogeneous and contain all of the target's elements. A scratch test revealed that the coatings deposited at 300 and 500 °C are tough and robust, able to resist harsh loads and shocks. These results pave the way for fabricating different Sialon-doped materials for numerous applications.
Introduction
Transparent screens and covers are essential components of various technology products such as solar cells, liquid crystal displays, electrochromic windows, touch screens, and many other applications [1-4]. These screens and covers are crucial for protecting the sensitive components of these devices while still allowing light to pass. However, they are often made from materials that are brittle, have poor resistance to scratches and fingerprints, and are prone to optical dimming [5]. Failure of these screens and covers, primarily through mechanical means such as cracking, scratching, shattering, and warping, can severely damage or destroy the entire device. Currently, an estimated 145 million smart-phone screens break every year (2 screens per second in the USA alone), and cracked and scratched screens account for 56% of reported damages.

The composition of the powder precursors is based on the general formula Y m/3 Si 12−(m+n) Al m+n O n N 16−n , with m = 1.0 and n = 1.2. A probe sonicator (Model VC 750, Sonics, Newtown, CT, USA) was employed to achieve uniform mixing of the powders in ethanol. After 20 min of probe sonication, the powder mixture was oven-dried at 90 °C for about 20 h to evaporate the ethanol. After drying, the mixtures were consolidated using spark plasma sintering (SPS) equipment (FCT system, model HP D5, Rauenstein, Germany). The powders were sintered in a 20 mm graphite die under a uniaxial pressure of 50 MPa. The sintering temperature, heating rate, and soaking time for all samples were 1500 °C, 100 °C/min, and 30 min, respectively.
Table 1. Weight (in grams) and molar percentage of precursors to synthesize 6 g of sample (composition: m = 1.0, n = 1.6), XX-1016 series, Y m/3 Si 12−(m+n) Al m+n O n N 16−n .
After the SPS processing, the graphite contamination on the surface of the synthesized samples was removed using SiC abrasives (grit sizes ranging from 180 to 1000 grit). Furthermore, the samples were ground and polished using a diamond grinding wheel to prepare a scratch-free surface. To obtain a mirror-like surface, the scratch-free samples were polished using an alumina suspension (particle size 0.3 µm) on a polishing cloth.
The established Archimedean principle was employed to determine the density of the synthesized sample. A Mettler Toledo kit was used with distilled water as the medium; the average of five measurements was 3.31 g/cm³. A universal hardness tester developed by Zwick-Roell (ZHU250, Germany) was used to determine the hardness of the synthesized sample (later used as the PLD target) at a load of 10 kg, and the resulting hardness was HV10 = 16 GPa. Using the hardness value and the maximum crack length (MCL), the indentation fracture toughness (K IC ) was obtained from the Evans criterion of Equation (1); the resulting K IC is 6.87 MPa·m^1/2.
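As a small illustration of the Archimedean measurement just described, the sketch below evaluates the standard buoyancy formula; the example masses are hypothetical values chosen to land near the measured density, and the water density is taken at room temperature.

def archimedes_density(m_air_g, m_water_g, rho_water=0.9970):
    """Density (g/cm^3) from the dry mass and the apparent mass in water."""
    return m_air_g * rho_water / (m_air_g - m_water_g)

print(archimedes_density(6.00, 4.19))  # ~3.30 g/cm^3, close to the measured 3.31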
Thermal expansion equipment (Mettler Toledo, TMA/SDTA-LF/1100, Switzerland) was used to measure the coefficient of thermal expansion of the synthesized sample. The sample was cut to finished dimensions of 4 mm × 4 mm × 4 mm, and the measured value was 3.24 ppm·K⁻¹.
Fabrication of Y/Sialon Thin Films
Y/Sialon thin films were fabricated with pulsed laser deposition (PLD, Neocera, Beltsville, MD, USA) using a KrF laser operating at a wavelength λ = 248 nm, an energy of 200 mJ, a laser fluence of 0.1 J/cm², and a repetition rate of 10 Hz. The distance between the substrate and the target was approximately 6 cm. The substrate temperature was varied from RT up to 500 °C to study the effect of the deposition temperature on the physical and mechanical properties of the Sialon thin films. Before deposition, the substrates were ultrasonically cleaned in acetone and then in ethanol. Y/Sialon thin films were grown on soda-lime silicate glass and silicon wafers with dimensions of 20 mm × 20 mm. The base pressure was 0.1 × 10⁻⁶ Torr, while the deposition was carried out under an argon atmosphere at a working pressure of 1.3 × 10⁻³ Torr for 2 h.
Characterization Techniques
The crystal structures of the Sialon target and thin films were examined with an X-ray diffractometer (XRD, Rigaku Miniflex 600, Tokyo, Japan) with Cu Kα radiation, λ = 1.5406 Å. The experiment was carried out in a 2θ range of 10 to 90° with a scan speed of 0.2°/min. The morphological properties of the target and the fabricated thin-film samples were explored using a field emission scanning electron microscope (FESEM, Tescan Lyra 3, Brno, Czech Republic) and atomic force microscopy (AFM, Nanosurf Easyscan, Liestal, Switzerland). The microstructure of the films was investigated with a field emission transmission electron microscope (FETEM, JEOL JEM-2100F, Tokyo, Japan) at an accelerating voltage of 200 kV. The chemical composition of the thin films deposited on silicon wafer substrates was investigated with the X-ray photoelectron spectroscopy (XPS) technique (Thermo Fisher Scientific, model ESCALAB250Xi, Waltham, MA, USA). Optical transmittance was measured with a UV/Vis spectrophotometer (Jasco V-570, Tokyo, Japan) in the wavelength range of 300-1200 nm. The scratch test was performed using a ramping/progressive load whereby the load is increased linearly from 0 to a maximum load of 15 N at a loading rate of 15 N/min. The indenter was a standard Rockwell C indenter with a 100 µm tip radius. The scratch length was set to 5 mm with a traversing speed of 5 mm/min. The instrument records and displays the acoustic emission (AE) signal, along with the variation of the coefficient of friction and the frictional force along the scratch, which are used in conjunction with the micrograph of the track to determine the lower (L c1 ) and upper (L c2 ) critical loads. The lower critical load (L c1 ) is defined as the load at which a crack within the coating is initiated, corresponding to cohesive failure in the coating, and the upper critical load (L c2 ) is defined as the load at which the coating is completely delaminated from the substrate, corresponding to adhesive failure at the interface between the coating and the substrate. In the present study, L c2 is taken as the maximum load that the coating can sustain before complete failure or delamination.

Figure 1 displays the XRD patterns obtained from the Y/Sialon target and the Y/Sialon thin films fabricated using the PLD technique. As can be observed in Figure 1, the Y/Sialon target showed well-defined diffraction peaks that indicate the formation of the β-Sialon phase; peaks were observed at 2θ = 13.70°, 23°, and other positions characteristic of β-Sialon [25]. Freshly fabricated Y/Sialon thin films deposited at RT, 100 °C, 300 °C, and 500 °C showed no diffraction peaks, confirming the amorphous nature of the fabricated thin films. Figure 2 shows the influence of the substrate temperature on the morphology of the Y/Sialon thin films fabricated using PLD. The Y/Sialon film deposited at RT (25 °C) (Figure 2a) possessed a thickness of approximately 18 nm and a grain size of 20 nm. It appeared consistently uniform and smooth while exhibiting an accumulation of spherical nanoparticles of two distinct sizes at the surface. Similar features were observed for the Y/Sialon thin film deposited at 100 °C (Figure 2b), with the exception that the thickness of the synthesized film was relatively higher, i.e., approximately ~35 nm. A further increase in the substrate temperature to 300 °C increased the number of small accumulated nanoparticles at the surface (Figure 2c).
In addition, the thickness of the sample increased sharply to approximately ~135 nm. At a substrate temperature of 500 °C (Figure 2d), not only did the obtained film become more uniform and smoother, but the dimensions of the large accumulated spherical nanoparticles increased with the concurrent disappearance of the smaller ones. The RMS roughness values are listed in Table 2: the RMS roughness of the Y/Sialon thin films decreases with increasing substrate temperature. Generally, a decrease in RMS roughness leads to good homogeneity in thin films [29]. This observation could be attributed to the diffusion of the small nanoparticles accumulated at the surface of the smooth thin film into fewer, larger particles, as confirmed by the FESEM analysis.
Microstructure and Phase Analysis
RMS: root mean square; RT: room temperature.
Transmission electron microscopy (TEM) was employed to examine the structure of the films prepared at 100 and 500 °C. Selected films were studied to evaluate their structural integrity and to detect the presence of any inhomogeneity in the film structure. The TEM images in Figure 4 show the microstructure of the film prepared at 100 °C at various magnifications. No inhomogeneity was observed except for a slight variation in the thickness of the film. Additionally, there was no evidence of ordered lattice fringes or any other substructural features in the film (Figure 4c). Selected area electron diffraction (SAED) patterns confirmed the amorphous nature of the film (Figure 4d), which corroborates the results obtained from the XRD analysis.
Figure 6 shows the optical transmittance spectra obtained from the Y/Sialon films prepared with PLD in the wavelength range 300-1200 nm. In general, high transmittance of thin films is an indication of good homogeneity as well as low surface roughness [30]. The spectra of the as-deposited Y/Sialon films showed excellent transmittance in the UV, visible, and NIR regions.
The average optical transmittance of the as-deposited Y/Sialon films in the UV (300-400 nm), visible (401-750 nm), and NIR (751-1200 nm) regions was 91.6%, 92.7%, and 95.5%, respectively. As the substrate temperature of the Y/Sialon films increased to 100 °C, a slight decrease in transmittance was observed; for example, the average optical transmittance in the visible region was 91.4%, which is merely 1% less than for the coating deposited at RT. For the Y/Sialon films deposited at 300 and 500 °C, the optical transmittance from the NIR down to 600 nm was significantly decreased compared with the other films. The reduction in transmittance could be attributed to the increase in the grain size and the thickness of the films, as confirmed by the FESEM cross-section and AFM images. Interestingly, the optical transmittance of the Y/Sialon films deposited at 300 and 500 °C declined dramatically below 600 nm, which might be attributed to a shift of the Y/Sialon bandgap toward the UV region. Indeed, Boyko et al. [31] reported that the direct bandgaps of β-Si 6−z Al z O z N 8−z (z = 0.0, 2.0, and 4.0) fabricated using ultra-hot isostatic pressing were about 7.2, 6.2, and 5.0 eV, and the bandgap reduction could be attributed to the movement of the O p-states toward the Fermi level.
Figure 6. UV-Vis transmittance spectra obtained from Y/Sialon thin films prepared using PLD at RT, 100 °C, 300 °C, and 500 °C.
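The band-averaged values quoted above can be reproduced from a measured spectrum with a short calculation; a minimal sketch follows, where the wavelength grid and transmittance array are assumed inputs from the spectrophotometer.

import numpy as np

def band_average(wl_nm, t_percent, lo, hi):
    """Mean transmittance over the [lo, hi] nm window of a spectrum."""
    wl = np.asarray(wl_nm, dtype=float)
    t = np.asarray(t_percent, dtype=float)
    mask = (wl >= lo) & (wl <= hi)
    return np.trapz(t[mask], wl[mask]) / (wl[mask][-1] - wl[mask][0])

# usage, matching the windows in the text:
# band_average(wl, T, 300, 400)   # UV
# band_average(wl, T, 401, 750)   # visible
# band_average(wl, T, 751, 1200)  # NIR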
The chemical composition of the Y/Sialon thin films was obtained with X-ray photoelectron spectroscopy (XPS). Each XPS spectrum was corrected for steady-state charging by aligning C1s to 284.60 eV. Typical XPS survey spectra of the fabricated films and the O1s, Si2p, Al2p, N1s, and Y3d core-level spectra for the Y/Sialon thin film prepared using PLD at 500 °C are displayed in Figure 7. As can be observed in the XPS survey spectra shown in Figure 7a, all the constituent elements (Si, Al, N, O, and Y) were observed. Argon, which could be trapped in the interstitial sites during the growth process or during the XPS etching, was detected in all samples. The binding energy, full width at half maximum (FWHM), area, and weight of each component in the samples are listed in Table 3. It can be noticed from the table that there are no significant binding energy shifts. Furthermore, the Si:Al and O:N atomic ratios in the Y/Sialon films are nearly equal. These observations suggest that the synthesized Y/Sialon thin films are nearly homogeneous and form a complete solid solution. Figure 7b-f shows the high-resolution XPS spectra obtained from the Y/Sialon thin film prepared using the PLD system at a deposition temperature of 500 °C. The Si2p XPS spectrum (Figure 7b) was decomposed into two components, attributed to a dominant contribution of Si-N bonds and a small contribution of Si-O bonds. The XPS spectrum of Al2p (Figure 7c) showed the presence of two peaks centered at 74.49 eV and 75.00 eV with similar atomic percentages, corresponding to Al-N and Al-O bonds, respectively. The XPS spectrum of O1s revealed the presence of one peak corresponding to Si-O bonds (Figure 7d). In addition, the deconvolution of the XPS N1s peak (Figure 7e) further confirmed the presence of N-Si and a small amount of N-Si-O bonding. The deconvolution of the XPS spectra in the Y3d region (157-165 eV) shown in Figure 7f displays two peaks located at 158.67 eV and 160.69 eV that correspond to the Y3d5/2 and Y3d3/2 states of Y2O3 nanoparticles, respectively. The peak area ratio of Y3d5/2 to Y3d3/2 is about 1.43, which is very close to the theoretically expected value of 1.5 based on the degeneracy of the states.
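The expected value of 1.5 follows directly from the (2j + 1) degeneracies of the spin-orbit-split 3d levels:

A(Y3d5/2)/A(Y3d3/2) = [2(5/2) + 1]/[2(3/2) + 1] = 6/4 = 1.5,

close to the measured area ratio of about 1.43.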
Effect of Substrate Temperature on the Scratch Resistance of Coatings
The synthesized coatings were subjected to a scratch test in order to evaluate their adhesive strength, which is a measure of their scratch resistance. Figure 8 shows the scratch data for the coating deposited at room temperature, which exhibits the applied ramping normal load along with the accompanying acoustic signal, together with the panoramic view of the scratch (Figure 8b) and zoomed-in views (Figure 8c,d) at the locations of L c1 (failure/crack initiation) and L c2 (failure/delamination) [32]. The analysis of the scratch was carried out by taking into consideration all the elements of the data, such as the acoustic signal and the microscopic images of the scratch. The onset of cracking, or the initial damage, can be seen in Figure 8c, the zoomed-in view of the scratch corresponding to a load of 8.1 N, which is assigned as the first critical load (L c1 ). This primary damage has the shape of interfacial shell-shaped spallation. It is also to be noted that L c1 corresponds to the first small jump in the acoustic emission signal [33]. The second critical load (L c2 ), of 13.8 N, is found further along the scratch (Figure 8d), at the point at which the damage becomes continuous, such as Hertzian cracking, where microcracks are observed within the scratch groove, resulting in the delamination of the coating and its complete failure. Moreover, after this point the acoustic emission signal becomes noisier. A similar analysis was conducted on all the coatings to determine both critical loads (L c1 and L c2 ) and to obtain the maximum load (L c2 ) that the coatings sustain before failure. Figure 9 shows the zoomed-in images of the coatings deposited at 300 and 500 °C, respectively, corresponding to the positions of the critical loads. It was observed that the scratch resistance of the coating deposited at 300 °C did not change significantly compared to the coating deposited at room temperature (RT). However, as the deposition temperature increased to 500 °C, delamination or complete failure of the coating along the scratch was not observed, so the L c2 location could not be determined, suggesting a significant improvement in the scratch resistance of the coating.
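Because the load ramps linearly along the track (15 N over 5 mm, i.e., 3 N per mm), scratch positions map directly onto loads; the sketch below encodes that mapping and flags the first acoustic-emission excursion as a candidate L c1 . The AE trace and threshold are hypothetical inputs.

import numpy as np

def load_at_position(x_mm, max_load=15.0, track_mm=5.0):
    """Normal load (N) at scratch position x for a linear load ramp."""
    return max_load * np.asarray(x_mm, dtype=float) / track_mm

def first_ae_jump(x_mm, ae_signal, threshold):
    """Position and load of the first AE excursion above threshold."""
    idx = int(np.argmax(np.asarray(ae_signal) > threshold))
    return x_mm[idx], load_at_position(x_mm[idx])

# e.g., Lc1 = 8.1 N corresponds to a position of 8.1/3 = 2.7 mm on the track.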
In summary, the current study presented results on the ablation and deposition of Y/Sialon onto glass substrates. It was observed that the surface condition and the processing temperature play a vital role in the adhesion and other properties of the deposited films. PLD at higher deposition temperatures tends to enhance both the thickness and the adhesion properties of the films. Substrate heating during processing modified the amorphous structure toward nanocrystallites, resulting in improved adhesion and surface properties. Table 4 summarizes the properties of similar film materials and deposition processes from the literature, which helps to place the performance of the current results in the context of existing data.
Conclusions
In this work, transparent and scratch-proof Y/Sialon thin films were deposited at various substrate temperatures by PLD from a Y/Sialon target fabricated by spark plasma sintering. The deposited films were studied using FESEM, TEM, XRD, XPS, AFM, and UV-vis spectroscopy techniques. The scratch test was performed using a ramping/progressive load whereby the load is increased linearly from 0 to a maximum load of 15 N at a loading rate of 15 N/min. The UV-vis results demonstrated that the transparency of the films was high and decreased with increasing substrate temperature, which could be attributed to a change in film thickness with deposition temperature. FESEM and AFM images illustrated that the coatings are homogeneous and continuous over a large area, without voids. The Y/Sialon target exhibited a hardness (HV 10 ) of 16 GPa, a fracture toughness (K IC ) of 6.87 MPa·m^1/2, a coefficient of thermal expansion of 3.24 ppm·K⁻¹ (slightly lower than that of silicon nitride, Si 3 N 4 ), and a density of 3.31 g/cm³ (slightly higher than that of Si 3 N 4 ). The scratch test revealed that both the 300 and 500 °C coatings are tough and robust, able to resist harsh loads and shocks. Based on the present study, the fabricated films have potential for use in high-technology applications, including protectors for smart-phone screens, army vehicle windows, and liquid crystal displays.
"Materials Science",
"Physics"
] |
Spontaneous Hopf fibration in the two-Higgs-doublet model
We show that energetic considerations enforce a Hopf fibration of the Standard Model topology within the 2HDM whose potential has either an $SO(3)$ or $U(1)$ Higgs-family symmetry. This can lead to monopole and vortex solutions. We find these solutions, characterise their basic properties and demonstrate the nature of the fibration along with the connection to Nambu's monopole solution. We point out that breaking of the $U(1)_{\rm EM}$ in the core of the defect can be a feature which leads to a non-zero photon mass there.
Introduction:
The vacuum manifold of the Standard Model of Particle Physics is a 3-sphere (M = S 3 ), implying that there are no stable topological configurations in 3D since π 2 (M) = I, where π n (M) is the n-th homotopy group of the manifold. However, there are interesting topological solutions with one unstable mode in 3D (sphalerons [1]) and 2D (electroweak vortices [2,3]) characterised by π 3 (M) = Z. Nambu suggested a monopole-like configuration [4] which can be understood via a local Hopf fibration S 3 ≅ S 2 × S 1 [5], which we discuss in more detail in [6], such that π 2 (S 2 × S 1 ) = Z. However, this configuration is known to be unstable and, if it were to be realised, it would need to be combined with a string to form so-called "dumbbell" configurations [7,8]. The dynamics of such configurations has been linked with the production of primordial magnetic fields [9,10], and monopoles are being actively searched for in laboratory experiments [11].
The particle spectrum is well understood, for example [12,15]. There are 5 Higgs particles: two that are CP even with masses M h and M H , a CP-odd pseudoscalar with mass M A , and two charged Higgs particles with mass M H± . In the Standard Model alignment limit, the h particle is the one detected by experiments at the LHC, and a wide range of measurements suggest that this limit should be close to being realised [16-20]. If there is a global U(1) PQ symmetry then M A = 0, and if this is extended to SO(3) HF then in addition one has M H = 0.
Topological defect solutions [21,22] associated with these symmetries were studied in detail in the context of the 2HDM in [14]. In particular there can be domain wall [14,23-26], global vortex [14] and global monopole solutions [27]. Motivating the present study is the observation, based on field theory simulations from random initial conditions, that the vacuum is not neutral in the core of these defects [23,26,27], contrary to the assumption of [14]. We will see that this has profound implications. One should note that there are also several papers discussing non-topological configurations [28-31] and other compound structures of topological defects [32,33] (without any neutral vacuum violation) in the 2HDM that can be dynamically stable in certain regions of the parameter space. Here, we will focus purely on topologically stabilised objects. Accidental symmetries can be gauged using a covariant kinetic term built from $D_\mu\Phi = \left(\partial_\mu - \tfrac{i}{2}g\,\sigma_a W^a_\mu - \tfrac{i}{2}g' Y_\mu\right)\Phi - \tfrac{i}{2}g''\,V^a_\mu\,\Phi\,\sigma_a$, where σ µ = (σ 0 , σ a ) are the Pauli matrices including the identity. W a µ and Y µ are the Standard Model gauge fields, with coupling constants g and g′ respectively, and V a µ are the new gauge fields associated with the accidental symmetries, with coupling constant g′′; see [6] for more details on the symmetry transformations. For the purposes of this work, we set g′ = 0 in order to simplify the defect solutions, but we do not expect any changes in the qualitative features for non-zero g′.
Gauging the symmetries provides a natural mechanism for removing the Goldstone modes associated with the accidental symmetries, allowing for a potentially viable model. In particular, in the case of a U(1) PQ symmetry, the Goldstone mode with M A = 0 becomes a massive gauge boson. There can be interesting models constructed with these symmetries, for example, models that can generate masses for neutrinos [34-36].
In this paper we will show that the Hopf fibration associated with the Nambu monopole is realised on energetic grounds within the 2HDM when there is either an SO(3) HF or U(1) PQ symmetry; something that we term "Spontaneous Hopf Fibration" (SHF). We will find monopole and vortex solutions for the case where the symmetries are gauged (although these can be easily adapted to the global limit).
Parameterizations and topology: The 8 fields of the 2HDM can be reparameterized in terms of 5 fields f 1,2,+ , ξ and χ, and U L ∈ SU(2) L , which has 3 degrees of freedom. The constant v SM = 246 GeV is the Standard Model vacuum expectation value, and f + = 0 corresponds to a neutral vacuum with zero photon mass. These degrees of freedom can also be encoded using the bilinear forms $R^\mu = \Phi_i^\dagger (\sigma^\mu)_{ij}\Phi_j$, which are useful for understanding the topology of the vacuum manifold and the associated defects [14,27,37]. In particular, the neutral vacuum violation discovered in the core of defects in [23,27] can be traced by R + = R µ R µ , with R + = 0 corresponding to a neutral vacuum. One finds that two of the U L degrees of freedom are encoded in n̂ a , with the other associated with rotations about this axis, and the hypercharge degree of freedom, U Y , is encoded in R ∝ exp[iχ]. In contrast, R µ is invariant under the Standard Model symmetries and contains the degrees of freedom that will, in general, change the potential. The R 0 component is |Φ|² and so does not contribute to the topology of the vacuum manifold. The topological non-triviality of the vacuum manifold can, therefore, be most easily extracted by looking at the remaining three-vector R a , which contains all of the degrees of freedom associated with any additional symmetry transformations, U H .
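For reference, a short worked expansion of these bilinears, assuming the standard convention in which the doublet index i = 1, 2 is contracted with Pauli matrices in family space (consistent with [14,37]):

$R^0 = |\Phi_1|^2 + |\Phi_2|^2$, $\quad R^3 = |\Phi_1|^2 - |\Phi_2|^2$, $\quad R^1 + iR^2 = 2\,\Phi_1^\dagger\Phi_2$,

so that

$R_+ = R_\mu R^\mu = (R^0)^2 - R^a R^a = 4\left(|\Phi_1|^2|\Phi_2|^2 - |\Phi_1^\dagger\Phi_2|^2\right) \ge 0$,

with equality, i.e., a neutral vacuum, precisely when the two doublets are aligned.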
A notable difference between this topology and that of the 't Hooft-Polyakov monopole [38,39] and the Nielsen-Olesen vortex [40] is that the topology lives in a space associated with these bilinear forms, rather than the fields themselves, which means that a half twist in field space can be topologically non-trivial and, in general, there is a factor of 2 difference between the topological degree of a field configuration and what one might naively expect. A consequence of this is that simple field configurations with unit winding often have discontinuities which must be resolved by the attachment of another soliton. It is for this reason that the Nambu monopole [4] has a string "emanating" from one of its poles.
In the SO(3) HF case, R a only contributes to the potential through a term ∝ R a R a , so rotations between the three components of R a are a symmetry of the potential, generating an S 2 component of the vacuum manifold. Note that the topology is not S 3 , as one might expect, because U H cannot perform rotations about R a ; this degree of freedom is already contained within U L as rotations about n̂ a . Similarly, in the U(1) PQ case, the R 3 component splits off from the other two but there remains a symmetry for rotations between R 1 and R 2 , which is responsible for an S 1 direction. In general, there is an additional S 1 × S 3 associated with the hypercharge and isospin symmetries because the degeneracy between U Y and one of the directions in U L is broken by f + . Since this results in a massive photon, we avoid this scenario and choose the parameters so that f + = 0 in the vacuum, restoring the degeneracy so that the SM symmetries only contribute S 3 . Therefore, the topology of the vacuum manifolds associated with the accidental symmetries is M = S 2 × S 3 for the case of SO(3) HF and M = S 1 × S 3 for U(1) PQ [14], which admit monopoles and vortices due to the non-trivial homotopy groups π 2 (S 2 × S 3 ) = Z and π 1 (S 1 × S 3 ) = Z, respectively.
Monopole solutions: In the SO(3)_HF symmetric model, using 3D spherical polar coordinates (r, θ, ϕ), we can construct a monopole ansatz for the scalar field [6,27] in which k = k(r) and k_+ = k_+(r) are radial functions constructed from f_{1,2,+} that retain the ability to change the potential energy, while the other degree of freedom (as well as ξ) is used to wind around the vacuum manifold. This ansatz has the property that R^a = n^a = (k² − k_+²) r̂^a. The feature R̂^a = r̂^a is necessary for a monopole configuration with unit winding (or related to this by a homomorphism), and this structure, by itself, would give rise to a 2HDM equivalent of the Nambu monopole, with a divergence in the gradient energy that necessitates the emergence of a string from one of the poles. However, the isospin degrees of freedom, contained within U_L, can resolve this divergence when we also have n̂^a = r̂^a, which gives the appearance of an underlying topological complexity in the structure of n^a, but it occurs for energetic reasons that are only indirectly topological. The degree of freedom associated with rotations about n^a contains no structure, only the possibility of global transformations, and it is this effect that we have termed SHF. The S^3 of the SM becomes S^2 × S^1, with the S^2 part inheriting the same twists as the S^2 of the SO(3)_HF symmetry and the S^1 part containing nothing except possible global rotations.
For the gauge fields, we choose to work in the temporal gauge, such that the time components of the gauge fields are zero, and make a corresponding ansatz for the spatial components. After the standard rescaling, which reduces the number of significant parameters in the model by two, we find the energy functional under this ansatz,
[FIG. 1: The variation of E (black) and R_+ at the monopole core (dotted orange) as a function of the mass ratio ϵ, with the other parameters fixed to λ̃ = 1 and g̃² = 2. The inset presents the field profiles of an example solution (ϵ = 1) and also displays the energy density, ε, and R_+.]
where the remaining parameters are g̃ = g''/g, λ̃ = λ/g² and ζ_4 = λ_4/λ. Neutral vacuum violation occurs when k and k_+ are simultaneously non-zero, and we choose, by convention, k_+ to be zero in the vacuum. We can perform a simple analysis (neglecting gradient energy contributions) to predict when there will be neutral vacuum violation in the core of the monopole by looking at the effective mass of k_+. If a monopole were to have k_+ = 0 everywhere, then the effective mass at the core of the monopole (where k = 0) would be −λ̃/2, which is always negative and independent of ζ_4. The presence of this negative mass term indicates that the energy would be reduced if k_+ ≠ 0, and we therefore expect neutral vacuum violation to be a generic effect in the core of 2HDM monopoles.
In figure 1 we present the energy and R_+(0) (fixing v_SM = 1) as a function of the mass ratio ϵ = M_{H±}/M_h = (1/2)√(−ζ_4), and we also show an example solution in the inset plot. As expected, for all values of ϵ presented here there is neutral vacuum violation in the core of the monopole, although it decreases as ϵ grows and appears to approach zero, while, conversely, the energy grows with ϵ and appears to approach a maximum. Perhaps the most noticeable feature of the solution is that k = k_+ at the centre; in fact this is enforced by the gradient energy and holds for all values of ϵ. The effects of λ̃ can be broadly described as changing the length-scale ratio between the scalar fields and the vector fields and, similarly, g̃ changes the length-scale ratio between the two gauge fields.
Vortex solutions: In the U(1)_PQ symmetric case, using plane polar coordinates (r, θ), we can make the vortex ansatz Φ = Φ(r, θ) from equation (2) with f_i = f_i(r) for i = 1, 2, + and ξ = θ [6]. This has a property similar to the monopole, in that R̂^b = n̂^b = r̂^b, where b ∈ {1, 2}, although now, unlike the monopole case, n^b = 0 in the vacuum, with the remaining components fixed by the ansatz. We note that, in a similar way to the Nambu monopole solution, there is a string-like configuration, characterised by π_1(S^2 × S^1) = Z and with structure only in R^b, where the divergence in the gradient energy is resolved by attaching a domain wall to one side. This is related to an unstable (due to the tension in the wall) configuration in the SM where n̂^b = r̂^b. Once again, for the 2HDM configuration, we can use the isospin rotations to resolve the divergence without the domain wall. The SHF acts here, again, to split S^3 → S^2 × S^1, but now it is the S^1 part that inherits the twists of the U(1)_PQ. We choose to make a gauge transformation that absorbs the phase winding of the scalar field into W^a_i and V^3_i (the only new gauge field in the U(1)_PQ model), allowing us to make an ansatz for the gauge fields. Making the standard rescalings, we can express the energy per unit length of the string in terms of g̃ = g''/g, η̃ = η_2/η_1, λ̃_1 = λ_1/g² and ζ_i = λ_i/λ_1, so that the model is left with 6 parameters (the η_i are defined by the vacuum values of the fields). We can perform an effective mass analysis here too, just as in the monopole case. If the string were to have f_+ = 0 everywhere, as well as f_1(0) = 1 and f_2(0) = 0, then the relevant effective mass would take the value λ̃_1[−2ζ_2 η̃² + ζ_3]/4, so the neutral vacuum violation in the core is not generic but depends upon the sign of ζ_3 − 2ζ_2 η̃². Note that f_2(0) = 0 is enforced by the winding of the string, but f_1(0) = 1 is an overly simplistic assumption, even if f_+ = 0 everywhere, so this approach will be less accurate for vortices than for the monopoles.
In figure 2 we present the field profiles of a string solution, with a separate inset plot showing the energy density and R_+ for the same solution, and in figure 3 we show how R_+ at the core of the string varies with the mass ratios ϵ = M_{H±}/M_h and δ = M_H/M_h, in the alignment limit and with tan β = 1 (which sets the vacuum values of the fields so that η̃ = 1); ζ_4 is fixed implicitly in these plots.
From the profiles we see that the solution has retained a feature similar to one observed for the monopole solutions, namely that f_1 = f_+ at the centre. However, this is no longer guaranteed by the gradient energy and, in fact, is only an approximate equality that occurs in a subset of the parameter space. The parameters λ̃_1 and g̃² play a very similar role to the equivalent parameters from the monopole case, but the other parameters have more complicated effects on the solutions that are difficult to summarise broadly in this way. From the contour plot we can see that there is a clear transition across the line which is approximately ϵ = δ, corresponding to ζ_3 = 2. Due to our fixed parameter choices we have ζ_2 = η̃² = 1, and therefore this is consistent with the behaviour that we predicted from the effective mass analysis.
Discussion and conclusions: The solutions that we have presented in this paper are evidence of a new mechanism, which we have called "Spontaneous Hopf Fibration", at work in the 2HDM. It allows a topologically non-trivial subspace of the vacuum manifold to imprint itself onto another, topologically trivial section of the manifold. This results in solitons that appear to have topological structure in the SM degrees of freedom but, in fact, it is purely caused by energetics. The coupling between the S^3 of the SM and the rest of the vacuum manifold, through the gradient energy term, causes a Hopf fibration of the space, S^3 → S^2 × S^1, with the appropriate component of this space taking on structure to match the winding around S^2 for monopoles and S^1 for strings.
In [27] we present evidence from simulations of the global 2HDM model in which stable monopoles form that have neutral vacuum violation in their cores and a structure in the bilinear vectors that is the same as one would expect from the solutions presented here. In [6] we do the same for the case of strings. These simulations suggest that the solutions we have found are those most relevant to the study of topological defects in the 2HDM.
A phenomenologically relevant consequence of the SHF in the 2HDM is that it allows the neutral vacuum condition to be violated inside the core of the defects, generating a non-zero mass for the photon. In the case of strings, this is dependent upon the choice of parameters; in the monopole case, however, it is predicted to always occur if the gradient energy contributions are neglected. The interaction between photons and superconducting defects has been analysed in [41] for a toy model, but this work has opened up the possibility of novel interactions between Standard Model particles and 2HDM defects which deserves further investigation. In [27] it was observed that the additional structure of global 2HDM monopoles did not affect the scaling of their number density, but other potential cosmological consequences warrant further studies of these defects.
[FIG. 3: A contour plot showing how R_+ at the string core varies with the mass ratios ϵ and δ in the alignment limit and with λ̃_1 = g̃² = tan β = 1. Note that the dark blue colour over most of the lower-right region of the plot is off the bottom of the colour scale, corresponding to a neutral vacuum.]
We would like to conclude by emphasising that, although we have discussed SHF in the context of the 2HDM, we suggest that it could be a more general effect, occurring in other models whose vacuum manifolds are constructed from coupled subspaces when at least one of them is topologically non-trivial.
I. THE HOPF FIBRATION
To understand the Hopf fibration it is useful to first understand the Hopf map, a many-to-one map from the 3-sphere onto the 2-sphere, S^3 → S^2. One can embed the 3-sphere in C² using Hopf coordinates (η, ξ_0, ξ_1), where 0 ≤ η ≤ π/2 and ξ_0, ξ_1 are periodic phases; the Hopf map can then be expressed in these coordinates. We now define new angular coordinates α = 2η, with β and ζ built from ξ_0 and ξ_1, using the periodicity of ξ_0 and ξ_1 to set the ranges of β and ζ. In these new coordinates it becomes clear that n^a describes a two-sphere that does not depend upon ζ, which itself parameterises distinct points on S^3 (great circles) that all map to the same point on S^2. The Hopf fibration is therefore an example of a fibre bundle in which S^3 ≅ S^2 × S^1 locally, but not globally: the S^2 corresponds to unique points of n^a, while the S^1 is represented by the remaining angular coordinate ζ. This can be visualised by performing a stereographic projection from the point z_0 = 1, z_1 = 0 onto R³ and compactifying the full, infinite space into a sphere of radius 1 by redefining the spherical radius as r' = tanh(r). We plot the resulting surfaces for four different values of α in Figure 1 and represent the value of β with colour, so that the S^1 directions appear as lines of constant colour.
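The explicit Hopf coordinates and map did not survive extraction here; in one standard convention (an assumption: the authors' phase assignments may differ) they read

```latex
(z_0,\, z_1) = \left(e^{i\xi_0}\cos\eta,\; e^{i\xi_1}\sin\eta\right)\in S^3\subset\mathbb{C}^2,
\qquad 0\le\eta\le\pi/2,\quad 0\le\xi_{0,1}<2\pi,\\
n^a = z^\dagger\sigma^a z
    = \bigl(\sin 2\eta\,\cos(\xi_1-\xi_0),\;\sin 2\eta\,\sin(\xi_1-\xi_0),\;\cos 2\eta\bigr),
```

so that α = 2η and β = ξ_1 − ξ_0 parameterise the target S^2, while the common phase (z_0, z_1) → e^{iζ}(z_0, z_1) leaves n^a fixed and traces out the S^1 fibre.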
II. SYMMETRIES OF THE MODEL
With the inclusion of accidental symmetries, the field can be parameterized as before, with the new gauge symmetry acting as Φ → (U_H ⊗ σ_0)Φ together with the corresponding transformation of the gauge fields. This expression can be used for both symmetry cases under consideration here, but in the SO(3)_HF case all 3 components of V^a_µ are used, with a ∈ {1, 2, 3}, whereas in the U(1)_PQ case only V^3_µ is used, with V^1_µ = V^2_µ ≡ 0. The effect of the new symmetry transformation on the bilinear vector R^a is simply a rotation, R'^a = R^{ab}_H R^b, which corresponds to a 3D rotation in the SO(3)_HF symmetric case and a 2D rotation about the 3-axis in the U(1)_PQ case. Likewise, the n^a vector rotates under isospin transformations.
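Concretely, in the U(1)_PQ case the rotation acting on R^a can be written as the familiar rotation about the 3-axis (the parameterisation by an angle θ is an assumption about conventions):

```latex
R'^a = R^{ab}_H R^b,\qquad
R^{ab}_H(\theta)=\begin{pmatrix}\cos\theta & -\sin\theta & 0\\ \sin\theta & \cos\theta & 0\\ 0&0&1\end{pmatrix},
```

which mixes R^1 and R^2 while leaving R^3 invariant; in the SO(3)_HF case R^{ab}_H is instead a general three-dimensional rotation.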
III. MONOPOLE SOLUTION
Our ansatz is generated by applying the winding transformation to Φ, where k and k_+ are now functions of the radius only. For the gauge fields, we use an expression that cancels the gradient energy of the monopole far from its core and leaves the system with a spherically symmetric energy density, and therefore with functions in the static equations of motion that depend only on the radius. Using this ansatz, the energy of the configuration can be written down explicitly. We can remove two of the parameters by rescaling lengths with r̃ = g v_SM r, which leaves the remaining parameters g̃ = g''/g, λ̃ = λ/g² and ζ_4 = λ_4/λ. The resulting set of static equations of motion for the gauged monopole follows by varying this energy. The boundary conditions for this system of equations are k(0) = 0, with the profile functions approaching their vacuum values at large radius, where we have assumed that ζ_4 < 0, so that the minimum energy vacuum state is one where f_+ = 0, to prevent a massive photon.
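The profile equations themselves did not survive extraction, so as a sketch of how such radial boundary-value problems are typically solved numerically, the snippet below solves the much simpler global O(3) hedgehog profile, f'' + (2/r)f' − 2f/r² − (λ/2)f(f² − 1) = 0 with f(0) = 0 and f(∞) = 1, using SciPy's solve_bvp; this is an illustrative analog, not the authors' system, but the gauged 2HDM equations are handled the same way with more profile functions and the boundary conditions stated above.

```python
import numpy as np
from scipy.integrate import solve_bvp

lam = 1.0      # quartic coupling (illustrative choice)
r_max = 20.0   # finite proxy for r -> infinity

def rhs(r, y):
    # y[0] = f, y[1] = f'; global O(3) hedgehog profile equation
    f, fp = y
    r = np.where(r == 0, 1e-8, r)  # regulate the coordinate singularity
    return np.vstack([fp, -2*fp/r + 2*f/r**2 + 0.5*lam*f*(f**2 - 1)])

def bc(ya, yb):
    # f(0) = 0 (unit winding forces the field to vanish at the core),
    # f(r_max) = 1 (vacuum value far from the core)
    return np.array([ya[0], yb[0] - 1.0])

r = np.linspace(1e-8, r_max, 400)
y0 = np.vstack([np.tanh(r), 1/np.cosh(r)**2])  # smooth initial guess
sol = solve_bvp(rhs, bc, r, y0, tol=1e-6)
print(sol.status, sol.y[0, :5])  # status 0 means the solver converged
```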
IV. VORTEX SOLUTION
Similar to the monopole case, our ansatz for the scalar field can be generated by the application of a winding transformation. However, the gradient energy of this string ansatz is much more complicated than in the monopole case, due to the additional scalar degree of freedom. Therefore, we have found it simpler to make a gauge transformation that absorbs these windings in U_H and U_L into V^3_i and W^3_i respectively. The rest of the ansatz for the gauge fields was chosen so that the energy density respects the cylindrical symmetry of the string, which gives rise to one-dimensional equations of motion, and also so that any additional gauge components which can reduce the energy of the system are not fixed to zero. The resulting energy can be simplified by rescaling lengths with r̃ = g v_SM η_1 r and field magnitudes with f_i = η_1 f̃_i and h_2 = g v_SM η_1 h̃_2, which leaves g̃ = g''/g, η̃ = η_2/η_1, λ̃_1 = λ_1/g² and ζ_i = λ_i/λ_1, so that the model has 6 significant parameters. The equations of motion follow by varying this energy (dropping the tilde on all quantities for ease and clarity). We use boundary conditions corresponding to a finite energy solution, with the possibility of neutral vacuum violation at the core of the string, where we have defined ζ_34 = (ζ_3 + ζ_4)/2 and once again assumed that ζ_4 < 0.
V. STRING SOLUTION FROM RANDOM INITIAL CONDITIONS
We have performed simulations of the U(1)_PQ symmetric 2HDM, in the global limit, from random initial conditions, and in figure 2 we present evidence of neutral vacuum violation localised to the topological defects, with structure in the bilinear vectors that matches our solutions. The objects that form in these simulations have features that lead us to believe that they are very similar to the string solutions we have found. We see in figure 2 that neutral vacuum violation occurs in the vast majority, but not all, of the cores of the strings, which is as we expected from our mass analysis since ϵ < δ and α = β. We performed another simulation with the values of ϵ and δ interchanged, in which strings form but neutral vacuum violation in the core is significantly rarer, as expected.
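As a toy illustration of this kind of simulation (not the authors' code, and a global U(1) model rather than the full 2HDM), a gradient-flow relaxation of a complex scalar from random initial conditions produces vortices whose winding can be read off from the phase circulation around each lattice plaquette:

```python
import numpy as np

N, dx, dt, lam = 128, 1.0, 0.1, 1.0
rng = np.random.default_rng(0)
# random initial conditions: small complex noise around zero
phi = 0.1 * (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N)))

def laplacian(f):
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4*f) / dx**2

# gradient flow: dphi/dt = lap(phi) - lam*(|phi|^2 - 1)*phi
for _ in range(3000):
    phi += dt * (laplacian(phi) - lam * (np.abs(phi)**2 - 1) * phi)

# winding number per plaquette from the wrapped phase circulation
theta = np.angle(phi)
def dwrap(a):
    return (a + np.pi) % (2*np.pi) - np.pi
w = (dwrap(np.roll(theta, -1, 1) - theta)
   + dwrap(np.roll(np.roll(theta, -1, 1), -1, 0) - np.roll(theta, -1, 1))
   + dwrap(np.roll(theta, -1, 0) - np.roll(np.roll(theta, -1, 1), -1, 0))
   + dwrap(theta - np.roll(theta, -1, 0))) / (2*np.pi)
print("vortices:", int(np.round(np.abs(w)).sum()))
```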
Secondly, and perhaps more convincingly, figure 3 shows that there is a winding in the 1 and 2 components of both R^a and n^a around the string. We also colour the vectors with the third component of the vector, which shows only small contributions from R̂^3. Since this is the global limit, the current term in the gradient energy leads to a rotation (as a function of the radius) between n̂^3 and the other two components, which would be absorbed into the W^2_r gauge field component for our solutions. It is for this reason that there is also evidence of a winding around the string in the n̂^3 component, which is less pronounced at greater distances from the string.
We have performed similar simulations for the case of monopoles, and these are presented in our earlier work on global monopoles [1]. Taken together with these simulations of vortices, they show that the solutions we have found are the topological solutions of most physical relevance.
FIG. 1: A visualisation of the Hopf fibration for four different values of α, representing half the range of β with colours from β = π (blue) to β = 2π (red); the full range of ζ is displayed. The angles α and β represent unique positions in n^a (on S^2), while ζ parameterises the S^1 direction, which is clearly visible in these plots as lines of constant colour.
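The visualisation described in this caption can be reproduced with a short script (a sketch; the colour map and angle conventions are choices, not the authors'): generate fibres of the Hopf map, stereographically project from (z_0, z_1) = (1, 0), and compactify with r' = tanh(r).

```python
import numpy as np
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(projection="3d")

alpha = np.pi / 2                    # one of the four alpha values
eta = alpha / 2
zeta = np.linspace(0, 2*np.pi, 200)  # parameterises the S^1 fibre
for beta in np.linspace(np.pi, 2*np.pi, 12):  # half the range of beta
    # a fibre over (alpha, beta): vary the common phase zeta
    z0 = np.cos(eta) * np.exp(1j * zeta)
    z1 = np.sin(eta) * np.exp(1j * (zeta + beta))
    # stereographic projection from the point z0 = 1, z1 = 0
    denom = 1.0 - z0.real
    x, y, z = z0.imag/denom, z1.real/denom, z1.imag/denom
    # compactify: rescale the spherical radius to r' = tanh(r)
    r = np.sqrt(x**2 + y**2 + z**2)
    s = np.tanh(r) / r
    ax.plot(x*s, y*s, z*s, color=plt.cm.coolwarm((beta - np.pi)/np.pi))
plt.show()
```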
FIG. 2: Surfaces with R_+ = 0.1 in a simulation with random initial conditions. A small blue box close to the right-hand side surrounds one of the strings; a close-up of this box is presented in figure 3, showing the structure of the bilinear vectors around the string.
FIG. 3: The structure of the bilinear vectors, R^a (top) and n^a (bottom), surrounding a string that forms in simulations with random initial conditions. The vectors shown all have the same magnitude because they are the normalised 2-vectors formed from the 1 and 2 components of n^a and R^a. The colour of each vector represents the 3 component of the normalised full 3-vector. Both vector fields have also been subjected to a global rotation to make the visualisation of the structure more apparent.
"Physics"
] |
An integrated study of glutamine alleviates enteritis induced by glycinin in hybrid groupers using transcriptomics, proteomics and microRNA analyses
Glutamine has been used to improve intestinal development and immunity in fish. We previously found that dietary glutamine enhances growth and alleviates enteritis in juvenile hybrid groupers (Epinephelus fuscoguttatus♀ × Epinephelus lanceolatus♂). This study aimed to further reveal the protective role of glutamine against glycinin-induced enteritis by integrating transcriptome, proteome, and microRNA analyses. Three isonitrogenous and isolipidic trial diets were formulated: a diet containing 10% glycinin (11S group), the 10% glycinin diet supplemented with 2% alanine-glutamine (Gln group), and a diet containing neither glycinin nor alanine-glutamine (fishmeal, FM group). Each experimental diet was fed to triplicate hybrid grouper groups for 8 weeks. Analysis of the intestinal transcriptome and proteome revealed a total of 570 differentially expressed genes (DEGs) and 169 differentially expressed proteins (DEPs) in the 11S and FM comparison group. Similarly, a total of 626 DEGs and 165 DEPs were identified in the Gln and 11S comparison group. Integration of the transcriptome and proteome showed that 117 DEGs had consistent expression patterns at both the transcriptional and translational levels in the Gln and 11S comparison group. These DEGs showed significant enrichment in pathways associated with intestinal epithelial barrier function, such as extracellular matrix (ECM)-receptor interaction, tight junction, and cell adhesion molecules (P < 0.05). Further, the expression levels of genes related to these pathways (myosin-11, cortactin, tenascin, major histocompatibility complex class I and II) were significantly upregulated at both the transcriptional and translational levels (P < 0.05). The microRNA results showed that the expression levels of miR-212 (target genes col1a1 and col1a2) and miR-18a-5p (target gene col1a1) were significantly lower in fish fed the Gln diet than in the 11S group fish (P < 0.05). In conclusion, the ECM-receptor interaction, tight junction, and cell adhesion molecules pathways play a key role in glutamine's alleviation of hybrid grouper enteritis induced by high-dose glycinin, in which miRNAs and their target mRNAs/proteins participate cooperatively. Our findings provide valuable insights into the RNA and protein profiles, contributing to a deeper understanding of the underlying mechanism of fish enteritis.
Introduction
Soya glycinin, accounting for 40% of the total protein in soy seed, has been identified as a major anti-nutritional factor; it has a hexameric structure consisting of six subunits with the basic structure A-S-S-B (disulfide bond, where A and B represent the acidic and basic subunits, respectively) (1). Its antigenicity is relatively stable and is not easily destroyed by treatment at 100°C. Current methods of mitigating soya glycinin-induced enteritis or antigenicity are physical (2), chemical (3), biological (4), and the application of innovative feed additives (5). However, in-depth research is still needed to completely remove the immunogenicity of soya glycinin. High-dose glycinin can impair intestinal immune function, cause an inflammatory response, and ultimately inhibit growth performance in fish (5-8). In general, soya glycinin-induced intestinal inflammation is accompanied by reduced mRNA levels of zonula occludens-1 (zo-1), occludin, and claudin-4, as well as increased interleukin-1β and tumor necrosis factor-α (5, 9). Transcriptomic techniques have been employed to investigate the differential expression underlying soybean meal-induced enteritis (SBMIE) and the immune system-related pathways it affects, including cytokine-cytokine receptor interaction, the intestinal immune network for immunoglobulin A (IgA) production, the nuclear factor NF-κB signaling pathway, the Jak (Janus kinase)-STAT (signal transducers and activators of transcription) signaling pathway, the T-cell receptor signaling pathway, and the tumor necrosis factor (TNF) signaling pathway, which play key roles in responding to soybean meal stress in fish (10, 11). The utilization of proteomics has provided valuable insights into the intricate molecular mechanisms by which fish respond to external stimuli such as feed additives. The influence of dietary tryptophan on the growth and physiology of gilthead seabream (Sparus aurata) has been studied previously, and the proteomic data showed that dietary tryptophan did not affect growth but stimulated immunity in the fish (12). However, integrated transcriptomic and proteomic analyses have been less studied in fish; integrating the two omics layers provides more complete information than a single omics approach, and each can validate the reliability of the other's data.
Although transcriptomic and proteomic technologies can provide a comprehensive understanding of overall molecular-level changes, expression levels may be inconsistent between mRNAs and proteins (13, 14). In addition to deficiencies in high-throughput omics technology and the incompleteness of mRNA/protein databases, the complex regulatory mechanisms underlying the translation of mRNAs into mature proteins may also lead to inconsistent results. MicroRNAs (miRNAs) are major regulators of cellular function (15), prominently contributing to post-transcriptional and translational gene expression through various mechanisms (16). In addition, miRNAs have been found to play important roles in regulating intestinal functions such as epithelial cell growth (17), mucosal barrier function (18), and the development of gastrointestinal disease (19-21). As an important aspect, mRNA expression levels in fish can be regulated by miRNA targeting. A miRNAome study of the intestinal immune function of turbot (Scophthalmus maximus L.) showed that differentially expressed miRNAs contribute to the enhancement of the intestinal immune response and the prevention of host infection, with their target genes implicated in diverse immune functions and inflammatory responses (22). Meanwhile, diet composition influences the expression levels of intestinal miRNAs and their target genes, and several associated pathways, such as cell adhesion molecules, ECM-receptor interaction, the apoptotic signaling pathway, and cytokine-cytokine receptor interaction, have been identified by small RNA sequencing (11, 23).
The nutritional strategies of feed additives for aquatic animals have been studied separately at the mRNA, protein, or miRNA level. However, these molecules are interconnected and can influence one another. mRNAs are transcribed from genes and act as templates for protein synthesis, while miRNAs exert regulatory control over protein translation or mRNA stability (24). This flow of genetic information ultimately leads to the synthesis of proteins, which play various roles in biological processes. Thus, the integration of these three components (mRNAs, proteins, and miRNAs) is essential for a comprehensive study of fish intestinal health.
National production of grouper reached 205,816 tons in 2022, making it the third most productive species among marine economic fish species (25). Our previous study reported that the addition of purified high-dose glycinin to the diet reduced growth performance and caused enteritis in juvenile hybrid groupers (Epinephelus fuscoguttatus♀ × Epinephelus lanceolatus♂) (26). We also found that feed supplementation with 2% alanyl-glutamine enhanced growth performance and alleviated enteritis induced by glycinin in the same species (27). However, the potential protective mechanisms by which glutamine alleviates enteritis in fish have not been studied using multi-omics techniques. This experiment aimed to further reveal the potential protective role of glutamine (Gln) against glycinin-induced enteritis in hybrid groupers by integrating transcriptomic, proteomic, and miRNA analyses. In addition, because Gln is heat-sensitive and poorly soluble during feed processing, a feed substitute for Gln, alanyl-glutamine, was used for the study (28-30).
Grouping and sample collection
Three experimental diets were prepared with equal levels of protein (48% crude protein) and lipid (12% crude lipid): a diet based on fishmeal (referred to as Group FM), a diet containing 10% glycinin (referred to as Group 11S), and a glycinin diet supplemented with 2% alanine-glutamine (referred to as Group Gln). The feed formulation is based on our published articles (24). Juvenile hybrid groupers used in this experiment were obtained from a local commercial hatchery (Zhanjiang, Guangdong, China). Healthy and vigorous hybrid groupers (8.50 ± 0.01 g) were fed each diet for 8 weeks. After the feeding trial finished, distal intestine (DI) samples from the three groups were obtained to determine transcriptome, proteome, and miRNA levels.
Transcriptome sequencing and de novo assembly
A total of 1 μg of RNA from each of the FM, 11S, and Gln experimental groups was utilized for the preparation of the transcriptome libraries. Initial steps involved the generation of first-strand cDNA through reverse transcription, followed by the generation of second-strand cDNA. Following PCR amplification of cDNA fragments along with adapters, the resulting products underwent purification using AMPure XP Beads. Subsequently, the purified double-stranded cDNA underwent end repair, A-tailing, and ligation to sequencing adapters. Ultimately, PCR enrichment yielded the final cDNA library. The library's quality was then assessed using the Agilent Technologies 2100 Bioanalyzer, followed by sequencing on the Illumina platform. Raw data underwent filtration to eliminate adapter sequences and low-quality reads, resulting in a collection of high-quality clean reads, which were assembled to obtain a Unigene library for the species. Once high-quality sequencing data had been obtained, they were assembled using Trinity software (31). Trinity-derived transcripts served as reference sequences (Ref), against which clean reads from each sample were aligned and compared. Finally, reliable transcripts were obtained by filtering out low-expression transcripts. Following the assembly process, the assembled All-Unigenes were subjected to comprehensive annotation against publicly accessible protein databases, encompassing GO (Gene Ontology), KOG (EuKaryotic Orthologous Groups), Swiss-Prot, Nt (non-redundant nucleotide sequences), and Nr (non-redundant protein sequences). The quantification of gene expression levels relied on the expected number of fragments per kilobase of transcript per million mapped reads (FPKM). Differentially expressed genes (DEGs) between two groups were pinpointed using a criterion of fold change (FC) ≥ 1.5 and a false discovery rate (FDR) of < 0.05. Pathway assignments were made by querying sequences against the KEGG database, with KEGG terms having corrected P-values (Q-values) of ≤ 0.05 deemed significant. Transcriptome (de novo assembly) sequencing data have been submitted to the NCBI SRA database under accession number PRJNA1008292.
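As a minimal sketch of the DEG criterion just described (FC ≥ 1.5 in either direction and FDR < 0.05) applied to a per-gene statistics table; the column names and values here are hypothetical, not from the study's pipeline:

```python
import numpy as np
import pandas as pd

# hypothetical differential-expression table (e.g. from a DE analysis tool)
df = pd.DataFrame({
    "gene": ["mhc-I", "col1a1", "lysozyme", "tnf-a"],
    "fc":   [2.10, 1.62, 0.55, 1.10],   # fold change, Gln vs 11S
    "fdr":  [0.001, 0.030, 0.020, 0.400],
})

# criterion from the text: FC >= 1.5 (up) or FC <= 1/1.5 (down), FDR < 0.05
is_deg = ((df["fc"] >= 1.5) | (df["fc"] <= 1 / 1.5)) & (df["fdr"] < 0.05)
degs = df.loc[is_deg].copy()
degs["direction"] = np.where(degs["fc"] >= 1.5, "up", "down")
print(degs)  # mhc-I up, col1a1 up, lysozyme down; tnf-a excluded
```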
Proteome sequencing and analysis
The quantitative proteomic analysis of gut tissues from hybrid groupers was carried out using a 4D label-free approach at Jingjie PTM BioLabs Inc. (Hangzhou, China). As previously described by Jiang et al. (32), the gut samples were initially ground, lysed, and subjected to centrifugation to yield the supernatant, whose protein concentration was measured. Following trichloroacetic acid precipitation and acetone washing, protein samples were dissolved in triethylammonium bicarbonate and digested with trypsin to yield peptides. Subsequently, peptides were desalted through a Strata X SPE column, separated using a NanoElute ultrahigh-performance liquid chromatography system, and introduced into the capillary ion source for ionization. The mass spectrometry analysis was carried out using the timsTOF Pro (tims: trapped ion mobility spectrometry; TOF: time of flight), manufactured by Bruker in the United States.
We employed the MaxQuant search engine (v1.6.15.0) to process raw data from mass spectrometry. The transcriptome database of hybrid grouper (FASTA format) was utilized to identify matching proteins from the tandem mass spectra, and a reverse decoy database was integrated to estimate the false discovery rate (FDR) resulting from random matches. Contaminant proteins within the identified list were excluded to minimize their impact. Cleavage enzyme specificity was designated as Trypsin/P, allowing for a maximum of 2 missed cleavages. Peptides were required to have a minimum length of seven amino acid residues, and a maximum of 5 modifications was considered. Precursor ion mass tolerance was set to 20 ppm for both the First search and Main search phases; similarly, a mass tolerance of 20 ppm was applied to fragment ions. Fixed modifications encompassed carbamidomethylation of cysteine, while variable modifications encompassed methionine oxidation and protein N-terminal acetylation. To ensure robust identification quality, an FDR of 1% was maintained for protein and peptide identification. Differential proteins were identified after sample qualification, and their relative quantification differences between two groups were assessed through a t-test, yielding the corresponding p-value. Furthermore, applying a p-value criterion of ≤ 0.05, protein ratios exceeding 1.2 were considered upregulated, while ratios less than 1/1.2 were considered downregulated. Using the list of identified proteins, we conducted a subcellular localization analysis through the WoLF PSORT database. Pathway analysis was executed utilizing the KEGG database. Furthermore, we employed a two-tailed test to analyze enriched pathways and ascertain the enrichment of differentially expressed proteins, applying a significance threshold of P-value ≤ 0.05. The MS proteomics data have been submitted to the ProteomeXchange Consortium via the iProX partner repository with the dataset identifier PXD044757.
miRNA qPCR analysis
miRNAs regulating key genes associated with intestinal barrier pathways were screened based on a small RNA sequencing database for hybrid groupers. The small RNA transcriptome data were submitted to the SRA database under the accession number SUB7175134. Isolation of miRNA from the intestinal tract was conducted utilizing the RNAiso for Small RNA kit (Takara, China). Subsequently, first-strand cDNA synthesis for mature miRNAs was performed using the Mir-X miRNA First-Strand Synthesis Kit (Takara, China). Quantitative analysis used the miRNA SYBR Green RT-qPCR Kit (Takara, China) with the provided miRNA reference gene (U6). The specific primers for the target miRNAs used in this study are detailed in Supplementary Table 1. Relative quantification was determined by the 2^−ΔΔCt method (33).
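The 2^−ΔΔCt calculation used here is simple arithmetic; a sketch with hypothetical Ct values (U6 as the reference, as in the text):

```python
def rel_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt method."""
    d_ct_sample = ct_target - ct_ref              # normalise to U6 in the sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl   # normalise to U6 in the control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# hypothetical Ct values: miR-212 in a Gln-group fish vs an 11S control
print(rel_expression(ct_target=26.0, ct_ref=18.0,
                     ct_target_ctrl=24.5, ct_ref_ctrl=18.2))
# ~0.31, i.e. below 1: downregulated relative to the control group
```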
Transcriptome and proteome validation
The identical samples employed for transcriptome analysis underwent RT-qPCR validation (n = 3). Primers were designed using Premier 5.0 and subsequently validated using the online Primer-BLAST program. Primer sequences are provided in Supplementary Table 1. For reverse transcription, 1 μg of RNA was used to generate cDNA. Real-time PCR assays were conducted using the CFX96 Real-Time PCR Detection System. The reference gene β-actin was chosen based on a prior study (34). Similarly, relative quantification was determined by the 2^−ΔΔCt method (33).
Protein abundance levels were validated through the quantification of eight selected proteins using parallel reaction monitoring mass spectrometry (PRM-MS) analysis conducted by Jingjie PTM BioLab Co., Ltd. (Hangzhou, China). Relative quantification using the PRM approach was employed, utilizing signature peptides derived from the target proteins identified in the 4D label-free data. Quantification was established with a minimum peptide count of 2, encompassing both unique and razor peptides. Protein extraction and trypsin digestion were conducted as previously outlined. Following the approach outlined in an earlier study (35), peptides were dissolved and then subjected to tandem mass spectrometry in conjunction with liquid chromatography (LC-MS/MS). Subsequently, the acquired MS data underwent processing utilizing Skyline software (v.3.6), which included the setting of several parameters.
Statistical analysis
miRNA expression levels were evaluated using a two-tailed t-test (GraphPad Prism 8.0). For significant differences between the two groups, *0.01 < P < 0.05 and **0.001 < P < 0.01. GraphPad Prism 8.0 was used to generate the histograms.
Results
mRNA sequencing analysis
A total of nine qualified libraries were subjected to sequencing, distributed across the FM, 11S, and Gln groups, with each group consisting of three biological replicates. Table 1 provides a concise overview of the sequencing and assembly details. The FM, 11S, and Gln groups yielded approximately 19.94, 18.05, and 18.26 Gb of clean reads, respectively. Over 91.72% of the reads exhibited Q-scores at the Q30 level, and over 63.22% of the clean reads were successfully aligned.
In the comparison of the Gln and 11S groups, 626 DEGs were enriched in 133 pathways, with the counts of DEGs within each enriched pathway ranging from 2 to 28 (Figure 3A). Among them, the leading 20 KEGG pathways showed significant enrichment in immune system- and human disease-related pathways, including the NOD-like receptor signaling pathway (ko04621), C-type lectin receptor signaling pathway (ko04625), RIG-I-like receptor signaling pathway (ko04622), intestinal immune network for IgA production (ko04672), toll-like receptor signaling pathway (ko04620), Salmonella infection (ko05132), and cardiac muscle contraction (ko04260). Additionally, pathways associated with intestinal epithelial barriers, including tight junction, focal adhesion, and ECM-receptor interaction (ko04512), were also notably enriched (P < 0.05, Figure 3A). Upregulated genes showed significant enrichment in pathways linked to the immune system, including the toll-like receptor signaling pathway, NOD-like receptor signaling pathway, C-type lectin receptor signaling pathway, RIG-I-like receptor signaling pathway, intestinal immune network for IgA production, and MAPK signaling pathway (ko04010; P < 0.05, Figure 3B). Additionally, they were significantly enriched in intestinal epithelial barrier-related pathways such as focal adhesion, regulation of actin cytoskeleton, ECM-receptor interaction, tight junction, and cell adhesion molecules (CAMs; P < 0.05). Downregulated genes showed significant enrichment in the ribosome, oxidative phosphorylation (ko00190), PPAR signaling pathway (ko03320), and cardiac muscle contraction (ko04260; P < 0.05, Figure 3C).
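Pathway enrichment of the kind reported here is typically assessed with a hypergeometric (over-representation) test; a sketch with made-up counts, since the study's exact statistics are not specified at this point:

```python
from scipy.stats import hypergeom

# hypothetical counts for a single KEGG pathway
M = 20000   # annotated background genes
n = 150     # background genes annotated to this pathway
N = 626     # DEGs (Gln vs 11S)
k = 28      # DEGs that fall in this pathway

# P(X >= k): probability of drawing at least k pathway genes by chance
p = hypergeom.sf(k - 1, M, n, N)
print(f"enrichment p-value = {p:.3g}")
```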
Differentially expressed proteins and subcellular localization analysis
As shown in Figure 4, a total of 169 DEPs were found in the 11S and FM comparison group, with 106 upregulated proteins and 63 downregulated proteins (FC > 1.2). A total of 165 DEPs were found in the Gln and 11S comparison group, including 74 upregulated proteins and 91 downregulated proteins. Subcellular localization analysis of the 169 DEPs in the 11S and FM comparison group showed that 78 proteins were situated in the cytoplasm (46.15%, Supplementary Figure 2A); 32 proteins were located in the mitochondria (18.93%); 19 proteins were localized extracellularly (11.24%); and 17 proteins were localized in the nucleus (10.06%). Similarly, of the 165 DEPs in the Gln and 11S comparison group, 77 proteins were localized in the cytoplasm (46.67%); 32 proteins in the mitochondria (19.39%); 19 proteins in the nucleus (11.52%); 10 proteins in the plasma membrane (6.06%); and 8 proteins in both the cytoplasm and nucleus (4.85%; Supplementary Figure 2B).
Integration analysis of the DEGs and DEPs
We performed a nine-quadrant classification of the DEGs and DEPs (Figures 7A, C): quadrants 1 and 9 indicate that the mRNA is inconsistent with the corresponding protein's differential expression pattern; quadrants 2 and 8 indicate that the mRNA is differentially expressed while the corresponding protein is unchanged; quadrants 3 and 7 indicate concordance between mRNA and corresponding protein differential expression; quadrants 4 and 6 indicate differential expression of the protein with no change in the corresponding mRNA; and quadrant 5 indicates that both the co-expressed mRNA and protein are non-differentially expressed. KEGG enrichment pathway analysis was then performed on the concordantly expressed mRNAs and proteins in quadrants 3 and 7 of the 11S and FM comparison group (Figure 7B). The results showed that spliceosome (ko03040), NOD-like receptor signaling pathway, carbon metabolism (ko01200), protein export, necroptosis, pyruvate metabolism (ko00620), C-type lectin receptor signaling pathway, and pentose phosphate pathway were significantly enriched among the top 20 pathways (P < 0.05). Similarly, KEGG enrichment analysis of the mRNAs and proteins consistently expressed in quadrants 3 and 7 of the Gln and 11S comparison group showed that pathways such as glycosaminoglycan biosynthesis (ko00532), NOD-like receptor signaling pathway, necroptosis, phagosome, sphingolipid metabolism, C-type lectin receptor signaling pathway, ferroptosis, and proteasome were significantly enriched (P < 0.05, Figure 7D). Furthermore, the leading 20 KEGG pathways showed enrichment in immune system-related pathways, including the NOD-like receptor signaling pathway and phagosome, along with intestinal barrier-related pathways including tight junction, adherens junction, and cell adhesion molecules.
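A sketch of the nine-quadrant classification just described, using the thresholds stated later in the text (transcriptome FC ≥ 1.5, proteome FC ≥ 1.2) and the conventional layout that matches the quadrant descriptions above (mRNA on the vertical axis, protein on the horizontal, numbered row by row from the top left); the column names mirror the table note, but the values are hypothetical:

```python
import numpy as np
import pandas as pd

T_RNA, T_PROT = np.log2(1.5), np.log2(1.2)  # significance thresholds (log2 FC)

def sign_with_threshold(log2fc, thr):
    # -1 down, 0 unchanged, +1 up
    return np.where(log2fc >= thr, 1, np.where(log2fc <= -thr, -1, 0))

def quadrant(log2fc_protein, log2fc_mrna):
    col = sign_with_threshold(log2fc_protein, T_PROT) + 1  # 0, 1, 2 left to right
    row = 1 - sign_with_threshold(log2fc_mrna, T_RNA)      # 0 top ... 2 bottom
    return 3 * row + col + 1  # quadrants 1..9

df = pd.DataFrame({"log2FC.x": [-1.0, 0.8, 0.0],   # proteins
                   "log2FC.y": [1.2, 1.1, 0.0]})   # genes
df["quadrant"] = quadrant(df["log2FC.x"], df["log2FC.y"])
print(df)  # quadrants 1 (mRNA up, protein down), 3 (both up), 5 (neither)
```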
miRNAs and their target genes involved in the intestinal epithelial barrier
We also focused on genes with inconsistent mRNA and protein differential expression patterns and further screened the genes and proteins associated with intestinal epithelial barrier function in the first quadrant (Table 3). MHC-I showed upregulation at the mRNA level and downregulation at the protein level in the 11S and FM comparison group. As shown by the miRNA target gene profile, miR-143_2, miR-222, miR-192-3p_2, miR-34a-5p_2, and miR-21b-3p were able to target the mhc-I gene. Similarly, in the Gln and 11S comparison group, Col1a1 and Col1a2 exhibited upregulation at the mRNA level and downregulation at the protein level. Moreover, miR-24, miR-212, and miR-18a-5p were able to target the col1a1 gene, and miR-205a, miR-29a-3p, and miR-212 were able to target the col1a2 gene. In addition, the expression levels of miR-18a-5p and miR-212 in the intestine of the Gln group were notably lower than those in Group 11S (P < 0.05, Figure 8), while miR-24 expression showed no significant difference between Groups 11S and Gln (P > 0.05).
Transcriptome and proteome validation
To validate the precision of the transcriptome findings for the FM, 11S, and Gln groups, ten genes (5 upregulated and 5 downregulated) were selected for qPCR validation (Supplementary Figure 3). The agreement between the RT-qPCR and transcriptome sequencing results supports the accuracy of the transcriptome sequencing. To validate the precision of the proteome results for the three groups (FM, 11S, and Gln), DEP validation was then performed by PRM quantitative proteomics (Supplementary Figure 4). The results showed that the ribosomal protein L32 (RPL32), ribosomal protein S7 (RPS7), macrophage migration inhibitory factor (MMIF), malate dehydrogenase (MDH), and beta-hydroxysteroid dehydrogenase (β-HSD) proteins in the 11S and FM comparison group showed consistent expression levels between the PRM and 4D-LFQ analyses (Supplementary Figure 4A). Moreover, the expression levels of the CD45, RPL19, histone, annexin, and annexin max3 proteins in the Gln and 11S comparison group were consistent with the results of the 4D-LFQ analysis (Supplementary Figure 4B).
Discussion
We previously found that dietary Gln improved growth performance and alleviated intestinal inflammation induced by glycinin in hybrid grouper juveniles (27). However, the potential protective mechanism by which Gln alleviates enteritis in hybrid grouper remains unclear. On this basis, we further revealed its protective mechanism against soybean glycinin-induced hybrid grouper enteritis by integrating transcriptomic, proteomic, and miRNA analyses. In the 11S and FM comparison group, the foremost 20 KEGG pathways involved immune system- and disease process-related pathways, such as phagosome and herpes simplex infection, as well as intestinal epithelial barrier-related pathways, such as tight junction, focal adhesion, apoptosis, and necroptosis, which were significantly enriched. Analogous pathways have been identified in carnivorous fish experiencing SBMIE, such as Atlantic salmon (Salmo salar) (10, 36) and turbot (37, 38). We also reported that these pathways were enriched in hybrid groupers in the comparison between soybean meal substituting 50% of fishmeal (SBM50) and fishmeal (39). In addition, the downregulated genes were involved in intestinal epithelial barrier-related pathways such as tight junction, focal adhesion, ECM-receptor interaction, and cell adhesion molecules (CAMs), suggesting impaired intestinal development and increased intestinal permeability in fish fed the 11S diet alone. When Gln was added to the 11S diet, the upregulated genes exhibited pronounced enrichment in pathways associated with the immune system, including the toll-like receptor signaling pathway, NOD-like receptor signaling pathway, C-type lectin receptor signaling pathway, RIG-I-like receptor signaling pathway, intestinal immune network for IgA production, and MAPK signaling pathway. Furthermore, there was significant enrichment in pathways related to the intestinal epithelial barrier, including focal adhesion, ECM-receptor interaction, tight junction, regulation of actin cytoskeleton, and cell adhesion molecules (CAMs). These results suggest that Gln enhanced intestinal immune and epithelial barrier functions and reduced the occurrence of hybrid grouper enteritis induced by soybean 11S. Similar results have been observed in various fish species, showing that the addition of Gln to feed was effective in alleviating the clinical symptoms of trinitrobenzene sulfonic acid-induced enteritis in grass carp (Ctenopharyngodon idella) (40) and soybean antigenic protein-induced enteritis in Jian carp (Cyprinus carpio var. Jian) (9, 41), and in promoting intestinal barrier function and hindgut morphology in soybean meal-induced enteritis in turbot (30, 38). Proteins are the direct executors of myriad life activities. Proteomics enables population-level assessment of protein expression levels, composition, and modification status in samples through high-throughput analysis, which in turn reveals protein functions, potential relationships between proteins, and the mining
[Table note: the fold change (FC) thresholds for the transcriptome and proteome were ≥ 1.5 and ≥ 1.2, respectively; log2FC.x represents proteins and log2FC.y represents genes.]
of new proteins. The proteomic data of this study showed that a total of 169 DEPs were found in the comparison of the 11S and FM groups, and 165 DEPs in the Gln and 11S comparison group. The DEPs of the two comparison groups were mainly distributed in the cytoplasm, with percentages of 46.15% and 46.67%, respectively, suggesting that the DEPs may mainly perform important functions in the cytoplasm. In addition, KEGG functional annotation was performed on these DEPs. In the Gln and 11S comparison group, the upregulated proteins displayed significant enrichment in pathways associated with the immune system and human disease, including the NOD-like receptor signaling pathway, natural killer cell-mediated cytotoxicity, renal cell carcinoma, chronic myeloid leukemia, and acute myeloid leukemia, implying that these signaling pathways may play an important role in glycinin-induced enteritis in hybrid groupers.
When the soybean 11S feed was supplemented with Gln, the upregulated DEPs showed significant enrichment in intestinal epithelial barrier-related pathways including tight junction and cell adhesion molecules (CAMs). Similar results have been observed in the jejunum of sows and piglets, demonstrating that dietary Gln increased the translation levels of intestinal tight junction and cell adhesion molecule proteins (42). Notably, immune system- and human disease-related pathways, including Th1 and Th2 cell differentiation, Th17 cell differentiation, platelet activation, the JAK-STAT signaling pathway, primary immunodeficiency, inflammatory bowel disease, and leishmaniasis, were also significantly enriched in the Gln and 11S comparison group, suggesting a close link between intestinal epithelial barrier function and immune system pathways in hybrid grouper when Gln is supplemented in soya 11S feed. Correlation analysis of transcriptomic and proteomic data offers more complete insight than single omics, and the two can mutually validate the reliability of the data. In this study, 2,057 genes were associated at both the mRNA and protein levels in the FM, 11S, and Gln groups. The correlation coefficients of gene expression in the 11S-vs-FM and Gln-vs-11S comparison groups at the mRNA and protein levels were -0.10 and 0.09, respectively, indicating that the mRNA-protein correlation in this study was low. Huang et al. (43) correlated the transcriptome and proteome of cyanobacteria at two time points (24 h and 48 h) under nitrogen starvation and found correlation coefficients of 0.04 and -0.001, respectively. The process of translation from mRNA to protein is subject to complicated regulation, such as post-transcriptional regulation and protein translational modification, resulting in a weak correlation between transcriptome and proteome (44). In order to clarify the mechanism of the protective effect of Gln in alleviating soybean 11S-induced grouper enteritis, differentially expressed mRNAs with consistently expressed proteins were further subjected to KEGG enrichment pathway analysis. Genes such as myosin-1, tubulin alpha-2, alpha-actin, major histocompatibility complex class II (mhc-II), mucin-3B, mucosal pentraxin, leiomodin-1, cytoplasmic dynein 1 heavy chain 1, lysozyme, and eukaryotic translation initiation factor 5B were downregulated at both the mRNA and protein levels in the 11S and FM comparison group. In addition, myosin-1, cortactin, Wiskott-Aldrich syndrome protein, ras-related C3 botulinum toxin substrate 2, tenascin, cd4, mhc-I, mhc-II, lysozyme, and iκbα were upregulated at both the transcriptional and translational levels in the Gln and 11S comparison group. These genes participate in intestinal epithelial barrier pathways, including tight junction, adherens junction, cell adhesion molecules (CAMs), and ECM-receptor interaction, as well as the NOD-like receptor and NF-κB signaling pathways. Tight junctions are essential for animal organisms to establish a selective permeability barrier between neighboring cells. Myosin in tight junctions is the most important component of fish muscle proteins responsible for the contractile function of myogenic fibers and has ATPase activity, binding actin and forming fibers under physiological conditions of low ionic strength (45). In addition, cortactin is associated with a variety of complex cellular processes, including cell motility, invasiveness, synaptogenesis, phagocytosis, tumorigenesis, and metastasis formation (46). Overexpression of cortactin
could contribute to the emergence of invasive tumor phenotypes in a variety of ways, including enhanced actin polymerization, downregulation of the epidermal growth factor receptor, and molecular interactions between cyclin D1 and CD44 proteins (46). The extracellular matrix (ECM) is a complex blend of structural and functional macromolecules, which holds a vital role in the development of tissues and organs and the preservation of cells and tissues (47). Tenascin, a component of the ECM, was also studied in the context of Gln in mouse mesangial cells (48): the mRNA expression level of tenascin was not affected after treating the cells with 2 mM Gln compared to the control group (no Gln added), which is inconsistent with the results of the present experiments, probably owing to differences in species and between in vitro and in vivo experiments. After infection of bone marrow-derived macrophages (BMDMs) with the parasite Leishmania donovani, supplementation with Gln significantly increased the gene expression of mhc-II (49), which is similar to the results of the present experiment. The MHC also comprises the most polymorphic genes in vertebrate genomes and is closely related to the immune response (50). IκB is an inhibitor of NF-κB, and NF-κB activity is suppressed while it is present. The IKK complexes, encompassing IKKα, IKKβ, and IKKγ, are capable of initiating the phosphorylation of IκB. This phosphorylation event prompts the degradation of IκB, subsequently culminating in the activation of NF-κB. This activation involves various subunits of NF-κB, including NF-κB p52, NF-κB p65, and c-Rel. As a result, there is upregulation of the expression of pro-inflammatory cytokines like tnf-α (51). The present study and our previous results (27) also found that dietary Gln downregulated ikkβ, nf-κb, tnf-α, il-1β, ifn-α, and hsp70 mRNA expression levels, upregulated IκB expression at both the mRNA and protein levels, and ultimately reduced the occurrence of inflammation in hybrid groupers. Similar results were found for another amino acid product, Met-Met, showing that suitable dietary Met-Met downregulated the gene levels of nf-κb p65, c-rel, ikkβ, and ikkγ and upregulated the gene level of iκbα in the intestinal tract of juvenile grass carp (52). In addition, lysozyme can remove the residual cell wall after the action of antibacterial factors, enhance the antibacterial sensitivity of other immune factors, and synergize with other immune factors to resist the invasion of foreign pathogens; increased serum lysozyme activity accordingly improves immunity (53). Our previous results also showed that soybean 11S reduced intestinal lysozyme of hybrid grouper at both the transcriptional and protein levels, whereas supplementation of Gln in the 11S feed increased lysozyme at both the mRNA and protein levels, suggesting a potential enhancement of the intestinal immune function of the hybrid grouper.
The primary factor contributing to the limited correlation between transcriptome and proteome data is the intricate regulation occurring at multiple stages of gene expression, including transcription of DNA into mRNA and subsequent translation of mRNA into protein. Diverse factors exert control over these processes, at both the transcriptional and translational levels, as well as through post-translational modifications. These multifaceted regulatory mechanisms lead to variations in mRNA transcript numbers and in protein localization, abundance, and functionality. Consequently, these dynamic changes disrupt the alignment between an mRNA and its corresponding protein, resulting in the observed reduced correlation between the two. We next focused on the role of miRNAs in post-transcriptional and translational control of gene expression (16) to further explain genes that are inconsistent at the mRNA and protein levels. The intestinal miRNA expression profile and the miRNAs' target genes were obtained from a previous experiment, showing that the mhc-I gene (upregulated at the transcriptional level and downregulated at the translational level) could be regulated by miR-143_2, miR-222, miR-192-3p_2, miR-34a-5p_2, and miR-21b-3p. This implies that these miRNAs likely have a significant role in regulating the target mRNA/protein (MHC-I). In addition, the target gene of miR-24, miR-24-3p, miR-18a-5p, and miR-212 in the Gln and 11S comparison group was type I collagen α1, and the target gene of miR-205a, miR-29a-3p, and miR-212 was type I collagen α2. Collagen has strong biological activity and function and plays a crucial role in mediating cell migration, differentiation, and proliferation (54). In this experiment, the col1a1 and col1a2 genes were upregulated at the transcriptional level and downregulated at the translational level, suggesting that the miRNAs above may inhibit translation of the col1a1 and col1a2 genes. The miRNA qPCR results further confirmed that miR-18a-5p and miR-212 expression levels differed significantly between the Gln and 11S groups. Notably, the downregulated miR-212 targets the col1a1 and col1a2 genes. Our earlier miRNA data showed that miR-212 had significantly higher expression (log2FC = 2.182) in the SBM50 and FM comparison group (39). MiR-212 is a potent therapeutic target in mouse intestinal epithelial cells, where it affects a variety of T cells: inhibition of miR-212/132 led to the induction of Treg1 and CD4+ cells and caused a decrease in Th17 cells (55). During chronic HIV/SIV infection, disrupted expression of miR-212 in colonic epithelial cells can contribute to disruption of the epithelial barrier by downregulating the expression of occludin and PPARγ (56). The relationship between the increased expression of Col1a1 and Col1a2 proteins and the levels of miR-18a-5p and miR-212 will be a key point for future work. Notably, further validation of the targeting relationships between these miRNAs and their target genes by dual-luciferase reporter assays is needed.
In conclusion, enteritis induced by soybean glycinin was reflected at both the mRNA and protein levels. By integrating the transcriptome and proteome, 117 genes showed consistent expression patterns at both the transcriptional and translational levels in the Gln and 11S comparison group. We further found that intestinal epithelial barrier pathways mediate the molecular mechanism by which Gln alleviates grouper enteritis induced by soybean glycinin. In addition, some miRNAs, such as miR-212 and miR-18a-5p, play key regulatory roles in Gln alleviation of hybrid grouper enteritis. Our findings provide valuable insights into the RNA and protein profiles, contributing to a deeper understanding of the underlying mechanism of fish enteritis.
A total of 570 DEGs were identified in the 11S and FM comparison group, comprising upregulated and downregulated genes (FC > 1.5). Likewise, 626 DEGs were identified in the Gln and 11S comparison group, with 328 upregulated and 298 downregulated genes. Principal component analysis (PCA) was used to assess the similarities within samples and whether the samples could be grouped well (Supplementary Figure 1).
Data availability statement
The datasets presented in this study can be found in online repositories; the repositories and accession numbers are given in the Methods (NCBI SRA: PRJNA1008292; ProteomeXchange/iProX: PXD044757; SRA: SUB7175134).
FIGURE 1: Analysis of histograms and volcano plots of differentially expressed genes (DEGs) in the 11S-vs-FM and Gln-vs-11S comparison groups. The horizontal and vertical axes in the volcano plots represent the DEG value and -log10 p-value, respectively. Upregulated DEGs are marked by red dots, downregulated DEGs by green dots, and genes displaying no significant difference in expression by gray dots.
FIGURE 4: Analysis of histograms and volcano plots of differentially expressed proteins (DEPs) in the 11S-vs-FM and Gln-vs-11S comparison groups. The horizontal and vertical axes in the volcano plots represent the DEP value and -log10 p-value, respectively. Upregulated DEPs are denoted by red dots, downregulated DEPs by green dots, and proteins displaying no significant difference in expression by gray dots.
FIGURE 5: Enrichment analysis of KEGG pathways for differentially expressed proteins (DEPs) in the 11S-vs-FM comparison group. Log2 fold enrichment is displayed on the horizontal axis, while the vertical axis denotes KEGG pathway names. Bubble size signifies protein counts within each pathway, and colour represents the enrichment P-value. (A) Enrichment of upregulated DEPs. (B) Enrichment of downregulated DEPs.
FIGURE 6 Enrichment analysis of KEGG pathways for differentially expressed proteins (DEPs) in the Gln-vs-11S comparison group. Log2 fold enrichment is displayed on the horizontal axis, while the vertical axis denotes KEGG pathway names. Bubble size signifies protein counts within each pathway. The enrichment P-value is represented by color. (A) Enrichment by upregulated DEPs. (B) Enrichment by downregulated DEPs.
Top 20 KEGG enrichment analysis with differential mRNAs consistent with the corresponding differential protein expression (quadrants 3 and 7) in both the 11S-vs-FM and Gln-vs-11S groups. The horizontal axis (A, C) represents the log2 ratio of protein, and the vertical axis (A, C) denotes the log2 ratio of transcript. The horizontal axis (B, D) indicates the rich factor, and the vertical axis (B, D) denotes the name of the KEGG pathway. Bubble size (B, D) indicates protein counts within each pathway. The enrichment -log10 P-value is represented by color (B, D).
FIGURE 8 Targeted miRNA levels analyzed by quantitative PCR (qPCR) in the 11S and Gln groups. miRNAs regulating key genes associated with the intestinal barrier pathways were screened based on a small RNA sequencing database in hybrid groupers. *0.01 < P < 0.05 and **0.001 < P < 0.01.
TABLE 1
Overview of mRNA sequencing datasets from the intestine (9 samples).
TABLE 2
Differential genes and proteins associated with the intestinal epithelial barrier in quadrants 3 or 7.
TABLE 3
miRNAs targeting genes and proteins related to the intestinal barrier function. | 8,082.2 | 2023-11-22T00:00:00.000 | [
"Biology",
"Environmental Science",
"Agricultural and Food Sciences"
] |
Vascular physiology drives functional brain networks
We present the first evidence for vascular regulation driving fMRI signals in specific functional brain networks. Using concurrent neuronal and vascular stimuli, we collected 30 BOLD fMRI datasets in 10 healthy individuals: a working memory task, flashing checkerboard stimulus, and CO2 inhalation challenge were delivered in concurrent but orthogonal paradigms. The resulting imaging data were averaged together and decomposed using independent component analysis, and three “neuronal networks” were identified as demonstrating maximum temporal correlation with the neuronal stimulus paradigms: Default Mode Network, Task Positive Network, and Visual Network. For each of these, we observed a second network component with high spatial overlap. Using dual regression in the original 30 datasets, we extracted the time-series associated with these network pairs and calculated the percent of variance explained by the neuronal or vascular stimuli using a normalized R2 parameter. In each pairing, one network was dominated by the appropriate neuronal stimulus, and the other was dominated by the vascular stimulus as represented by the end-tidal CO2 time-series recorded in each scan. We acquired a second dataset in 8 of the original participants, where no CO2 challenge was delivered and CO2 levels fluctuated naturally with breathing variations. Although splitting of functional networks was not robust in these data, performing dual regression with the network maps from the original analysis in this new dataset successfully replicated our observations. Thus, in addition to responding to localized metabolic changes, the brain’s vasculature may be regulated in a coordinated manner that mimics (and potentially supports) specific functional brain networks. Multi-modal imaging and advances in fMRI acquisition and analysis could facilitate further study of the dual nature of functional brain networks. It will be critical to understand network-specific vascular function, and the behavior of a coupled vascular-neural network, in future studies of brain pathology.
Introduction
Imaging neuroscience has advanced a new theory of brain function based on the interconnectedness of neuronal activity in multiple brain regions (Friston, 2011). These regions form structural and functional networks that are consistent across individuals (Damoiseaux et al., 2006) and intrinsic to brain activity during active processing or in the resting state (Friston, 2011;Smith et al., 2009). To provide efficient and targeted support for such neuronal networks, we hypothesize that the cerebrovasculature has also evolved characteristics of functional networks.
It is well established that local blood flow is tightly coupled to local neuronal activity to protect brain metabolism (Damoiseaux et al., 2006; Karbowski, 2014). This coupling is what underpins the Blood Oxygenation Level Dependent (BOLD) contrast mechanism in functional magnetic resonance imaging (fMRI) of brain activity, and this technique has been used to characterize functional networks in thousands of neuroimaging studies of the human brain. Several functional brain networks are robustly identified in human subjects, in both task-activation and resting-state datasets (Cole et al., 2014; Damoiseaux et al., 2006; Smith et al., 2009), and are frequently characterized in patient cohorts to better understand the mechanisms of pathology.
However, the vasculature can also regulate local blood flow in response to physical and chemical signals, independently of local neuronal activity (Kuschinsky and Wahl, 1978). Inhalation of air with elevated levels of carbon dioxide (CO 2 , a potent vasodilator) is frequently used to drive a vascular response and a resulting BOLD signal increase. The resulting maps of cerebrovascular reactivity are frequently used to assess impairment in vascular function, such as in multiple sclerosis (Marshall et al., 2014), Alzheimer's disease (Glodzik et al., 2013), and stroke (Krainik et al., 2005;Pillai and Mikulis, 2015). This type of hypercapnia challenge impacts arterial blood gas tensions systemically, influencing all of the cerebrovasculature simultaneously. However, even in healthy individuals the characteristics of local vascular responses to CO 2 vary across the brain (Bright et al., 2009) and may demonstrate regions of coordinated vascular regulation.
In a previous fMRI study to improve our methodology for mapping this regional variation in vascular regulation, we used a breath-hold paradigm to induce changes in end-tidal CO 2 (Bright and Murphy, 2013). In further exploratory analyses, we averaged the resulting BOLD-weighted data across subjects and used Independent Component Analysis (ICA) to decompose the data into spatially independent "network" maps and associated time-series. In these results, we identified that the Default Mode Network (DMN) was represented by two components: one DMN time-series showed clear BOLD signal increases lagging the end-tidal CO 2 effects, thereby reflecting the vasodilatory effect of the stimulus. Interestingly, the second DMN component time-series exhibited BOLD signal decreases during and preceding the actual breath-hold itself, potentially reflecting deactivation of this network and reduced neural activity during the active, attentional portion of the paradigm. (See Supplemental Figure 1 for a summary of these preliminary results.) Based on this observation, we hypothesize that functional brain networks may be comprised of two, distinct but coupled systems: one primarily driven by neuronal activity and one driven more by vascular regulation. Extending this premise, vascular regulation may occur in a coordinated manner across multiple, long-distance brain regions, mimicking or contributing to known functional brain networks.
To test this hypothesis, we developed a protocol to probe both physiological systems across multiple brain networks, employing concurrent and orthogonal neuronal and vascular stimuli. We decompose the resulting BOLD signal changes using ICA and identify the relative influence of neuronal and vascular factors on functional brain networks. Our results provide further evidence for the dual-nature of functional brain networks, and highlight the importance of characterizing vascular function as well as neuronal function within specific brain networks.
Methods
Whole-brain functional MRI neuroimaging data were collected during stimuli designed to simultaneously probe neuronal and vascular systems throughout the brain. These data were then decomposed to identify network structures reflecting either neuronal or vascular mechanisms. Data are publicly available through the Open Science Framework (DOI 10.17605/OSF.IO/NYQZV).
Neuronal and vascular stimuli
A 3-back working memory task (centrally presented, digits 0-9, presented for periods of 0.5 s at 1.5 s intervals) was delivered in a 30-s block design, with an extended (60 s) off-period in the middle of the paradigm. Participants were asked to press a button when the digit presented was the same as that presented three stimuli previously. A visual stimulus consisting of a radial flashing checkerboard pattern was also presented in a block design in the second half of the scan (8 Hz, 70% contrast, with neutral center to allow simultaneous presentation of the working memory task). These stimuli were presented using a rear projector and screen viewed through a mirror on the head coil.
During these neuronal tasks, four 1-min blocks of passive hypercapnia were used as a concurrent vascular stimulus. A gas mixture with increased levels of carbon dioxide (CO2) was delivered to the subject via a face mask, manually adjusted to target an end-tidal CO2 increase of +5 mmHg. Inhalation of CO2 alters arterial blood gas tensions, which results in vasodilation and enhanced blood flow throughout the body. It is known that the response of local vessels to this systemic stimulus varies across the brain, in amplitude and dynamics, and these variations can be observed using BOLD fMRI (Bright et al., 2009). We hypothesize that these variations in the response to hypercapnia will reveal spatial patterns of coordinated vascular regulation, or vascular networks.
All three stimulus paradigms were designed to be mutually orthogonal: the correlation between each of the idealized stimulus designs was zero. A schematic of each stimulus is presented in Fig. 1. The neuronal stimulus timings were convolved with the canonical hemodynamic response function (SPM). The end-tidal CO 2 data were extracted using bespoke code (MATLAB, MathWorks, Natick, MA, USA), convolved with the same hemodynamic response function, and used as a scan-specific measure of the vascular stimulus evoked by the hypercapnia challenge (Bright and Murphy, 2013). Note that there may be slight collinearity between the neural and vascular stimuli following convolution, and depending on the precise end-tidal CO 2 changes induced in each participant.
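A minimal sketch of this design logic is given below, assuming a 1 s sampling grid and hypothetical block timings (the real paradigm timings are not reproduced here). Stimulus boxcars are convolved with a double-gamma approximation of the canonical HRF, and the pairwise correlations of the resulting regressors are printed to check for residual collinearity.

```python
import numpy as np
from scipy.stats import gamma

T = 600                       # assumed scan length in seconds (1 s grid)

def hrf(t):
    # Double-gamma approximation of the canonical hemodynamic response
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 12)

def block(onsets, dur, n=T):
    x = np.zeros(n)
    for o in onsets:
        x[o:o + dur] = 1.0    # boxcar for each stimulus block
    return x

kernel = hrf(np.arange(32))
# Hypothetical onsets; an actual design would choose timings so the raw
# boxcars are mutually orthogonal before convolution
memory = np.convolve(block([30, 120, 330, 420], 30), kernel)[:T]
visual = np.convolve(block([300, 360, 480, 540], 30), kernel)[:T]
co2    = np.convolve(block([60, 180, 390, 510], 60), kernel)[:T]

# Convolution can reintroduce slight collinearity, as noted in the text
print(np.corrcoef(np.vstack([memory, visual, co2])).round(2))
```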
In a follow-up study (Replication) that did not involve the hypercapnia stimulus, the participant's end-tidal CO 2 levels were allowed to fluctuate naturally, and a nasal cannula was used to monitor respiratory gas content in lieu of the face mask.
Expired gas content was continuously monitored via a sampling port on the face mask, and O2 and CO2 data were recorded using a capnograph and oxygen analyzer (AEI Technologies, PA, USA). End-tidal CO2 data were extracted and convolved with a hemodynamic response function (Fig. 2); the hypercapnia achieved in this protocol (averaged across the study) was 5.8 ± 1.1 mmHg above baseline levels.

Fig. 1. Schematic of neuro-vascular stimulus paradigm. The neuronal stimuli (working memory task and flashing checkerboard pattern) were presented in a block design, which was convolved with a hemodynamic response function to model the resulting BOLD signal. Four 1-min blocks of hypercapnia were induced via gas inhalation, and modeled in a subject-specific manner by extracting the end-tidal CO2 data and convolving with a hemodynamic response function.
The study cohort size was not determined via a formal power calculation, but was judged to be sufficiently large given the literature studying neuronal (Damoiseaux et al., 2006) and vascular (Curtis et al., 2014) networks. This study was approved by the Cardiff University School of Psychology Ethics Committee, and all volunteers gave written informed consent.
Datasets were then detrended using second order polynomials and converted into units of percentage change (%BOLD). The 30 preprocessed %BOLD datasets were averaged together to reduce the influence of any signals not time-locked to the stimulus paradigm (i.e., resting fluctuations and other noise confounds).
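The detrending and %BOLD conversion described here can be sketched as follows; the array shapes and random placeholder data are assumptions for illustration, not the study's data.

```python
import numpy as np

def to_percent_bold(data):
    """Remove a second-order polynomial trend and convert to %BOLD.

    data: (time, voxels) array for one scan.
    """
    t = np.arange(data.shape[0])
    coeffs = np.polynomial.polynomial.polyfit(t, data, deg=2)
    trend = np.polynomial.polynomial.polyval(t, coeffs).T
    return 100.0 * (data - trend) / data.mean(axis=0, keepdims=True)

# Averaging the preprocessed scans suppresses signals not time-locked to
# the stimuli (resting fluctuations and other noise confounds)
scans = [100 + np.random.randn(600, 1000) for _ in range(30)]  # placeholders
avg = np.mean([to_percent_bold(s) for s in scans], axis=0)
```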
Network analysis
The average dataset was decomposed into spatially independent networks using independent component analysis, as implemented in the MELODIC tool in FSL (dimensionality fixed to output 30 components; each comprised of a network map and associated time-series). Because BOLD-weighted signals are both directly influenced by changes in the vasculature, and indirectly influenced by neuronal activity via neurovascular coupling, the temporal characteristics of signal changes were used to determine whether they reflect primarily neuronal or vascular mechanisms. Three 'neural networks' were identified using temporal correlation values of the component time-series and stimulus timings as follows. The component with the maximal negative correlation with the 3-back stimulus was identified as the neuronal Default Mode Network (DMN), which is robustly de-activated during working memory tasks (Hampson et al., 2006;Raichle et al., 2001;Shulman et al., 1997). The component demonstrating maximum positive correlation with the 3-back stimulus was identified as the neuronal Task Positive Network (TPN) (Fox et al., 2005;Spreng, 2012). Finally, the component exhibiting maximum positive correlation with the visual stimulus was identified as the neuronal Visual Network (VN).
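A rough stand-in for this step, using scikit-learn's FastICA in place of MELODIC, is shown below. The placeholder data and regressors are assumptions, and real ICA components have arbitrary sign, so the correlation-based selection should be interpreted with that caveat.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
avg = rng.standard_normal((600, 1000))     # placeholder (time, voxels) data
memory = rng.standard_normal(600)          # placeholder task regressors
visual = rng.standard_normal(600)

ica = FastICA(n_components=30, random_state=0)
maps = ica.fit_transform(avg.T).T          # (30, voxels) spatial maps
tcs = ica.mixing_                          # (time, 30) component time-series

def pick(stimulus, sign=+1):
    """Component whose time-series correlates most with `stimulus`,
    in the requested direction (+1 positive, -1 negative)."""
    r = np.array([np.corrcoef(stimulus, tcs[:, k])[0, 1] for k in range(30)])
    return int(np.argmax(sign * r))

dmn = pick(memory, sign=-1)   # DMN: maximal negative correlation with 3-back
tpn = pick(memory, sign=+1)   # Task Positive Network
vn  = pick(visual, sign=+1)   # Visual Network
```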
Component maps were thresholded using a mixture model and an alternative hypothesis testing approach (Beckmann and Smith, 2004) (threshold level 0.5) and spatial similarity between all 30 components was quantified using Dice's overlap coefficient (Dice, 1945). For each neuronal network, we identified the additional component map with the greatest spatial overlap. Thus, three pairs of spatially coupled components were identified.
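Dice's coefficient and the pairing step can be written compactly; the simple absolute threshold below stands in for the mixture-model thresholding used in the paper and is an assumption of this sketch.

```python
import numpy as np

def dice(map_a, map_b, thr=0.5):
    """Dice overlap of two thresholded component maps."""
    a, b = np.abs(map_a) > thr, np.abs(map_b) > thr
    denom = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / denom if denom else 0.0

def paired_component(idx, maps):
    """Index of the other component with maximal spatial overlap."""
    scores = [dice(maps[idx], maps[k]) if k != idx else -1.0
              for k in range(maps.shape[0])]
    return int(np.argmax(scores))
```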
Finally, dual regression (Filippini et al., 2009) was used to extract the time-series associated with these 6 components within each of the original 30 datasets. The normalized R 2 value was defined as the time-series variance (R 2 ) explained by one stimulus normalized by the variance explained by the full stimulus model, which is the percentage of explained variance attributed to one stimulus. Paired two-tailed Student t-tests were used to compare the normalized R 2 values of each component pairing (DMN, TPN and VN) for each of the stimuli. Normality of the pair-wise differences was assessed using the Lilliefors test, and significant non-zero differences in the temporal signatures of the component pairs were identified (*p < 0.05, Bonferroni corrected for multiple comparisons).
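The normalized R2 metric has a direct least-squares implementation; a minimal sketch, assuming a design with an intercept and the convolved stimulus regressors, is:

```python
import numpy as np

def r2(y, regressors):
    """Fraction of variance in y explained by the given regressors."""
    X = np.column_stack([np.ones(len(y))] + list(regressors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

def normalized_r2(ts, stim, full_model):
    """Variance explained by one stimulus, as a fraction of the variance
    explained by the full stimulus model."""
    return r2(ts, [stim]) / r2(ts, full_model)

# e.g. fraction of explained variance attributable to the CO2 regressor:
# frac = normalized_r2(component_ts, co2, [memory, visual, co2])
```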
Fig. 2. End-tidal CO2 data for all scans. Data for 30 scans (3 repeated scans per participant) were convolved with a hemodynamic response function and represented the scan-specific vascular stimulus. For illustration purposes, data were normalized to the baseline end-tidal CO2 level (mean value in the first 100 s of the scan).
Replication and generalizability
Eight of the original 10 subjects were scanned, three times each, using a reduced stimulus paradigm consisting of only the working memory and visual stimuli. Thus, in these scans, end-tidal CO 2 was allowed to fluctuate naturally rather than be driven by the gas inhalation stimulus. As such, these scans follow more "typical" task-activation fMRI experiments, and they will allow us to assess the replicability and generalizability of our observations.
Data were pre-processed as described above. Because the end-tidal CO2 levels were allowed to fluctuate naturally, these fluctuations vary across scans and individuals and would not be robustly present in the group-average dataset. Thus, independent component analysis could not be applied to isolate vascular-neuronal network pairs in the averaged data as was done in the original analysis. Instead, dual regression was used to extract, from the replication data, the time-series associated with the network maps identified in the original dataset.
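Stage 1 of dual regression (group-level spatial maps regressed onto an individual dataset to recover scan-specific time-series) is all that is needed here; a minimal sketch, with assumed array shapes, is:

```python
import numpy as np

def dual_regression_stage1(data, group_maps):
    """Recover per-scan time-series for a set of group-level spatial maps.

    data:       (time, voxels) array for one scan
    group_maps: (components, voxels) array from the group ICA
    """
    pinv = np.linalg.pinv(group_maps.T)    # least-squares spatial regression
    return (pinv @ data.T).T               # (time, components)

# ts = dual_regression_stage1(replication_scan, maps)
```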
Results
Following independent component analysis, three components were identified as "neuronal networks" on the basis of maximum temporal correlation with the neuronal stimulus paradigms: the Default Mode Network, Task Positive Network, and Visual Network (Fig. 3).
Using dual-regression, we extracted the time-series associated with each of these components in the original datasets (all time-series provided in Supplemental Figure 2). Each time-series was analyzed to determine the relative contributions of the neuronal and vascular stimuli to the BOLD contrast dynamics, as summarized by the normalized R 2 values described above. We observed that all three functional brain networks probed in our study were composed of spatially similar pairs of components where one was significantly more associated with the appropriate neuronal stimulus and the other significantly more associated with the vascular stimulus (Fig. 4). For reference, a summary of the other components in the ICA decomposition is provided (Supplemental Figure 3).
In Fig. 4, the networks on the left of each pairing were identified as maximally temporally correlated with the neuronal stimuli; thus, by design, the working memory stimulus and visual stimulus explain a large proportion of the signal variance (high normalized R 2 values). Interestingly, the networks on the right of each pairing, which were identified as being spatially similar, show a significantly reduced relationship with these neuronal stimuli. The bottom row shows that all networks show a relationship with the hypercapnia stimulus (represented by the end-tidal CO 2 data from individual scans), however the networks on the right of each pairing show significantly greater normalized R 2 values in all cases. Combined, these results suggest that the pairs of spatially similar networks consist of one network representing the neuronal stimuli and one network more reflective of the vascular stimuli. (Note, as a control, we see the expected minimal relationship between the DMN and TPN and the visual stimulus, or the VN and the working memory stimulus.) When examining the Replication dataset, similar phenomena were also observed (Fig. 5): the normalized R 2 values demonstrate the same differentiation between the more 'neuronal' and more 'vascular' components. This demonstrates that our observations are also present in more "typical" fMRI data in the absence of overt hypercapnia challenges, although it is clear that the effects are more variable across individual scans. However, we observe one new effect in the Replication data that was not present in the original results. Specifically, the working memory stimulus explains significantly more variance in the "more vascular" VN, whereas no relationship was found in the original data (Fig. 5, top right plot).
Fig. 3. Identification of spatially similar component pairs for three functional brain networks. 1) The three components with maximum temporal correlation with the neuronal stimuli were identified as 'neuronal' networks. 2) For each neuronal network, an additional component with the maximal spatial overlap was identified. 3) The temporal characteristics of these spatially similar components were used to assess the underlying neuronal or vascular mechanisms.

Why is the working memory stimulus driving signal fluctuations in the "more vascular" visual network component in the Replication data? The 3-back task was presented visually, so it is plausible that it would activate the visual processing systems. However, this is not observed in the original dataset, suggesting another mechanism is responsible. It is also known that task-correlated breathing changes are a common, confounding contributor to fMRI data (Birn et al., 2009), and thus end-tidal CO2 may become time-locked to the neural stimulus. Indeed, Fig. 6A presents the end-tidal CO2 time-series of all scans in the Replication dataset, showing a strong negative correlation between the group-average end-tidal CO2 trace and the working memory stimulus design (Pearson correlation coefficient r = -0.68). Averaging over the ten blocks of the 3-back task, this coupled relationship is even more apparent (r = -0.91). Furthermore, Fig. 6B shows the BOLD signal changes evoked by the working memory stimulus in the two VNs, clearly showing a BOLD signal decrease in the "more vascular" network; this agrees with the concurrent decrease in end-tidal CO2, while directly countering the argument that the working memory stimulus cues activate the visual cortex (which would produce a positive BOLD signal change). Thus, the seemingly paradoxical relationship between the "more vascular" VN time-series and the working memory task paradigm is likely caused by vascular physiology becoming time-locked to that neuronal stimulus.
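Block averaging and the correlations reported in Fig. 6A reduce to a few lines; the onset times and block length below are placeholders rather than the study's actual values.

```python
import numpy as np

def block_average(ts, onsets, length):
    """Average a time-series across repeated task blocks."""
    return np.mean([ts[o:o + length] for o in onsets], axis=0)

# Illustrative calls (etco2_mean, memory, onsets are assumed to exist):
# r_full  = np.corrcoef(etco2_mean, memory)[0, 1]
# r_block = np.corrcoef(block_average(etco2_mean, onsets, 60),
#                       block_average(memory, onsets, 60))[0, 1]
```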
Discussion
Our findings provide the first evidence for network-specific behavior of cerebrovascular regulation, and suggest the brain's blood supply may be regulated in networks that spatially mirror known neuronal networks. Using ICA to decompose group-averaged fMRI data, we identified three functional networks associated with working memory and visual stimuli. In the remaining components, three additional networks were identified with similar spatial features and high spatial overlap as measured by the Dice coefficient. The time-series of these spatially similar networks were dominated by the vascular stimulus. Although the inhaled carbon dioxide challenge used as a vascular stimulus in this study is known to induce systemic vasodilation and BOLD signal increases (Liu, De Vis, & Lu, 2018), our results suggest that the vasodilatory effects show regional variation that may drive BOLD signal changes in specific functional brain networks or sub-networks.
The spatial similarity of the "more vascular" networks and neuronal networks may derive from patterns in neurovascular anatomy (Wälchli et al., 2015): because neuronal and vascular growth processes track each other during development (Quaegebeur et al., 2011), remote brain regions that establish neuronal links may also establish similar vascular, astrocytic, or other glial anatomy that influences local hemodynamic regulation. In the fully developed brain, environmental factors and repetitive activities (e.g., exercise) that impact the expression of neurotrophic factors may also simultaneously alter local angiogenesis (Black et al., 1990; Ding et al., 2004; Swain et al., 2003), allowing for ongoing and co-regulated plasticity of neuronal and vascular networks. By coordinating blood flow across brain regions that typically exhibit synchronous neuronal activity, such vascular networks would also provide the most efficient hemodynamic support for increased network metabolism. The mechanisms by which long-range vascular synchronizations could occur are not known. However, arteries and arterioles are not just conduits of blood but rather a collection of ion channels that are gated by voltage, calcium, pressure and other mechanical factors that lead to emergent dynamics such as vasomotion (Haddock and Hill, 2005; Nilsson and Aalkjaer, 2003). If the arterioles supporting each neural network are slightly different in cellular structure, they may have variable responses to particular stimuli. Similar fluctuations in arterial CO2 and pressure could then lead to differential fluctuations in BOLD signal. Isolated vessels show spontaneous oscillations in diameter within the typical frequency range for intrinsic oscillations (Gustafsson et al., 1994; Haddock and Hill, 2005; Osol and Halpern, 1988), and the amplitude and frequency of oscillation can be modulated by pressure (Achakri et al., 1995). When neural activity is drastically reduced using muscimol infusion, there is only a minimal reduction in the amplitude of arterial diameter oscillations, cerebral blood volume fluctuations and tissue oxygenation (Winder et al., 2017; Q. Zhang et al., 2019).

Fig. 4. For each functional brain network pair, one component was found to be significantly more associated with the appropriate neuronal stimulus and the other significantly more associated with the vascular CO2 stimulus (*p < 0.05, paired t-tests, corrected for multiple comparisons).

Fig. 5. The spatial maps extracted in the original dataset were applied to a second dataset to test the replicability and generalizability of our primary observations. In these new data, no hypercapnia stimulus was administered and end-tidal CO2 was allowed to fluctuate naturally. Eight of the original 10 participants were re-scanned, 3 times each, using only the working memory and visual stimuli. The networks identified in the original data were regressed onto the new data, and the associated time-series were extracted and analyzed as before. Significant differences in the normalized R2 values, in good agreement with the original observations in the first study, are indicated by asterisks (*p < 0.05, paired two-tailed Student t-tests, Bonferroni corrected for multiple comparisons). Note an unexpected, significant relationship between the "more vascular" Visual Network data and the working memory stimulus, not observed in the original dataset (Fig. 4).
Fig. 6. Evidence for task-correlated changes in vascular physiology and its effect on the "more vascular" networks in the Replication dataset. A) In the absence of a hypercapnia gas inhalation stimulus, end-tidal CO2 fluctuated with each individual's natural variations in breathing. The group average end-tidal CO2 trace across all scans in the Replication dataset (red, standard deviation shown in gray) is plotted, with the 3-back working memory stimulus paradigm (blue) provided as a reference. The block average across the 10 blocks of the 3-back task is also shown. Pearson correlation coefficients between the CO2 data and the stimulus are given (r = -0.68 across the entire time-series, r = -0.91 in the block-average data). B) The average end-tidal CO2 data and the block-average BOLD response evoked by the 3-back working memory task (blue bars) in the "more neural" and "more vascular" visual networks (thin lines represent the data from each individual scan, thick lines represent the average of 30 scans). These results demonstrate that the task-correlated changes in end-tidal CO2 appear to drive the signal fluctuations in the "more vascular" visual network. Because these effects manifest as negative BOLD signal changes time-locked to the working memory task, and visual activation during the working memory task would evoke positive BOLD signal changes, this is further evidence for a vascular driver of this functional network system.

The structure of the vascular tree could also play a role. CO2 may take time to traverse the vascular tree, leading to timing differences (Tong and Frederick, 2014), and local pressure change differences along the tree could lead to divergent autoregulatory processes. Although vessels might receive similar inputs, there may be disparate drives across the networks, for example, the reactivity of the vessel due to the current state of vasodilator release. To examine the role of vascular transit variability on our results, we compared the time-series of the three "more vascular" networks identified: using cross-correlation we observed different non-zero temporal offsets between the network time-series and the averaged PETCO2 time-series, and between the network time-series themselves (Supplemental Figure 7). Vascular transit delays have been identified as a possible source of low frequency oscillations in resting state data, and implicated in the decomposition of fMRI data using ICA techniques (Tong et al., 2015; Tong et al., 2013). Furthermore, we also observed differences in the temporal dynamics of the BOLD response to the hypercapnia blocks, beyond a simple lag offset (Supplemental Figure 8). These variations may reflect interactions between vascular transit "path lengths" and the dispersion of vasodilatory signals, or heterogeneity in the local response to the same vasodilatory signals.
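The cross-correlation offsets mentioned here can be estimated as below; the maximum-lag window and the z-scoring are simplifying assumptions of this sketch.

```python
import numpy as np

def lag_of_max_xcorr(a, b, max_lag=20):
    """Offset (in samples) at which two time-series correlate maximally,
    searched over +/- max_lag."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    def r(lag):
        if lag >= 0:
            return np.corrcoef(a[lag:], b[:len(b) - lag])[0, 1]
        return np.corrcoef(a[:lag], b[-lag:])[0, 1]
    return max(range(-max_lag, max_lag + 1), key=r)

# e.g. offset between a "more vascular" network time-series and PETCO2:
# print(lag_of_max_xcorr(vascular_ts, petco2))
```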
There may also be a more active synchronized long-distance control of arteriolar diameters. Sympathetic innervation of vessels is known to be bilateral (Revel et al., 2012). It is also known that fluctuations in arterial diameter are bilaterally symmetrical (Porret et al., 1995). In the mouse brain, co-fluctuations in diameters of pairs of arterioles in the same hemisphere reduce with distance but are highly correlated in the transhemispheric site (Mateo et al., 2017). Correlations in the signals are reduced in acallosal mice suggesting some input from callosal connections.
In addition to providing localized, responsive hemodynamic support for neuronal metabolism, the vasculature may also have a synergistic role in network brain activity: another interpretation of our findings is that vascular physiology modulates neuronal activity to drive the splitting of functional brain networks. Thus, the "more vascular" networks identified in this study may still fundamentally represent neuronal systems, but are somehow modulated by CO 2 levels, whereas the associated "neuronal networks" are not specifically affected. There is emerging evidence that vascular physiology can influence neural activity (Croal et al., 2015;Hall et al., 2011;Xu et al., 2011), and our lab has demonstrated that end-tidal CO 2 changes, during gas inhalation and during resting fluctuations in breathing, can modulate neuronal rhythms as measured using magnetoencephalography (MEG) (Driver et al., 2016). It has been further hypothesized that the vasculature may be directly involved in the brain's information processing (the so-called hemo-neural hypothesis (Moore and Cao, 2008)), modulating the excitability of neural circuits via chemical, physical, and thermal mechanisms.
Importantly, these concepts are not mutually exclusive: there may be network-specific variation in vascular anatomy and regulation, neurovascular coupling, and vascular modulation of neural activity. It is not possible to differentiate these mechanisms in the current study. However, regardless of the precise origin of the observed relationships, we have demonstrated the dual nature of functional brain networks. It will be critical to ascertain how vascular physiology influences our interpretation of neuronal activity and connectivity within these systems.
Furthermore, our results support the recent work of Zhang and colleagues, who postulated the existence and importance of a "vascular-neural network" in understanding brain pathology (Zhang et al., 2012). This is an extension of the idea of the neurovascular unit, which has been a critical "conceptual framework" for understanding neurodegenerative disease and cerebrovascular injury (del Zoppo, 2012). The neurovascular unit includes endothelial cells, astrocytes, pericytes, and neurons, which must all interact in concert to maintain healthy neural function; the combined behavior of the unit must be considered when characterizing disease processes or developing new neuroprotective strategies (Zhang et al., 2012). However, the neurovascular unit spans less than a millimeter, and does not include upstream arteriolar supply vessels or downstream venous drainage. By linking these components, the vascular-neural network construct provides a useful integrated model that better describes systemic and focal neurovascular pathology, including in Alzheimer's Disease, Parkinson's Disease, multiple sclerosis, and autoimmune diseases of the central nervous system (Zhang et al., 2012).
At present, we may be missing key factors in disease progression by ignoring such long-range vascular systems. We know that early stages of ischemia affect both neurons and their supply microvessels in concert (del Zoppo, 2012). In Alzheimer's Disease, vascular damage can precede and drive neurodegeneration (Suri et al., 2015;Zlokovic, 2011). Our results indicate that such pathological changes in neuro-vascular interactions could be network specific. This has been supported by recent research showing that healthy adults with vascular risk factors showed impairments in cerebrovascular reactivity to CO 2 that were specific to the Default Mode Network, which is considered central to pathological mechanisms in aging and Alzheimer's Disease (Haight et al., 2015). Improving our understanding of vascular network behavior (or the vascular-neural network construct), and how the vasculature in specific functional networks is susceptible to early pathological impairment, may offer new windows for targeted protective therapies.
There is some existing evidence in the literature for pairs of spatially similar networks observed in resting-state fMRI data. Braga and Buckner identified two similar, but distinct, networks that both resembled the canonical Default Mode Network (Braga and Buckner, 2017). Other networks, including the dorsal attention network and fronto-parietal network, were also fractionated into two distinct, parallel networks within individual datasets. The authors hypothesize that these are neuronal sub-networks, but the role of vascular physiology in network-specific BOLD signals was not considered. It may be that one or more of these observed sub-networks is primarily a vascular network, or that vascular regulation is altering BOLD signal fluctuations in specific sub-networks to drive their differentiation.
To further explore the spatiotemporal differences between the network pairs, we isolated voxels that were unique to the "neural" component, unique to the "vascular" component, or common to both networks (i.e., overlapping). Taking the average time-series in these voxel groups, we again examined the correlation with the neural and vascular stimulus models (Supplemental Figure 4). All voxel groups demonstrated significant correlation with the vascular stimulus, as might be expected with a global vasodilatory stimulus. The voxels unique to the "more neural" network or common to both networks correlated significantly with the neural stimuli; however, the voxels unique to the "more vascular" network did not. We interpret these findings to mean that differentiation of the network pairs is likely driven by the voxels unique to each network. Of more interest, a network predominantly reflecting global vascular effects selectively includes many voxels specific to one neural network. In this study, we highlight this novel observation: a vascular stimulus results in the extraction of spatial patterns of covarying signals that include nodes of voxels typically associated with neural network patterns.
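The voxel partition used in this analysis is straightforward to express; the threshold below is again a simplifying assumption rather than the paper's mixture-model approach.

```python
import numpy as np

def partition_voxels(map_a, map_b, thr=0.5):
    """Split voxels into unique-to-A, unique-to-B, and common sets."""
    a, b = np.abs(map_a) > thr, np.abs(map_b) > thr
    return a & ~b, b & ~a, a & b

# unique_neural, unique_vascular, shared = partition_voxels(neural_map,
#                                                           vascular_map)
# mean_ts = data[:, unique_vascular].mean(axis=1)   # then correlate with
# the neural and vascular stimulus models, as in Supplemental Figure 4
```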
There are numerous challenges involved with using BOLD fMRI to study simultaneous neural and vascular properties of the brain. Because there is inherently a maximum possible BOLD signal change, occurring when all venous hemoglobin is fully oxygenated, modulating baseline oxygenation levels may reduce observed task-activation responses (i.e., a "ceiling effect"). This effect is generally expected to occur at much more extreme hypercapnia stimuli than those used in this study (Gauthier et al., 2011), but may subtly impact the activation patterns observed during normocapnia versus hypercapnia. In addition, the BOLD contrast mechanism reflects local levels of deoxygenated hemoglobin, but this is constantly modulated by both direct vasoactive pathways and indirect neurovascular coupling mechanisms. Alternative imaging modalities such as EEG and MEG may provide more direct insight into the neural processes underpinning functional brain networks; however, it is not yet fully understood how vascular physiology may manifest in these data (Driver et al., 2016) or how network activity fluctuations in these different modalities relate back to fMRI signals (Tewarie et al., 2016). In this study, the dual nature of BOLD fMRI contrast, in conjunction with dual stimulus types, facilitates our ability to probe the dual nature of functional brain networks, but perhaps future research should employ multi-modal imaging to explore these phenomena in greater detail.
The decomposition of spatially similar network pairs is also very sensitive to the details of how ICA is employed. We opted to average together 30 individual scans prior to decomposition, imitating the methodology of our first observations in breath-hold data (Supplemental Figure 1). Tensor ICA is an alternative approach to decompose signal features common across multiple datasets; the results of Tensor ICA decomposition of the 30 individual datasets (with automatic dimensionality determination) are summarized in Supplemental Figure 5. Interestingly, this approach showed mixed success in isolating the network pairs observed in our primary analysis: in the 12 output components, only one VN component was identified and (perhaps surprisingly) no clear DMN was identified. When dimensionality was fixed to output 30 components, we could identify candidate network pairs for VN and TPN, but again no clear DMN was present. The relationships between these 30 component timeseries and the neurovascular stimuli are summarized in Supplemental Figure 6. Tensor ICA did not appear to maintain the polarity of the signal changes, making identification of "deactivation" during the 3-back task more challenging. The results of Tensor ICA are difficult to robustly interpret, and as such do not readily support or contradict our original analyses. Still, Tensor ICA may be an appropriate tool in future studies to explore these neurovascular phenomena.
Furthermore, the spatial ICA algorithm maximizes the spatial independence of the resulting components, which is likely not an ideal approach to identify spatially similar features in fMRI data. However, temporal ICA is not well suited to fMRI data, particularly when acquired using "typical" sampling rates of 1-2 s, due to the small number of degrees of freedom in each dataset. We also arbitrarily opted to decompose the data into 30 components; further assessment of other output dimensionalities at this step in the analysis did impact the identification of network pairs, suggesting that the precise "splitting" of networks is highly dependent on this analysis choice. Similar effects have been reported by other research groups, where increasing ICA dimensionality facilitates the differentiation of sub-network structures (Dipasquale et al., 2015). Further studies into neuronal and vascular network properties should carefully assess the role of dimensionality in these observations, adopting rapid-sampling EPI (using simultaneous multi-slice acceleration to achieve sub-second sampling (Feinberg and Setsompop, 2013)) and testing the utility of temporal ICA for differentiating the neuronal and vascular features in the data.
Conclusions
We have shown that functional brain networks can be split into two spatially similar networks during concurrent neuronal and vascular stimuli. One of these networks is dominated by the neuronal stimulus paradigm, as expected, whereas the other network appears dominated by vasodilatory responses to changes in arterial CO 2 levels. This suggests that vascular regulation may be coordinated across long-distance brain regions, mimicking the structure of neuronal networks, or that neurovascular relationships vary in a network-specific manner. It will be critical to consider how the underlying vascular function influences the observation and interpretation of network brain activity and connectivity in future neuroimaging studies.
Declaration of competing interest
The authors declare no competing financial interests. | 7,840 | 2018-12-03T00:00:00.000 | [
"Biology",
"Psychology"
] |
Characteristics Research of a High Sensitivity Piezoelectric MOSFET Acceleration Sensor
In order to improve the output sensitivity of the piezoelectric acceleration sensor, this paper proposes a high-sensitivity acceleration sensor based on a piezoelectric metal oxide semiconductor field effect transistor (MOSFET). It consists of a piezoelectric beam and an N-channel depletion MOSFET. A silicon cantilever beam with a Pt/ZnO/Pt/Ti multilayer structure is used as the piezoelectric beam. Based on the piezoelectric effect, the piezoelectric beam generates charge when it is subjected to acceleration. Due to the large input impedance of the MOSFET, the charge generated by the piezoelectric beam can be used as a gate control signal, converting the output charge of the piezoelectric beam into a current. The test results show that when the external excitation acceleration increases from 0.2 g to 1.5 g in increments of 0.1 g, the peak-to-peak output voltage of the proposed sensor increases from 0.327 V to 2.774 V at a frequency of 1075 Hz. The voltage sensitivity of the piezoelectric beam alone is 0.85 V/g, while that of the proposed acceleration sensor is 2.05 V/g, 2.41 times that of the piezoelectric beam. The proposed sensor can effectively improve voltage output sensitivity and can be used in the field of structural health monitoring.
Introduction
Micro-electro-mechanical system (MEMS) acceleration sensors, with the advantages of low cost, low power consumption, high compatibility with integrated circuit (IC) processes and high integration [1][2][3][4], have a wide range of applications in automotive electronics, structural health monitoring, navigation and other fields [5][6][7][8]. MEMS acceleration sensors usually include piezoresistive [9,10], capacitive [11] and piezoelectric acceleration sensors [12,13]. A piezoresistive accelerometer usually consists of a deformable structure and varistors. Under the action of external acceleration, the deformation of the deformable structure causes the resistance of the varistor to change, thereby measuring the acceleration. The varistors are usually arranged in a Wheatstone bridge to improve the output sensitivity. The piezoresistive accelerometer has the advantages of good stability and a wide measurement range, but it is limited by sensitivity to ambient temperature [14][15][16]. A capacitive acceleration sensor is composed of fixed plates and movable plates. The gap or area of the plate capacitor changes when external acceleration is applied, and the applied acceleration is obtained by measuring the change in capacitance. High sensitivity and zero-frequency response are its advantages, while its disadvantages are high impedance and nonlinearity [17][18][19]. The working principle of the piezoelectric acceleration sensor is similar to that of the piezoresistive type, except that the piezoresistive material is replaced by a piezoelectric material. Compared with piezoresistive and capacitive MEMS acceleration sensors, piezoelectric MEMS acceleration sensors have the advantages of low power consumption and a wide operating frequency range; at the same time, they are limited by high output impedance, a weak output signal and so on [20][21][22][23].
The researchers are committed to improving the structure of the piezoelectric acceleration sensor so as to improve its performance, especially its sensitivity. For example, Jin Xie et al. present a MEMS piezoelectric in-plane resonant accelerometer with a two-stage microleverage mechanism. The sensitivity of the device is 28.4 Hz/g and the relative sensitivity is 201 ppm/g at the base frequency around 140.7 kHz, which are 57% and 268% higher than previously reported data [24]. Qiang Zou et al. reported novel single-and tri-axis piezoelectric-bimorph accelerometers that are built on parylene beams with ZnO thin films. A highly symmetric quad-beam bimorph structure with a single proof mass is used for tri-axis acceleration sensing. The unamplified sensitivities of the x-axis, y-axis, and z-axis are 0.93, 1.13, and 0.88 mV/g, respectively [13]. At the same time, the researchers also studied the doped piezoelectric materials in order to improve the piezoelectric properties so as to improve the sensitivity. Ramany et al. presented a nano-electro-mechanical systems accelerometer using undoped zinc oxide nanorods and 1 wt. (Weight) %, 3 wt.% and 5 wt.% of vanadium-doped zinc oxide nanorods as an active layer. The highest sensitivity of 3.528 V/g was acquired for 5 wt.% of vanadium-doped zinc oxide with maximum output voltages of 2.30 V and 2.9 V at 9 Hz resonant frequency and 1 g acceleration, respectively [25].
In this work, by taking advantage of the high gate sensitivity of MOSFET, in order to improve the output sensitivity and reduce the output impedance of piezoelectric acceleration sensors, we designed a piezoelectric MOSFET acceleration sensor (PMAS) structure. The PMAS with high sensitivity can be used in acceleration monitoring under special frequency vibration environments, such as health monitoring on turning tools.
Basic Structure
The structure of the PMAS is shown in Figure 1a. It consists of a piezoelectric beam and an N-channel depletion MOSFET. The piezoelectric beam is a two-ended device: one end is connected with the source of the MOSFET as the ground terminal of the PMAS, the other end is connected with the gate of the MOSFET to control its output current, and the drain of the MOSFET is the current output terminal of the PMAS. The direction of the measured acceleration is parallel to the z-axis, as shown in Figure 1a. The acceleration applied by the vibration system reciprocates up and down along the z-axis; therefore, the output signal of the piezoelectric beam is a sinusoidal signal with a certain frequency. The N-channel depletion MOSFET is chosen to ensure that the MOSFET works in the triode region. As shown in Figure 1b, a load resistance RL is used in series with the PMAS to convert the current signal into a voltage signal for output.

The structure of the piezoelectric beam is shown in Figure 2a; it consists of a silicon cantilever beam substrate with a proof mass and a piezoelectric multilayer structure. The silicon cantilever beam substrate is fabricated by lithography and inductively coupled plasma (ICP) etching. The piezoelectric multilayer structure, including the electrodes (Pt top electrode and Pt/Ti composite bottom electrode) and a ZnO piezoelectric layer, was deposited by direct-current (DC) and radio-frequency (RF) magnetron sputtering under optimized parameters [26]. A 5 wt% Li-doped ZnO film was used as the piezoelectric layer. In general, ZnO is an n-type semiconductor due to oxygen vacancies and zinc interstitial atoms introduced during fabrication. Doping with lithium as an acceptor impurity increases the resistivity of ZnO, thereby enhancing its piezoelectric properties. The piezoelectric beam is designed to be 9800 × 5800 × 500 µm³ in size.

Figure 2c shows the dimensions of the cantilever beam. lb, wb and hb are the length, width and height of the cantilever beam, respectively; lm, wm and hm are the length, width and height of the proof mass, respectively. lb × wb × hb was designed to be 6000 × 2400 × 80 µm³, and lm × wm × hm was designed to be 1000 × 2700 × 395 µm³. After the piezoelectric beam was manufactured, it was rigidly pasted on a customized test printed circuit board (PCB), and the beam's electrodes were connected to the PCB electrodes with a chip press welder (KNS4526, Kulicke & Soffa, Haifa, Israel).
Operating Principle
As shown in Figure 3a, the piezoelectric beam does not deform without an external force. The centers of positive and negative charges in the piezoelectric layer coincide with each other; the whole piezoelectric layer is electrically neutral and there is no charge output. The PMAS still has a certain output when it is not subjected to acceleration, because a conductive channel exists in the N-channel depletion MOSFET even when the gate voltage is 0. When external acceleration is applied, according to Newton's first and second laws, the proof mass produces a force opposite to the acceleration direction due to inertia, which deforms the piezoelectric beam. The bending creates an electric dipole moment in the piezoelectric layer, forcing the positive and negative charge centers to separate, and equal amounts of charge of opposite sign are generated on the upper and lower surfaces of the piezoelectric layer. The generated charge is used as the gate signal of the MOSFET, which directly controls the width of the channel and thereby the drain current of the MOSFET.
The stress analysis of the piezoelectric beam is shown in Figure 4. dZnO is the distance from the ZnO thin film to the bottom of the Si substrate, dn is the distance from the neutral plane of the multilayer structure to the bottom of the Si substrate, h1, h2, …, h6 represent the thickness of Si, SiO2, Ti, Pt, ZnO and Pt layers, respectively.
The main assumptions are as follows: (1) since the length and width are far greater than the thickness of the cantilever beam, the cantilever beam is considered to undergo pure bending deformation, and shear stress is ignored; (2) the beam bending caused by the residual stress is ignored; (3) it is assumed that there is no relative sliding between the thin films; (4) the cantilever beam is in an open environment and there are no upper and lower fixed plates, so the influence of air damping is ignored [27][28][29].
The equivalent width of layer i relative to the Si substrate is

w_i' = w_i E_i / E_Si,    (2)

where w_i is the width of each thin film and E_i is the Young's modulus (i = Si, SiO2, Ti, Pt, ZnO, Pt). The equivalent sectional area of layer i is then S_i = w_i' h_i (3), and the equivalent second moment of inertia of layer i is I_i = w_i' h_i^3 / 12 (4). The distance from layer i to the bottom of the Si substrate is d_i = Σ_{j<i} h_j + h_i/2 (5). The distance from the neutral plane of the multilayer structure to the Si substrate is

d_n = Σ_i S_i d_i / Σ_i S_i,    (6)

and the second moment of inertia is

I = Σ_i [ I_i + S_i (d_i − d_n)^2 ].    (7)

Substituting Equations (6) and (7) into (1) gives the cross-section stress of the ZnO film. According to Hooke's law and the theories in mechanics of materials, a deflection δ appears when a force F is applied at the free end of the cantilever beam, and the free-end deflection can be expressed as δ = F/k [30,31], where k is the stiffness of the cantilever beam.
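To make the transformed-section bookkeeping concrete, the sketch below computes the neutral plane, the equivalent second moment of area, the tip stiffness and deflection, and the ZnO bending stress for a hypothetical layer stack. All dimensions and material constants are illustrative placeholders, not the values from Table 1.

```python
import numpy as np

# Hypothetical layer stack (bottom to top): Si, SiO2, Ti, Pt, ZnO, Pt.
# Thicknesses h [m], widths w [m], and Young's moduli E [Pa] are assumed.
h = np.array([10e-6, 0.5e-6, 0.05e-6, 0.15e-6, 1.0e-6, 0.15e-6])
w = np.full(6, 200e-6)
E = np.array([170e9, 70e9, 116e9, 168e9, 140e9, 168e9])

E_si = E[0]
w_eq = w * E / E_si                       # equivalent width relative to Si, Eq. (2)
S = w_eq * h                              # equivalent sectional areas, Eq. (3)
d = np.cumsum(h) - h / 2                  # layer centroid heights, Eq. (5)
d_n = np.sum(S * d) / np.sum(S)           # neutral-plane position, Eq. (6)
I_eq = np.sum(w_eq * h**3 / 12 + S * (d - d_n)**2)   # Eq. (7)

L = 1000e-6                               # beam length [m] (assumed)
k = 3 * E_si * I_eq / L**3                # tip stiffness of the transformed section
F = 10e-6                                 # example tip force [N]
delta = F / k                             # tip deflection, delta = F / k

d_zno = np.cumsum(h)[3] + h[4] / 2        # ZnO mid-plane height
sigma_zno = F * L * (d_zno - d_n) / I_eq  # bending stress in ZnO at the clamp, Eq. (1)
print(f"d_n = {d_n*1e6:.2f} um, delta = {delta*1e9:.1f} nm, "
      f"sigma_ZnO = {sigma_zno/1e3:.1f} kPa")
```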
The fundamental resonant frequency is [32]

f = (α_n^2 / 2π) √( EI / (m L^3) ),

where E is the modulus of elasticity, I is the area moment of inertia, α_n is a constant whose value depends on the vibration mode of the cantilever beam, m is the mass of the cantilever beam, and L is the length of the cantilever beam.
Considering the influence of the proof mass on the resonant frequency of the cantilever beam, the first-mode resonant frequency can be expressed as in Equation (10) [33][34][35], where ρ is the density of Si. The moment of inertia of the multilayer structure is ignored, since its thickness is much less than that of the cantilever beam. It can be seen that l_c is inversely proportional to the resonant frequency f of the cantilever beam, so a change in l_c has a large impact on the resonant frequency. Based on the piezoelectric effect, the surface charge density of ZnO is given by Equation (11) [26]; therefore, under external acceleration, the charges generated by the piezoelectric beam follow Equation (12). The piezoelectric beam can be approximately considered a parallel-plate capacitor with a dielectric inside, so the output voltage V_B of the piezoelectric beam is given by Equation (13). Because the MOSFET operates in the triode region, the relationship between the source-drain current I_DS and the gate voltage V_GS can be expressed as [36]

I_DS = μ_n C_ox (W/L) [ (V_GS − V_T) V_DS − V_DS^2 / 2 ],    (14)

where μ_n is the effective mobility, W is the channel width, L is the effective channel length, C_ox is the insulation capacitance, V_GS is the gate voltage (equal to V_B), and V_T is the threshold voltage. The output voltage of the PMAS follows from the load line of the test circuit, V_out = V_DD − I_DS R_L. Therefore, when the piezoelectric beam is connected to the gate of the MOSFET, the output voltage of the piezoelectric beam equals V_GS. Combining Equations (13)-(16), the relationship between V_out and F is obtained as Equation (17), from which it can be seen that the output voltage of the PMAS is proportional to the channel width-to-length ratio of the MOSFET.
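A minimal sketch of this readout chain follows: a charge Q on the beam capacitance sets V_GS, and the triode-region current of (14) is solved against the resistive load line of the test circuit. The parameter K = μ_n C_ox W/L, the capacitance, and the charge are assumed values for illustration, not device data from the paper.

```python
import numpy as np

def pmas_output(Q, C_piezo, K=1e-2, V_T=-2.5, V_DD=5.0, R_L=10e3):
    """Return (V_GS, V_out) for a charge Q [C]; K = mu_n*C_ox*W/L [A/V^2]."""
    V_GS = Q / C_piezo                 # beam modeled as a plate capacitor
    V_ov = V_GS - V_T                  # overdrive (depletion NMOS: V_T < 0)
    # Triode current I_DS = K*(V_ov*V_DS - V_DS^2/2) intersected with the
    # load line V_DS = V_DD - I_DS*R_L gives a quadratic in V_DS:
    a = R_L * K / 2.0
    b = -(1.0 + R_L * K * V_ov)
    c = V_DD
    V_DS = (-b - np.sqrt(b**2 - 4 * a * c)) / (2 * a)  # smaller root = triode
    assert V_DS < V_ov, "device left the triode region"
    return V_GS, V_DS                  # output voltage taken at the drain

V_GS, V_out = pmas_output(Q=50e-12, C_piezo=100e-12)
print(f"V_GS = {V_GS:.3f} V, V_out = {V_out:.4f} V")
```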
Fabrication Technology
Figure 5 shows the fabrication process of the piezoelectric beam. The n-type <100>-oriented silicon wafer was cleaned by the standard Radio Corporation of America (RCA) process (Figure 5a), and a silicon dioxide layer was grown by thermal oxidation as the isolation layer (Figure 5b). The Pt/ZnO/Pt/Ti piezoelectric multilayer structure was fabricated by a lift-off process. As shown in Figure 5c-h, photoresist is first spin-coated on the surface of the substrate and patterned by photolithography. The Pt/Ti composite electrode is deposited by RF magnetron sputtering, and the photoresist is removed by stripping solution to complete the bottom electrode. The ZnO layer and the top electrode were prepared by the same lift-off process as the Pt/Ti bottom electrode. After that, the cantilever structure is released by two photolithography steps and inductively coupled plasma (ICP) etching to complete the fabrication of the piezoelectric beam (Figure 5i-j). To improve the piezoelectric properties of the ZnO layer, we doped it with lithium. Li atoms, which have a small atomic radius, act as acceptor impurities that increase the resistivity of the ZnO thin film and the output impedance, thereby improving the piezoelectric properties of ZnO. During preparation, most of the doped lithium atoms replace zinc atoms and decrease the lattice constant; as a result, the residual stress in the ZnO thin film is compressive [37].

Figure 6 shows the test system for the proposed acceleration sensor. It consisted of a standard vibrator (Dongling ESS-050, Dongling Vibration Test Instrument Co., Ltd., Suzhou, China), an oscilloscope (DSO-X 4154A, Agilent Technologies Inc., Santa Clara, CA, USA), a semiconductor characteristic analysis system (4200SCS, Keithley, Cleveland, OH, USA) and a control computer. The system can apply accelerations from 0 to 30 g; the lower limit of the frequency is 50 Hz, and the upper limit is 20,000 Hz.
Frequency Characteristic of the Piezoelectric Beam
The frequency characteristic of the piezoelectric beam was analyzed using the sweep mode of the vibrator. The excitation frequency ranged from 20 to 2000 Hz, and a constant acceleration of 1 g was applied along the z-axis of the piezoelectric beam. The piezoelectric beam was rigidly connected to the vibration table by a customized fixture. When the excitation frequency reached a certain value, the output of the piezoelectric beam reached its maximum value for the first time; this excitation frequency is the first resonance frequency of the piezoelectric beam. Figure 7a shows the relationship between the output voltage and the excitation frequency of the piezoelectric beam. When the excitation frequency reached 1072 Hz, the output of the piezoelectric beam reached its maximum value of 0.649 V. There is only one peak within 50-2000 Hz, which confirms that 1072 Hz is the first-order resonance frequency of the piezoelectric beam.
According to Figure 7a, the quality factor of the piezoelectric beam can be roughly estimated, as shown in Figure 7b. The quality factor Q characterizes the energy dissipated by the body in overcoming internal friction at resonance:

Q = 2π E_s / E_c = f / (f_2 − f_1),
where E_s is the mechanical energy stored by the oscillator in the resonant state, E_c is the energy dissipated by the oscillator per cycle in the resonant state, f is the resonance frequency, and f_1 and f_2 are the half-power (−3 dB) frequencies. From Figure 7b we estimate that Q is 529.1. Table 1 shows the design dimensions of the piezoelectric beam. Substituting the data in Table 1 into Equation (10), the theoretical resonance frequency of the piezoelectric beam is 995 Hz, which is lower than the measured 1072 Hz. This is mainly due to deviations of the thickness and length of the cantilever beam from the design values during manufacturing.
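The half-power estimate of Q can be reproduced numerically, as in the sketch below, which applies Q = f/(f_2 − f_1) to a synthetic second-order resonance standing in for the Figure 7 sweep data.

```python
import numpy as np

# Synthetic second-order resonance with the reported f0 and Q as ground truth.
freqs = np.linspace(1000, 1150, 2001)
f0, Q_true = 1072.0, 529.1
amps = 1.0 / np.sqrt((1 - (freqs / f0)**2)**2 + (freqs / (f0 * Q_true))**2)

peak = np.argmax(amps)
half = amps[peak] / np.sqrt(2.0)              # -3 dB level
above = np.where(amps >= half)[0]             # indices inside the -3 dB band
f1, f2 = freqs[above[0]], freqs[above[-1]]    # half-power frequencies
Q_est = freqs[peak] / (f2 - f1)               # Q = f_res / (f2 - f1)
print(f"f_res = {freqs[peak]:.1f} Hz, Q = {Q_est:.1f}")
```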
I_DS-V_DS Characteristic of MOSFET with Piezoelectric Beam
The I-V characteristic and transfer characteristic curves of the MOSFET were measured with the 4200SCS, as shown in Figure 8a. Considering that the maximum current limit of the 4200SCS is 0.1 A, the V_GS range was set from −2.5 V to −0.5 V with an increment of −0.5 V. Figure 8b shows the transfer characteristic curve of the MOSFET with V_DS = 5 V. It can be concluded that the pinch-off voltage of the MOSFET (V_GS(off)) is −2.5 V; as V_GS increases, I_DS also increases. When V_GS reaches −0.45 V, the 0.1 A current limit of the test instrument is reached. In the test circuit shown in Figure 1b, the 4200SCS is connected to the two output terminals of the PMAS for data acquisition. The output voltage of the piezoelectric beam was used as the gate voltage of the MOSFET. To ensure that the output voltage of the piezoelectric beam reaches its maximum value, the test frequency was set to 1072 Hz, the resonance frequency of the piezoelectric beam. The I-V characteristic curves of the MOSFET are shown in Figure 9 (V_DD = 5 V, R_L = 10 kΩ, applied acceleration 1.5 g). Under the external acceleration generated by the standard vibration system, the proof mass drove the piezoelectric beam to vibrate up and down, making the I_DS of the MOSFET fluctuate within a certain range. The maximum and minimum values of this range correspond to the maximum upward and downward bending of the piezoelectric beam, respectively. For a given V_DS, the difference between the upper and lower limits is ∆I_DS. As the external excitation acceleration increases, the vibration amplitude of the piezoelectric beam increases, further widening the curves; that is, ∆I_DS increases with acceleration. Because the signal produced by the piezoelectric beam is sinusoidal under the action of the standard vibration system, ∆I_DS can be used as an important parameter to measure the output sensitivity of the PMAS. In the test circuit of Figure 1b, by selecting an appropriate load resistance R_L, the PMAS can amplify the signal of the piezoelectric beam and thereby improve the output voltage sensitivity.
Figure 10a shows the output voltage curves of the piezoelectric beam and the PMAS, where the peak-to-peak values of the output sinusoidal curves are defined as the output voltages V_piezo and V_PMAS. V_piezo increased from 0.287 V to 1.314 V at an excitation frequency of 1072 Hz over the acceleration range from 0.2 to 1.4 g. It can be concluded that V_piezo increases with the excitation acceleration, and the relationship between them is approximately linear. Figure 10b shows the output voltage curve of the PMAS (V_PMAS). At an excitation frequency of 1072 Hz and over the applied acceleration range from 0.2 to 1.4 g, V_PMAS increased from 0.327 V to 2.744 V. Compared with V_piezo under the same conditions, V_PMAS increased significantly and was approximately linear with the excitation acceleration.
Sensitivity Characteristic of PMAS
According to Equation (13), the theoretical output voltage of the piezoelectric beam is 2.110 V under 1 g acceleration, which is higher than the measured output voltage of 1.141 V. The reason is that the thickness of the cantilever beam deviates from the design value due to the uniformity error of ICP etching; the non-uniform thickness leads to an uneven stress distribution when the cantilever beam bends. In addition, defects inside the ZnO piezoelectric thin film introduced during manufacturing cause the charge generated by the stress to be less than the theoretical value. Figure 11 compares V_PMAS and V_piezo as the acceleration increases. Under the same external acceleration, the output voltage of the PMAS is significantly higher than that of the piezoelectric beam, which means the MOSFET in the PMAS provides amplification. The two groups of output curves were then fitted linearly; both show good linearity, and the slopes of the fitted lines are the output voltage sensitivities of the two devices. The sensitivity of the PMAS is 2.05 V/g, which is 2.41 times that of the piezoelectric beam (0.85 V/g). This proves that the PMAS structure can effectively improve the sensitivity over the bare piezoelectric beam. Using the charge output of the piezoelectric beam as the gate control voltage of the MOSFET effectively converts the output charge into an output current, and it also increases the load capacity of the piezoelectric acceleration sensor. However, the proposed sensor has a relatively narrow frequency range, which limits its application field. In addition, it can be seen from Equation (14) that the width-to-length ratio of the MOSFET is directly proportional to the output current; I_DS can be increased by adjusting the aspect ratio of the MOSFET during manufacturing, thereby further improving the output sensitivity of the PMAS.
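A sketch of the sensitivity extraction follows: the slope of a least-squares line fitted to output voltage versus acceleration gives the sensitivity in V/g. The data points below are synthetic values consistent with the reported trends, not the measured curves of Figure 11.

```python
import numpy as np

acc = np.linspace(0.2, 1.4, 7)               # applied acceleration [g]
v_piezo = 0.85 * acc + 0.12                  # assumed ~0.85 V/g trend (synthetic)
v_pmas = 2.05 * acc - 0.08                   # assumed ~2.05 V/g trend (synthetic)

s_piezo = np.polyfit(acc, v_piezo, 1)[0]     # slope of the fit = sensitivity [V/g]
s_pmas = np.polyfit(acc, v_pmas, 1)[0]
print(f"piezo beam: {s_piezo:.2f} V/g, PMAS: {s_pmas:.2f} V/g, "
      f"ratio = {s_pmas / s_piezo:.2f}x")
```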
Table 2 shows a performance comparison of piezoelectric acceleration sensors. The proposed sensor has clear advantages in sensitivity but does not dominate in terms of chip size. It can also be seen from the comparison that piezoelectric acceleration sensors achieve high sensitivity but are at a disadvantage in terms of measurement range and load capacity.
Conclusions
In summary, this paper proposed a high-sensitivity piezoelectric acceleration sensor consisting of a piezoelectric beam and an N-channel depletion MOSFET. Utilizing the MOSFET's high input impedance, the output signal of the piezoelectric beam was used to drive the MOSFET, thereby converting the output charge of the piezoelectric beam into an output current and improving the sensitivity of the piezoelectric acceleration sensor. The results show that the resonance frequency of the piezoelectric beam was 1072 Hz and the sensitivity of the proposed sensor was 2.05 V/g at the resonance frequency, 2.41 times that of the piezoelectric beam alone. This research provides a good foundation for the future integration of piezoelectric MOSFETs. In future work, we will continue to improve the sensitivity of piezoelectric acceleration sensors by optimizing the width-to-length ratio of the MOSFET and improving the manufacturing process of the piezoelectric materials, while also optimizing the cantilever beam structure to increase its frequency range.
Author Contributions: C.A. and X.Z. wrote the manuscript; X.Z. and D.W. designed the project; C.A. performed the experiments; C.A. and X.Z. contributed to the data analysis. All authors have read and agreed to the published version of the manuscript.
Conflicts of Interest:
The authors declare no conflict of interest. | 9,513 | 2020-09-01T00:00:00.000 | [
"Engineering",
"Physics",
"Materials Science"
] |
Synthesis of the demospongic compounds, (6Z,11Z)-octadecadienoic acid and (6Z,11Z)-eicosadienoic acid (Molecules)
B. A. Kulkarni, S. Chattopadhyay*, A. Chattopadhyay and V. R. Mamdapur, Bio-Organic Division, Bhabha Atomic Research Centre, Mumbai - 400 085, India. Tel. 91-22-5563060; Fax 91-22-5560750 (bod@magnum.barct1.ernet.in). Received: 20 December 1996 / Accepted: 10 January 1997 / Published: 29 January 1997.

Abstract: A stereoselective synthesis of (6Z,11Z)-octadecadienoic acid (1) and (6Z,11Z)-eicosadienoic acid (2) from the easily accessible pentane-1,5-diol (3) is described. Thus, compound 3 on pyranylation and oxidation gave the aldehyde 5, which was converted to the acid 7 by Wittig reaction with a suitable phosphorane. Its depyranylation and oxidation furnished the key aldehyde 9, which upon Wittig reaction with n-heptylidene and n-nonylidene phosphoranes, respectively, followed by alkaline hydrolysis, afforded the title acids.

Keywords: Euryspongia rosea, phospholipid fatty acids, stereoselective synthesis, (6Z,11Z)-octadecadienoic acids, (6Z,11Z)-eicosadienoic acids, Wittig olefination.
Introduction
The marine environment [1] constitutes an inexhaustible treasury of organisms generating a plethora of secondary metabolites. In this regard, sponges, the primitive multicellular organisms, have recently been the targets [2] of lipid chemistry, not only for the product fatty acids but also for their biosyntheses. It is now believed that a combination of de novo biosynthesis, dietary intake, and incorporation of microorganic symbionts is responsible for the genesis of these varied types of novel fatty acids in sponges. Besides very long chain fatty acids, sponges have provided fatty acids with unusual unsaturation patterns, substitutions with oxygenated functionalities (hydroxy, methoxy, acetoxy), and methyl branching.
Recently, two such compounds, viz. (6Z,11Z)-octadecadienoic acid (1) and (6Z,11Z)-eicosadienoic acid (2), have been isolated [3] from the phospholipid fraction of the marine sponge Euryspongia rosea. The Δ6,11-diunsaturation pattern present in these compounds is rather scarce in both the plant and animal kingdoms. Earlier, a similar olefination pattern was found exclusively in the fatty acids of the phosphatidylcholine of Tetrahymena species [4]. Our interest in these compounds stems from the reported [5] antifungal activities of some of the olefinic acids. However, the low natural abundance of 1 and 2 precludes their systematic bioassay. Hence, in continuation of our work [6,7] on the syntheses of marine natural products, we have developed a stereoselective synthesis of both compounds from a single synthon, obtainable from commercially available, inexpensive materials. This has also led to an unequivocal structural assignment of compound 2. Earlier, in connection with a GLC study of some related fatty acids, compound 12, the progenitor of 1, was prepared [8] via an acetylenic route. However, to the best of our knowledge, this is the first synthesis of 2.
Results and Discussion
The synthesis was based on a "building-block" approach consisting of a coupling between C5- and C6-units to furnish the common intermediate 9. Subsequent addition of the appropriate C7- and C9-moieties gives 1 and 2, respectively, after proper functionalization. The stereoselectivities of the incipient olefins were fixed by Z-selective Wittig reactions (Scheme 1).
Commercially available pentane-1,5-diol (3) was monopyranylated to the compound 4, which on oxidation with "buffered PCC" [9] gave the aldehyde 5. Its Z-selective Wittig olefination [10] with the known phosphorane generated from 6 [11] furnished compound 7. Although Wittig reactions with phosphoranes bearing carboxylic acids are reported in the literature [12], we encountered difficulty in the isolation step, leading to a poor yield of the Wittig product. Consequently, a modified work-up was employed (see Experimental). Acidic deprotection of 7 led to the hydroxy compound 8 with concomitant esterification. Its oxidation, followed by a second Wittig reaction of the resultant aldehyde 9 with the C7-phosphorane generated from 10 [13] under the above conditions, afforded the ester 12 with 97% isomeric purity (by capillary GLC analysis). This was converted to 1 by alkaline hydrolysis.
The (Z)-geometry of the two olefinic bonds was established by the absence of any IR band at 960-980 cm⁻¹. Further confirmation was provided by the ¹³C NMR spectrum of 12, which exhibited signals due to the allylic carbons at δ 27.2 and 27.8 ppm, characteristic of internal (Z)-alkenes [14,15].
Likewise, the Wittig reaction of 9 with the C9-phosphorane generated from 11 [16] gave 13 with 98% isomeric purity (by capillary GLC analysis), whose exclusive (Z)-geometry was also confirmed by ¹³C NMR analysis as above. Its alkaline hydrolysis afforded the acid 2. The mass spectral data of 1 and 2 were consistent with the reported values [3].
Experimental Section
All bps are uncorrected. The IR spectra were scanned with a Perkin-Elmer 783 spectrophotometer. The PMR spectra were recorded in CDCl₃ with a Bruker AC-200 (200 MHz) spectrometer. The mass spectra (70 eV) were recorded with a Shimadzu GCMS-QP 1000A spectrometer using direct probe injection. The GLC analyses were carried out on a Shimadzu GC-16A chromatograph fitted with a flame ionization detector and a quartz capillary column (OV-17). Anhydrous reactions were carried out under Ar using freshly dried solvents. All organic extracts were dried over anhy-
"Chemistry"
] |
HFT-CNN: Learning Hierarchical Category Structure for Multi-label Short Text Categorization
We focus on the multi-label categorization task for short texts and explore the use of a hierarchical structure (HS) of categories. In contrast to existing work using non-hierarchical flat models, our method leverages the hierarchical relations between the pre-defined categories to tackle the data sparsity problem. The lower the level in the HS, the worse the categorization performance, because the number of training examples per category at a lower level is much smaller than at an upper level. We propose an approach which can effectively utilize the data in the upper levels to contribute to categorization in the lower levels by applying a Convolutional Neural Network (CNN) with a fine-tuning technique. The results on two benchmark datasets show that the proposed method, Hierarchical Fine-Tuning based CNN (HFT-CNN), is competitive with state-of-the-art CNN-based methods.
Introduction
Short text categorization is widely studied since the recent explosive growth of online social networking applications (Song et al., 2014).
In contrast with documents, short texts are less topic-focused.
Major attempts to tackle the problem are to expand short texts with knowledge extracted from textual corpora, machine-readable dictionaries, and thesauri (Phan et al., 2008; Wang et al., 2008; Chen et al., 2011; Wu et al., 2012). However, because of the domain-independent nature of dictionaries and thesauri, it is often the case that the data distribution of the external knowledge differs from that of test data collected from some specific domain, which deteriorates the overall performance of categorization. A methodology which maximizes the impact of pre-defined domains/categories is needed to improve categorization performance.
More recently, many authors have attempted to apply deep learning techniques including CNN (Wang et al., 2015;Zhang and Wallace, 2015;Wang et al., 2017), the attention based CNN (Yang et al., 2016), bag-of-words based CNN (Johnson and Zhang, 2015a), and the combination of CNN and recurrent neural network (Lee and Dernoncourt, 2016;Zhang et al., 2016) to text categorization. Most of them demonstrated that neural network models are powerful for learning features from texts, while they focused on single-label or a few labels problem. Several efforts have been made to multi-labels (Johnson and Zhang, 2015b;Liu et al., 2017). Liu et al. explored a family of new CNN models which are tailored for extreme multi-label classification (Liu et al., 2017). They used a dynamic max pooling scheme, a binary cross-entropy loss, and a hidden bottleneck layer to improve the overall performance. The results by using six benchmark datasets where the label-set sizes are up to 670K showed that their method attained at the best or second best in comparison with seven state-of-the-art methods including FastText (Joulin et al., 2017) and bag-of-words based CNN (Johnson and Zhang, 2015a). However, all of these attempts aimed at utilizing a large volume of data.
We address the problem of multi-label short text categorization and explore the use of a HS of categories. The lower level of categories are finegrained compared to the upper level of categories. Moreover, it is often the case that the amount of training data in a lower level is much smaller than that in an upper level which deteriorates the overall performance of categorization. We propose an approach which can effectively utilize the data in the upper levels to contribute categorization in lower levels by applying fine-tuning to the CNN which can learn a HS of categories and incorporate Figure 1: HFT-CNN model granularity of categories into categorization. We transferred the parameters of CNN trained from upper to lower levels according to the HS, and finely tuned parameters. The main contributions of our work can be summarized: (1) We propose a method that maximizes the impact of pre-defined categories to alleviate data sparsity in multi-label short texts. (2) We empirically examined a finetuning with CNN that fits to learn a HS of categories defined by lexicographers, and (3) The results show that our method is competitive to the state-of-the-art CNN based methods by using two benchmark datasets, especially it is effective for categorization of short texts consisting of a few words with a large number of labels.
2 Hierarchical Fine-Tuning based CNN

2.1 CNN architecture

Similar to other CNNs (Johnson and Zhang, 2015a; Liu et al., 2017), our HFT-CNN model shown in Figure 1 is based on (Kim, 2014). Let x_i ∈ R^k be the k-dimensional word vector of the i-th word in a sentence, obtained by applying the skip-gram model provided in fastText (https://github.com/facebookresearch/fastText). A sentence of length n is represented as x_{1:n} = [x_1, x_2, ..., x_n] ∈ R^{nk}. A convolution filter w ∈ R^{hk} is applied to a window of h words to produce a new feature, c_i = f(w · x_{i:i+h−1} + b), where b ∈ R is a bias term and f is a non-linear activation function. We apply this convolution filter to each possible window in the sentence and obtain a feature map m ∈ R^{n−h+1}. As shown in Figure 1, we then apply a max pooling operation over the feature map and take the maximum value m̂ as the feature of this filter. We obtain multiple features by varying the window sizes and using multiple filters. These features form a pooling layer and are passed to a fully connected layer, in which we apply dropout (Hinton et al., 2012); dropout randomly sets values in the layer to 0. Finally, we obtain the probability distribution over categories. The network is trained with the objective of minimizing the binary cross-entropy (BCE) between the predicted distributions and the actual distributions by performing stochastic gradient descent.
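A minimal PyTorch sketch of this architecture is given below (the authors implemented HFT-CNN in Chainer, so this is an illustration, not their code). The window sizes (2, 3, 4) follow the paper; the vocabulary size, filter count, and label count are assumed values.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """Kim-style CNN for multi-label short-text classification (sketch)."""
    def __init__(self, vocab_size, k=300, n_filters=128,
                 windows=(2, 3, 4), n_labels=103, p_drop=0.5):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, k)
        self.convs = nn.ModuleList(
            nn.Conv1d(k, n_filters, kernel_size=h) for h in windows)
        self.drop = nn.Dropout(p_drop)
        self.fc = nn.Linear(n_filters * len(windows), n_labels)

    def forward(self, x):                      # x: (batch, n) word ids
        e = self.emb(x).transpose(1, 2)        # (batch, k, n)
        # convolution + max-over-time pooling for each window size h
        pooled = [torch.relu(c(e)).max(dim=2).values for c in self.convs]
        z = self.drop(torch.cat(pooled, dim=1))
        return self.fc(z)                      # logits; train with BCE loss

model = TextCNN(vocab_size=50000)
logits = model(torch.randint(0, 50000, (8, 13)))   # batch of 13-word texts
loss = nn.BCEWithLogitsLoss()(logits, torch.zeros(8, 103))
```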
Hierarchical structure learning
Our key idea is to use a fine-tuning technique in the CNN to tackle the data sparsity problem, especially at the lower levels of a HS. Following the HS, we transfer the parameters of the CNN trained at upper levels to the lower levels, which are poorly trained because of the lack of data, and then finely tune the CNN parameters for the lower levels (Figure 1). This approach can effectively utilize the data in the upper levels to contribute to categorization in the lower levels.
Fine-tuning is motivated by the observation that the early layers of a CNN contain generic features that are effective for many tasks, while later layers become progressively more specific to the details of the classes in the original dataset. The motivation matches a HS of categories: we first learn to distinguish among generic categories at the upper level of the hierarchy, and then learn lower-level distinctions using only the data within the appropriate top-level branch of the HS. We note that fine-tuning only the last few layers is usually sufficient for transfer learning, as the last few layers hold the more specific features. However, a HS with deep levels also requires fine-tuning the early layers, because the distance between the upper and lower levels of categories is significant. For this reason, we transfer the two layers shown in Figure 1, i.e., the word-embedding layer and the convolutional layer, and use them as initial parameters to learn the second level of the hierarchy. We repeat this procedure from the top level to the bottom level of the hierarchy. We note that a HS consists of many levels; we fine-tune between adjacent levels only, because they are more correlated with each other than distant levels. A sketch of this transfer is given below.
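The sketch reuses the TextCNN class from the previous block: the embedding and convolution weights of the parent-level model initialize the child-level model, whose label layer is trained from scratch. The function and variable names are our own, not the authors'.

```python
import torch

def transfer_level(parent, child):
    """Copy the generic early layers (embedding + conv filters) from the
    parent-level model into the child-level model before fine-tuning."""
    child_state = child.state_dict()
    for name, tensor in parent.state_dict().items():
        if name.startswith("emb") or name.startswith("convs"):
            child_state[name] = tensor.clone()
    child.load_state_dict(child_state)

level1 = TextCNN(vocab_size=50000, n_labels=10)    # upper (coarse) level
level2 = TextCNN(vocab_size=50000, n_labels=200)   # lower (fine) level
transfer_level(level1, level2)
# ... then train level2 with a small learning rate so the transferred
# parameters are only finely tuned, as described above.
```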
Multi-label categorization
Each test instance is classified into categories with probabilities/scores by applying HFT-CNN. We then utilize the constraint of a HS to obtain the final results, which differs from the existing work on non-hierarchical flat models (Johnson and Zhang, 2015a; Liu et al., 2017). This is done by using two scoring functions: one is a Boolean Scoring Function (BSF), the other is a Multiplicative Scoring Function (MSF). Both functions set a threshold value, and categories whose scores exceed the threshold are considered for selection. The difference is that BSF has a constraint that a category can only be selected if its ancestor categories are selected. MSF does not have such a constraint, i.e., we extract all the categories whose scores exceed the threshold and sort them in descending order as the system's assignments.
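A sketch of the two assignment rules is shown below. The text does not spell out how MSF combines scores along the hierarchy, so the MSF function simply thresholds and ranks, as described above; the data structures (`scores`, `parent`) are our own assumptions.

```python
def depth(cat, parent):
    """Number of ancestors of a category in the hierarchy."""
    d = 0
    while parent[cat] is not None:
        cat, d = parent[cat], d + 1
    return d

def assign_bsf(scores, parent, threshold=0.5):
    """Boolean: a category is selected only if all its ancestors are selected."""
    selected = set()
    for cat in sorted(scores, key=lambda c: depth(c, parent)):  # top-down
        ok_parent = parent[cat] is None or parent[cat] in selected
        if ok_parent and scores[cat] >= threshold:
            selected.add(cat)
    return selected

def assign_msf(scores, threshold=0.5):
    """No hierarchy constraint: keep all categories above threshold, ranked."""
    hits = [(s, c) for c, s in scores.items() if s >= threshold]
    return [c for s, c in sorted(hits, reverse=True)]

scores = {"A": 0.9, "A/B": 0.7, "C": 0.4}
parent = {"A": None, "A/B": "A", "C": None}
print(assign_bsf(scores, parent), assign_msf(scores))
```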
Data and HFT-CNN model setting
We selected two benchmark datasets having a HS from the extreme classification repository: RCV1 (Lewis et al., 2004) and Amazon670K (Leskovec and Krevl, 2015). All the documents in RCV1 and the item descriptions in Amazon670K were tagged with Tree Tagger (Schmid, 1995); we used nouns, verbs, and adjectives, and then applied fastText. Each dataset has official training and test sets, which we used in the experiments. On RCV1, we chose the titles from the training and test sets; the maximum number of words in a title was 13. Each text of Amazon670K consists of a product name and its item description; we extracted the first 13 words from each item description and used them in the experiments. Table 1 presents the statistics of the datasets. We divided the training data into two folds: 5% to tune the parameters, and the remainder to train the models. Our model setting is shown in Table 2.
Evaluation Metrics
We used the standard F1 measure. Furthermore, we evaluated our method with two rank-based evaluation metrics: precision at top k (P@k) and the Normalized Discounted Cumulated Gain (NDCG@k), which are commonly used for comparing extreme multi-label classification methods (Liu et al., 2017). We calculated P@k and NDCG@k for each test instance and then averaged over all the test instances.
Basic results
We compared HFT-CNN with a method that performs hierarchical categorization but without fine-tuning (WoFT-CNN) and with a Flat model to examine the effect of fine-tuning. WoFT-CNN means that we independently trained the parameters of the CNN at each level, without transferring trained parameters. Flat means that we simply applied our CNN model. The results are shown in Table 3. HFT-CNN is better than WoFT-CNN and the Flat model except for Micro-F1 obtained by WoFT-CNN(M) on Amazon670K. We also found that the overall results obtained by MSF were better than those obtained by BSF.
Comparison with state-of-the-art method
We chose XML-CNN as a comparative method because it attained the best or second-best results compared to seven existing methods on six benchmark datasets (Liu et al., 2017). The original XML-CNN is implemented in Theano, while we implemented HFT-CNN in Chainer. To avoid the influence of differences in libraries, we re-implemented XML-CNN in Chainer, following the author-provided implementation, and compared it with HFT-CNN. We recall that we set the convolutional filter window sizes to (2,3,4) and the stride to 1 because of the short texts. To make a fair comparison, we also evaluated XML-CNN with the same window sizes and stride as HFT-CNN.
Liu et al. evaluated their method using P@k and NDCG@k. We used their metrics as well as the F1 measure. We did not set a threshold value on BSF and MSF when evaluating with these metrics; instead, we used the ranked list of categories assigned to each test instance.

Table 3: Basic results. (B) and (M) refer to BSF and MSF, respectively. Bold font shows the best result within each line. A method marked with "*" has a score that is not statistically significantly different from the best one (t-test, p-value < 0.05).

The results are shown in Table 4. HFT-CNN with BSF/MSF has the best scores, with statistical significance, compared to both XML-CNNs. On RCV1, HFT-CNN(B) was worse than XML-CNN(1) in P@1 and NDCG@1, while HFT-CNN(M) was statistically significantly better than XML-CNN(1) on the same metrics. This is not surprising, because hierarchical fine-tuning does not contribute to accuracy at the top level: the parameters trained at the top level remain unchanged there.
We also examined how the depth of the hierarchical structure affects each system's performance. Figure 2 shows Micro-F1 at each hierarchical level. The deeper the hierarchical level, the worse the performance; however, HFT-CNN is still better than the XML-CNNs. The improvement by MSF was 1.00-1.34% in Micro-F1 and 3.77-10.07% in Macro-F1 on RCV1. On Amazon670K, the improvement was 1.10-9.26% in Micro-F1 and 1.10-3.60% in Macro-F1. This shows that hierarchical fine-tuning fits learning the hierarchical category structure.
We recall that we focus on the multi-label problem. Figure 3 illustrates Micro-F1 and Macro-F1 against the number of categories per short text. We can see from the RCV1 results in Figure 3 that the Micro-F1 obtained by HFT-CNN and by the XML-CNNs showed no statistically significant difference across the number of categories, while the Macro-F1 of HFT-CNN, except for 13 categories, was consistently better than that of the XML-CNNs. On the Amazon670K data, when the number of categories assigned to a short text is at most 38, HFT-CNN was better than the XML-CNNs, or not statistically significantly different from them, in both F1 scores. However, beyond 39 categories, HFT-CNN was worse than the XML-CNNs. One possible reason is the use of BSF: a category can only be selected if its ancestor categories are selected; therefore, once the test data is classified into a wrong category, its child categories also cannot be correctly assigned.
In contrast, as shown in Figure 5, HFT-CNN with MSF was better than the XML-CNNs in both Micro- and Macro-F1, even at deep levels of the hierarchy. From these observations, a more robust scoring function is needed for further improvement.
It is important to examine how the amount of training data affects overall performance, since we focus on the data sparsity problem. Plotting Micro- and Macro-F1 against the ratio of training data used, we see that more training data helps performance, while the curves obtained by HFT-CNN drop more slowly than those of the other methods on both datasets and both metrics. From the observations mentioned above, we conclude that fine-tuning works well, especially when the number of training examples per category is small.
Conclusion
We have presented an approach to multi-label categorization of short texts. The comparative results with XML-CNN showed that HFT-CNN is competitive, especially when only a small amount of training data is available. Future work includes: (i) incorporating lexical semantics such as named entities and domain-specific senses for further improvement; (ii) extending the method to utilize label dependency constraints (Bi and Kwok, 2011); and (iii) improving the accuracy of the top-ranking categories to address the P@1 and NDCG@1 metrics.
"Computer Science"
] |
Optimal Optical Receivers in Nanoscale CMOS: A Tutorial
The integration of optical receivers in nanoscale CMOS technologies is challenging due to less intrinsic gain and more noise compared to SiGe BiCMOS technologies. Recent research revealed that low-noise, high-gain, and low-power CMOS optical receivers can be designed by limiting the bandwidth of the front-end followed by equalization techniques that benefit from good switching characteristics offered by CMOS technologies. In this tutorial brief, the operation of decision-feedback equalization, feed-forward equalization, and continuous-time linear equalization is reviewed in the context of high baud-rate 2-PAM and 4-PAM modulation. Recent advances and techniques in 4-PAM optical receivers are reviewed and compared in terms of speed, sensitivity, bandwidth, and efficiency.
unscalable. CMOS optical receivers integrated with the SerDes obviate this problem and reduce size and cost.
Whereas SiGe BiCMOS offers high intrinsic gain, bandwidth, and low noise, nanoscale CMOS offers good switching circuits, including some recently reported equalizer circuits, but suffers from less gain and more noise. Therefore, there is a need to rearchitect receivers' analog front-ends to leverage nanoscale CMOS technologies' strengths.
The conventional way of supporting higher data rates in optical receivers is to extend the front-end bandwidth. However, this generally implies lower transimpedance [4], [5]. To break this trade-off, the bandwidth of the TIA can be intentionally limited below the conventional target of 0.5 × baud rate, allowing for higher gain at the cost of intersymbol interference (ISI). ISI can be corrected using equalization techniques suited to nanoscale CMOS implementation. This optimization is well studied for 2-PAM modulation [6] and many prototypes leveraging different equalization techniques were developed [7]- [12]. However, 4-PAM modulation is more susceptible to bandwidth limitations because ISI is three times larger (relative to eye height) than in 2-PAM. As a result, it is important to study this optimization in the context of 4-PAM.
Section II of this tutorial covers continuous-time linear equalization (CTLE), feed-forward equalization (FFE), and decision feedback equalizer (DFE)-based optical receivers. Section III compares these equalization techniques. Section IV reviews recent advances in optical receiver design with emphasis on 4-PAM optical receivers, where we also look at design trends. Finally, Section V concludes the tutorial.
II. FRONT-END OPTIMIZATION

Shunt-feedback transimpedance amplifiers (SFTIA), particularly inverter-based, have been the most popular nanoscale CMOS TIAs in recent years [3], [12]-[16]. Inverters offer high linearity, high transconductance per unit bias current, self-biasing in a feedback configuration, and high swing. We consider the TIA in Fig. 1(a) with the small-signal model in Fig. 1(b).
In this model, the input capacitance, C_IN, is the sum of the photodetector capacitance, C_PD, the pad capacitance, C_PAD, and the inverters' gate-to-source capacitances, C_gs. C_a is the capacitance of the following stage, and R_a is the output resistance of the TIA. The combined transconductance of the NMOS and PMOS devices is g_m, and R_f is the feedback resistor. Finally, the model includes the gate-to-drain capacitance, C_gd, which is important due to the Miller effect; specifically, 1/(R_f C_gd) could become the dominant pole for large transistor sizes and feedback resistances. The resulting transimpedance transfer function of the TIA is given in (1). Some parameters in this model are coupled: the transconductance, C_gs, and C_gd are coupled through the technology transition frequency, f_t, and, since the gain of the inverter A = g_m R_a is the transistors' intrinsic gain, R_a and g_m are coupled. Table I summarizes the numerical values used in this brief alongside the relationships between coupled parameters. We use g_m and R_f as our design parameters. Model values are based on [3] and [12], which report a 64 Gb/s 2-PAM and a 100 Gb/s 4-PAM optical receiver, respectively. For our simulations, we target a bit rate of 64 Gb/s in the 2-PAM case and 64 Gbaud (128 Gb/s) in the 4-PAM case.
In the following subsections, we take the following approach: 1) describe and illustrate the equalization technique used; 2) calculate the worst-case eye opening assuming 2-PAM signaling using peak distortion analysis; 3) calculate the output-referred noise from the model; 4) calculate the worst-case signal-to-noise ratio, SNR_WC, at the output of the receiver; and finally 5) extend the conclusions to 4-PAM.
A. CTLE-Based Optical Receivers
We first consider an SFTIA followed by a CTLE stage that recovers part of the bandwidth and reduces ISI. As such, the TIA in (1) can be redesigned to have 1/χ times less bandwidth compared to an unequalized (UE) implementation. An ideal unity-gain CTLE stage that recovers the full bandwidth has the transfer function given in (4), where f_TIA is the 3-dB bandwidth of the TIA preceding the CTLE, and Q is the quality factor of the CTLE, taken here as 1/√2. The zeros of the CTLE perfectly cancel the poles of the TIA in (1), and the pole frequencies of the CTLE are χ times higher than those of the preceding TIA. It should be noted that a practical CTLE stage has more poles than zeros.
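The bandwidth-recovery idea can be illustrated numerically. The sketch below cascades a second-order low-pass stand-in for the TIA of (1) with a CTLE whose zeros cancel the TIA poles and whose poles sit χ times higher; it is illustrative only, not the exact transfer functions (1) and (4).

```python
import numpy as np

def second_order(f, f0, Q):
    """Unity-gain two-pole low-pass response evaluated at frequencies f."""
    s = 1j * f / f0
    return 1.0 / (1.0 + s / Q + s**2)

f = np.logspace(8, 11.5, 400)                  # 100 MHz to ~300 GHz
f_tia, chi, Q = 12e9, 2.0, 1 / np.sqrt(2)
H_tia = second_order(f, f_tia, Q)
# CTLE = (TIA denominator as zeros) / (poles at chi * f_tia)
H_ctle = second_order(f, chi * f_tia, Q) / second_order(f, f_tia, Q)
H_tot = H_tia * H_ctle                          # poles cancelled, bandwidth x chi

for H, name in [(H_tia, "TIA"), (H_tot, "TIA+CTLE")]:
    mag = np.abs(H)
    f3db = f[np.argmax(mag <= mag[0] / np.sqrt(2))]   # first -3 dB crossing
    print(f"{name}: f3dB = {f3db/1e9:.1f} GHz")
```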
The transfer function from the input to the output is the product of (1) and (4). Thus, R f can be increased, reducing the bandwidth of the TIA while the CTLE stage recovers that bandwidth. Practically, the value of χ cannot be too large because it leads to: 1) excessive peaking in the CTLE stages leading to gain and group delay variations; 2) decreased tunability and increased susceptibility to PVT variations [18].
As the total bandwidth of the TIA/CTLE (χ f_TIA) drops below 0.5× the baud rate, the signal does not have sufficient time to settle, degrading the gain. Moreover, this leads to ISI, further reducing the eye opening. This is illustrated in the pulse responses shown in Fig. 2(a) for various χ f_TIA, where precursors and postcursors appear when the bandwidth is far below the baud rate. From this, for 2-PAM, the worst-case eye opening V_ISI is calculated from the main cursor V_{A,0} and the i-th pre/postcursors V_{A,i}:

V_ISI = V_{A,0} − Σ_{i≠0} |V_{A,i}|.    (5)

This method for finding the eye opening is peak distortion analysis (PDA), and it is extensible to 4-PAM [19].
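A sketch of PDA on a baud-spaced pulse response follows; the single-pole front-end used to generate the pulse is an assumption for illustration.

```python
import numpy as np

def pda_eye_2pam(pulse, samples_per_ui):
    """Worst-case 2-PAM eye opening per Eq. (5) from an oversampled pulse."""
    cursors = pulse[::samples_per_ui]        # baud-spaced samples
    k = int(np.argmax(np.abs(cursors)))      # main cursor V_A,0
    v_isi = np.sum(np.abs(np.delete(cursors, k)))
    return cursors[k] - v_isi

# Example: single-pole front-end with bandwidth 0.3 x baud rate (assumed).
f_baud, osr = 64e9, 32
f3db = 0.3 * f_baud
t = np.arange(0, 8 / f_baud, 1 / (osr * f_baud))
step = 1 - np.exp(-2 * np.pi * f3db * t)               # step response
pulse = step - np.concatenate([np.zeros(osr), step[:-osr]])  # 1-UI pulse
print(f"worst-case eye: {pda_eye_2pam(pulse, osr):.3f} (normalized)")
```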
To understand the benefit of a CTLE, we next define the worst-case signal-to-noise ratio (SNR_WC) as a function of f_3dB/f_baud and χ. We use f_3dB to refer to the overall 3-dB bandwidth of the TIA/CTLE in CTLE-based receivers, and to the 3-dB bandwidth of the TIA in the FFE- and DFE-based receivers. We begin by considering the noise sources in the SFTIA: the channel thermal noise, i²_{n,gm} = 4kTγg_m, and the thermal noise of the feedback resistor, i²_{n,Rf} = 4kT/R_f. The calculation of the noise at the output of the TIA can be simplified by splitting i²_{n,Rf} as in Fig. 1 [20]. The resulting TIA output power spectral density, S_out, is given in (6), where Z_o is the output impedance of the TIA. We later use (6) in the FFE and DFE noise calculations. Here, we are interested in the power spectral density at the output of the CTLE stage, S_CE(s). Finally, the worst-case signal-to-noise ratio (SNR_WC) is defined as the ratio of the eye opening found from PDA to the RMS noise.
We plot SNR WC as a function of χ and f 3dB /f baud as shown in Fig. 2 (b) and (c), respectively. In constructing these plots, we sweep the values of g m and R f and pick the best achievable SNR WC for a given f 3dB /f baud or χ .
From Fig. 2 (b), we observe SNR WC improves as χ increases. However, this improvement is more pronounced when going from χ = 1 to χ = 1.5 compared to going from χ = 1.5 to χ = 2. This is because, while employing a CTLE with a reduced-bandwidth TIA helps in suppressing white noise, the colored noise is unaffected [6], [18], and using large values of χ provides only marginal improvement because the colored noise component dominates.
The worst-case SNR, SNR_WC, is plotted as a function of f_3dB/f_baud in Fig. 2(c) for a UE TIA, and for a TIA followed by a CTLE with χ = 2. For 2-PAM signaling, the optimal f_3dB in the UE case is 0.3× f_baud, and it increases to 0.39× f_baud in the CTLE-based receiver. A lower f_3dB results in ISI that degrades SNR_WC, while an f_3dB larger than 0.3× f_baud increases the output-referred integrated noise voltage, also degrading SNR_WC. The CTLE implementation has a 3 dB better SNR_WC than the UE implementation. For 4-PAM modulation, in the CTLE-based receiver, the optimal f_3dB is around 0.53× f_baud compared to 0.45 in the UE implementation, and the CTLE provides around 4.7 dB of SNR_WC improvement. We note that the optimal f_3dB for 4-PAM is 1.38× higher (relative to baud rate) than for 2-PAM, significantly less than the 2× increase in data rate afforded by 4-PAM. We also note that the bandwidth of the TIA in the CTLE-based implementation is less than that of the UE TIA.
B. Feed-Forward Equalization
A feed-forward equalizer (FFE)-based optical receiver can be modeled as shown in Fig. 3(a). Each FFE tap produces a delayed, scaled version of the input pulse. By adding a timeshifted and scaled version of the signal, pre-and post-cursors can be reduced. This operation is demonstrated in Fig. 3 (b) for a three-tap FFE. Once tap weights are set, the worst-case vertical eye opening is calculated from the equalized pulse response using (5).
When selecting tap weights in FFE-based receivers, the noise enhancement of the FFE should be considered. The FFE filter sums scaled and delayed versions of the same signal, and since the noise at the output of the TIA is colored, the noise samples present in these signals are correlated. This must be accounted for when calculating the output noise power. We begin by calculating the autocorrelation, R_nn(τ), of the noise at the output of the TIA from (6). The output-referred RMS noise voltage at the output of an N-tap FFE can then be calculated from

V²_{n,FFE} = Σ_i Σ_k α_i α_k R_nn((i − k) T_b),    (12)

where α_i and α_k are the i-th and k-th tap coefficients and T_b is the baud interval. As can be seen, the tap coefficients appear in both the V_ISI and V_{n,FFE} calculations; therefore, the optimal coefficients maximize SNR_WC, as opposed to minimizing ISI or noise alone. Tap weights can be calculated using adaptive algorithms that minimize the error between the output of the FFE and a training sequence. Alternatively, they can be calculated from the pulse response and the noise autocorrelation function [21].
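The correlated-noise sum of (12) is easy to evaluate numerically, as sketched below. The exponential autocorrelation is a synthetic stand-in for R_nn derived from (6), and the tap weights are example values.

```python
import numpy as np

def ffe_noise_rms(taps, Rnn, Tb):
    """RMS output noise of an FFE with correlated input noise, per Eq. (12)."""
    idx = np.arange(len(taps))
    lags = (idx[:, None] - idx[None, :]) * Tb   # matrix of (i - k) * Tb
    return np.sqrt(taps @ Rnn(lags) @ taps)

Tb = 1 / 64e9                                      # 64 Gbaud interval (assumed)
tau = 0.5 * Tb                                     # noise correlation time (assumed)
Rnn = lambda dt: 1e-6 * np.exp(-np.abs(dt) / tau)  # stand-in autocorrelation [V^2]

taps = np.array([-0.1, 0.8, -0.2])                 # example 3-tap FFE weights
print(f"output noise: {ffe_noise_rms(taps, Rnn, Tb)*1e3:.3f} mV rms")
```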
Finally, SNR WC can be calculated using (10). Fig. 3 (c) plots SNR WC versus f 3dB /f baud for both 2-PAM and 4-PAM receivers. The optimal bandwidth of a 3-tap FFE-based receiver for the 2-PAM case is around 0.13 × f baud , and it offers 3.4 dB of SNR WC improvement. In the case of 4-PAM, the optimal bandwidth is 0.25 ×f baud with 4.5 dB of SNR WC improvement.
C. Decision Feedback Equalization (DFE)
Typical DFE-based optical receivers have a finite impulse response (FIR) feedback loop, as shown in the block diagram in Fig. 4(a). For an M-tap FIR DFE-based receiver, each tap is designed to eliminate the corresponding postcursor. This operation is illustrated in Fig. 4(b) using two taps. Since the first M postcursors are cancelled by the feedback taps, the 2-PAM worst-case vertical eye opening is calculated as

V_ISI = V_{A,0} − Σ_{i<0} |V_{A,i}| − Σ_{i>M} |V_{A,i}|.

For an infinite-length DFE, all postcursors are removed, and the precursors limit the vertical eye opening. An infinite-length DFE can be approximated either with an analog feedback filter or with a long digital FIR feedback filter. One challenge in DFE design is the feedback loop's timing requirement: the slicer output must propagate through the feedback filter to the slicer input within one baud interval. Digital DFE implementations address this with parallelism, implying a complexity and power consumption that increase exponentially with the number of taps [12]. Recently, however, novel DFE architectures break this difficult tradeoff by allowing the pipelining of DFE logic [22]-[24].
Feedback signals are produced from the noiseless signal at the output of the slicer, so the DFE feedback path adds no noise. Assuming a noiseless feedback loop, the noise voltage at the input of the decision circuit is simply the output-referred RMS noise of the TIA: unlike FFE-based optical receivers, which enhance noise (see (12)), the DFE loop has no impact on the output-referred noise of the TIA. The SNR WC is calculated using (10) and plotted in Fig. 4 (c) versus f 3dB /f baud for both a 2-tap and an infinite-length DFE. The optimal bandwidth for 2-PAM signals is around 0.18 × the baud rate, while it is 0.22 × the baud rate for 4-PAM signals. An ideal infinite-length DFE allows for a bandwidth reduction down to 0.04 × the baud rate before the impact of the precursors starts limiting SNR WC . A two-tap DFE improves SNR WC by around 4 dB in the case of 2-PAM and by around 5.5 dB in the case of 4-PAM. As the number of taps increases, the DFE curves approach the infinite-length curve.
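The sketch below illustrates this postcursor cancellation: it evaluates the 2-PAM worst-case vertical eye for an M-tap DFE from a sampled pulse response, removing the first M postcursors and retaining precursors and residual postcursors; the pulse-response samples are invented for illustration.

```python
# Worst-case 2-PAM eye after an M-tap DFE with a noiseless feedback loop.
import numpy as np

def dfe_worst_case_eye(pulse, main_idx, n_taps):
    """Eye = 2 * (main cursor - precursor ISI - postcursors beyond tap M)."""
    pulse = np.asarray(pulse, dtype=float)
    pre = np.sum(np.abs(pulse[:main_idx]))           # precursors are not cancelled
    post = np.abs(pulse[main_idx + 1:])
    residual = np.sum(post[n_taps:])                 # postcursors beyond tap M
    return 2.0 * (pulse[main_idx] - pre - residual)

pulse = [0.05, 1.0, 0.45, 0.20, 0.08, 0.03]          # illustrative samples
for m in range(4):                                   # 0 taps = no DFE
    print(m, "taps ->", dfe_worst_case_eye(pulse, main_idx=1, n_taps=m))
```

As expected, each added tap reclaims the eye opening consumed by the corresponding postcursor, while the single precursor remains untouched.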
III. COMPARISON
An overlay of the SNR WC curves for all types of receivers is shown in Fig. 5. As seen, a 2-tap DFE-based receiver exhibits the best SNR WC . CTLE and 3-tap FFE-based optical receivers exhibit similar SNR WC improvement. FFE-based receivers exhibit less SNR WC improvement than DFE-based receivers because of the noise enhancement. Meanwhile, CTLE-based receivers provide less SNR WC improvement because, while they significantly suppress white noise, they have no impact on colored noise. Finally, we note that SNR WC scales in proportion to the input current, I pp , without affecting the optimal bandwidth in each case. Table II summarizes some of the most recently published high-speed 2-PAM and 4-PAM receivers. The receiver in [3] uses a 2-tap FFE and a 2-tap DFE and lowers the bandwidth to optimize the sensitivity. It was found that the optimal bandwidth for 4-PAM receivers is higher (relative to baud rate, but not bit rate) than for 2-PAM receivers for a given DFE size, especially when the effect of input jitter is included. This combination of 4-PAM modulation and input jitter amplification by the lower-bandwidth front-end [28] led to the choice of a 20 GHz front-end, which is 0.4 × f baud . The number of taps is limited to two, as more taps lead to increased power consumption while providing only marginal improvement in SNR. A high data rate of 100 Gb/s was achieved despite the low bandwidth.
IV. STATE-OF-THE-ART
Reference [13] describes a full-bandwidth 4-PAM receiver in which dc-coupled CMOS inverters are used in the entire signal path. A bandwidth of 27 GHz (0.51 × f baud ) is achieved by using series inductive peaking at the input TIA stage and shunt inductive peaking between stages. It achieves a data rate of 106.25 Gb/s. Reference [14] describes a low-power SFTIA with shunt inductive peaking. A record high speed of 128 Gb/s is achieved. However, only electrical measurements are reported, and the DC gain is 59.3 dB, which is low compared to other receivers. Reference [25] describes a full-bandwidth SFTIA that uses both shunt and series peaking to achieve a high bandwidth of 60 GHz to support 112 Gb/s 4-PAM modulation. Reference [27] describes a 50 Gb/s receiver that uses T-coils in the TIA stage along with a CTLE stage to achieve a bandwidth of 30 GHz. All four receivers have f 3dB /f baud > 0.5.
The receiver described in [26] optimizes SNR performance at the slicer input by limiting the bandwidth of the SFTIA to 0.3 × baud rate and uses a 2-tap DFE to eliminate the resulting ISI. This receiver achieves a data rate of 32 Gb/s while using a front-end bandwidth of only 4.8 GHz. Similarly, [15] is an optimized 64 Gb/s receiver that limits the bandwidth of the front-end to 12 GHz (0.375 × baud rate) and eliminates ISI by using a 3-tap DFE.
Reference [12] is a 64 Gb/s low-bandwidth 2-PAM receiver in which the bandwidth of the TIA is only 15 GHz, followed by a 1-tap DFE to remove the first postcursor. According to [12], the number of taps is limited to one, as more taps resulted in only minor SNR improvement. Compared to the 4-PAM receiver in [3], the ratio of bandwidth to baud rate is almost twice as large, which is in line with our findings.
It follows from Table II and this discussion that both full-bandwidth receivers employing inductive peaking and limited-bandwidth equalized receivers remain in use. With the development of high-speed analog-to-digital converters and ADC-based front-ends that allow for sophisticated equalization, especially in sub-10 nm CMOS, we anticipate that low-bandwidth front-ends may see even more use in the future.
V. CONCLUSION
This tutorial brief covered the optimization of the front-end of optical receivers. We examined the different optimization techniques used and quantified the optimal bandwidths for 2-PAM and 4-PAM signaling. We found that the optimal bandwidth relative to the baud rate is higher for 4-PAM modulation but is, in fact, lower relative to the bit rate. This is because of the 2× bit rate increase offered by 4-PAM. A review of state-of-the-art optical receivers was presented. The ongoing trends are the implementation of bandwidth-extension and equalization techniques to enable the design of 4-PAM receivers capable of achieving the data rates required by the 400G Ethernet standard and the emerging 800G and 1.6T standards. We anticipate that low-bandwidth techniques will see use in ADC-based nanoscale CMOS optical front-ends.
"Physics"
] |
Leontief Input-Output Method for The Fresh Milk Distribution Linkage Analysis
This research discusses linkage analysis and identifies the key sector in fresh milk distribution using the Leontief Input-Output method. This method is one of the applications of mathematics in economics. The current fresh milk distribution system comprises dairy farmers→collectors→fresh milk processing industries→processed milk distributors→consumers. In the analysis, the collectors' activity and the fresh milk processing industry are merged. The data used are primary and secondary data collected in June 2016 in Kecamatan Jabung, Kabupaten Malang. The collected data are analyzed using the Leontief Input-Output matrix and Maple software. The result is that the merging of the collectors' and the fresh milk processing industry's activities shows high indices of forward linkages and backward linkages. This shows that the merged activity is the key sector, which has an important role in developing the whole set of activities in the fresh milk distribution.
INTRODUCTION
Mathematics is a subject which underlies and serves many other subjects that are needed in the development of modern science and technology [1]. Mathematics functions as a tool to help find the right solution to problems. Using mathematics, problems that occur in many kinds of fields can be solved through a mathematical approach, and a mathematical model is formed. One of these mathematical applications is the Leontief Input-Output Method. The Leontief Input-Output Method is a mathematical model used to determine the value of the production output that needs to be expended by the industries in the same economic system in order to sustain the other industrial processes. This is the basic method to illustrate economic activities as a linkage system between goods and services, and also to analyze the intersectoral or industrial linkages in an economy [2].
The Leontief Input-Output Method has various applications in economics. In the articles from [3] and [4], the Leontief Input-Output Method has been applied to many areas of modern science and technology development. Some of the method's applications to pollution control problems have been discussed in [5], [6], and [7]. Basically, the Leontief Input-Output Method uses an inverse matrix, or a linear system of equations, to construct an economic model. This method shows how the output from one industry can be the input for other industries and determines the linkage between the industries or sectors in an economy [8]. The method is used to calculate the index of Backward Linkage, which measures how much a sector demands from the other sectors of the economy, whereas the index of Forward Linkage measures the quantity of products demanded from the sector by the related sectors [9]. This linkage analysis was originated by [10] and was then used as a means to identify the key sector by [11]. When a sector has high index values of both backward and forward linkage, it has above-average linkages with the rest of the economy; thus, it is assumed to be the key sector in the economy [9].
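As a compact illustration of this machinery, the sketch below builds the technical-coefficient matrix from a small hypothetical transactions table, inverts (I − A) to obtain the Leontief inverse, and normalizes its column and row sums into backward and forward linkage indices; all numbers are invented, and the row-sum definition of forward linkage is one common convention, not necessarily the one used in this paper.

```python
# Leontief inverse and linkage indices for a toy 3-sector economy.
import numpy as np

Z = np.array([[0.0, 20.0, 0.0],      # hypothetical inter-sector flows z_ij
              [15.0, 0.0, 10.0],
              [0.0, 5.0, 0.0]])
x = np.array([100.0, 80.0, 40.0])    # hypothetical total output per sector

A = Z / x                            # a_ij = z_ij / x_j (input share of sector j)
L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse (I - A)^-1

BL = L.sum(axis=0) / L.sum(axis=0).mean()   # backward linkage (column sums)
FL = L.sum(axis=1) / L.sum(axis=1).mean()   # forward linkage (row sums)
for i, (bl, fl) in enumerate(zip(BL, FL)):
    tag = "key sector" if bl > 1 and fl > 1 else ""
    print(f"sector {i}: BL={bl:.4f} FL={fl:.4f} {tag}")
```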
The aim of this article is to discuss the Leontief Input-Output Method as used to analyze the linkages and to identify the key sector in the fresh milk distribution. The fresh milk distribution system found in this article is: dairy farmers→collectors→fresh milk processing industries→processed milk distributors→consumers. However, in this research the collectors' and the fresh milk processing activities are merged. The data used are the primary and secondary data obtained through interviews, questionnaires, and observation in June 2016. These data were taken from the dairy farmers, from Koperasi Agro Niaga Jabung as the collector and the fresh milk processing industry, and from the distributors within the area of Kecamatan Jabung, Kabupaten Malang. Next, the data were analyzed using the Leontief Input-Output matrix and Maple software. In economic activity, fresh milk is a dairy product which might potentially improve the national economy in the future. Besides being consumed fresh, milk is also the raw material for the fresh milk processing industry. Several products produced from fresh milk are pasteurized milk, butter, cheese, yoghurt, skimmed or non-fat milk, and some other food products. This fact indicates that growth in the dairy farming industry will affect other industries. Furthermore, growth in the fresh milk processing industry has strong linkages with growth in the dairy farming sector. Most fresh milk processing industries are located in areas where the majority of residents are dairy farmers. Through this linkage analysis of the fresh milk distribution, the fresh milk processing industry has both backward linkages towards the raw material, related to the dairy farmers, and forward linkages towards the food and beverage industry.
METHODS
This research is an industrial mathematics study focusing on problems within the fresh milk distribution system in Kecamatan Jabung in June 2016. The objects of this research are several groups of dairy farmers in Kecamatan Jabung, Koperasi Agro Niaga Jabung as the collector as well as the fresh milk processing industry, and the processed milk distributors in Kecamatan Jabung. The explanation level of this research is descriptive; thus, this research describes the phenomena observed in the research objects.
This research uses qualitative and quantitative analysis techniques. The qualitative analysis is conducted through direct observation of the condition of the fresh milk distribution system in Kecamatan Jabung. The data are obtained through interviews with several groups of dairy farmers, the Koperasi as the fresh milk collector as well as the fresh milk processing industry, and the processed milk distributor in Kecamatan Jabung. The quantitative analysis includes income and added value analysis. The data obtained are then analyzed; the first step is to process them using raw data tabulation.
To fill in Table 1, we conduct income and added value analysis for each activity. After that, we analyze backward and forward linkages using the Input-Output Method.
Income Analysis
The amount of income from each sector within 1 month can be formulated as follows:

I = Q × P (8)

where: I = income of sector i for 1 month; Q = quantity of product sold by sector i within 1 month; P = price per product sold by sector i within 1 month.
Added Value Analysis
The amount of added value is obtained from the margin between the income and the operational cost of each sector within 1 month. These operational costs include production cost, trade system cost, maintenance cost, shipping cost, or other costs which are adjusted to the expenses of each sector. The added value analysis can be formulated as follows:

VA = I − OC (9)

where: VA = added value of sector i within 1 month; OC = operational cost of sector i within 1 month.
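A tiny sketch of equations (8) and (9) follows; the quantities, prices, and costs are placeholders, not survey data.

```python
# Monthly income (8) and added value (9) per sector.
def income(quantity, price):
    return quantity * price                    # I = Q x P, eq. (8)

def added_value(monthly_income, operational_cost):
    return monthly_income - operational_cost   # VA = I - OC, eq. (9)

I = income(quantity=5000, price=6000)          # hypothetical sector figures
print("VA =", added_value(I, operational_cost=12_000_000))
```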
RESULT AND DISCUSSION
In this research, we merge the collectors' and the fresh milk processing industries' activities in the fresh milk distribution system model (see Figure 1). Using equations (8) and (9), the data are processed and the result is put into the input-output table of the fresh milk distribution. Table 2 shows that the output of the dairy farming activity, that is, the fresh milk, is used as the input or raw material for the collectors and the fresh milk processing industries, in this case the Koperasi Agro Niaga (KAN) Jabung, with a total value of Rp7,856.859 million. The collectors distribute the fresh milk to the fresh milk processing industries located outside of Desa Jabung, but these collectors also function as small-scale fresh milk processing industries. Besides selling the fresh milk to the processing industries, the collectors also produce an output in the form of cattle feed that is used as an input by the dairy farmers, with a total value of Rp2,526.24 million. Then, the output or the product from the fresh milk processing by the collectors is distributed to the processed milk distributors with a total value of Rp357.312 million. Next, the output from the distributors is sold to the consumers with a total value of Rp488.927 million.
Furthermore, substituting (11) into Equations (6) and (7) and using Maple software, we obtain the indices of backward and forward linkages, which can be seen in Table 3. In Table 3, we can see that the farmers' activity has a low index of backward linkages, 0.9171 (BL < 1), while its index of forward linkages is high, 1.3730 (FL > 1). These numbers show that the farmers' activity can only push the activities downstream of it; it cannot pull the growth of the activities upstream. For the distributors' activity, the indices of forward and backward linkages are both low, 0.4336 (FL < 1) and 0.9274 (BL < 1). These numbers show that the distributors' activity does not influence the growth of the other activities.
On the other hand, the merging of the collectors' and the fresh milk processing industries' activities has high indices of both forward and backward linkages, 1.1935 (FL > 1) and 1.1555 (BL > 1). These numbers show that the merged activity has high forward and backward linkages, so it can be concluded that it is likely the key sector, which can improve the growth of all activities in the fresh milk distribution.
Thus, we know that the merging of the collectors' and the fresh milk processing industries' activities can be the key sector within the fresh milk distribution. Therefore, developments in the merged activity will have great influence on the growth of all activities within the fresh milk distribution.
CONCLUSION
The Leontief Input-Output Method can illustrate the intersectoral linkages and identify the key sector in the fresh milk distribution. It is clearly shown that the merging of the collectors' and the fresh milk processing industries' activities yields high indices of both backward and forward linkages, so that it can be the key sector in the fresh milk distribution. The key sector has strong pull and push effects on the growth of upstream as well as downstream industries. Therefore, the merging of the two activities has great influence and needs more attention in terms of its relation to the development of the whole set of activities in the fresh milk distribution.
Figure 1. Fresh Milk Distribution System Model by Merging the Collectors' and the Fresh Milk Processing Industries' Activities
Table 1. Input-Output Table of Fresh Milk Distribution by Merging the Collectors' and the Fresh Milk Processing Industries' Activities (in Million Rupiah). The added value or the income of each of the activities is Rp3,510 million, Rp2,124.631 million, and Rp478.727 million. From Table 2, we can generate a matrix as follows.
Table 2. Linkage Indices of Merging the Collectors' and the Fresh Milk Processing Industries' Activities in the Fresh Milk Distribution
"Agricultural And Food Sciences",
"Economics"
] |
African Swine Fever Virus Manipulates the Cell Cycle of G0-Infected Cells to Access Cellular Nucleotides
African swine fever virus manipulates the cell cycle of infected G0 cells by inducing its progression, unblocking cells from the G0 to the S phase and then arresting them in the G2 phase. DNA synthesis in infected alveolar macrophages starts at 10–12 h post infection. DNA synthesis in the nuclei of G0 cells is preceded by the activation of the viral genes K196R, A240L, E165R, F334L, F778R, and R298L, which are involved in the synthesis of nucleotides and the regulation of the cell cycle. The activation of these genes in actively replicating cells begins later and is less pronounced. The subsequent cell cycle arrest at the G2 phase is also due to the cessation of the synthesis of the cellular factors that control the progression of the cell cycle: the cyclins. These data describe the manipulation of the cell cycle by the virus to gain access to the nucleotides synthesized by the cell. The genes affecting the cell cycle simply remain disabled until the beginning of cellular DNA synthesis (8–9 hpi). The genes responsible for the synthesis of nucleotides are turned on later in the presence of nucleotides, and their transcriptional activity is lower than that during virus replication in an environment without nucleotides.
Introduction
African swine fever virus (ASFV) is the only species in the genus Asfivirus, family Asfarviridae, and order Asfuvirales. It is a large double-stranded DNA virus [1,2].
Many DNA viruses interfere with the cell cycle regulatory machinery. Some viruses, following infection, require de novo synthesis of deoxynucleotides to stimulate G1 to S phase cell cycle transition in cells [3]. Other viruses (like Herpesviruses) can cause cell cycle arrest to limit competition between the virus and host for cellular DNA replication resources. However, in most cases, the manipulation of viruses by the host cell cycle contributes to a favorable cellular environment for viral replication [4].
ASFV encodes up to 200 polypeptides that can have complex and subtle interactions with the host cell to avoid the host defenses. These viral proteins promote the replication of ASF virus in infected cells and the subsequent spread of the virus for further infections. However, there is still a lack of information on the role of many ASFV-encoded proteins in infected host cells. As cell cycle regulation is usually altered in continuous cell lines, the effects of viral replication on it may not necessarily be the same during natural infection in G0 cells (porcine macrophages) and in actively proliferating cells [5]. Thus, enzymes which are involved in nucleotide metabolism are non-essential for virus replication in dividing tissue culture cells, but their deletion reduces virus replication in macrophages [6,7]. Thus, we can assume different transcriptional activity of ASFV in various types of cells, depending on their proliferative status.
Animals
Twelve healthy pigs (Landrace breed) of the same age (three months old) and weight (30-32 kg) were used for this study. Ten pigs were infected by intramuscular injection, and two pigs were used as uninfected controls, with intramuscular injection of physiological solution.
Virus
The ASFV Armenia 2007 (Arm07) strain was used in all studies. The titer of ASFV for each intramuscular injection was 10^4 50% hemadsorbing doses (HADU50)/mL [6]. Virus titration was performed and expressed as log10 HADU50/mL for non-adapted cells. Animal experiments were carried out in accordance with the Institutional Review Board/Independent Ethics Committee of the Institute of Molecular Biology of NAS RA (reference number IRB00004079; 28 May 2018).
The animals were divided into five groups. Two animals in each group were euthanized on the second, third, fourth, sixth, and seventh day post-infection (dpi). Infections were carried out using ASFV (genotype II) distributed in the Republic of Armenia and the Republic of Georgia. The titer of ASFV for each intramuscular injection was 10^4 50% hemadsorbing doses (HADU50)/mL. Virus titration was done as described previously and expressed as log10 HADU50/mL for non-adapted cells [14].
During necropsy, the inner organs were carefully removed and fixed in a 10% buffered formalin solution (pH 7.2) for histopathology studies.
Alveolar Macrophage Culture
Three-month-old pigs were euthanized and their lungs were removed. Cells obtained during bronchoalveolar lavage (BAL) were suspended in sterile Hank's balanced salt solution. They were centrifuged at 600× g for 10 min and then resuspended in RPMI 1640 with 5% fetal bovine serum (FBS) at an initial cell concentration of 3 × 10^5 cells per mL. After incubation for 3 h at 37 °C in a humidified CO2 incubator, the adhered cells (porcine alveolar macrophages, PAMs) were washed three times with RPMI to remove contaminating non-adherent cells and then incubated in RPMI 1640 with 10% FBS [15].
Nucleotides in Culture Medium
To evaluate the effect of the presence of nucleotides on the transcriptional activity of ASFV genes, PAMs were cultivated in the presence of each base, nucleosides, and nucleotides at 1 mM concentration [16]. Nucleotides were added to PAMs (24 h) immediately before virus infection.
Histopathological Studies
Tissue samples were fixed in 10% buffered formalin solution (pH 7.2) for a minimum of 24 h. Then, the hearts were sliced into approximately 0.8-1.0 cm thick sections and fixed in fresh formalin for at least seven more days. After fixation, samples were dehydrated through a graded series of alcohols, washed with xylol, and embedded in paraffin wax by a routine technique for light microscopy (Microm HM 355, Thermo Fisher Scientific, Waltham, MA, USA).
Paraffin-embedded samples were cut (5 µm) and stained with a trichromic stain according to [18], with the modification previously described by [19].
Safranin and Indigo-Picro-Carmine Staining Technique
For visualization of cell cycle deviations, specimens were treated with the combined trichromic stain (using safranin and indigo-picro-carmine). This method allows simple estimations of cell cycle kinetic parameters in cultured and biopsy specimens. The technique was used according to the author's data; however, the fixation was carried out with 10% formalin (4-h fixation) according to [19]. This modification resulted in more pronounced differential staining of the cell cycle stages. This stain was used both for histological sections and for PAM cells.
Image Cytometry of the PAM/Cell Cycle Phase
All cells, either uninfected or infected, were fixed and stained with the Feulgen-Naphthol Yellow protocol. In brief, DNA hydrolysis was performed in 5 N HCl for 60 min at 22 °C. After rinsing with sulfite solution and distilled water, samples were put directly into a solution of 0.1% Naphthol Yellow S in 1% acetic acid (pH 2.8) for 30 min; they were then de-stained with 1% acetic acid three times for 0.5 min, then samples were dehydrated three times with tert-butanol and treated with xylol for 5 min [20].
Phases of the cell cycle were determined by examining Feulgen-stained PAM cultures by image microspectrophotometry. The DNA content of each sample was measured by a computer-equipped microscope-photometer SMP 05 (OPTON), and images were collected at the 575 nm wavelength. The quantity of DNA was first measured by image cytophotometry in conventional units (C.U.) [21,22]. In cytometric quantification of nuclear DNA staining, the integrated optical density (IOD) is equivalent to DNA content. For quantification, DNA IOD values were evaluated by comparison with those from cells of known DNA content. Therefore, the DNA content is expressed on a "c" scale, in which 1c is half (haploid) of the nuclear DNA content in cells from a normal (non-pathological) diploid population in the G0/G1 cell cycle phase. Non-stimulated porcine lymphocytes were used as standards.
Mitoses in Feulgen-stained slides were viewed at 400× magnification. More than 30 experiments and at least 20,000 cells were examined.
DNA Quantification
In order to measure DNA content (in conventional units) by image scanning cytometry, a computer-equipped microscope-cytometer SMP 05 (OPTON) was used at the 575 nm wavelength and at 1250× magnification. DNA content was expressed on a "c" scale, in which 1 c is the haploid amount of nuclear DNA that occurs in normal (non-pathologic) diploid populations in the G0/G1 phase. The DNA content of unstimulated swine lymphocytes was used as a diploid standard for measurements. DNA measurements identify nuclei as aneuploid if they deviate more than 10% from 2 c, 4 c, 8 c, or 16 c; i.e., if they are outside of the 2 c ± 0.2, 4 c ± 0.4, 8 c ± 0.8, or 16 c ± 1.6 ranges.
The variability of DNA content in unstimulated lymphocytes did not exceed 10%.
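A minimal sketch of this classification rule follows: a nucleus is assigned to a euploid class when its measured DNA content lies within ±10% of 2 c, 4 c, 8 c, or 16 c, and is called aneuploid otherwise; the sample values are invented.

```python
# Ploidy classification with the +/-10% tolerance described above.
def classify_ploidy(dna_c):
    """Return the matching euploid class (in c units) or None if aneuploid."""
    for ploidy in (2, 4, 8, 16):
        if abs(dna_c - ploidy) <= 0.1 * ploidy:   # e.g. 2c +/- 0.2, 4c +/- 0.4
            return ploidy
    return None

for value in (2.1, 3.0, 4.35, 8.5, 15.2):         # illustrative measurements
    cls = classify_ploidy(value)
    print(value, "->", f"{cls}c" if cls else "aneuploid")
```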
ELISA Analysis
Protein levels of cyclin A and cyclin E were measured in PAM culture lysates using sandwich ELISA kits (MyBioSource, San Diego, CA, USA, MBS747375-Cyclin A; MBS753680-Cyclin E). Virus quantification was performed by detection of ASFV antigen (p72) using an ELISA (INgezim PPA DAS 2.0) kit. All experiments were done with the BioTek Epoch 2 microplate spectrophotometer.
Gene Expression Analysis by Quantitative Real-Time PCR
To determine ASFV expression in PAM cell lines, total viral RNA/DNA was isolated using the HiGene™ Viral RNA/DNA Prep Kit (BIOFACT) following the manufacturer's instructions. RNA/DNA samples were then reverse transcribed with a REVERTA-L kit (AmpliSens Biotechnologies).
Quantitative real-time PCR was performed as previously described [23,24] on an Eco Illumina Real-Time PCR system. For alignment of the cDNA plots, Cq values were rescaled after comparison with viral genome copy amounts and shifted in absolute amounts along the y-axis for better visualization.
Statistical Analysis
All in vitro experiments were conducted in triplicate. The significance of virus-induced changes was evaluated by a two-tailed Student's t-test for parametric values and a Mann-Whitney U-test for non-parametric values; p values < 0.05 were considered significant. The SPSS version 17.0 software package (SPSS Inc., Chicago, IL, USA) was used for statistical analyses.
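As an illustrative sketch of this testing scheme, here expressed with scipy.stats rather than SPSS and using made-up data, the following applies both tests to a hypothetical infected-versus-control comparison:

```python
# Two-tailed Student's t-test (parametric) and Mann-Whitney U-test
# (non-parametric), with p < 0.05 taken as significant. Data are invented.
from scipy import stats

infected = [4.1, 4.8, 5.0, 4.6, 5.3, 4.9]     # hypothetical triplicate readouts
control = [3.2, 3.0, 3.5, 3.1, 3.4, 3.3]

t_stat, p_t = stats.ttest_ind(infected, control)
u_stat, p_u = stats.mannwhitneyu(infected, control, alternative="two-sided")
print(f"t-test p={p_t:.4f}, Mann-Whitney p={p_u:.4f}",
      "significant" if min(p_t, p_u) < 0.05 else "not significant")
```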
Cell Cycle Changes in Tissues of ASFV Infected Pig
To investigate the effect of ASF infection on the progression of the cell cycle in vivo, liver samples from infected animals were obtained on the 2nd, 3rd, and 4th days after intramuscular injection. It is well known that hepatocytes keep the ability to re-enter the cell cycle. This results in another important characteristic of hepatocytes: physiological polyploidy. That is why liver histopathology serves as one of the best indices to detect deviations in the cell cycle [25]. Figure 1A illustrates a visualization of S cells in the healthy livers of three-month-old piglets, and Figure 1B shows S cells in the livers of ASFV-infected piglets. The distribution of hepatocyte nuclei by ploidy class shows a significant difference between healthy (Figure 1C) and ASFV-infected liver cells (Figure 1D). As follows from Figure 1A (visualized by safranin and indigo-picro-carmine trichromic stain), a minority of hepatocytes were in the S stage (shown by a triangle), but on the third day post infection by ASFV, a majority of cells were in the S stage (Figure 1B, S cells arrowed). The distribution of hepatocyte nuclei by ploidy class in non-infected (Figure 1C) and infected (Figure 1D) tissues revealed a shift of the histogram to the right, which indicates the synthesis of DNA in the nuclei of hepatocytes under the influence of ASFV. In healthy hepatocytes, 10% of the cells were in the S stage (Figure 1C), but after ASFV infection more than 71% of cells were in the S stage (Figure 1D). At the same time, polyploid cells began to appear in significant quantities (Figure 1D). Similar processes occur in quiescent cells in non-proliferating tissues, such as cardiomyocytes (Figure 1F, healthy heart section; Figure 1G, porcine heart on the third day post infection).
It is well known that detectable amounts of ASF virus in internal organs are usually observed one to two days after infection, peaking three to four days after infection. Figure 1G presents data describing the viral load in the examined organs compared to the spleen on the third day post-infection.
Nuclear DNA Synthesis in Infected PAM
Measurements of DNA content in PAM were performed in order to study the cell cycle changes during ASFV infection in vitro. Changes in the amount of DNA in PAM nuclei were evaluated by cell cytophotometry of Feulgen-stained cells. This technique accurately detects changes in DNA amounts (starting from 5% of the total nuclear DNA). As follows from Figure 2A, in a control population of PAM, only cells with a normal DNA distribution are present in one peak, and the DNA histogram corresponds to a normal diploid cell population. During the first 8 h of infection, no difference was found in the amount of nuclear DNA compared to the control (Figure 2B, 3 hpi; Figure 2C, 8 hpi). The first evidence of additional synthesis of nuclear DNA occurs after 10 hpi (Figure 2D); however, these data are not significant (p < 0.1). The first significant increase of nuclear DNA in infected cells occurs at 12 hpi (Figure 2E) and continues at 14 hpi (Figure 2F) and 16 hpi (Figure 2G).
It is well known that dexamethasone inhibits DNA synthesis by inducing arrest of the cell cycle in G0/G1 [26,27]. To evaluate the role of cell cycle progression in virus replication, we investigated viral levels under the influence of dexamethasone. Dexamethasone in infected PAMs partially stops the synthesis of DNA under the influence of ASFV (Figure 3A). Dexamethasone-dependent PAM arrest in the G0 or G1 phases leads to a significant decrease in the production of ASFV (Figure 3B). The change in viral amount under the influence of dexamethasone was measured by quantification of the levels of the viral protein p72 (Figure 3C). These data revealed a significant decrease in protein production, thereby confirming a decrease in viral replication when the cell cycle is blocked in the G0/G1 phases.
ASFV (Arm07) Increases Expression of Cyclin A and Cyclin E in PAM Cells
In our previous works it was shown that the PAM entered the S phase and started DNA synthesis upon exposure to the ASF virus [28]. To study cellular proteins, we measured the levels of cyclin A and E in PAM lysates. In intact PAMs, cyclins are not synthesized. During the first 2 h after infection, the amounts of both cyclins were comparable to the control values. An increase in the synthesis of cyclins was observed at a later time. Infection with the ASF virus leads to the synthesis of cyclin A in PAM lysates ( Figure 4A) in significant amounts within three-time periods of 4 hpi (p < 0.05). The infection of PAM with ASFV also results in a significant but short-term increase in the content of cyclin E ( Figure 4B) at 3 hpi (p < 0.05). There was then a rapid decrease in the level of both cyclins to background values.
Measurement of the Transcriptional Activity of Viral Genes Involved in DNA Synthesis during the Pre-Synthesis of Cellular DNA
According to the accepted classification of ASFV genes [29], DNA replication divides the infection cycle into an early phase, before cellular DNA replication begins, and a late phase, after DNA replication begins. Undoubtedly, only those genes that are expressed before cellular DNA replication can participate in the regulation of the cell cycle. Based on the time required for macrophages to make the transition from the G0 to the G1 phase, and then from G1 to the S phase, we investigated the transcriptional activity of several viral genes probably involved in the regulation of DNA synthesis. The selection of target genes was guided by two requirements: their expression must begin before 8 hpi, and their function must relate (directly or indirectly) to DNA synthesis. Several genes of ASFV have been implicated in DNA synthesis. One of the most important is K196R, involved in de novo nucleotide synthesis [8]. Although low levels of K196R transcripts were detected in proliferating VERO cells, we investigated the transcription pattern of this gene in G0 PAM. To relate infectious virus particles to genome copy numbers, standard 10-fold dilutions of the infecting virus, expressed in HADU50/mL, were used. These dilutions (10, 100, and 1000 HADU50/mL) were measured by rtPCR to obtain viral genome copy numbers corresponding to the HADU dilutions (Figure 5).
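A hedged sketch of how such a dilution series can serve as a qPCR standard curve follows: log10 genome copies are regressed on the measured Cq of the standards, and unknowns are interpolated; the standard copy numbers and Cq values below are invented for illustration, not values from the study.

```python
# qPCR standard curve: fit log10(copies) vs. Cq, then convert unknown Cq values.
import numpy as np

std_log_copies = np.array([3.0, 4.0, 5.0])   # assumed log10 copies per reaction
std_cq = np.array([30.1, 26.8, 23.4])        # assumed Cq of the standards

slope, intercept = np.polyfit(std_cq, std_log_copies, 1)   # linear fit

def copies_from_cq(cq):
    return 10 ** (slope * cq + intercept)

print(f"{copies_from_cq(25.0):.3g} genome copies at Cq = 25")
```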
As shown in Figure 6A, an increase in transcripts of K196R was observed at 2, 5, and 8 hpi. The transcription level of R298L was then measured (Figure 6B). This gene encodes a serine/threonine protein kinase; such enzymes play an important role in the regulation of cell proliferation [9]. We have shown that the transcription of R298L is activated starting from 6 hpi and continues until the end of the experiment (9 hpi). Next, the transcription level of A240L was measured (Figure 6C). It is a thymidylate kinase involved in thymidine 5'-diphosphate synthesis [10]. Measurement of the mRNA (cDNA) levels of the gene showed transcriptional activity between 5 and 8 hpi in infected PAM. Also, mRNA levels of the ASFV ribonucleoside-diphosphate reductase large (F778R, Figure 6D) and small (F334L, Figure 6E) subunits [30] were examined in infected PAM over the course of viral infection. The transcriptional activity of F778R was detected between 5 and 8 hpi, and that of F334L at 8-9 hpi. Therefore, the transcription of the large subunit precedes that of the small subunit. As shown in Figure 6F, dUTP nucleotidohydrolase (dUTPase) mRNA (E165R) levels in infected PAM were significantly increased at 5-9 hpi. The cell cycle is under the control of viral genes (at least partially), since the cellular cyclins are quickly inactivated, and the viral serine/threonine protein kinase gene begins to be actively transcribed starting from 6 hpi.
Changes in Transcriptional Activity of the Viral Genes Involved in DNA Synthesis Depend on the Presence of Nucleotides in Medium
It is well known that nucleotides are essential for many biological activities and are constantly generated de novo in all cells. Increased nucleotide synthesis is required for DNA replication and RNA production to enable protein synthesis as cells proliferate at different stages of the cell cycle, during which these actions are regulated at several levels [31]. Next, we studied the transcriptional activity of the same viral genes in a medium containing nucleotides (thus simulating the metabolic activity of a cell in the G1 phase compared to a cell in the G0 phase). As shown in Figure 7, the presence of nucleotides in the culture medium significantly changed the transcriptional profile of the viral genes associated with nucleotide metabolism (Figure 7A, K196R; Figure 7B, R298L; Figure 7C, A240L; Figure 7D, F334L; Figure 7E, F778R; Figure 7F, E165R).
Discussion
We have shown that the ASF virus manipulates the cell cycle both in G1 (cells of various pig tissues, for example, liver) and in G0 cells: PAM or porcine cardiomyocytes. At the early stage of infection, the cells exit the G0 phase and enter the G1 phase, followed by a transition to the S phase, and then to the G2. At a later stage, the cells are blocked in the G2 phase and the transition to the M phase is prevented. This manipulation leads to the progression of the cell cycle from G0 to G1 and then the S stage. It helps the virus to acquire the necessary nucleotides from the host cell.
This has been shown both in vitro and in vivo. The increase in DNA content in various cells and tissues in the acute form of ASF was previously discussed by us [14,28]. In animals, it is undoubtedly associated with viremia, which is observed by the end of day 1 after infection with virulent strains of the virus and, consequently, results in the infection of internal organs. Virulent strains of ASFV were detected in the inner organs, usually at the same time [32]. Despite the fact that the virus primarily infects various macrophages, other cells are also vulnerable to direct infection with this virus in in vivo experiments [33]. Therefore, we are inclined to consider that one of the main reasons for the accumulation of DNA in the nuclei of various pig cells in the acute form of ASF is direct damage of cells by the virus.
Similar processes take place under in vitro conditions. The PAM cell culture is characterized by initially resting cells in the G0 phase. However, soon after the onset of infection, DNA synthesis begins in the nuclei of these cells. Therefore, in the early stages of infection, the virus unblocks cells from the G0 phase and turns on the synthesis of cellular DNA (starting at 10 hpi). The unblocking of PAMs occurs by activating cellular mechanisms (activation of cyclin synthesis), which is probably one of the earliest cytopathological characteristics of ASF infection. The obtained data showed that cyclins A and E accumulate in an infected cell by 3-4 hpi, after which their synthesis is abruptly interrupted. Only viral genes expressed before or at that time can affect the synthesis of these cellular proteins. ASFV early gene expression in infected cells is detectable as early as 1 hpi; in general, transcripts are abundant at 2 hpi, with a plateau in accumulation at 2-6 hpi [29]. However, only two genes have been identified as immediate early genes that accumulate up to 3 hpi. Those genes are L270L, a member of multigene family 110, and I215L, a protein similar to the ubiquitin-conjugating enzymes [29]. Karger et al. (2019) [34] showed the diffuse distribution of pI215L throughout the cytoplasm and nucleus and suggested that this might be a reflection of the ubiquitination of viral and host proteins. Data described previously by [35] on the enzymatic characteristics of I215L showed that it acts as a ubiquitin-conjugating enzyme. The authors [35] observed via qPCR that the ASFV I215L gene is actively transcribed from 2 hpi. Therefore, the expression of this gene coincides with the disappearance of the cyclins, and we can assume that I215L is probably involved in the suppression of cyclin synthesis in PAM. It is worthy of note that the duration of G1 varies considerably between different cell types under in vitro conditions. Non-differentiated cells remain in the G1 phase for only two to three hours, whereas differentiated cell lines remain in G1 for 8-12 h or more [36,37]. This timing is consistent with our data (described in the Results) on the onset of DNA biosynthesis in PAM cells starting from 10 hpi.
We can therefore conclude that the synthesis of cellular DNA starts at 10-12 hpi. We cannot exclude a viral component in the synthesis of this DNA; however, given the huge volume of nuclear DNA synthesis (at least three to four orders of magnitude greater than the synthesis of viral DNA), the observed increase in the amount of DNA in the cell nucleus is undoubtedly of cellular, not viral, origin. DNA synthesis in the nuclei of infected PAM does not lead to mitosis. Despite numerous studies, we were unable to identify a single reliable mitosis in PAM under the influence of the ASF virus. As mentioned above, reliable accumulation of DNA in the nuclei of macrophages at 14-16 hpi (Figure 2) was described. Therefore, we can assume that the activation of the cell cycle occurs only up to the G2 phase, after which cell cycle arrest and the prevention of the transition of cells to the M phase follow. A previous study [38] suggests that the ASFV protein p17 can cause cell cycle arrest and affect the expression of cyclins, including cyclin A and cyclin E. However, the inhibition of the expression of cyclins in PAM occurred earlier than the synthesis of p17. Therefore, the blockage of cells in the G2 phase is at least a two-stage process and begins at an early stage of viral infection. It is known that one of the main functions of ubiquitin is the proteolytic degradation of proteins labeled with polyubiquitin chains (in which subsequent ubiquitin units are attached to the side amino groups of the previous ubiquitin molecule) by the 26S proteasome.
It is known that the ASF virus inhibits the translational activity of an infected cell in the early stages of infection, as early as 8 hpi [39,40]. This explains the lack of an increase in cellular cyclin levels after 6 h from the onset of infection. Nevertheless, ASFV-infected cells successfully pass from the G0 to the G1 stage of the cell cycle. This happens under the influence of viral proteins.
In the nucleus of an infected cell, DNA synthesis starts at 10-12 hpi, and therefore we investigated the transcriptional activity of virus genes directly involved in DNA synthesis or genes involved in early transcription before replication.
Due to the limited pools of intracellular dNTPs, many large DNA viruses encode enzymes involved in nucleotide metabolism in order to increase the precursor pools of dNTPs required for viral DNA replication [5]. The primary target for ASFV replication is non-dividing macrophages with low levels of dNTPs. This is evidenced by the virus-encoded thymidine kinase (K196R), which is non-essential for virus replication in dividing tissue culture cells, but whose deletion dramatically reduces virus replication in macrophages [5,6,11]. Moreover, deletion of the thymidine kinase gene induces complete attenuation of the ASFV genotype II, and the activity of this gene is required both for efficient replication in porcine macrophages and for virulence in swine. In other words, for replication in vivo, in primary target cells, wild strains of the virus require activation of this gene [8].
We studied the transcriptional activity of the ASFV thymidine kinase (TK) gene (Figure 6C). TK is an enzyme that catalyzes the conversion of thymidine to thymidine monophosphate and then to thymidine triphosphate. The latter is incorporated into DNA; since thymidine can only be incorporated into DNA in a phosphorylated form, thymidine kinase plays a key role in the process of DNA synthesis. Our data now explain at what time the thymidine kinase of the virus is required for viral replication in G0 porcine macrophages.
The ASFV TK gene has been shown to be nonessential for the growth of ASFV in cultured hamster and monkey cells [41,42]. The inactivation of the TK gene in poxviruses and herpesviruses showed the gene to be nonessential for growth in cultured cells [6].
According to a study described previously by [43], the R298L gene belongs to the late genes, so its transcription starts after viral DNA replication. However, our data showed that the expression of R298L starts at 6 hpi and continues for a long time. This difference is most likely caused by the different types of cells (actively proliferating VERO cells versus G0 PAMs) used in the research. Similar genes in other viruses stimulate the infected cell cycle to support viral DNA synthesis [44]. Therefore, we can assume similar functions for the genes A240L (Figure 6A) and R298L (Figure 6B). Ribonucleoside diphosphate reductase (genes F778R, Figure 6D, and F334L, Figure 6E) is an enzyme that catalyzes the formation of deoxyribonucleotides, and its blocking inhibits DNA synthesis [5]. The E165R gene (Figure 6F) encodes an enzyme that is involved in pyrimidine metabolism.
The presence of nucleotides in the culture medium dramatically changed the transcription profile of viral genes associated with nucleotide metabolism, completely turning them off at an early stage of viral infection.
Summarizing the data obtained on the change in the transcriptional activity of the virus genes in the presence of nucleotides, we can conclude that the genes responsible for the synthesis of nucleotides and the genes affecting the cell cycle behave differently.
The genes affecting the cell cycle simply remain disabled until the beginning of cellular DNA synthesis (8-9 hpi), while the genes responsible for the synthesis of nucleotides are, in the presence of nucleotides, turned on later, and their transcriptional activity is lower than that during virus replication in an environment without nucleotides.
The obtained data with the highest degree of probability can be interpreted as manipulation of the cell cycle by the virus in order to gain access to the nucleotides synthesized by the cell.
By blocking cells (PAMs) in the G0 phase, we deprive the virus of the source of nucleotides necessary for the synthesis of viral DNA. Earlier, it was shown that glucocorticoids, and in particular dexamethasone, are able to block the cell cycle in cells of monocytic origin [45,46]. It is well known that glucocorticoids inhibit the proliferation of various types of cells, regardless of their origin. In vitro studies in a variety of cell lines have indicated that glucocorticoids produce a reversible G1 block in the cell cycle. The analysis of DNA content revealed that glucocorticoids arrest cells before the S phase [26,27]. Indeed, dexamethasone partially stops the increase of cellular DNA in infected PAM, and this coincides with a decrease in the level of viral replication.
Thus, our data revealed that the ASF virus manipulates the cell cycle in infected cells, first by unblocking cells from the G0 phase and sequentially transferring them to the G1 and S phases, and then blocking them in the G2 phase. At least one of the reasons for such manipulation is the virus's need for nucleotides synthesized by the infected cell. Our assumption is also supported by the data showing that the activation of the viral thymidine kinase gene occurs in G0 cells and is absent in actively proliferating cells. Overall, we can conclude that the S phase of the cell cycle seems to provide a comfortable environment for successful ASFV replication and completion of the infection cycle (Moore et al.). This phenomenon is also observed in some DNA viruses, e.g., Herpesviridae [47].
"Biology"
] |
Microglia in frontotemporal lobar degeneration with progranulin or C9ORF72 mutations
Abstract Objective To identify clinicopathological differences between frontotemporal lobar degeneration (FTLD) due to mutations in progranulin (FTLD‐GRN) and chromosome 9 open reading frame 72 (FTLD‐C9ORF72). Methods We performed quantitative neuropathologic comparison of 17 FTLD‐C9ORF72 and 15 FTLD‐GRN with a focus on microglia. For clinical comparisons, only cases with high quality medical documentation and concurring diagnoses by at least two neurologists were included (14 FTLD‐GRN and 13 FTLD‐C9ORF72). Neuropathological analyses were limited to TDP‐43 Type A to assure consistent assessment between the groups, acknowledging that Type A is a minority of C9ORF72 patients. Furthermore, only cases with sufficient tissue from all regions were studied (11 FTLD‐GRN and 11 FTLD‐C9ORF72). FTLD cases were also compared to age– and sex–matched normal controls. Immunohistochemistry was performed for pTDP‐43, IBA‐1, CD68, and GFAP. Morphological characterization of microglia was performed in sections of cortex blinded to clinical and genetic information. Results FTLD‐GRN patients had frequent asymmetric clinical features, including aphasia and apraxia, as well as more asymmetric cortical atrophy. Neuropathologically, FTLD‐C9ORF72 had greater hippocampal tau pathology and more TDP‐43 neuronal cytoplasmic inclusions. FTLD‐GRN had more neocortical microvacuolation, as well as more IBA‐1–positive ameboid microglia in superficial cortical layers and in subcortical white matter. FTLD‐GRN also had more microglia with nuclear condensation, possibly indicating apoptosis. Microglial morphology with CD68 immunohistochemistry in FTLD‐GRN and FTLD‐C9ORF72 differed from controls. Interpretation Our findings underscore differences in microglial response in FTLD‐C9ORF72 and FTLD‐GRN as shown by significant differences in ameboid microglia in gray and white matter. These results suggest the differential contribution of microglial dysfunction in FTLD‐GRN and FTLD‐C9ORF72 and suggest that clinical, neuroimaging and pathologic differences could in part be related to differences in microglia response.
Introduction
Frontotemporal lobar degeneration (FTLD) is clinically, neuropathologically and genetically heterogeneous. Among clinical presentations are changes in behavior, personality, and language. Some patients also have motor neuron disease. 1 The most common neuropathologic findings in FTLD are tauopathies or TDP-43 proteinopathies. 2,3 The most common genetic causes of FTLD-TDP are mutations in progranulin (GRN) 4,5 and chromosome 9 open reading frame 72 (C9ORF72). 6,7 Mutations in GRN account for about one fourth 8,9 and C9ORF72 for about one half 10 of familial FTLD-TDP. Clinically, almost all patients with mutations in GRN have frontotemporal clinical syndromes, with only rare reports of motor neuron disease. 11 In contrast, patients with C9ORF72 hexanucleotide repeat expansions often have motor neuron disease with or without clinical features of frontotemporal dementia. 12 The neuropathologic features of FTLD-TDP are focal cortical atrophy of frontal and temporal lobes with variable involvement of parietal lobe, as well as cytoplasmic inclusions in neurons and glia that are immunoreactive for TDP-43. 13 The relative density and distribution of neuronal and glial inclusions, as well as dystrophic neurites permits subtyping of FTLD-TDP into Types A, B, and C, as well as less common subtypes. 14 FTLD-GRN is almost always Type A, while FTLD-C9ORF72 can be Type B and less often Type A. 15 The pathogenesis of neurodegeneration in FTLD-TDP remains poorly understood. Several pathomechanisms have been hypothesized for FTLD-C9ORF72, including toxic gain of function of RNA, protein aggregation, and impairment of nucleocytoplasmic transport. 16,17 Microglial dysfunction has recently been suggested to play a role, based upon studies of C9orf72 knock-out mice. 18 Mechanisms thought to play a role in FTLD-GRN are based on the fact that progranulin has neurotrophic properties, while the proteolytic products of progranulin, the granulins, may be proinflammatory modulators. 19 Given that FTLD-C9ORF72 and FTLD-GRN may have different pathogenic mechanisms, we aimed to study clinical and neuropathological differences that might be related to this fact. Several previous studies have addressed clinicopathological characteristics of FTLD-TDP; however, few have stringently controlled for subtype. In this study, we hypothesize that different clinical features between GRN and C9orf72 could be driven by a differential role of microglia in neuroinflammation. To address this hypothesis, we measured microglial phenotypes based on density and morphology in cases of FTLD-GRN and FTLD-C9ORF72 with Type A TDP-43 pathology.
Case materials
All cases were submitted for diagnostic studies and research to the brain bank for neurodegenerative disorders at Mayo Clinic in Jacksonville, Florida. We identified 17 cases of FTLD-C9ORF72 with Type A TDP-43 pathology and 15 cases of FTLD-GRN. All cases were neuropathologically classified by a single neuropathologist (DWD), and genotyping was performed on DNA extracted from frozen brain tissue. For clinicopathological studies, cases were included only if they had good quality medical documentation and there was diagnostic concurrence by a minimum of two neurologists. The final set of cases for clinical comparisons was 13 FTLD-C9ORF72 and 14 FTLD-GRN. For quantitative microglial morphological studies, tissue from all brain regions had to be available for assessment. The final set of cases for pathologic studies was 11 FTLD-C9ORF72 and 11 FTLD-GRN. A summary of the cases included in clinical and pathologic analyses is provided in Table 1, and additional details are provided in Table S1.
Clinical assessment
Cognitive and psychiatric features were abstracted from medical records of neuropsychological and psychiatric evaluations. The clinical diagnosis of included cases fulfilled the criteria for behavioral variant frontotemporal dementia (bvFTD), 20 progressive nonfluent aphasia (PNFA), 21 or corticobasal syndrome (CBS). 22 Clinical asymmetry was considered present for cases with diagnosis of PNFA and CBS or by asymmetry in motor findings from neurologic examinations.
Genetic analyses
Frozen brain tissue was used for genotyping. All FTLD-C9ORF72 cases had hexanucleotide repeat expansions in C9ORF72 detected using a repeat-primed polymerase chain reaction method for expansions of the GGGGCC hexanucleotide, 6 and for most cases, the expansions were confirmed with Southern blotting. 23 FTLD cases with GRN mutations were confirmed using Sanger sequencing. 5,9
Microscopic pathology
Samples from the middle frontal gyrus (mFCtx), superior temporal gyrus (sTCtx), inferior parietal cortex (iPCtx), primary motor cortex (MCtx), hippocampus, basal ganglia, and medulla were cut at 5 µm thickness, mounted on glass slides and stained with hematoxylin and eosin (H&E). Thioflavin-S fluorescent microscopy was performed to evaluate senile plaques and neurofibrillary tangles. A Braak neurofibrillary tangle stage and a Thal amyloid phase were derived from quantitative data, as described previously. 24 The neuropathologic assessment included immunohistochemistry for phospho-TDP-43 or an antibody to a neoepitope in the midregion of TDP-43. 25 TDP-43 Type A was assigned to each case using features described in the harmonized classification. 14
Image analysis
Digital microscopy methods have been previously described. 28 Briefly, immunostained sections were scanned on an Aperio ScanScope XT slide scanner (Leica Biosystems Aperio, USA) producing high resolution digital images. Analyses were performed using Aperio ImageScope software in defined regions of interest (ROIs). A color deconvolution algorithm was used to count the number of pixels that had strong immunoreactivity. The output was percentage of strong positive pixels relative to the total area of the ROI. For digital analysis of the hippocampus, several ROIs were defined, including the entire hippocampus, the dentate fascia, CA4 sector, and the parahippocampal gyrus. For microvacuolation analyses, H&E sections of mFCtx, sTCtx, and parahippocampal gyrus were evaluated. The percentage of the area occupied by microvacuolation was assessed in cortical layer II as the number of weak positive pixels divided by the total number of positive pixels according to previously published methods. 29 Severity of microvacuolation was also assessed by semiquantitative scores (none, mild, moderate or severe), and these results were similar to those obtained by image analysis (Fig. S1).
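The strong-positive-pixel metric can be approximated outside the Aperio software with open-source tools. The sketch below, assuming a DAB-based immunostain and the scikit-image `rgb2hed` color deconvolution (the 0.3 cutoff and the use of the DAB channel are illustrative assumptions, not the parameters of the Aperio algorithm), computes the percentage of strongly immunoreactive pixels in an ROI:

```python
import numpy as np
from skimage.color import rgb2hed

def percent_strong_positive(roi_rgb, threshold=0.3):
    """Approximate an Aperio-style '% strong positive pixels' readout.

    roi_rgb   : HxWx3 RGB image of the region of interest
    threshold : illustrative cutoff on the DAB channel for 'strong' staining
    """
    hed = rgb2hed(roi_rgb)      # separate Hematoxylin / Eosin / DAB stains
    dab = hed[:, :, 2]          # the DAB channel carries the immunostain
    strong = dab > threshold    # pixels with strong immunoreactivity
    return 100.0 * strong.sum() / strong.size
```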
For measurement of cortical thickness in mFCtx and sTCtx, the ImageScope ruler tool was used to draw a line perpendicular to the pial surface, extending from the pial surface to the gray-white junction.
Quantitative analysis of microglial phenotypes
Digital image analysis was used to assess microglial density based on signal intensity in cell bodies and processes. To further investigate the role of microglia in FTLD-TDP, we performed qualitative morphologic characterization of different microglial phenotypes using the Aperio counting tool for manual labeling of IBA-1-positive or CD68-positive microglia in fixed-size ROIs. Digital image analysis and manual counting of IBA-1-stained sections gave comparable results (Fig. S2). Individual microglial subpopulations were assessed in mFCtx, a severely affected region, and compared to MCtx, a minimally affected region, and to the subcortical white matter. Microglia were categorized as ramified, elongated, rod-shaped or ameboid (Fig. 1). Dystrophic microglia had evidence of fragmentation or beading of cell processes according to the criteria of Streit and Braak. 30 Elongated cells had a polarized appearance with minimal branching, rod-shaped nuclei, and process lengths between 75 and 150 µm. The numbers of IBA-1-positive ameboid, ramified, elongated, and dystrophic microglia were counted in each ROI. Cells were divided into CD68-low, which lacked immunostaining of cell processes but had perinuclear granules (Fig. 1), and CD68-high, which had immunostaining of cell processes. Microglial counts in defined ROIs were made by two observers, one blinded (SFR) to all clinical, genetic and pathologic information. Five neurologically normal individuals, matched for age and sex, were used as controls for this analysis (Table S2).
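The morphologic categories above can be encoded as a simple decision rule. The toy sketch below is only an illustration of the rubric; apart from the 75-150 µm elongated-cell process length taken from the text, every threshold and feature name is a hypothetical assumption rather than the study's actual criteria:

```python
def classify_microglia(process_len_um, n_branches, soma_circularity, fragmented):
    """Toy rubric mapping measured features to the categories in the text."""
    if fragmented:                                  # beading/fragmentation of processes
        return "dystrophic"
    if soma_circularity > 0.9 and process_len_um < 10:
        return "ameboid"                            # rounded soma, retracted processes
    if n_branches <= 2 and 75 <= process_len_um <= 150:
        return "elongated"                          # polarized, minimally branched
    if n_branches <= 2:
        return "rod-shaped"
    return "ramified"                               # default resting morphology
```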
Statistical analyses
SigmaPlot (Systat Software, San Jose, CA) was used for statistical analyses. Due to small sample sizes, non-parametric Kruskal-Wallis analysis of variance (ANOVA) on ranks was performed on quantitative measures and differences in median values were assessed. Post hoc pairwise comparisons were performed between groups using the Mann-Whitney rank sum test. For categorical data (sex, presence or absence of extrapyramidal signs, and symmetry/asymmetry of neuroimaging findings), a Chi-squared test was used to compare group differences. Fisher's exact test was used for pairwise categorical data when any count was less than 5. Correlative analysis was performed using Spearman rank order correlation. A P-value of < 0.05 was considered statistically significant. Given that FTLD-GRN tended to include more women and FTLD-C9ORF72 tended to be older, analyses were also adjusted for age and sex: a multiple logistic regression model was used when assessing clinical differences, such as memory impairment, and similar adjustments were made in multiple linear regression analyses for Braak stage. There were no statistically significant differences between the two groups with respect to median disease duration and male-to-female ratio, but there were more women in FTLD-GRN (57%) than FTLD-C9ORF72 (27%). This observation fits with a recent meta-analysis showing that FTLD-GRN is more common in women than men. 31
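The same analysis pipeline can be reproduced with standard Python libraries. The sketch below mirrors the tests named above on synthetic placeholder data (all arrays and group sizes are stand-ins, not the study's measurements):

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
# placeholder data standing in for per-case quantitative measures
grn, c9, ctrl = rng.normal(1, 1, 11), rng.normal(2, 1, 11), rng.normal(0, 1, 5)

# Kruskal-Wallis ANOVA on ranks, then pairwise Mann-Whitney post hoc
H, p_kw = stats.kruskal(grn, c9, ctrl)
U, p_mw = stats.mannwhitneyu(grn, c9)

# Chi-squared for categorical data; Fisher's exact when any count < 5
table = np.array([[8, 6], [3, 8]])             # e.g. sex by genetic group
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
odds, p_fisher = stats.fisher_exact(table)

# Spearman rank-order correlation between two quantitative measures
rho, p_rho = stats.spearmanr(grn, rng.normal(size=11))

# age- and sex-adjusted comparison of a binary clinical feature
n = 22
group = np.r_[np.zeros(11), np.ones(11)]       # 0 = GRN, 1 = C9ORF72
age, sex = rng.normal(65, 8, n), rng.integers(0, 2, n)
memory = rng.integers(0, 2, n)                 # 1 = memory impairment present
X = sm.add_constant(np.column_stack([group, age, sex]))
fit = sm.Logit(memory, X).fit(disp=0)          # adjusted group effect
```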
Asymmetric clinical syndromes and neuroimaging findings
The asymmetrical clinical syndrome of CBS was noted in three of 14 patients (21%) with FTLD-GRN, but in none of the FTLD-C9ORF72 patients. Available MRI or CT images (14 for FTLD-GRN and 11 for FTLD-C9ORF72) showed asymmetrical cortical atrophy in only one patient with FTLD-C9ORF72, but in seven (50%) patients with FTLD-GRN (Table 2).
Language disorders
Progressive aphasia (four PNFA and two progressive aphasia not otherwise specified) was noted in six of 14 patients with FTLD-GRN, but in only three of 13 FTLD-C9ORF72 patients. One patient appeared to have logopenic aphasia, and one patient appeared to have semantic components. A receptive component could not be excluded confidently. The remainder clearly had progressive aphasia, but could not be further subtyped.
Amnestic clinical syndromes and extrapyramidal signs
Amnestic dementia was based on neuropsychological evaluations and was considered for patients in whom memory complaints were the predominant feature. Alzheimer disease (AD) was in the clinical differential diagnosis of some of these patients. Memory problems overshadowed other clinical features of FTLD, such as behavioral and language problems. Amnestic dementia was prominent in two patients (14%) with FTLD-GRN, but in eight patients (62%) with FTLD-C9ORF72. AD was the clinical diagnosis of four of the eight FTLD-C9ORF72 patients (Table 1). In FTLD-C9ORF72, prominent memory problems were noted in four patients (31%) with dementia with Lewy bodies (DLB), and these patients also had extrapyramidal signs and visual hallucinations. The frequency of extrapyramidal signs in the entire cohort was similar in FTLD-GRN and FTLD-C9ORF72 (Table 2).
Effects of age and Alzheimer type pathology on clinical features
To determine potential confounding variables accounting for observed differences in clinical syndromes and neuroimaging findings, we addressed possible contributing pathologic features, such as brain weight, Thal amyloid phase, Braak neurofibrillary tangle (NFT) stage, and presence of hippocampal sclerosis (HpScl). There were no differences between FTLD-GRN and FTLD-C9ORF72 for age at death, disease duration, brain weight, frequency of HpScl or Thal amyloid phase. On the other hand, the median Braak NFT stage was greater (P < 0.001) in FTLD-C9ORF72 than in FTLD-GRN. After adjusting for the effects of age at death and sex, a multiple linear regression model still showed a higher (P = 0.011) Braak NFT stage in FTLD-C9ORF72 compared with FTLD-GRN.
TDP-43 pathology
To assess differences in TDP-43 pathology, histological sections from the mFCtx, sTCtx, iPCtx and hippocampus were assessed and scored as present or absent. The density of phospho-TDP-43 pathology was also assessed with image analysis. There were no significant differences in TDP-43 pathology in MCtx (not shown); however, densities of TDP-43 pathology tended to be greater in FTLD-GRN than FTLD-C9ORF72 in mFCtx, sTCtx and iPCtx (Fig. 2). On the other hand, FTLD-C9ORF72 had greater TDP-43 density in the hippocampus than FTLD-GRN (P = 0.002). In subregions of the hippocampus, TDP-43 density was significantly greater in the dentate gyrus (P < 0.001) and CA4 sector (P < 0.001) in FTLD-C9ORF72 compared with FTLD-GRN.
Cortical thickness
We assessed cortical thickness in mFCtx and sTCtx. FTLD-GRN had significant atrophy of the mFCtx (P = 0.036) (Table 3). The cortical thickness in the sTCtx was not significantly different.
Neocortical microvacuolation
The degree of superficial microvacuolation was assessed using digital image analysis in the mFCtx and sTCtx. In most cases, at least some degree of microvacuolation was detected in both ROIs (Fig. 3). Quantitative analysis revealed that microvacuolation was more severe in FTLD-GRN than FTLD-C9ORF72, reaching statistical significance in mFCtx (P = 0.036).
Neuroinflammation
There were no significant differences in overall density of IBA-1, CD68 and GFAP-positive glial cells between FTLD-C9ORF72 and FTLD-GRN using digital image analysis in mFCtx, sTCtx, iPCtx, and MCtx. The only exception was more CD68-positive microglia in the hippocampus in FTLD-C9ORF72 (data not shown).
Discussion
In this study, we compared FTLD-GRN and FTLD-C9ORF72 matched for TDP-43 subtype (Type A) to exclude differences attributable to subtype. Although there are several clinical, neuropsychological, and radiological studies of FTLD-GRN and FTLD-C9ORF72, to our knowledge, this is the first direct clinicopathological comparison of FTLD-GRN and FTLD-C9ORF72 matched for TDP-43 type. In our clinical comparison, we found that FTLD-C9ORF72 more frequently had amnestic dementia, including antemortem diagnoses of AD or DLB, as well as more symmetrical and milder cortical atrophy compared with FTLD-GRN. Although amnestic deficits in FTD can be affected by executive dysfunction, 32 our observations are in line with previous reports, including those of Mahoney and coworkers and Simon-Sanchez and coworkers, who reported that about half of FTLD-C9ORF72 patients presented with memory impairment. 33,34 A previous study showed less tau pathology in FTLD-GRN compared with FTLD-C9ORF72. 27 In this study, we similarly found decreased tau densities in FTLD-GRN in the mFCtx and hippocampus compared with FTLD-C9ORF72. In line with this, Papegaey and coworkers reported a biochemical reduction of tau protein expression in the frontal cortex of FTLD-GRN. 35 In addition to tau pathology, we also found significantly greater TDP-43 pathology in the hippocampus of FTLD-C9ORF72 compared with FTLD-GRN. It is likely that hippocampal TDP-43 pathology and tau pathology both contribute to the amnestic phenotype in FTLD-C9ORF72.
FTLD-GRN patients more often had asymmetrical clinical presentations, such as PNFA and CBS. This is in line with previous reports, 36,37 including a study by Pickering-Brown and coworkers, who reported that PNFA was more common in FTLD-GRN (36%). 38 FTLD-GRN also had more severe cortical atrophy, more severe microvacuolation, and more ameboid microglia in cortical regions (but not in the hippocampus) compared with FTLD-C9ORF72. More severe and asymmetrical cortical atrophy, as well as greater TDP-43 pathology in mFCtx, sTCtx and iPCtx, may constitute the pathological substrate of aphasia.
In the brain, expression studies based upon RNA sequencing show that GRN expression is extremely high in microglia and sparse in neurons. 39 Loss-of-function mutations in GRN, leading to nonsense-mediated decay of mRNA, 9 are thought to be associated with microglial dysfunction. 40,41 There are only a few quantitative neuropathologic studies on neuroinflammation in FTLD-GRN and FTLD-C9ORF72. Lant and coworkers performed a semiquantitative analysis of CD68-positive microglia in FTLD-GRN (Type A), FTLD-C9ORF72 (Types A and B) and FTLD-MAPT in frontal and temporal cortices and found significantly more CD68-positive microglia in FTLD-MAPT than in FTLD-GRN and FTLD-C9ORF72, but no difference between the two genetic forms of FTLD-TDP. 42 In the present study, however, we assessed critically affected areas such as cortical layer II and also subtyped microglia based on morphology. We did not find any significant differences in rod-shaped or dystrophic microglia between FTLD-GRN and FTLD-C9ORF72, but showed that FTLD had significantly more dystrophic microglia compared to controls. Several studies highlight a role for rod-shaped and dystrophic microglia in neurodegenerative diseases. 30,43,44 The exact significance of these findings remains to be identified, but may be related to microglial dysfunction. On the other hand, FTLD-GRN had more microglia with condensed nuclei, which may suggest either increased vulnerability of microglia deficient in GRN or an increased turnover of macrophages.
Furthermore, FTLD-GRN had significantly more IBA-1-positive ameboid cells in layer II of mFCtx and in the deep white matter. This is consistent with a study by Woollacott and coworkers reporting a high density of ameboid microglia in the cortex and white matter of a single FTLD-GRN case. 45 These findings suggest a role for ameboid microglia in the pathogenesis of both white matter and gray matter damage in FTLD-GRN. FTLD-GRN also showed significantly more severe neocortical microvacuolation. The underlying molecular mechanism for microvacuolation is unknown, but synaptic damage has long been presumed to play a role. 46 Lui and coworkers showed that progranulin knockout mice have synaptic loss through defective synaptic pruning by microglia. 40 They also found increased IBA-1-positive microglia in the frontal cortex of FTLD-GRN compared with neurological controls. 40 In conclusion, we found distinct clinicopathological differences between FTLD-GRN and FTLD-C9ORF72 with Type A TDP-43 pathology. Clinical asymmetric phenotypes, such as PNFA or CBS, as well as asymmetric brain atrophy, were more common in FTLD-GRN. Early memory deficits and symmetric brain atrophy were more common in FTLD-C9ORF72. Our neuropathological analyses highlight differential neocortical microvacuolation and phagocytic microglial phenotypes between FTLD-GRN and FTLD-C9ORF72. Microglial dysfunction may be implicated in both mutations, but our data suggest different roles for neuroinflammation between FTLD-GRN and FTLD-C9ORF72. Additional experimental studies are warranted to better determine shared or distinct downstream mechanisms.
Table S2. Demographic and pathologic features of FTLD cases and normal controls used in microglial morphologic studies.
Figure S1. Comparison of manual microglial counts to image analysis of IBA-1 density.
Figure S2. Comparison of manual microvacuolation scores and vacuolation burden from image analysis.
"Biology"
] |
The discrete time multichannel three-dimensional probability CSMA with a monitoring function for IoT
Currently, with the rapid development of the Internet of Things (IoT), users demand increasingly personalized and diversified business categories, and the radio spectrum is increasingly strained. By exploiting the Transmission Control Protocol/Internet Protocol (TCP/IP) suite, a network achieves the objectives of intelligently identifying, locating, tracking, monitoring and managing objects. A multichannel random access mechanism maximizes the utilization of constrained spectrum resources and allocates scarce channel resources to disparate hierarchies according to business priorities with various quality of service (QoS) requirements. In this paper, we propose a MAC design that combines these features in one unit system under three-dimensional probability CSMA (Carrier Sense Multiple Access), analyzed with the average cycle method. Precise mathematical expressions for the system throughput are obtained by rigorous derivation under many circumstances. The correctness of the theory and model is demonstrated with simulation results, and its effectiveness is illustrated. Simulation results verify that the proposed algorithm improves the controllability of the system, the channel utilization, system security, and the reliability of packet transmission, and meets the different QoS requirements of different priorities.
Introduction
The Internet of Things can be considered an expansion of the Internet: application innovation is the core of its development, and user-experience-centered innovation is its soul. The IoT (Internet of Things) is an important part of the new generation of information technology [1]. By definition, the Internet of Things means "material objects connected to the Internet." "Things" refers to various information-sensing devices, such as radio frequency identification, infrared sensors, global positioning systems, laser scanners and other devices, which combine with the Internet to form a huge network. The aim is to have all items connected to the network to facilitate identification and management. This has two meanings: first, the core and foundation of the IoT is still the Internet, which it extends and expands; second, its client side extends to any goods, enabling the exchange and communication of item information [2].
The role of the MAC layer is to provide a fair, reliable and efficient scheduling mechanism to allocate radio channel resources; MAC protocol performance directly affects wireless channel utilization and the performance of the entire network [3]. It has long been an important and difficult research topic among scholars.
The most common MAC-layer protocol is carrier sense multiple access (CSMA), combined with a variety of other mechanisms.
There are many sites in a wireless communication network, and channel resources are limited. If sites send messages without any coordination, the probability that information packets collide increases greatly [4]. Under the CSMA (Carrier Sense Multiple Access) protocol, a site first listens to the status of the channel and then decides whether to send a message [5]. With this mechanism, the probability of such a collision is significantly reduced.
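The listen-before-talk idea can be illustrated with a slotted Monte Carlo model. The sketch below is a simplification added for illustration (the slot structure, parameters and contention probability are assumptions, not the protocol analyzed later); it shows how sites that defer while the carrier is busy avoid most collisions:

```python
import random

def slotted_csma(n_sites=10, p_arrival=0.02, p_send=0.3,
                 packet_slots=10, slots=100_000):
    """Toy slotted CSMA: backlogged sites sense the channel and contend
    only when it is idle; a lone contender occupies it for packet_slots."""
    backlog = [False] * n_sites
    busy_left, success, collision = 0, 0, 0
    for _ in range(slots):
        for i in range(n_sites):                  # new packet arrivals
            backlog[i] = backlog[i] or (random.random() < p_arrival)
        if busy_left > 0:                         # carrier sensed busy:
            busy_left -= 1                        # all sites defer
            continue
        tx = [i for i, b in enumerate(backlog) if b and random.random() < p_send]
        if len(tx) == 1:                          # exactly one sender succeeds
            success += 1
            backlog[tx[0]] = False
            busy_left = packet_slots - 1
        elif len(tx) > 1:                         # simultaneous starts collide
            collision += 1
            busy_left = packet_slots - 1
    return success, collision
```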
During the process of information exchange and communication, in order to achieve the objectives of intelligently identifying, locating, tracking, monitoring and managing a network, the TCP/IP (Transmission Control Protocol/Internet Protocol) suite comes into use [6,7].
In TCP/IP, if the recipient successfully receives the data, it returns an ACK. ACK signals usually have a fixed format and length and are sent back to the sender by the recipient; the exact format depends on the network protocol in use [8]. When the sender receives the ACK signal, it can send the next data. If the sender does not receive the signal, it may retransmit the current data packet, or the data transfer may stop, depending on the network protocol used.
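This stop-and-wait exchange is easy to sketch in code. The model below is an illustration only (the loss probability and retry limit are assumed parameters, not values from the paper):

```python
import random

def stop_and_wait(n_packets=1000, p_loss=0.1, max_retries=5):
    """Sender retransmits the current packet until an ACK arrives,
    mirroring the TCP/IP-style exchange described above."""
    delivered = transmissions = 0
    for _ in range(n_packets):
        for attempt in range(1 + max_retries):
            transmissions += 1
            ack_received = random.random() > p_loss   # data or ACK may be lost
            if ack_received:
                delivered += 1
                break                                 # move to the next packet
    return delivered, transmissions
```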
Currently, with the rapid development of the IoT, users demand increasingly personalized and diversified business categories, and the radio spectrum is increasingly strained. A multichannel random access protocol can maximize the utilization of spectrum resources and allocate resources according to business priorities with various QoS (quality of service) requirements. It will play an important role in the future development of mobile communications. To achieve the above functions, we propose the multichannel three-dimensional probability CSMA with a monitoring function (MMTDP-CSMA). The protocol not only enables the functions of intelligently identifying, locating, tracking, monitoring and managing a network, but also maximizes the utilization of spectrum resources and allocates resources according to business priorities with various QoS requirements.
The rest of this paper is organized as follows: Section 2 describes the underlying protocol, three-dimensional probability CSMA. Section 3 presents our MMTDP-CSMA protocol in detail. To evaluate the performance of the MMTDP-CSMA protocol, the simulation setup and results are presented in Section 4. Finally, Section 5 concludes the paper.
Three-dimensional probability CSMA
The basic idea of CSMA is that before a site attempts its transmission, it needs to infer the channel condition by sensing the channel. If it infers that its transmission will upset (or be upset by) any receiver's ongoing transmissions (including its own receiver's), then it defers its transmission. In addition, to prevent two sites from beginning their transmissions at the same time (given that they both sense the channel to be safe for transmission), each transmitter undergoes a random back-off countdown period before transmission.
In three-dimensional probability CSMA, three random events appear repeatedly: a packet is sent successfully (U events); transmission fails, i.e., two or more packets are sent at one time (C events); or no packets need to be transmitted and the channel is idle (I events). These are regrouped as follows: C and U events are combined into CU events (a packet is sent, whether successfully or not); an idle period immediately following a CU event is a CUI event; and CU and CUI events together form B events. The cycle period is $T_n$, and TP denotes the transmission period.
Three probabilities, P1, P2 and P3, control the I-event, CU-event and CUI-event periods, respectively. In the I-event phase, a site sends a packet with probability P1. During CU events, a site senses the channel status with probability P2. Similarly, while in a CUI event, a site senses the channel status with probability P3.
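To make the control rule concrete, the sketch below gives one Monte Carlo reading of it. The slot bookkeeping, the Poisson arrival assumption, and the treatment of P2 (here, sensing during a busy period simply defers transmission, so P2 never starts a transmission) are simplifying assumptions of this sketch, not the analytical model of the paper:

```python
import numpy as np

def tdp_csma(G=1.0, p1=0.2, p3=0.3, a=0.1, slots=200_000, seed=0):
    """Monte Carlo sketch of the probability-controlled access rule.

    Arrivals per control slot of length a are Poisson with mean G*a.
    Backlogged packets contend with prob p1 in I slots and p3 in the
    CUI slot right after a busy (CU) period; during CU all sites defer.
    Returns an estimate of throughput (fraction of time in good sends).
    """
    rng = np.random.default_rng(seed)
    pkt_slots = round(1 / a)              # unit packet length = 1/a slots
    backlog, busy_left, after_busy, success_slots = 0, 0, False, 0
    for _ in range(slots):
        backlog += rng.poisson(G * a)
        if busy_left > 0:                 # CU: channel busy, sites defer
            busy_left -= 1
            after_busy = busy_left == 0   # the next slot is the CUI slot
            continue
        p = p3 if after_busy else p1      # CUI vs. I contention probability
        after_busy = False
        senders = rng.binomial(backlog, p) if backlog else 0
        if senders >= 1:
            busy_left = pkt_slots         # a transmission period starts
            if senders == 1:              # exactly one sender: success
                success_slots += pkt_slots
                backlog -= 1
    return success_slots / slots
```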
The basic unit of the system control clock is $a$; an information packet arriving during a slot of length $a$ is transmitted at the start of the next slot. The channel propagation delay is $a$; the packet length is taken as the unit length and is an integral multiple of $a$. Packets to be sent in the first slot of a transmission period can always sense the state of the channel at the last moment. During the transmission period of information packets, packet collisions inevitably occur; a collided packet is retransmitted after a random delay, which has no adverse effect on the arrival process of the channel. In a cycle, the average length of time during which information packets are sent successfully is $E(U)$. The average length of a B event is $E(B)$, where $(1+a)$ represents the length occupied by an information packet in the TP cycle whether or not it is transmitted successfully. The average length of an I event is $E(I)$. The system throughput of three-dimensional probability CSMA is then
$$S = \frac{E(U)}{E(B) + E(I)}.$$
MMTDP-CSMA
With the multichannel mechanism there is more than one channel; we assume the system owns N channels. Considering a system with N channels and N priorities, nodes access the channel resources randomly according to their business priorities, as depicted in Fig. 1.
Assume that the priorities are arranged from low to high as priority 1, priority 2, ..., priority N. A service with priority i occupies channels 1 to i (i = 1, 2, ..., N); that is, a service with priority 1 occupies channel 1, a service with priority 2 occupies channels 1 and 2, and a service with priority N occupies channels 1 to N.
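The priority-to-channel mapping, and the total-throughput sum that appears later as equation (9), are straightforward to express in code; the helper names below are illustrative, not from the paper:

```python
def channels_for_priority(i):
    """Priority-i traffic may occupy channels 1..i: priority 1 uses only
    channel 1, while priority N may use all N channels."""
    return list(range(1, i + 1))

def total_throughput(per_channel_S):
    """Total system throughput as the sum of per-channel throughputs S_i
    over all N channels (cf. equation (9) in the text)."""
    return sum(per_channel_S)

# usage: priority 3 traffic contends on channels 1, 2 and 3
assert channels_for_priority(3) == [1, 2, 3]
```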
Fig. 1. Structure of multichannel mechanism with N channels
For a single channel, say channel i, under the MMTDP-CSMA mode the packet transmission and retransmission phase is the same as in three-dimensional probability CSMA. Unlike the B-event scenarios above, the time occupied during the transmission interval becomes $(1+3a)$ instead of the previous $(1+a)$; the additional time of $2a$ is used to transmit the ACK or NAK information.
Before the analysis, we make the same assumptions as before, adding one more: the packet arrival process on channel i is an independent counting process with parameter $G_i$, and the arrival processes on different channels are mutually independent.
For channel i, the average length of time in a cycle during which information packets are sent successfully is $E(U_i)$, and the average length of a B event on channel i is $E(B_i)$, where $(1+3a)$ represents the length occupied by an information packet of channel i in the TP cycle whether or not it is transmitted successfully.
The average length of an I event on channel i is $E(I_i)$. For channel i, the throughput of MMTDP-CSMA is
$$S_i = \frac{E(U_i)}{E(B_i) + E(I_i)}.$$
Based on the above analysis and the computational formula for the per-channel throughput, the total system throughput of MMTDP-CSMA is
$$S = \sum_{i=1}^{N} S_i. \qquad (9)$$
Assuming that the average length of information packets sent successfully by the service with priority l in an average cycle period of channel i is $E(U_i^{(l)})$, the throughput of MMTDP-CSMA for priority l is
$$S^{(l)} = \sum_{i=1}^{N} \frac{E(U_i^{(l)})}{E(B_i) + E(I_i)}. \qquad (10)$$
Simulation results and analysis
From the above analysis, the expression for the system throughput under the discrete time three-dimensional probability CSMA protocol with the multichannel mechanism is obtained. The simulation results, produced with MATLAB R2010a, are shown in Figs. 2-8. As can be seen from Fig. 2, the simulation values of the system throughput under the new protocol are consistent with the theoretical ones. At the beginning, as the arrival rate increases, the system throughput also increases; the throughput then reaches its maximum; finally, as the arrival rate continues to increase, the system throughput decreases. As can be seen from Fig. 3, the simulation values are again consistent with the theoretical ones, and the system throughput increases as $P_1$ increases. This indicates that when the arrival rate is small, if the probability of sending arriving packets during I events is too small, the channel resource is not fully utilized; increasing this probability improves the efficiency of channel resource usage and the system throughput.
Fig. 4. The throughput of the new protocol with variable parameter P2
In Fig. 4, the simulation values of the system throughput under the new protocol are consistent with the theoretical ones. As can be understood from the figure, when $P_2$ becomes larger the throughput decreases, because when the channel is busy sending a packet, the more newly arrived information packets are sent during CU events, the more collisions occur.
Similarly, we can change $P_3$ to control the CUI-event process; moreover, we can vary $P_1$, $P_2$ and $P_3$ simultaneously to obtain the desired system throughput. As can be understood from Fig. 5 to Fig. 8, the simulation values of the system throughput under the new protocol are consistent with the theoretical ones. As the total number of channels increases, the total system throughput of the new protocol increases; channel resources are distributed to each channel according to its own priority. A higher-priority channel obtains more network resources than lower-priority ones, so the throughput of a higher-priority channel is larger. With the multichannel mechanism, network resource utilization is improved significantly.
Conclusions
With the radio spectrum increasingly strained, and in order to satisfy users' demand for more personalized and diversified business categories while realizing the functions of intelligently identifying, locating, tracking, monitoring and managing a network, we analyzed the proposed discrete time multichannel three-dimensional probability CSMA protocol with a monitoring function using the average cycle analysis method, realizing the above functions at the same time. Through rigorous derivations, accurate mathematical expressions for the system throughput are obtained. The correctness of the theory and model is demonstrated with the simulation tool MATLAB, and its effectiveness is illustrated.
Fig. 2.
Fig. 2. The throughput of the new protocol for channel i
Fig. 3.
Fig. 3. The throughput of the new protocol with variable parameter P1.
"Computer Science",
"Engineering"
] |
TCP J18224935-2408280: a symbiotic star identified during outburst
TCP J18224935-2408280 was reported to be in outburst on 2021 May 19. Follow-up spectroscopic observations confirmed that the system is a symbiotic star. We present optical spectra obtained with the Himalayan Chandra Telescope during 2021-22. The early spectra were dominated by Balmer lines, He I lines and high ionization lines such as He II. In the later observations, Raman scattered O VI was also identified. The outburst in the system started as a disc instability; later, the signature of enhanced shell burning and expansion of the photospheric radius of the white dwarf was identified. Hence we suggest this outburst is of the combination nova type. The post-outburst temperature of the hot component remains above 1.5 x 10$^5$ K, indicating stable shell burning in the system for a prolonged time after the outburst. Based on our analysis of archival multiband photometric data, we find that the system contains a cool giant of M1-2 III spectral type with a temperature of $\sim$ 3600 K and a radius of $\sim$ 69 R$_\odot$. The pre- and post-outburst light curve shows a periodicity of 631.25 $\pm$ 2.93 d; we consider this the orbital period.
INTRODUCTION
Symbiotic stars are interacting wide binaries consisting of a cool giant of spectral type M (or K) as the donor star and a hot component, mostly a white dwarf (WD), accreting from the giant's wind and surrounded by a circumstellar nebula (Mikołajewska 2012). Symbiotic stars manifest a wide variety of variability, from orbital motion to outbursts. Outbursts in symbiotic stars are classified into three types: symbiotic novae or slow novae, symbiotic recurrent novae, and classical symbiotic outbursts (Z And-type). Symbiotic novae and symbiotic recurrent novae are powered by thermonuclear runaway reactions, whereas classical symbiotic outbursts are believed to be caused either by the release of potential energy from extra-accreted matter or by an increased mass accretion rate followed by the expansion of the hot component (Munari 2019). Classical symbiotic outbursts are a commonly seen feature in symbiotic stars and typically show a 1-3 B mag brightening of the system during the outburst.
Although classical symbiotic outbursts are one of the most common features of symbiotic stars, we still have limited knowledge of their exact mechanism. In the literature, four different models have been proposed to explain these outbursts: 1) expansion of the WD photosphere at near-constant bolometric luminosity due to an increased accretion rate that exceeds steady burning (Tutukov & Yungel'Son 1976; Iben 1982); 2) a shell flash or thermal pulse similar to novae and recurrent novae (Kenyon & Truran 1983); 3) a dwarf nova-like outburst due to accretion disc instability (Duschl 1986a,b; Mikolajewska et al. 2002); 4) a combination nova, where an outburst is initiated by disc instability followed by enhanced shell burning (Sokoloski et al. 2006). In symbiotic stars, it is also possible for the same system to show outbursts with different mechanisms, as in AG Peg (Tomov et al. 2016) or Z And (Sokoloski et al. 2006). To understand the nature of classical symbiotic outbursts, we require spectroscopic follow-up observations of more systems.
TCP J18224935-2408280 (hereafter referred to as TCP J1822) was discovered by Tadashi Kojima, Tsumagoi, Gunma-ken, Japan, on 2021 May 19.683 UT. The discovery was reported on the 'Transient Object Followup Reports' pages of the Central Bureau for Astronomical Telegrams (CBAT). It was suggested to be a symbiotic star outburst by Patrick Schmeer (Saarbrucken-Bischmisheim, Germany) after he found a Gaia LPV source (Gaia DR2 4089297564356878720) 2 arcsec away with an approximate orbital period of 800 d. This star is included in the Gaia DR2 catalogue of large-amplitude variables by Mowlavi et al. (2021). A spectroscopic follow-up observation by Merc et al. (2021) on 2021 June 09 showed strong emission lines of H I, He I, [O III], and He II in addition to the K5-M0 continuum. They noted that TCP J1822 is an S-type symbiotic star based on its infrared colours; also, the distance and apparent magnitude of the system suggest that it contains a cool component of luminosity class III. The follow-up observations by Aydi et al. (2021) reached a similar conclusion, and in addition they reported the Bowen blend and relatively weak emission lines of Fe II (multiplets 42, 48, 49). Earlier observations by Taguchi et al. (2021) on 2021 June 07, and a later observation on June 09, confirm the similar nature of TCP J1822, although the [O III] line is reported as weak or absent. We present optical spectroscopic observations of TCP J1822 during 2021-22, confirm the symbiotic nature of the system, and try to understand the nature of the outburst.
Photometry
To understand the behaviour of TCP J1822 before and during the outburst, we obtained V- and g-band photometric data from the ASAS-SN sky survey (Shappee et al. 2014; Kochanek et al. 2017), covering the period JD 2457461.83 to JD 2460138.2 (2016 March 14 - 2023 July 12), and $G$, $G_{BP}$ and $G_{RP}$ band magnitudes from Gaia DR3 (Gaia Collaboration et al. 2022), covering the period JD 2456913.47 to JD 2457506.75 (2014 September 12 to 2016 April 28). The Gaia and ASAS-SN light curves are shown in Fig. 1.
Spectroscopy
Low-resolution optical spectra of TCP J1822 were obtained with the Himalayan Faint Object Spectrograph Camera (HFOSC) mounted on the Himalayan Chandra Telescope (HCT) situated at the Indian Astronomical Observatory, Hanle. Observations were carried out between 2021 June 10 and 2021 September 19, using grism 7, with a wavelength range of 3500 to 8000 Å and a resolution of R ∼ 1300, and grism 8, with a wavelength range of 5200 to 9000 Å and a resolution of R ∼ 2200. A majority of these observations were carried out in the Target of Opportunity (ToO) mode. The details of the observations are given in Table 1. The data reduction was carried out using a python pipeline based on pyraf modules, following the standard procedure using different tasks in the Image Reduction and Analysis Facility (IRAF). Wavelength calibration was carried out for grism 7 and grism 8 using FeAr and FeNe arc lamp spectra, respectively. Feige 110 and Feige 66 were used as standard stars. On the nights on which observations were carried out in the ToO mode, spectrophotometric standards observed on the nearest night were used for correcting the instrumental response. The response-corrected spectra in the two grisms were scaled to a weighted mean and combined to give the final spectrum. The ASAS-SN g-band photometric light curve was used for calibrating the spectra to the absolute flux scale.
Optical light curve and periodicity
The ASAS-SN g-band light curve (Fig. 1) shows that TCP J1822 started brightening on 2021 May 16 and peaked at around 13.5 mag, an increase of 2.2 mag from its quiescent state. Such a 2-3 mag brightening is often seen in Z And-type symbiotic outbursts. The triangular-shaped outburst peak is similar to the light curve of the Z And outburst of 2000 (Sokoloski et al. 2006), where it is suggested that a disc instability event causes an initial brightening in the light curve. There followed a decrease of 0.5 mag over the next ten days and a re-brightening to a second, broader maximum. This follow-up event appears to be related to nuclear burning on the surface of the WD. After 2021 June 5, the g-band magnitude started to decline again, and TCP J1822 returned to its photometric quiescent state about a year after the outburst.
The pre- and post-outburst light curves of TCP J1822 show wave-like variations. Using the Lomb-Scargle periodogram (LSP) (Lomb 1976; Scargle 1982), we obtained periods of 598.95, 618, 598.95 and 609 d corresponding to the highest peaks in the Gaia $G$, $G_{BP}$, $G_{RP}$ and ASAS-SN V bands, respectively. A similar analysis using the ASAS-SN g band, removing magnitudes during the outburst and shifting the post-outburst quiescent magnitudes to the pre-outburst level, gives a period of 629.75 d for the highest peak. Furthermore, we estimated the period using multiband data after applying appropriate magnitude shifts and combining them, resulting in the highest LSP peak at 631.69 d, with a false alarm probability of <0.01 per cent. These periodograms are shown in Fig. 2. Other peaks obtained from the Gaia data are due to the sampling effect (see Appendix A). Additionally, we verified our result for the multiband data using the LombScargleMultiband function implemented in astropy (Astropy Collaboration et al. 2018), which gives the same result. Using this period as an initial guess, we fitted a sinusoidal curve and estimated the period, the light curve minima and the associated errors. Based on the above analysis, we obtained an ephemeris of TCP J1822 given by
$$T_{\min} = \mathrm{JD}\,2457541.69 \pm 5.78 + (631.25 \pm 2.93) \times E \qquad (1)$$
We attribute this periodicity of 631.25 ± 2.93 d to the orbital period. Using the period and phase based on the ephemeris, the $G$, $G_{BP}$ and $G_{RP}$ data points were fitted with a sinusoidal function by varying the amplitude of the sinusoid (see Fig. 3). The resultant amplitudes are approximately 0.36, 0.2 and 0.15 mag in the $G_{BP}$, $G$ and $G_{RP}$ bands, respectively. The larger amplitude at the shorter wavelength is indicative of irradiation of the red giant by the hot WD. This suggests that there was quiescent burning on the surface of the WD in pre-outburst quiescence. Photometry over the next few years will help to refine this value. Future multiband observations would help to delineate the effects of ellipsoidal modulation (which should be prominent in the $G_{RP}$ band) and irradiation by the hot component ($G_{BP}$ band).
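The period search above can be reproduced with standard tools. The sketch below uses astropy's LombScargle and a scipy sinusoid fit; the synthetic epochs and magnitudes are stand-ins for the real combined, shift-corrected photometry, and the frequency grid limits are illustrative choices:

```python
import numpy as np
from astropy.timeseries import LombScargle
from scipy.optimize import curve_fit

# synthetic stand-in for the combined, shift-corrected multiband photometry
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(2456900, 2460140, 300))              # JD epochs
mag = 13.9 + 0.2 * np.sin(2 * np.pi * t / 631.25) + rng.normal(0, 0.05, t.size)

ls = LombScargle(t, mag)
freq, power = ls.autopower(minimum_frequency=1/2000, maximum_frequency=1/100)
best_period = 1 / freq[np.argmax(power)]                     # ~631 d expected
fap = ls.false_alarm_probability(power.max())                # significance check

# refine the period with a sinusoid fit seeded by the LSP peak
model = lambda t, A, P, t0, m0: m0 + A * np.sin(2 * np.pi * (t - t0) / P)
popt, _ = curve_fit(model, t, mag, p0=[0.2, best_period, t[0], mag.mean()])
```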
Distance and reddening
From the Gaia EDR3 parallaxes (Gaia Collaboration et al. 2021), we estimated the distance to TCP J1822 using the Bailer-Jones et al. (2021) method. We estimated a visual extinction of $A_V \sim 1.48$ in this direction and for this distance using the 3D map of interstellar dust reddening published by Green et al. (2019). We calculated $A_V$ values using the same procedure for the upper and lower bounds of the distance error, and the results are identical. The $A_V$ value derived for the Bailer-Jones et al. (2021) distance in the direction of the object does not change significantly beyond 5 kpc. The map by Schlafly & Finkbeiner (2011) indicates that the visual extinction in the direction of TCP J1822 is $A_V$ = 1.73. The goodness-of-fit of the astrometric model is -0.16 for Gaia EDR3; a value lower than 3 is considered a good fit. However, Bailer-Jones et al. (2021) use a probabilistic approach for estimating distances, which relies on priors constructed for single stars within our Galaxy; this can cause considerable uncertainties for binaries like symbiotic stars. In this work, we use the distance calculated from the EDR3 parallaxes for deriving the $A_V$ value and the distance prior given for the SED fit (see Section 3.3). Considering that the $A_V$ value does not change much beyond 5 kpc, we fix $A_V$ at a conservative 1.48 in all the calculations in this paper. Our spectral type estimate of the cool giant in the system would be hotter by 1-2 spectral subtypes if we adopted the larger $A_V$ value of Schlafly & Finkbeiner (2011). Since our best-fitting spectral type from the SED matches the quiescent TCP J1822 spectrum well above 6000 Å, where the contribution from the giant dominates, we find the conservative $A_V$ value we adopted is suited for the subsequent calculations. Reddening corrections were done using the extinction law of Fitzpatrick (1999).
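Applying the Fitzpatrick (1999) law for a fixed $A_V$ = 1.48 can be sketched as follows, assuming the widely used extinction Python package; the wavelengths and fluxes below are placeholders, not the paper's data:

```python
import numpy as np
import extinction  # assumption: Barbary's 'extinction' package is installed

wave = np.array([4400.0, 5500.0, 6400.0])      # example wavelengths in Angstrom
flux = np.array([1.2e-14, 1.0e-14, 9.0e-15])   # observed fluxes (placeholders)

A_V, R_V = 1.48, 3.1
A_lambda = extinction.fitzpatrick99(wave, A_V, R_V)  # Fitzpatrick (1999) curve
flux_dered = extinction.remove(A_lambda, flux)       # de-reddened fluxes
```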
Spectral energy distribution of cool component in TCP J1822
The spectral energy distribution (SED) of the cool component is shown in Fig. 4. Virtually no excess over the stellar continuum is seen in the WISE W3 and W4 bands, indicating that this is an S-type symbiotic. The fit is quite reasonable, given that the data represent different epochs. For running ARIADNE, we used a temperature prior based on the Gaia temperature estimate and upper limit. The distance prior is taken from the distance estimate (see Section 3.2), setting the highest error as the upper limit. The $A_V$ value was kept fixed at 1.48. We set a uniform prior for log g from 0 to 4 based on our initial fitting, which results in a radius estimate in the giant-star regime. We used the default priors for radius and metallicity. The best fit gives a temperature of 3614 K. It should be noted that the temperature, radius and log g are more accurately constrained, whereas [Fe/H] is indicative, and the obtained SED represents only the mean spectrum of the red giant in TCP J1822. Since no UV photometry is available, we are unable to probe the nature of the hot component or its orbital variations in a similar manner.
Evolution of the optical spectra
We obtained three spectra of TCP J1822 in 2021, during the declining phase of the outburst, and four in 2022, after the outburst had almost subsided. Significant changes can be seen in the emission lines and the continuum over this period (see Fig. 6). The very blue continuum along with strong Balmer and Paschen lines in the spectrum of 2021 June 10 is suggestive of a prominent, hot accretion disc. As the photometric data also indicate, the beginning of the outburst was due to an accretion event. The circumstellar nebula, illuminated by the luminous WD, also contributed to the emission lines. As the outburst progressed, the blue continuum weakened and the contribution from the red giant, in the form of the visibility of some TiO bands, became apparent. The evolution of the emission lines of TCP J1822 gives an insight into the nature of the outburst and of the hot component present in the system. All the emission line fluxes, including those of the Balmer and other H I lines, showed an increasing trend from the first (2021 June 10) to the second (2021 July 14) observation and declined afterwards. The O I 7774 Å line is significantly fainter than O I 8444 Å. This may be due to fluorescence of Ly$\beta$ photons, where Ly$\beta$ photons at 1025.72 Å pump the O I ground-state resonance line at 1025.77 Å, with a subsequent downward cascade producing the 11287 Å, 8444 Å, and 1304 Å lines in emission (Bowen 1947; Kastner & Bhatia 1995).
In the quiescent phase spectra (obtained on and after 2022 March 08) all the emission line strengths were reduced, except the Raman scattered O VI line and O I 8444 Å. Faint lines of [Ca VII] 5618 Å, [Fe VII] 5721 Å, and [Ca V] 6086 Å or [Fe VII] 6087 Å appeared and later weakened in the subsequent observations. High excitation lines emerging in the quiescent spectra indicate that the expanding pseudo-photosphere becomes optically thin (see Section 3.5); hence the nebular region is exposed to the heated WD.
Raman scattered O VI line
The Raman scattered O VI line is a unique feature that can only be present in a system with neutral hydrogen regions and a hot component able to ionize oxygen to its fifth ionized state (Nussbaumer et al. 1989). Hence this feature alone can confirm the symbiotic nature of a system. In our optical spectrum obtained on 2021 June 10, near the g-band maximum, this line was almost invisible. However, it strengthened significantly in the second observation on 2021 July 14 and remained at a similar strength throughout the later observations (see Fig. 8). The shape of the line was broad on 2021 July 14, but narrowed subsequently. Variations of the Raman scattered O VI line correlated with the optical light curve have also been seen in other outbursting symbiotics such as V426 Sge (fig. 7 of Skopal et al. 2020) and AG Peg (fig. 2 of Tomov et al. 2016). However, unlike V426 Sge and AG Peg, in TCP J1822 the Raman scattered O VI line strength remained almost constant after the initial rise, showing no decreasing trend even while the g-band light curve declined. The continued line strength over a prolonged period indicates that some extra mass reached the hot component to sustain the shell burning.
Bowen feature
The Bowen feature near 4640 Å is seen in all our spectra. This feature was also reported in earlier spectra obtained by Aydi et al. (2021) and Taguchi et al. (2021). The strength of the feature decreased as the outburst declined. The Bowen feature is produced when X-rays emitted from a compact source interact with nearby gaseous matter. This indirectly indicates that the system produces X-rays during the outburst (Luna et al. 2013), and is consistent with the presence of the Raman scattered O VI line in systems showing such X-ray emission (Akras et al. 2019). The Bowen feature is also shown by other symbiotic systems like RR Tel and AG Peg (Eriksson et al. 2005), both of which are reported X-ray emitters (Luna et al. 2013). The presence of an enhanced blue continuum during the outburst also indicates the possibility of a δ-type X-ray component in the system, originating from the inner layers of the accretion disc (Luna et al. 2013). Another similar symbiotic with an active accretion disc is MWC 560 (fig. G1 of Lucy et al. 2020).
Nature of the hot component
The lower limit of the temperature of the hot component ($T_h$) in a symbiotic star can be estimated using the empirical relation of Murset & Nussbaumer (1994), based on the highest observed ionization potential ($\chi_{max}$) of an emission line seen in the optical spectrum. Using this, we determine $T_h \gtrsim 114\,000$ K from the presence of the Raman scattered O VI band at 6825 Å in the spectra of TCP J1822, corresponding to the highest ionization potential, $\chi(\mathrm{O}^{+5}) \sim 114$ eV.
The hot component is best studied using X-ray and UV observations. In their absence, emission lines in the optical are a good proxy for understanding its nature. Considering the hot source to be a blackbody, its temperature and luminosity can be calculated from the H$\beta$, He I and He II lines assuming case B recombination. We used relation (2) derived by Iijima (1981). The luminosity of the hot component was calculated using equation (8) of Kenyon et al. (1991) and equation (6) given in Mikolajewska et al. (1997). The two results match within 25 per cent, and the average value of these estimates is given in Table 2. The luminosity estimate using equation (7) of Mikolajewska et al. (1997), which is based on the H$\beta$ flux, gives a value nearly half of the above. This is not unexpected given that Mikolajewska et al. (1997) noted these equations are accurate to within a factor of ∼2. A similar effect was reported in the case of Hen 3-860 by Merc et al. (2022), where high-resolution observations showed an absorption component in the H lines, causing the flux to be underestimated. However, we do not see any absorption feature in our low-resolution spectra of TCP J1822.
The blackbody assumption also allows determination of the radius, which is given in Table 2. From Fig. 9 it is evident that the radius of the hot component showed an increasing trend during the outburst decline. There is an enhancement in the blue wing of H$\alpha$ early during the outburst, and the line width is also broader (see Fig. 10). The radius suddenly dropped when TCP J1822 reached the quiescence phase (last four observations). The increase in radius was due to the physical expansion of the photosphere caused by excess burning on the surface of the WD. As the photosphere expanded, the temperature dropped. During the quiescence phase, the expanded shell became optically thin, and hence the radius showed a sudden drop, which means we started seeing closer to the WD again.
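The blackbody radius follows from the Stefan-Boltzmann law, $L = 4\pi R^2 \sigma T^4$. A minimal sketch with astropy units is given below; the input values are illustrative, not those of Table 2:

```python
import numpy as np
from astropy import units as u
from astropy.constants import sigma_sb

def blackbody_radius(L, T):
    """Radius implied by L = 4*pi*R^2*sigma*T^4 for a blackbody source."""
    R = np.sqrt(L / (4 * np.pi * sigma_sb * T**4))
    return R.to(u.Rsun)

# e.g. a ~10^3 Lsun hot component at 1.5e5 K gives R of order 0.05 Rsun
print(blackbody_radius(1e3 * u.Lsun, 1.5e5 * u.K))
```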
The H$\alpha$ wing profiles presented in Fig. 10 were obtained by subtracting the local continuum using the fit_continuum function in Specutils (Earl et al. 2023). We see H$\alpha$ wings as broad as ∼3500 km/s in the blue and ∼3000 km/s in the red for the first three spectra taken during the outburst, with the blue wings stronger than the red. Line broadening has been reported in past outbursts of AG Peg (fig. 3 in Tomov et al. 2016) and V426 Sge (fig. 3 in Skopal et al. 2020). However, in the cases of AG Peg and V426 Sge, the velocities of the H$\alpha$ wing profiles are lower (≤1500 km/s) than what we observe in TCP J1822. These broadenings are due to an increased outflow during the outburst.
Nature of the outburst
The optical eruption observed in TCP J1822 has an amplitude of around 2.5 mag in the ASAS-SN g band and is similar to the Z And-type outbursts seen in classical symbiotic stars, which show a brightening of 1-3 mag on time scales from months to years (e.g. Z And, CI Cyg, and AG Dra). In addition, spectroscopic observations of TCP J1822 after the optical maximum show forbidden lines, as seen in other such systems (e.g. Kenyon et al. 1991; AG Dra - Mikolajewska et al. 1995; LIN 9 - Miszalski et al. 2014). The multi-peak light curve of TCP J1822, with a sharp rise during the outburst, resembles that of Z And, which showed a combination nova outburst in 2000 (Sokoloski et al. 2006). The dominance of the blue continuum, strong Balmer and Paschen lines in the early outburst, and the nature of the light curve indicate that some sort of disc instability was responsible. This probably deposited additional matter on the already burning WD, causing the second peak in the light curve (see Fig. 7). Dwarf nova-like disc instability as a triggering mechanism for the Z And outburst was also examined in the theoretical model by Bollimpalli et al. (2018). It is estimated that a high accretion rate of the order of $10^{-6}$ M$_\odot$ yr$^{-1}$ is required for such a scenario to be feasible in a symbiotic star like Z And. Bollimpalli et al. (2018) suggest that such an enhancement in mass transfer could be attributed to magnetic activity on the surface of the giant, as suggested by Leibowitz & Formiggini (2008). In this scenario, the increased mass transfer could act as a trigger for enhanced shell burning. The continuum observed during the outburst of TCP J1822 derives from multiple components, including the nebula, the accretion disc, and the WD. Understanding the individual contributions of each component requires rigorous modelling, which is beyond the scope of this paper.
The presence of high ionization lines like He II 4686 Å and Raman scattered O VI 6825 Å from the outburst through near-quiescence indicates that the WD continued to burn matter on its surface. This could give rise to detectable soft X-rays. X-ray data would be needed to understand the relative contributions of steady nuclear burning and accretion in the system. The strength of the Raman scattered O VI line remains high even nearly a year after the outburst declined, indicating that enough material reached the surface of the hot component to maintain the shell burning for a prolonged time. From the ASAS-SN g-band light curve (Fig. 1), it is seen that the post-outburst magnitude is brighter than the pre-outburst magnitudes, which further supports this conclusion.
After returning to quiescence, TCP J1822 exhibits a temperature above 10$^5$ K and a luminosity of order 10$^3$ L$_\odot$, which is typical for the hot component in quiescently burning symbiotic stars (fig. 4 in Mikołajewska 2003, and Munari 2019).
CONCLUSIONS
(i). The optical spectrum of TCP J1822 shows Balmer series lines, O I, He I, and high excitation lines such as He II, [O III] and Raman scattered O VI, together with TiO band heads from the cool component, which unambiguously confirm the symbiotic nature of the system.
(ii). We probed the nature of the cool component in the system using a multiband SED and found that the system contains an M1-2 III spectral-type star with a temperature of ∼ 3600 K, a radius of ∼ 69 R⊙ and a luminosity of ∼ 700 L⊙.
(iii). TCP J1822 shows a combination nova type outburst, in which the outburst begins as an accretion disc instability during the first peak of the light curve and then enhances the shell burning in the system, correlated with the increase in the radius of the WD photosphere.
(iv).The pre-and post-outburst light curve of TCP J1822 shows a 631.25 ± 2.93 day periodic variation, which most probably originates from the orbital motion of the system.
(v). The post-outburst temperature of the hot component remains above 1.5 × 10$^5$ K, indicating stable shell burning in the system for a prolonged time after the outburst. The strength of the Raman scattered O VI band and the elevated post-outburst ASAS-SN g-band magnitude compared with the pre-outburst level also confirm this. These findings collectively suggest enhanced mass transfer during the outburst.
Figure 1.
Figure 1. Light curve of TCP J1822 using the Gaia $G$, $G_{BP}$, $G_{RP}$ magnitudes and ASAS-SN g- and V-band magnitudes. The periodic behaviour of TCP J1822 is evident in the pre- and post-outburst light curves. The dashed lines show the best-fitting sinusoidal curve based on the ephemeris provided in equation (1).
Figure 2.
Figure 2. Comparison of Lomb-Scargle periodograms for TCP J1822 obtained using Gaia and ASAS-SN data, including the combined multiband data. The LSP gives dominant peaks at 598.95, 618, 598.95, 609, 629.75, and 631.69 d in the $G$, $G_{BP}$, $G_{RP}$, ASAS-SN V, ASAS-SN g, and multiband data, respectively. The dashed line indicates the 631.69-d period value. See the text for details.
Figure 3.
Figure 3. Gaia $G$, $G_{BP}$ and $G_{RP}$ band light curves of TCP J1822. The $G$, $G_{BP}$ and $G_{RP}$ bands are represented by green, blue and red points, respectively. The dashed lines show the sinusoidal function fitted based on the ephemeris for the Gaia $G$, $G_{BP}$ and $G_{RP}$ bands. See the text for details.
Figure 4.
Figure 4. Spectral energy distribution of TCP J1822 obtained from various bands. Blue points represent the bands used for obtaining the best fit to the SED, as mentioned in Section 3.3. WISE W3 and W4 filter magnitudes are over-plotted as orange points. No infrared excess is seen over the stellar continuum.
Figure 6.
Figure 6. The de-reddened optical low-resolution spectra of the symbiotic star TCP J1822 at various epochs of its recent outburst. For clarity, each spectrum has been shifted vertically by the indicated amount.
Figure 7.
Figure 7. (Top) ASAS-SN g-band light curve of the outburst of TCP J1822. Epochs of our spectroscopic observations are marked with red arrows. The evolution of the temperature of the hot component and of the emission line fluxes is shown in the subsequent panels.
Figure 8.
Figure 8. Evolution of the Raman scattered O VI band during the TCP J1822 outburst. At the first epoch of observation (2021 June 10), the g-band light curve was at its peak (see Fig. 7) but the Raman scattered O VI band was absent from the optical spectrum.
Figure 9.
Figure 9. HR diagram showing evolution of the hot component in TCP J1822 from 2021 (orange points) to 2022 (blue points) during the current outburst.
Figure 10.
Figure 10. Broadening of the H$\alpha$ line during the outburst of TCP J1822. The H$\alpha$ line is plotted after subtracting the local continuum. The enhanced blue wing of the H$\alpha$ line is an indication of outflow while the outburst was ongoing in the system.
Figure A1.
Figure A1. Lomb-Scargle periodograms generated using simulated Gaia $G$ magnitudes and observed Gaia $G$ magnitudes. Simulated data points were created at the same observed epochs to check the sampling effect. We assumed a sinusoidal variation in the $G$-band light curve and used the same period obtained from the observed $G$ magnitudes, 598.95 d.
Table 1. Observational log for spectroscopic data obtained for TCP J1822.
Table 2. The de-reddened absolute fluxes of the H, He II 4686 Å, He I 4471 Å, He I 5876 Å, and O VI 6825 Å lines, together with the estimated luminosity, temperature, and radius of the hot component of TCP J1822.
"Physics"
] |
LINEAR DIFFERENTIAL EQUATIONS WITH UNBOUNDED DELAYS AND A FORCING TERM
The paper discusses the asymptotic behaviour of all solutions of the differential equation ẏ(t) = −a(t)y(t) + ∑_{i=1}^{n} b_i(t)y(τ_i(t)) + f(t), t ∈ I = [t_0, ∞), with a positive continuous function a, continuous functions b_i, f, and n continuously differentiable unbounded lags. We establish conditions under which any solution y of this equation can be estimated by means of a solution of an auxiliary functional equation with one unbounded lag. Moreover, some related questions concerning functional equations are discussed as well.
Introduction
In this paper, we study the problem of the asymptotic bounds of all solutions of the delay differential equation

ẏ(t) = −a(t)y(t) + ∑_{i=1}^{n} b_i(t)y(τ_i(t)) + f(t), t ∈ I = [t_0, ∞), (1.1)

where a is a positive continuous function on I, b_i and f are continuous functions on I, and the τ_i are continuously differentiable functions on I fulfilling τ_i(t) < t, 0 < τ_i′(t) ≤ λ_i < 1 for all t ∈ I, and τ_i(t) → ∞ as t → ∞, i = 1, ..., n.
The prototype of such equations is the equation with proportional delays

ẏ(t) = −ay(t) + ∑_{i=1}^{n} b_i y(λ_i t), (1.2)

where a > 0, b_i ≠ 0, and 0 < λ_i < 1, i = 1, ..., n, are real scalars. There are numerous interesting applications of (1.2) and its modifications, such as the collection of current by the pantograph head of an electric locomotive, probability theory on algebraic structures, or partition problems in number theory. Various special cases of (1.2) have been studied because of these applications, as well as for theoretical reasons (see, e.g., Bereketoglu and Pituk [1], Lim [11], Liu [12], or Ockendon and Taylor [15]).
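Equation (1.2) can be integrated numerically without an initial history function, because the delayed argument λ_i t never leaves the already-computed interval [0, t]. The following minimal sketch (single delay; parameters and step size are illustrative choices, not values from the paper) uses explicit Euler with linear interpolation of the stored solution:

```python
import numpy as np

def pantograph(a, b, lam, y0, t_end, dt=1e-3):
    """Explicit Euler for y'(t) = -a*y(t) + b*y(lam*t), y(0) = y0.

    Since 0 <= lam*t <= t, the delayed value always lies in the part of
    the solution already computed, so no history function is needed
    (unlike general delay differential equations).
    """
    t = np.arange(0.0, t_end + dt, dt)
    y = np.empty_like(t)
    y[0] = y0
    for k in range(len(t) - 1):
        # Linear interpolation of the stored solution at the delayed point.
        y_delayed = np.interp(lam * t[k], t[:k + 1], y[:k + 1])
        y[k + 1] = y[k] + dt * (-a * y[k] + b * y_delayed)
    return t, y

# Illustrative parameters only: a = 1, b = 0.5, lam = 0.5.
t, y = pantograph(a=1.0, b=0.5, lam=0.5, y0=1.0, t_end=20.0)
print(y[-1])
```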
The study of these differential equations with proportional delays turned out to be a useful paradigm for the investigation of qualitative properties of differential equations with general unbounded lags. Some results of the above-cited papers have been generalized in this direction by Heard [7], Makay and Terjéki [13], and in [2,3,4]. For further related results on the asymptotic behaviour of solutions, see, for example, Diblík [5,6], Iserles [8], or Krisztin [9].
In this paper, we combine standard methods from the theory of functional differential equations with some results from the theory of functional equations and difference equations to analyze the asymptotic properties of all solutions of (1.1). The main results are formulated in Sections 3 and 4. In Section 3, we derive the asymptotic estimate of all solutions of (1.1). Section 4 discusses some particular cases of (1.1) and improves the derived estimate for these special cases. Both sections also present illustrative examples involving, among others, (1.2).
In the sequel, we introduce the notion of embeddability of given functions into an iteration group. This property will be imposed on the set of delays {τ_1, ..., τ_n} throughout the next sections.

Definition 2.1. Let ψ ∈ C^1(I_{−1}), ψ′ > 0 on I_{−1}. We say that {τ_1, ..., τ_n} can be embedded into an iteration group [ψ] if for any τ_i there exists a constant d_i such that

ψ(τ_i(t)) = ψ(t) − d_i for all t ∈ I. (2.1)

Remark 2.2. The problem of embeddability of given functions {τ_1, ..., τ_n} into an iteration group [ψ] is closely related to the existence of a common solution ψ of the system of simultaneous Abel equations. The complete solution of these problems has been described by Neuman [14] and Zdun [16]. These papers contain conditions under which (2.1) holds for any τ_i, i = 1, ..., n (see also [10, Theorem 9.4.1]). We only note that the most important necessary condition is the commutativity of any pair τ_i, τ_j, i, j = 1, ..., n. Notice also that if the τ_i are delays, then the d_i must be positive.
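To make Definition 2.1 concrete, here is a worked instance for proportional and power delays, assuming the reconstructed form of (2.1) above:

```latex
% Proportional delays \tau_i(t) = \lambda_i t, 0 < \lambda_i < 1, on
% I = [t_0,\infty) with t_0 > 0: take \psi(t) = \ln t, which is C^1
% with \psi'(t) = 1/t > 0. Then
\[
  \psi(\tau_i(t)) = \ln(\lambda_i t) = \ln t + \ln\lambda_i
                  = \psi(t) - d_i, \qquad d_i = -\ln\lambda_i > 0 .
\]
% Analogously, for the power delay \tau(t) = t^{\lambda}, 0 < \lambda < 1,
% on [1,\infty), the choice \psi(t) = \log\log t gives d = -\log\lambda > 0,
% matching the solution quoted in Section 4.
```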
The asymptotic bound of all solutions of (1.1)
The aim of this section is to formulate and prove the asymptotic estimate of all solutions of (1.1). We assume that all the assumptions imposed on a, b_i, τ_i, and f in Section 1 are valid.
Proof. The substitution (3.2), where ′ stands for d/ds and h(s) = ψ^{−1}(s), i = 1, ..., n, transforms (1.1) into a form which can be rewritten as (3.7). Using (3.8), we can rewrite (3.7) as (3.9). From here we obtain an estimate in terms of ∫ a(u) du, where K_2 = K_1/K. Using the assumptions imposed on a and τ_i′, we can estimate the integral ∫_{s_0} exp{∫ a(u) du} ds (for a similar situation see also [4]). Hence we arrive at a bound with M*_k := max(M_k, K_2), κ := min(ω, γ − α − β) > 0, and a constant N > 0 large enough. Since s* ∈ J_{k+1} was arbitrary, the boundedness of (M*_k) as k → ∞ implies, via the substitution (3.2), the asymptotic estimate (3.1).

Remark 3.2. This remark concerns the possible extension of our results to differential equations with delays intersecting the identity at the initial point t_0. These equations form a wide and natural class of delay differential equations (see the following examples) and have many applications (some of them were mentioned in Section 1). Since we are interested in the behaviour at infinity, it is obvious that the main notions and results of this paper can easily be reformulated for this case.
Some particular cases of (1.1)
In this section, we first consider (1.1) in the homogeneous form

ẏ(t) = −a(t)y(t) + ∑_{i=1}^{n} b_i(t)y(τ_i(t)), t ∈ I. (4.1)

Using a simple modification of the proof of Theorem 3.1, we improve the conclusion of this theorem for the case of (4.1). We assume that all the assumptions of Theorem 3.1 are valid (the assumptions on f being omitted, of course). Using the same notation as in Theorem 3.1, we have the following theorem.
Theorem 4.1. Let y be a solution of (4.1). Then ..., and this implies the validity of (4.2).

Now, we consider (1.1) in another special form

ẏ(t) = −ay(t) + by(τ(t)) + f(t), t ∈ I, (4.5)

where the coefficients a, b are constants. Under the assumptions on τ, there exists a function ψ ∈ C^1(I), ψ′ > 0 on I, such that (4.6) holds (for this and related results concerning (4.6) see, e.g., [10]). Then, applying Theorem 3.1 to (4.5), we can easily deduce that the property (4.7) holds for any solution y of (4.5). The asymptotic behaviour of (4.5) has been studied in [3], where the relations (4.9) were derived. It is easy to see that relations (4.9) yield sharper estimates of solutions than (4.7). On the other hand, we emphasize that the proof technique used in [3] is effective just for (4.5) and cannot be applied to the more general equation (1.1). In the final part of this paper, we propose a simple way to extend the conclusions of Theorem 4.2 to some equations of the form (4.5) with nonconstant coefficients. To explain the main idea, we consider (4.5) where the delayed argument is a power function.
where a > 0, b ≠ 0, and 0 < λ < 1 are constants, and f ∈ C^1([1, ∞)) fulfils the required asymptotic properties. The corresponding Abel equation (4.6) admits the function ψ(t) = log log t as a solution with the required properties. Substituting this ψ into the assumptions and conclusions of Theorem 4.2, we obtain the following result: with σ given by (4.8), if y is a solution of (4.10), then the corresponding asymptotic estimate holds.
Example 4.3. We consider the delay equation (4.14), in which the functions {λ_1 t, ..., λ_n t} can be embedded into an iteration group [ψ]. Indeed, ... ≡ 0 on I. Moreover, the inequality (3.8) can be replaced by .... Now, if the forcing term in (4.16) fulfils the required asymptotic properties, then applying Theorem 4.2 to (4.16) and substituting back into (4.15), we get that relations (4.13) are valid for any solution y of (4.14).

Remark 4.4. Following Example 4.3, we can extend the asymptotic estimates (4.13) also to some other equations of the form

ẏ(t) = −φ(t)[ay(t) − by(t^λ)] + f(t), t ≥ 1, (4.17)

where φ ∈ C^1([1, ∞)) and φ > 0 on [1, ∞). If we introduce the change of variables (4.18), then, provided the delayed argument and the forcing term in (4.19) fulfil the assumptions of Theorem 4.2, we can apply this theorem to (4.19) and, via the substitution (4.18), obtain the validity of (4.13) for any solution y of (4.17).
"Mathematics"
] |
A NEW DESIGN OF TANGENT HYPERBOLIC FUNCTION GENERATOR WITH APPLICATION TO THE NEURAL NETWORK IMPLEMENTATIONS
A CMOS hyperbolic tangent function generator circuit suitable for the implementation of analog neural networks is presented. In order to obtain an accurate yet simple circuit realization, a judiciously chosen symmetrical Padé approximation of the hyperbolic tangent function is proposed. As an illustrative application, the proposed circuit is used as the nonlinear block of a two-layer neural network. Simulation results using the Spectre simulation tool in the Cadence design environment with a 0.18 µm CMOS process verify proper operation of the proposed circuit as well as of the neural network built around it. These results demonstrate the validity of the theoretical analysis and the feasibility of the proposed circuit.
Introduction
Accurate and high-speed nonlinear function generation is highly demanded in many areas of electronics that require computation, instrumentation, and control system design [1][2][3]. Conventionally, function generation is realized using look-up tables (LUT) [4] or polynomial expansion and approximation methods [5][6]. One of the major disadvantages of the LUT method is that it requires many hardware resources; thus, it is used solely in digital implementations. On the other hand, methods relying on approximations such as Taylor and Padé approximants are mainly intended for analog circuit realizations, which can be classified into two main groups: circuits realized with transistors operating in sub-threshold mode, usually destined for low-frequency applications [7], and circuits using transistors operating in the strong inversion region [8]. Besides the well-known limitations of circuits designed with transistors operating in sub-threshold, the most important limitation of all these solutions is the limited accuracy. In order to increase the accuracy, the approximation order must be increased at the cost of increased circuit complexity. Obviously, this leads to increased chip area, which makes such circuits unattractive for VLSI design. One of the most commonly used nonlinear functions is the hyperbolic tangent, which can be expressed in terms of the exponential function. In this paper, we aim to design a hyperbolic tangent function generator with high accuracy and minimal circuit complexity using an appropriate approximation. For this purpose, the circuit realization is based on a Padé approximation, considering the following design criteria: i) Since the hyperbolic tangent function generator is realized in the strong inversion region, it operates over a much wider frequency range than the sub-threshold circuits proposed in the literature [9]. In neural network applications, signals are usually applied to a neuron in the form of pulses, or spikes, which requires wide-bandwidth operation of the neurons. These requirements may be difficult to achieve with sub-threshold circuits.
ii) The degree of approximation must be very high to achieve good accuracy in analog circuits, which in turn complicates the circuit implementation. For this reason, it is necessary to devise a special approximation tailored to the hyperbolic tangent function, rather than using a general approximation (Taylor, Padé, etc.) of the desired function.
One of the application areas of the hyperbolic tangent function is analog neural network (ANN) implementations. In ANN circuits, one of the most common functions used to perform the activation process is an S-shaped function producing outputs in the range −1 to 1, which is perfectly realized by the hyperbolic tangent function. There are many circuit examples designed to perform the hyperbolic tangent function [10][11]. In this work, an ANN that performs character recognition is also used to demonstrate the usefulness of the proposed circuit. For this purpose, the proposed hyperbolic tangent circuit is used to realize the activation process. All simulation results are obtained in UMC 0.18 µm CMOS technology. The paper is organized as follows: the realization of the hyperbolic tangent function and its circuit implementation are given in Sections 2 and 3, respectively. Simulation results of the proposed circuits and a comparison with previous works are presented in Section 4. The application of the proposed hyperbolic tangent function generator is described in Section 5. Finally, the conclusion is presented in Section 6.
The Design of Hyperbolic Tangent Function Generator
As is well known from the literature, the hyperbolic tangent function can be expressed in terms of the exponential function as follows:

tanh(x) = (e^x − e^{−x}) / (e^x + e^{−x}). (1)

Besides this, the exponential function has very effective and well-known approximations [12][13]. Among these, Taylor and Padé approximations constitute the two mainstreams in the design of exponential function generators, the former being a polynomial-type approximation. Here, m and n denote the numerator and denominator degrees of the Padé approximant in Eqn. (3), respectively [14]. It should be noted that the choice of a polynomial or rational approximation affects the performance of the resulting function generator drastically. In general, the use of a Padé approximant leads to more complicated but more accurate systems, as the rational function captures the fast variation of the exponential function for finite x values, thanks to the denominator term. The design of the hyperbolic tangent function generator circuit is considered based on the following two criteria: i) the circuit complexity of the resulting function generator depends on the mathematical complexity of the approximating function, that is, the orders of the numerator and denominator polynomials in Eqn. (3); ii) in addition to the simplicity of the circuit design, the accuracy of the function generator's response over a wide input and output range is also very important. In this paper, the proposed realization of the hyperbolic tangent function is based on the expression in Eqn. (1) and the exponential Padé approximants defined in Eqn. (3). By substituting the approximation (3) into (1) and rearranging terms, the simple approximation of the hyperbolic tangent function given in Eqn. (4) is obtained. Note that the proposed approximation is obtained by realizing each exponential term in Eqn. (1) as a product whose first and second factors are symmetrical exponential Padé approximants of order [m/n] and [n/m], respectively. Owing to the use of symmetrical Padé approximants, the individual errors of the two approximants cancel each other, leading to a more accurate approximation. Moreover, this choice yields a mathematically simpler expression, which in turn leads to a simpler circuit realization. These advantages are discussed further in Section 2.2.
Fig.1 Comparison of Padé approximation with symmetrical pairs, arbitrary Padé approximation and Taylor series approximation
The Matlab simulation results obtained using Taylor and Padé approximants are shown in Fig. 1. In order to obtain a meaningful comparison, the degrees of the numerator and denominator polynomials of the different approximations are taken as equal. As can easily be seen from these results, the symmetrical Padé approximation gives a much more accurate result than the Taylor approximation over a wide range.
Advantages of the Proposed Approach
Due to the natural characteristics of the hyperbolic tangent function, symmetrical rational Padé pairs are selected for the approximation, yielding both a much more accurate result and a simpler circuit implementation. Symmetrical Padé approximants can be expressed in a common rational form, as a result of which the hyperbolic tangent function itself assumes a rational form. Writing the even and odd parts of the polynomial A(x) as E(x) and O(x), respectively, the approximation of the hyperbolic tangent function is obtained as the ratio of the odd to the even part; therefore, all even-order terms in the numerator and all odd-order terms in the denominator vanish. The use of the symmetrical Padé pair thus leads to the simplest algebraic approximation compared to approximations composed using all other Padé combinations. On the other hand, in order to demonstrate the usefulness of the proposed method, the error function of the proposed approximation can be obtained in terms of the error functions ε1 and ε2 of the two Padé approximants, and similarly the error function of the hyperbolic tangent approximation can be calculated. It should also be noted that the error functions ε1 and ε2 are symmetric about x = 0. Therefore, ε1 and ε2 cancel each other for small x, and the error of the tanh approximation remains small. Although for large values of x the error functions lose their symmetry, ε1 + ε2 still remains smaller than either ε1 or ε2. The corresponding errors are given in Fig. 2. Therefore, although the error is small in the case of symmetrical Padé approximants, the values of the parameters m and n are still important and affect the approximation error. Obviously, there is a trade-off between the complexity of the approximating function and the circuit complexity; the value n+m can be taken as a measure of the complexity of the Padé approximation. For a more complicated function, i.e., for larger values of m+n, the approximation error is smaller, but the resulting circuit is more complicated. However, as m and n are increased, the rate of error reduction may not be as high as expected. To be specific, consider two cases, the first using Padé approximants with m=2, n=1 and the second with m=3, n=1; the errors of these symmetrical Padé approximants are compared in Fig. 3. For a limited input range, the symmetrical Padé pair with m=2, n=1 approximates the hyperbolic tangent function with smaller error than the more complicated approximation with m=3, n=1. This provides an important advantage, as the resulting circuit is considerably simpler. Obviously, for a larger input range the more complicated approximation yields better results, at the price of a more complicated circuit realization. Nevertheless, the degrees of the symmetrical Padé approximants in Eqn. (4) are set to m=2 and n=1. With these parameters, the proposed approximation of the hyperbolic tangent function is obtained as follows:

tanh(x) ≅ (x^3 + 15x) / (6x^2 + 15). (13)
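Assuming the reconstructed approximant in Eqn. (13), its accuracy is easy to check numerically; the evaluation range and sample grid below are arbitrary choices, not values from the paper:

```python
import numpy as np

def tanh_pade(x):
    """Rational approximation of tanh from Eqn. (13): x(x^2+15)/(6x^2+15)."""
    return (x**3 + 15.0 * x) / (6.0 * x**2 + 15.0)

x = np.linspace(-3.0, 3.0, 601)
err = np.abs(tanh_pade(x) - np.tanh(x))
print(f"max |error| on [-1, 1]: {err[np.abs(x) <= 1].max():.2e}")
print(f"max |error| on [-3, 3]: {err.max():.2e}")
```

On a unit input range the error stays at the 1e-4 level, while it grows toward the percent level by |x| = 3, consistent with the paper's point that the m=2, n=1 pair is attractive for a limited input range.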
Implementation of the Hyperbolic Tangent Function Generator
Many squarer circuits proposed in the literature realize the following function [15][16][17][18]: (14), where IB is a scaling bias current. The architecture of the proposed hyperbolic tangent function generator is depicted in Fig. 4. The structure consists of two current-mode squarers, each realizing the function in Eqn. (14), and a current-mode analog multiplier. By comparing Eqns. (16) and (17), it is seen that the circuit indeed realizes the approximate hyperbolic tangent function for x = Iin/IB and y = Iout/IB. As can be seen from Fig. 4, the circuit takes three input currents, which are produced by a current steering/replicator block implemented with simple current mirrors. The scaling coefficient of 1/6 is realized by adjusting the transistor sizes of the involved current mirrors, without any additional circuitry.

Fig.4 Architecture which realizes the hyperbolic tangent function generator for m=2, n=1 (β1=15 and β2=15/6)
For comparison, the hyperbolic tangent function generator has also been redesigned using an alternative symmetrical Padé pair with the parameters m=3 and n=1. The hyperbolic tangent approximation obtained using the approach employed in Section 2 is given as follows:
Fig.5 Architecture which realizes the hyperbolic tangent function generator with the symmetrical Padé pair for m=3, n=1
For x = Iin/IB and y = Iout/IB, the hyperbolic tangent function can be expressed in terms of currents as follows. If this hyperbolic tangent function is built from squarer and multiplier subcircuits, the resulting architecture is as shown in Fig. 5. As can be seen from this figure, the circuit consists of 5 squarer and 2 multiplier subcircuits, and therefore this realization requires a considerably more complicated circuit.
Current-Mode Squarer Subcircuit
As can be seen from Fig. 6, squarer subcircuit-1 is a well-known current-mode circuit from the literature [17]. It is based on a translinear loop (M1-M4), and all transistors in the circuit are assumed to operate in the saturation region, for which the drain current is given by the square-law expression in Eqn. (20), where the transistor parameters have their usual meanings. By applying Kirchhoff's voltage law around the loop consisting of transistors M1-M4, using the square-law relationship given in Eqn. (20), and assuming that the transistors are perfectly matched and have the same transconductance parameter, we obtain the relation (21) for the transistors' drain currents. By substituting (20) into (21) and considering IDS1 = IDS2 = IB1, the output current is obtained, with α1 = 1 and β1 = 15. Note that since IB is the bias current, the value β1·IB is easily provided by scaling the corresponding current. The output of the second squarer circuit is obtained likewise, with α2 = 1 and β2 = 15/6; this value can be obtained by appropriate scaling of the bias current.
Multiplier Subcircuit
This subcircuit, which performs the multiplication operation, can easily be realized according to the relation given in [19]. As can also be seen from Fig. 6, the circuit is based on two translinear loops and achieves the required squaring operations by means of these loops. The first loop, formed by transistors M21-M24, provides the squaring function (X+Y)^2, and the second loop, formed by transistors M25-M28, provides the squaring function (X−Y)^2. The equations realized by the first and second loops can be written accordingly in terms of the loop currents.
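The two loops together implement the classical quarter-square identity. The following minimal sketch (plain arithmetic, not a device-level model) makes the principle explicit:

```python
def quarter_square_multiply(x, y):
    # (X + Y)^2 - (X - Y)^2 == 4*X*Y: subtracting the outputs of the two
    # squaring loops and scaling by 1/4 yields the desired product.
    return ((x + y) ** 2 - (x - y) ** 2) / 4.0

print(quarter_square_multiply(3.0, -2.5))  # -7.5, equal to 3.0 * (-2.5)
```

The attraction of this decomposition in a current-mode circuit is that only squarers and a subtraction (a current-mirror operation) are needed, rather than a dedicated four-quadrant multiplier core.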
The Comparison and Simulation Results
In order to demonstrate the usefulness of the proposed circuit, the hyperbolic tangent function generator is designed for implementation in 0.18 µm UMC technology under a supply voltage of ±1 V. For proper operation, the bias current is chosen as IB = 5 µA. In order to evaluate the simulation results, the nonlinear transfer characteristic of the circuit is compared with the ideal hyperbolic tangent function, defined as Iout = α·tanh(β·Iin). As can be seen from Fig. 7a, the ideal characteristic obtained for α = 1.088 and β = 7800 agrees well with the simulation results. The average error is found to be 3%. According to these results, it can easily be said that the proposed hyperbolic tangent function generator gives a much more accurate result than the other analog circuit generators in references [9,22], at the cost of more complicated circuitry. Besides this, in order to provide a fair comparison with other circuits in the literature in terms of area, accuracy, and power consumption, the circuit layout has been drawn, as shown in Fig. 7b. Crucial performance figures of the proposed circuit and recently presented works are tabulated in Table 1.
As is known from the literature, digital-only solutions (LUT) for such circuits suffer from large chip area and large power consumption [20]. As a remedy, hybrid realizations have been proposed, which achieve better accuracy while providing acceptable chip area and power consumption [21]. Accordingly, as shown in Table 1, the proposed circuit offers a more advantageous solution in terms of area and accuracy compared to the hybrid circuit. Besides this, it is clear that the proposed circuit provides much better error performance compared to the other analog circuits. The power consumption of the proposed circuit is found to be 244 µW.
Application
In today's state-of-the-art technologies, the realization of artificial neural networks (ANNs) featuring a large number of neurons and a huge number of interconnections between them poses significant challenges to integrated circuit designers. Therefore, many studies on the application and implementation of ANNs have been presented in the literature [22][23]. The general scheme of an ANN is given in Fig. 8a. In order to illustrate the usefulness of this work, the proposed hyperbolic tangent function generator is used as the activation function in the ANN circuit of Fig. 8a. The required weighted-summing operations are realized using current mirrors in which the weights are set by adjusting transistor aspect ratios. The resulting circuit is shown in Fig. 8b. This circuit, with one hidden layer, operates in current mode and is used for character recognition, with inputs set up to represent the letters of the alphabet from "A" to "E". These characters are expressed in a 5×3 matrix and represented by black and white patterns; the representation of these letters is shown in Fig. 9. The training of the network and the derivation of the proper weights are carried out in MATLAB. These values are inserted into the proposed CMOS circuit using the Cadence simulation program. Thus, by applying the input set corresponding to each character, the characters corresponding to the output values are obtained. It has also been tested whether the ANN is capable of generalization and is fault tolerant. For this purpose, some incorrect input values were deliberately applied to the network, and it was determined that the network generates correct results despite these errors. Therefore, the ANN in which the proposed CMOS circuit performs the activation function can be said to be fault tolerant.
Conclusion
In this study, a new circuit that realizes the hyperbolic tangent function characteristic is proposed using Padé approximation. With the proposed approach, the hyperbolic tangent function is realized in an algebraically simple form; thus, the circuit implementation of the tanh function is also reduced to a much simpler form. In order to demonstrate the usefulness of the proposed circuit, it is used as the activation function in an ANN with 15 inputs and 5 outputs performing character recognition. The proposed circuit is based on the translinear principle and consists of subcircuits such as squarers and a multiplier. Comparing the proposed circuit with those given in the literature, it can easily be said that the circuit occupies a small area while providing an accuracy of 3%. The proposed circuit has been applied to a simple character recognition problem, which is well known in the literature, and its usefulness and applicability have been demonstrated.
"Computer Science"
] |
CD9 Tetraspanin Interacts with CD36 on the Surface of Macrophages: A Possible Regulatory Influence on Uptake of Oxidized Low Density Lipoprotein
CD36 is a type 2 scavenger receptor with multiple functions. CD36 binding to oxidized LDL triggers signaling cascades that are required for macrophage foam cell formation, but the mechanisms by which CD36 signals remain incompletely understood. Mass spectrometry analysis of anti-CD36 immunoprecipitates from macrophages identified the tetraspanin CD9 as a CD36-interacting protein. Western blot showed that CD9 was precipitated from mouse macrophages by an anti-CD36 monoclonal antibody and that CD36 was likewise precipitated by anti-CD9, confirming the mass spectrometry results. Macrophages from cd36 null mice were used to demonstrate specificity. Membrane association of the two proteins on intact cells was analyzed by confocal immunofluorescence microscopy and by a novel cross-linking assay that detects proteins in close proximity (<40 nm). Functional significance was determined by assessing lipid accumulation, foam cell formation, and JNK activation in wt, cd9 null, and cd36 null macrophages exposed to oxLDL. OxLDL uptake, lipid accumulation, foam cell formation, and JNK phosphorylation were partially impaired in cd9 null macrophages. The present study demonstrates that CD9 associates with CD36 on the macrophage surface and may participate in macrophage signaling in response to oxidized LDL.
Much of the function of CD36 depends on ligand-induced triggering of specific intracellular signaling cascades. For example, TSR-containing proteins inhibit angiogenesis by inducing a CD36-dependent pro-apoptotic signal in microvascular endothelial cells via direct activation of Fyn, p38 MAP kinase, and caspase-3 [14], as well as up-regulation of the Fas and TNFα mediated apoptotic pathways [15,16]. On macrophages, oxLDL induces CD36-mediated recruitment and activation of Lyn and activation of Vav family guanine nucleotide exchange factors and c-Jun N-terminal kinase (JNK)-2 [17,18,19]. These pro-atherogenic pathways are required for internalization of oxLDL, foam cell formation, and inhibition of migration. CD36-mediated activation of platelets shares features with the macrophage pathway in that Lyn, JNK2, and Vav are all activated by CD36 in a ligand-dependent manner, providing a mechanistic link between oxidant stress, inflammation, and thrombosis [20,21,22,23].
The precise mechanisms of CD36-mediated cell signaling are incompletely understood. It has 2 very short intra-cytoplasmic domains and no inherent intracellular enzymatic activity, but its carboxy-terminal cytoplasmic domain has been shown to interact with intracellular signaling proteins, including src-family kinases and MAP kinase kinases [17]. Mutations or deletions in the carboxy-terminal domain abolish signaling responses in transfected cells [24,25]. Several aspects of CD36 function and signaling are known to require functional and/or physical association with other membrane receptors, including integrins and toll-like receptors (TLR) [26,27]. For example, uptake of apoptotic cells by dendritic cells and uptake of shed photoreceptor outer segments by retinal pigment epithelial cells involve both CD36 and αVβ5 integrin [28,29]. Certain aspects of uptake and signaling by microbial cell wall glycolipids require both CD36 and TLR-2 containing complexes, and a CD36-TLR4-TLR6 pathway has been implicated in microglial responses to oxLDL and amyloid-β [30]. The structural mechanisms by which CD36 serves as a membrane coreceptor are not well understood, but may relate in part to colocalization in membrane microdomains.
The tetraspanin family of membrane proteins has recently been implicated in cell signaling via their ability to compartmentalize other membrane proteins including integrins, along with intracellular signaling molecules, such as small molecular weight GTP binding proteins, in plasma membrane domains [31,32]. Tetraspanins are a widely expressed, highly conserved group of more than 30 proteins that span the plasma membrane 4 times and that contain a conserved cysteine motif in their cytoplasmic amino and carboxy terminal domains [33]. Specific tetraspanins have been shown to regulate cell adhesion, migration, activation and proliferation in inflammation, immune responses, hemostasis/ thrombosis, cancer metastasis, and sperm-egg fusion. Previous studies indicated that the tetraspanin CD9 could be coimmunoprecipitated with CD36 from human platelets or endothelial cells [34,35], but no functional significance was identified. We therefore tested the hypothesis that CD9 on macrophages would interact with CD36 and contribute to CD36-mediated functional responses. Using a combination of proteomic, immunolocalization and functional approaches we now report that macrophage CD9 associates with CD36 on the cell surface and participates in CD36-dependent uptake of oxLDL.
Co-precipitation of macrophage CD9 and CD36 by monoclonal antibodies
In preliminary experiments we used mass spectrometry to identify proteins immunoprecipitated from mouse peritoneal macrophage lysates by a monoclonal anti-CD36 IgA. The precipitates were analyzed by SDS-PAGE and then subjected to LC-MS. Multiple CD36 peptides were detected in the appropriate MW region of the gels, and in the lowest molecular weight region we identified four specific peptides representing 21% amino acid coverage of CD9. CD9 peptides were not detected in immunoprecipitates from cd36 null macrophages, demonstrating specificity. To confirm and validate these results, we performed specific IPs followed by immunoblot assays. As shown in Figure 1, CD9 was detected in the anti-CD36 IP from wt but not cd36 null cells (Panel A). Similarly, CD36 was detected in the anti-CD9 IP from wt cells (Panel B). Isotype-matched control antibodies were used as controls in all studies. To further demonstrate specificity, we performed an IP with an antibody to an irrelevant macrophage surface protein, CD31, and found no evidence by western blot of co-precipitated CD36. Similarly, anti-CD36 IPs did not contain detectable CD31 (not shown).
CD9 and CD36 co-localize on the macrophage cell surface
Because of potential artifacts introduced by detergent lysis of membrane proteins, we also examined CD9 and CD36 association by immunofluorescence microscopy. The confocal images shown in Figure 2A demonstrate that both CD9 and CD36 are densely expressed on the macrophage plasma membrane in a "ring" pattern. The merged image shown in the far right panel shows nearly complete overlap of fluorescence from the two markers. We then used a Proximity Ligation Cross-Linking Assay (Olink, Inc.) with anti-CD9 and anti-CD36 antibodies derived from two different species (rabbit and mouse). In this system, species-specific secondary antibodies conjugated to unique DNA strands that template hybridization of specific oligonucleotides are added, and when in close proximity (<40 nm) the oligonucleotides can be ligated to form a circular template. The template can then be amplified and detected using specific complementary oligonucleotide probes tagged with fluorescent labels. Single-molecule protein-protein interaction events are visualized as distinct fluorescent spots. Figure 2B, panel a, shows distinct spot formation in wt macrophages using this system with anti-CD9 and anti-CD36 antibodies. To show specificity, we demonstrated that no spots were formed on cd36 null cells with these antibodies (Figure 2B, panel b) and that no spots were formed when CD31 or CD40 antibodies were used instead of anti-CD9 on wt cells (panels c and d). To confirm these results, we also used FITC-labeled anti-CD36 and biotin-labeled anti-CD9 mouse antibodies, or biotin-labeled anti-CD31 rat IgG, as primary antibody sets to repeat the experiment with secondary anti-FITC and anti-biotin antibodies for detection. The results were similar (not shown). These studies thus show that CD9 and CD36 are in close proximity to each other (within 40 nm) on the surface of macrophages.
CD9 participates in CD36-mediated macrophage functions
To investigate the role of CD9 in the biological functions of CD36, we first studied oxLDL uptake and foam cell formation using macrophages obtained from cd9 null mice. For these studies we used a form of oxidized LDL highly specific for CD36 (termed NO2-LDL) that is generated by incubating human LDL with a myeloperoxidase/nitrite-based oxidizing system. In a short-term experiment using DiI-labeled NO2-LDL, we found that fluorescence uptake at 15-60 minutes was moderately decreased in cd9 null macrophages compared to wt macrophages (Figure 3A). To determine the quantitative impact of this defect on foam cell formation, we incubated wt and cd9 null macrophages with NO2-LDL for 16 hours. Cells were then stained with Oil Red O, and neutral lipid content was assessed by extracting and quantifying the dye. Figure 3B shows, as expected, that cd36 null macrophages accumulated little or no lipid, and that the cd9 null cells accumulated significantly less than wt, but more than the cd36 null cells. To confirm these results we also assayed total cholesterol content in the cell lysates (Figure 3C) and showed a 26% decrease in cd9 null cells compared to wt cells (p = 0.02). No differences among the genotypes were seen in cells incubated with native LDL. Flow cytometry assays with monoclonal anti-CD9 IgG showed that the level of surface expression of CD9 was not changed in cd36 null macrophages (data not shown; p = 0.8).
Previous studies from our lab revealed that phosphorylation of the MAP kinase JNK is a proximal event in CD36 signaling in macrophages and that JNK inhibition blocks CD36-mediated uptake of oxidized LDL [17]. We thus tested the hypothesis that CD9 contributes to CD36 signaling by examining the extent and kinetics of JNK phosphorylation in cd9 null macrophages after exposure to NO2-LDL. Figure 4 shows western blots with an antibody specific to phospho-JNK. Both JNK1 and JNK2 were phosphorylated in wt cells, with an approximately 6-fold increase seen at 15 minutes. Phosphorylation was still increased more than 4-fold at 30 minutes. Interestingly, in the cd9 null cells NO2-LDL incubation induced a similar degree of JNK activation as in wt at 15 minutes, but by 30 minutes there was significantly less activation in the cd9 null cells, suggesting that CD9 might regulate this pathway. As expected, minimal JNK phosphorylation was seen in cd36 null cells.
Discussion
The tetraspanin CD9 (Tspan29) is expressed on platelets, macrophages, vascular endothelial and smooth muscle cells, neuronal cells, fibroblasts, oocytes, and some epithelial cells [33]. It is among the best studied of the tetraspanins and has been shown to regulate several biologically important cellular functions, including sperm-egg fusion [36], and adhesion, proliferation, and migration of nucleated cells. It is densely expressed on platelets, where it appears to play a role in modulating and stabilizing aggregation. The mechanisms by which CD9 and other tetraspanins regulate cell functions remain incompletely understood, but the prevailing model is that they associate with one another and with other membrane proteins to form a "tetraspanin web" that clusters specific membrane components and intracellular signaling molecules into microdomains that facilitate signal transduction [31]. Interaction of CD9 with specific β1 and β3 integrins has been shown to regulate fertilization [37], migration, adhesion, and platelet aggregation. In addition to integrins, CD9 also associates with the Ig superfamily adhesion molecule ICAM and with membrane-associated growth factors.
Although Maio et al. previously showed that CD9 could be co-immunoprecipitated with CD36 in human platelet lysates [34], and Kazerounian et al. recently reported the association of CD36 with the tetraspanins CD9 and CD151 in endothelial cells [35], a functional role for the interaction was not shown, nor was it shown whether CD9 and CD36 co-localize in intact cells. In this report we show, with several different experimental approaches, that CD9 and CD36 co-associate on macrophage cell membranes. Immunoprecipitation with monoclonal antibodies to either protein precipitated the other, and immunofluorescence microscopy using a novel "proximity ligation cross-linking assay" demonstrated that the two proteins are closely associated (within 40 nm) on the surface of the cells. Most protein interactions involving tetraspanins are not due to direct binding between specific peptide domains, with the exception that the second extracellular domain of CD9 has been shown to bind directly to integrins [33]. Whether CD9 and CD36 bind to each other directly remains to be determined.
Our studies also suggest that the CD36 signaling pathways triggered by oxLDL, which lead to cholesterol accumulation and foam cell formation, may be facilitated in part by its association in tetraspanin webs. Genetic deletion of CD9 did not abolish foam cell formation, but oxLDL uptake was modestly decreased, as were total lipid and cholesterol accumulation. Interestingly, in the absence of CD9, CD36-mediated activation of JNK was altered, with more rapid loss of phosphorylation and thus presumably more rapid termination of the signal. JNK activation is a critical step in foam cell formation and atherogenesis, as inhibition or deletion of JNK has been shown by our group to block CD36-mediated oxLDL uptake [17] and by others to inhibit atherosclerosis in an apoe null mouse model [38]. We also showed that JNK inhibition in platelets blocked CD36-mediated pro-thrombotic responses [21]. Thus, modulation of the dynamics of CD36-mediated JNK activation by CD9 could account for the differences seen in oxLDL uptake and foam cell formation in cd9 null macrophages. The mechanisms responsible for the alteration in JNK phosphorylation kinetics in the absence of CD9 remain to be determined, but possibilities include changes in the recruitment of src-family and/or MAP kinases to the CD36 signaling complex or alteration of phosphatase function. Our studies showing less cellular lipid accumulation in the absence of CD9 are also consistent with reports that tetraspanins can traffic between the plasma membrane and intracellular vesicular compartments and therefore potentially regulate internalization pathways [39].
In summary, we showed that CD9 and CD36 co-associate on the macrophage surface, suggesting that CD36 may be part of the tetraspanin web. Loss of this association by genetic deletion of CD9 led to a modest but statistically significant decrement in CD36mediated signaling in response to oxLDL and a concomitant modest decrease in lipid accumulation and foam cell formation.
Materials and Methods
Animals, antibodies and other reagents

cd36 null mice [40] and cd9 null mice [41] were described previously. All mouse studies were approved by the Cleveland Clinic Institutional Animal Care and Use Committee (Approval ID ARC08938). Peritoneal macrophages were obtained by lavage 4 d after injection with thioglycollate, and adherent cells were maintained in culture. Cell culture reagents were purchased from Invitrogen, CA, USA. Antibodies to the phosphorylated forms of JNK1/2 and to total JNK1/2 were from Cell Signaling, Beverly, MA. Unlabeled or biotin-conjugated mouse monoclonal anti-CD9 antibody was from BD Biosciences, CA, USA. Rabbit anti-CD9 monoclonal antibody was purchased from Epitomics, CA, USA. Mouse anti-mouse CD36 IgA was prepared as previously described [28]. Rat anti-mouse CD36 IgG was a kind gift from Prof. Laura Helming (Munich, Germany) [42]. Rabbit anti-CD36 antibody was from Novus Biologicals, CO, USA. Anti-CD31 and anti-CD40 for the proximity ligation cross-linking assay were from BD Biosciences, CA, USA. LDL was isolated from human plasma as previously described [17] and oxidized with a myeloperoxidase-based system as previously described [3]. In some experiments, LDL was exposed to all elements of the system except the oxidant to create control non-oxidized LDL. All chemicals were obtained from Sigma (St. Louis, MO, USA) unless otherwise indicated.
Co-Immunoprecipitation (Co-IP)
Mouse monoclonal anti-mouse CD36 IgA was coupled to NHS-activated agarose beads (GE Life Sciences, NJ, USA) according to the manufacturer's instructions. Peritoneal macrophages were treated with dithiobis(succinimidyl propionate) and then lysed in 1% CHAPS in buffer made up of 50 mM Tris-HCl (pH 7.5), 150 mM NaCl, 1 mM EDTA, 1 mM EGTA, 2.5 mM sodium pyrophosphate, 1 mM β-glycerophosphate, 1 mM Na3VO4, and a broad-spectrum protease inhibitor cocktail (Roche Applied Science, IN, USA). Lysates were centrifuged at 12,000 g for 10 min, and supernatants containing 750 μg of protein were incubated with antibody beads, rotating overnight at 4 °C. After extensive washing, beads were boiled in SDS-PAGE loading buffer and the bound material run on SDS-PAGE for further analysis.
Mass Spectrometry
Lanes from SDS-PAGE gels prepared as above from wt and cd36 null macrophages were cut horizontally into 10 sections. The gel pieces were then reduced with DTT and alkylated with iodoacetamide before digestion with trypsin overnight. Peptides were then extracted from the gel slices and the extracts evaporated to ~30 μl for LC-MS analysis using a Finnigan LCQ ion trap mass spectrometer system. The HPLC column was a self-packed 8 cm × 75 μm internal diameter Phenomenex Jupiter C18 reverse-phase capillary chromatography column. Peptides were eluted from the column by an acetonitrile/0.05 M acetic acid gradient and introduced into the mass spectrometer on-line. The microelectrospray ion source was operated at 2.5 kV. Data were analyzed using all CID spectra collected to search NCBI databases with the search program Mascot.

Immunoblot

For co-IP studies, proteins from SDS-PAGE gels prepared as described above were transferred to PVDF membranes (Bio-Rad, CA, USA) and probed with specific antibodies to CD36 and CD9 using a chemiluminescence-based detection system (GE Life Sciences). In some studies the IP was done with anti-CD9 beads instead of anti-CD36. For studies of JNK activation, cells were treated with oxidized LDL (50 μg/ml) for timed periods and then washed twice in ice-cold PBS before lysis in 50 mM Tris-HCl (pH 7.5), 150 mM NaCl, 1 mM EDTA, 1 mM EGTA, 1% NP-40, 0.5% sodium deoxycholate, 2.5 mM sodium pyrophosphate, 1 mM β-glycerophosphate, 1 mM Na3VO4, and proteinase inhibitor cocktail. After centrifuging at 12,000 g for 10 minutes, the cleared supernatants were run on SDS-PAGE, transferred onto PVDF membranes, and probed with antibodies to phospho-JNK using a chemiluminescence detection system. Blots were stripped and re-probed with antibodies to control proteins (β-actin or JNK) to assess loading. For quantification, blots were scanned and band densities determined using NIH ImageJ software.
Immunofluorescence microscopy
Peritoneal macrophages from wt mice were seeded on coverslips and cultured in RPMI 1640 medium supplemented with 10% FCS. Attached cells were fixed in 4% formaldehyde and then incubated with FITC-labeled monoclonal anti-CD36 IgA (Cayman Chemical, MI, USA) and/or unlabeled anti-CD9 antibody followed by Alexa 594-labeled goat anti-rabbit antibody (Invitrogen, CA, USA). Cells were then counterstained with DAPI to detect nuclei and analyzed by laser confocal fluorescence microscopy.
Proximity Ligation Cross-linking Assay
Fixed peritoneal macrophages prepared as above were incubated with rabbit anti-CD9 and rat anti-CD36 monoclonal antibodies. Coverslips were then washed and incubated with species-specific secondary antibodies (Duolink; Olink, Inc.) conjugated to unique DNA strands that serve as templates for hybridization of specific oligonucleotides. The oligonucleotides were then added as per the manufacturer's protocol, along with a ligase, to form a circular template. The anchored template was then amplified and detected using complementary fluorescently labeled probes. Distinct spots representing single-molecule protein interaction events were visualized using a laser confocal fluorescence microscope.
oxLDL uptake and foam cell formation

Peritoneal macrophages from wt, cd9 null, and cd36 null mice adherent to coverslips were incubated with DiI-labeled NO2-LDL (10 μg/ml) for timed periods up to 60 minutes at 37 °C. Cells were then fixed in 4% formaldehyde and internalized fluorescence examined by confocal microscopy. In other studies, cells were cultured in 12-well plates, incubated with 50 μg/ml unlabeled NO2-LDL for 16 hours, and then fixed with 4% formaldehyde and stained with Oil Red O to detect neutral lipids. After washing away non-bound dye, the internalized Oil Red O was extracted in methanol and quantified by absorbance at 520 nm using a 96-well plate reader (SpectraMax 190, Molecular Devices). Total cellular cholesterol content was also assessed in parallel cultures using a commercial kit (Cayman Chemical, MI, USA).
Statistical analysis
In vitro assays were performed in quadruplicate cultures. All experiments were done using macrophages from at least three mice per group. All numerical results are expressed as mean ± SEM. Statistical differences were determined by Student's t test.
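As a sketch of this analysis step, a two-sample Student's t test on quadruplicate readings might look as follows; the numbers below are purely illustrative placeholders, not measured data from the study:

```python
import numpy as np
from scipy import stats

# Hypothetical Oil Red O absorbance readings (A520) from quadruplicate
# cultures of wt and cd9-null macrophages; illustrative values only.
wt = np.array([0.82, 0.79, 0.85, 0.81])
cd9_null = np.array([0.63, 0.66, 0.60, 0.65])

t_stat, p_value = stats.ttest_ind(wt, cd9_null)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```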
"Biology",
"Medicine"
] |
Adsorption of Se(IV) in aqueous solution by zeolites synthesized from fly ashes with different compositions
Low-calcium fly ash (LC-F) and high-calcium fly ash (HC-F) were used to synthesize the corresponding zeolites (LC-Z and HC-Z), which were then used for adsorption of Se(IV) in water. The results showed that the synthesized zeolites can effectively adsorb Se(IV). The optimal adsorption conditions were: contact time = 360 min; pH = 2.0; adsorbent dosage = 5.0 g·L⁻¹; temperature = 25 °C; initial Se(IV) concentration = 10 mg·L⁻¹. The removal efficiency of HC-Z was higher than that of LC-Z after full reaction because the specific surface area (SSA) of HC-Z was higher than that of LC-Z. The adsorption kinetics of Se(IV) uptake by HC-Z followed the pseudo-second-order model. The Freundlich isotherm model agreed better with the equilibrium data for both HC-Z and LC-Z. The maximum Se(IV) adsorption capacity was 4.16 mg/g for HC-Z and 3.93 mg/g for LC-Z. Among coexisting anions, SO₄²⁻ barely affected Se(IV) removal, while PO₄³⁻ affected it significantly. Regenerated zeolites still had a high capacity for Se(IV) removal. In conclusion, zeolites synthesized from fly ashes are a promising material for adsorbing Se(IV) from wastewater, and selenium-loaded zeolite has the potential to be used as a Se fertilizer to release selenium in Se-deficient areas.
INTRODUCTION
Selenium is an essential trace element with multiple biological functions in many organisms, including humans. In recent years, wastewater treatment using zeolite as a low-cost adsorbent has been examined by many researchers.
It is found that zeolite, either natural or synthetic, can improve water quality and wastewater treatment effectiveness by removing substances such as heavy metals.
Zeolite synthesis
The low-calcium and high-calcium fly ashes (LC-F and HC-F) used in this study were obtained from a power plant located in Hebei Province in China. An alkaline fusion method followed by a hydrothermal treatment was adopted for the synthesis of zeolites (Wang et al.; Zhang et al. a). In brief, 10 g of fly ash was mixed with 12 g of NaOH powder (analytical reagent grade) to obtain a homogeneous mixture; the mass ratio of fly ash to NaOH powder was 1:1.2 (w/w). The homogeneous mixture was then heated in a nickel crucible in air at 600 °C for 180 min. The fusion products were ground and poured into a flask, to which distilled water was added to form a mixture with a fusion-product-to-water mass ratio of 0.1725 (w/w). The mixture was stirred intensely at 80 °C for 2 h to form an aluminosilicate gel and was subsequently poured into a stainless-alloy autoclave and kept in an oven at 100 °C for 9 h. After the hydrothermal treatment, solid samples were extracted and then washed thoroughly with distilled water until their pH was less than 10. The resultant solid products were dried at 100 °C for 12 h and ground to pass through a 100-mesh sieve for further use. The amount of Se(IV) adsorbed by the synthesized zeolites, q (mg/g), and the removal efficiency were calculated using Equations (1) and (2), respectively:

q = (C0 − Ce)V/W (1)

Removal efficiency (%) = (C0 − Ce)/C0 × 100 (2)

where C0 and Ce are the initial and equilibrium Se(IV) concentrations of the test solution (mg/L), respectively, V is the test solution volume (L), and W is the mass of the adsorbent (g).
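A direct transcription of Equations (1) and (2), as reconstructed above, with illustrative (non-measured) values:

```python
def adsorption_capacity(c0, ce, volume, mass):
    """Eqn. (1): q = (C0 - Ce) * V / W, in mg adsorbate per g adsorbent."""
    return (c0 - ce) * volume / mass

def removal_efficiency(c0, ce):
    """Eqn. (2): percentage of Se(IV) removed from solution."""
    return (c0 - ce) / c0 * 100.0

# Illustrative values: 10 mg/L initial Se(IV), 0.05 L solution, 0.25 g zeolite.
print(adsorption_capacity(10.0, 2.0, 0.05, 0.25))  # 1.6 mg/g
print(removal_efficiency(10.0, 2.0))               # 80.0 %
```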
Adsorbent regeneration
To investigate the practical reusability of the synthesized zeolites as a potential adsorbent, a regeneration experiment was carried out using 0.1 M NaOH (Bleiman & Mishael). In this experiment, 50 mL of the NaOH solution was used.
RESULTS AND DISCUSSION
Characterization of fly ashes and synthesized zeolites

The composition (XRF), XRD patterns, and SEM images of the fly ashes and synthesized zeolites are shown in Table 1. At initial pH values from 1 to 2, the removal efficiency of LC-Z was smaller than that of HC-Z, but from pH = 3 and above, the removal efficiency of LC-Z was higher than that of HC-Z.
This can be attributed to the impact of the SSA. The SSA of HC-Z was larger than that of LC-Z. When the pH ranged from 1 to 2, HC-Z could provide more contact sites than LC-Z, but when the pH was equal to or higher than 3, the contact sites carried negative charges. The larger the SSA of the zeolite, the more negative charge it carries, leading to stronger competition with anions and lower removal efficiency.
Effect of adsorbent dosage
The effect of the synthesized zeolite dosage on the Se(IV) removal efficiency and adsorption capacity (q) was investigated to determine the optimal adsorbent dosage, as shown in Figure 5. Coexisting anions bind relatively weakly to surface sites; their complexes often form in the outer sphere (β-plane), which is significantly affected by ionic strength. As mentioned above, the negative effect of coexisting anions on Se(IV) removal by HC-Z was greater than that by LC-Z. Since PO₄³⁻ is usually present in wastewater, its effect should not be ignored.
Adsorption kinetics
The following pseudo-first-order (Equation (3)) and pseudo-second-order (Equation (4)) kinetic models were applied to simulate the experimental data (Lagergren; Ho & McKay):

ln(Qe − Qt) = ln Qe − k1·t (3)

t/Qt = 1/(k2·Qe²) + t/Qe (4)

where Qe and Qt are the amounts of Se(IV) adsorbed on the synthesized zeolites (mg/g) at equilibrium and at time t, respectively, and k1 (min⁻¹) and k2 (g·mg⁻¹·min⁻¹) are the sorption rate constants of the pseudo-first-order and pseudo-second-order kinetic models, respectively. The fits are shown in Figure 9, and the constants are summarized in Table 2.
In the sorption of Se(IV) by both HC-Z and LC-Z, the coefficients of determination for the pseudo-second-order kinetic model (R² = 0.9351 and 0.7876, respectively) were higher than those for the pseudo-first-order kinetic model.
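Assuming the linearized pseudo-second-order form of Equation (4) reconstructed above, the rate constants can be extracted from a straight-line fit of t/Qt against t; the kinetic data below are synthetic placeholders, not the measured HC-Z values:

```python
import numpy as np

# Synthetic kinetic data (t in min, Qt in mg/g); illustrative values only.
t = np.array([15, 30, 60, 120, 240, 360], dtype=float)
qt = np.array([1.1, 1.7, 2.4, 3.0, 3.4, 3.5])

# Pseudo-second-order, linearized: t/Qt = 1/(k2*Qe**2) + t/Qe,
# so the slope is 1/Qe and the intercept is 1/(k2*Qe**2).
slope, intercept = np.polyfit(t, t / qt, 1)
qe = 1.0 / slope
k2 = 1.0 / (intercept * qe**2)
print(f"Qe = {qe:.2f} mg/g, k2 = {k2:.4f} g/(mg*min)")
```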
Adsorption isotherm
The mechanism of interaction between adsorbent and adsorbate can be described by adsorption isotherms. Of the many models that describe this process, the commonly used Langmuir and Freundlich isotherms were applied in this study to fit the adsorption isotherm data (Herbert; Langmuir).
Langmuir isotherm
The Langmuir isotherm assumes that monolayer adsorption occurs on the adsorbent surface and that there is no interaction between adsorbate molecules. The Langmuir equation is as follows:

Qe = Qm·KL·Ce / (1 + KL·Ce) (5)

where KL is the Langmuir coefficient (L/mg), Qm is the maximum monolayer adsorption capacity (mg/g), Qe is the amount of Se(IV) adsorbed per unit mass of adsorbent at equilibrium (mg/g), and Ce is the equilibrium Se(IV) concentration (mg/L).
The Langmuir isotherm-fitting results are summarized in Figure 10.
Freundlich isotherm
The Freundlich isotherm is an empirical equation, as follows:

Qe = KF·Ce^(nF) (6)

where KF and nF are the Freundlich parameter and the adsorption intensity, respectively. For beneficial adsorption, nF should be in the range 0.1-1. The results are shown in Figure 10. Within this study, the maximum sorption capacity of HC-Z was slightly higher than that of LC-Z because HC-Z has a larger SSA, which can provide more adsorption sites.
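A sketch of the isotherm fitting, assuming the reconstructed forms of Equations (5) and (6) (note the Freundlich exponent convention follows Eqn. (6) above); the equilibrium data are illustrative only:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qm, kl):
    # Eqn. (5): monolayer adsorption with saturation capacity qm.
    return qm * kl * ce / (1.0 + kl * ce)

def freundlich(ce, kf, nf):
    # Eqn. (6): empirical power-law isotherm.
    return kf * ce**nf

# Illustrative equilibrium data (Ce in mg/L, Qe in mg/g), not measured values.
ce = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
qe = np.array([1.2, 1.8, 2.5, 3.2, 3.9])

(qm, kl), _ = curve_fit(langmuir, ce, qe, p0=[4.0, 1.0])
(kf, nf), _ = curve_fit(freundlich, ce, qe, p0=[1.5, 0.5])
print(f"Langmuir:   Qm = {qm:.2f} mg/g, KL = {kl:.2f} L/mg")
print(f"Freundlich: KF = {kf:.2f}, nF = {nf:.2f}")
```

Comparing the resulting R² (or residuals) of the two fits is how one concludes, as the paper does, that the Freundlich model agrees better with the equilibrium data.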
Adsorbent regeneration
The adsorption ability of the regenerated zeolites was evaluated to investigate their reusability. A comparison of Se(IV) adsorption between regenerated zeolites at different dosages and the original zeolites is shown in Figure 11. At the same adsorbent dosage, the removal efficiency of the original zeolite was higher than that of the regenerated zeolite. When the adsorbent dosage was 5 g/L, the removal efficiency of the regenerated zeolites reached its maximum and the difference in removal efficiency between the original and regenerated zeolites reached its minimum (25.02% for HC-Z and 1.2% for LC-Z). Further increasing the regenerated zeolite dosage resulted in decreased removal efficiency, similar to the trend seen with the original zeolites. The reduction in removal efficiency of the regenerated zeolite relative to the original zeolite may be attributable to the incomplete release of Se(IV) adsorbed on the original zeolites during regeneration, leading to partial collapse of the zeolite structure.
These preliminary results indicate that regenerated zeolites still had a relatively high capacity for Se(IV) adsorption.
Hence, synthesized zeolites can be recycled and utilized for Se(IV) removal from wastewater.
Application prospect
Selenium is an essential trace micronutrient for both humans and animals. In a related study, the QA-SB could be desorbed efficiently, and 27.8% of the loaded phosphate was released in soils after 4 days (Shang et al.).
Meanwhile, the use of zeolites as feed supplements for animals and in medical applications indicates that zeolites are not harmful to humans (Smedt et al. ). Zeolites have been used both as carriers of nutrients and as a medium for free nutrients (Ramesh & Reddy ). Based on this body of research, we can assume that selenium-loaded zeolite could be used as a Se fertilizer to release selenium in Se-deficient areas. Of course, selenium-loaded zeolites may also release other chemical elements while releasing selenium, which needs to be clarified in future work. In subsequent studies, we will examine the slow release of selenium by the selenium-loaded zeolite, as well as its potential and risks.
CONCLUSIONS
This study confirms that the zeolites synthesized from different fly ashes can effectively remove Se(IV) from solution. The study shows that zeolites produced from fly ashes are inexpensive alternative adsorbents for the removal of Se(IV) from industrial wastewater. Furthermore, using fly ash to synthesize zeolite achieved the reutilization of fly ash resources. Selenium-loaded zeolite has the potential to be used as a Se fertilizer to release selenium in Se-deficient areas. | 2,181.2 | 2019-09-09T00:00:00.000 | [
"Chemistry",
"Engineering"
] |
Pore structure evolution in andesite rocks induced by freeze–thaw cycles examined by non-destructive methods
In this paper, we compare the values of petrophysical properties before and after 100 freeze–thaw (F–T) cycles, together with the length change behaviour and temperature development recorded on a vacuum-saturated, fractured andesite rock sample taken from the Babina Quarry in Slovakia using a specially constructed thermodilatometer, VLAP 04, equipped with two HIRT-LVDT sensors. We also used non-destructive visualization of the rock pore network by µCT imaging in order to study the development of the pore structure and fracture network in pyroxene andesites during the freeze–thaw process. The results show that the andesite rock samples, owing to good fabric cohesion, low porosity, and low pore interconnection, exhibited good resistance against frost-induced damage. However, the main process causing disintegration of this type of rock is fracture opening, driven by internal stresses induced by the water–ice phase transition. The overall residual strain recorded after 100 F–T cycles was not significant; however, the 31% increase in fracture volume shows that repeated freezing and thawing can lead to long-term deterioration through subcritical crack growth in brittle-elastic solids such as pyroxene-andesite rocks.
Mechanical or physical weathering (physical rock breakdown), or more specifically, frost weathering induced by repeated freeze-thaw cycles, is a key process that influences the appearance of geo-relief, especially at high altitudes or in polar periglacial regions. It is closely related to the regional and global climate 1 . Physical rock breakdown mainly stems from the propagation of cracks in the rock matrix 2 . The sources of the stresses that are induced by freezing and thawing (hereafter F-T cycling) and cause deterioration of rock material are still debated 3 . Frost damage may involve a combination of several mechanisms and is currently attributed mainly to crystallization pressure 4-10 , subsequent hydraulic pressure 10 , and volumetric expansion 11 . One or the other usually predominates depending on material properties, moisture conditions, and thermal conditions 12 .
Ice crystals tend to grow in thermodynamical disequilibria 3,6,13 . This crystal growth exerts crystallization pressure on the pore wall through a nanometer-thick liquid layer, which is present between the two solid phases (ice crystal and pore wall). In a cylindrical pore (this is not generally applicable), the crystallization pressure P_c is represented by the tensile hoop stress written in Eq. (1) 6 as follows:

$$P_c = P_A = \frac{\gamma_{CL}}{r_p - \delta} \quad (1)$$

where P_A is the additional pressure provided by the pore wall, γ_CL is the crystal/liquid interfacial free energy, r_p is the radius of the cylindrical pore, and δ is the thickness of the water film between the ice crystal and the pore wall (≈ 0.9 nm).
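To give a sense of the magnitudes involved, the sketch below evaluates the cylindrical-pore expression reconstructed in Eq. (1); the exact form of Eq. (1) was lost in extraction, so both the formula and the assumed interfacial energy should be read as indicative only, not as values from this study.

```python
# Illustrative order-of-magnitude check of the cylindrical-pore crystallization
# pressure Pc = gamma_CL / (rp - delta), as reconstructed above (assumed form).
GAMMA_CL = 0.04   # J/m^2, ice/water interfacial free energy (assumed, literature-typical)
DELTA = 0.9e-9    # m, thickness of the unfrozen water film between crystal and wall

for rp_nm in (5, 10, 50, 100):
    rp = rp_nm * 1e-9
    pc = GAMMA_CL / (rp - DELTA)          # Pa; larger in smaller pores
    print(f"rp = {rp_nm:4d} nm -> Pc = {pc / 1e6:6.2f} MPa")
```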
According to the theory of hydraulic pressure 10 , water crystallizes first in the larger pores of the system. There are two possible development scenarios. In the first, the 9% volume expansion 11 forces part of the water in these larger pores out into the smaller pores in the vicinity. If this expelled water cannot find a pore large enough to relieve the pressure, hydraulic pressure builds up. The process is called cryosuction and was first quantified by Everett 4 . Cryosuction can be explained by a shortened version of the Clausius-Clapeyron equation, Eq. (2), as follows:

$$P_c - P_l = \rho_l L_f \frac{T_m - T}{T_m} \quad (2)$$

where P_c and P_l are the ice and water pressures, ρ_l is the density of the water, L_f is the specific latent heat of fusion of ice at the bulk freezing temperature T_m and atmospheric pressure, and T is the total temperature of the system. During this process, the ice front progresses until it encounters a temperature under which it cannot further penetrate the surrounding pores of the network. The water which remains in a supercooled state in smaller pores serves as a reservoir for further crystallization. In the second scenario, unfrozen water migrates towards ice crystals in macropores, driven by the difference in chemical potential between ice and water.
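A quick numeric check of the cryosuction relation in Eq. (2), using standard water/ice constants, shows how fast the pressure difference grows with undercooling:

```python
# Numeric sketch of Eq. (2) as reconstructed above: Pc - Pl = rho_l * Lf * (Tm - T) / Tm.
# Constants are standard water/ice values, not measurements from this study.
RHO_L = 1000.0      # kg/m^3, density of water
LF = 3.34e5         # J/kg, specific latent heat of fusion of ice
TM = 273.15         # K, bulk freezing temperature

for t_celsius in (-1, -5, -10):
    T = TM + t_celsius
    dp = RHO_L * LF * (TM - T) / TM       # Pa
    print(f"T = {t_celsius:3d} degC -> Pc - Pl = {dp / 1e6:5.2f} MPa")
```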
Based on poromechanics models 3 , the total stress is related to the crystallization pressure P_c according to Eq. (3); in the case that it exceeds the local tensile strength of the frozen rock, cracking of the rock material will occur (Eq. (4)). In these relations, b is the Biot coefficient, S_c is the ice crystal saturation degree, σ_z is the tensile strength, and ν is the Poisson's ratio. Ruedrich and Siegesmund 14 emphasized the importance of water saturation in the frost weathering process. The saturation in turn depends on various material properties, such as mineralogical composition 15 and petrophysical properties - the porosity, pore size distribution, or permeability 16-18 . The higher the saturation degree, the higher the risk of stress and subsequent damage induced by the crystallization pressure, due to the larger volume fraction of ice in the rock pore system. Other important material parameters that co-define F-T resistance are mineral composition, tensile strength, and anisotropy 19 . Materials in which moisture remains below the critical degree of saturation also tend to degrade over time under repeated F-T cycles; in that case, the weakening of the material is induced by stresses lower than the strength of the material. This phenomenon is referred to as subcritical crack growth 20-23 or fatigue cracking 12,24 .
As is clear from the above-mentioned studies, sufficient information about the mass characteristics of the pore environment is one of the key components of understanding the mechanism of frost damage in rock or any porous material. The main goal of this paper is to quantify the petrophysical parameters of the pore network system of the tested andesite before and after F-T cycling. The hypothesis is based on the assumption that damage mechanisms induced by ice crystallization in the vacuum-saturated fractured andesite specimen will result in significant microstructural changes of the pore network parameters: specifically, an increase in porosity, changes in pore size distribution, an increase in pore interconnection, and subsequent residual strain.
Methods of frost damage assessment in rocks are based on parameters obtained mainly by destructive testing of samples. In contrast, we used modern, non-destructive experimental laboratory and visualization techniques, such as the spontaneous imbibition method 25,26 , a newly-developed indicative rock pore structure identification method 27 , and non-destructive visualization by X-ray µTomography. These methods were applied in this combination for the first time in a study of weathering. The study focused on the change in andesite rock pore distribution triggered by F-T cycling, and special attention was paid to a single transverse crack present in the tested specimen. The novelty of the proposed paper lies in the integration and comparison of strain measurement and temperature monitoring carried out by the specially constructed thermodilatometer VLAP 04 with the above-mentioned non-destructive techniques. Using VLAP 04, we were able to identify the production and diffusion of latent heat manifested by the temperature spike and determine its intensity and duration, as well as determine the strain pattern by LVDT sensors. Understanding fracture dynamics in porous media in relation to pore space is essential in environmental and material studies of hard rocks in natural conditions, as well as of rocks used as building material, since fractures can compromise the structural integrity of the exposed materials 28 .
One of the new tools for quantification of the pore space evolution caused by freeze-thaw action is X-ray computed microtomography (µCT) 28-30 . The main advantages of this method are its non-destructive character and the 3D visualization, localization, and evaluation of internal structural changes prior to and after F-T cycling. Dewanckele et al. 31 employed µCT for studying the effect of frost damage on limestones at the microscale. According to their study, freezing and thawing depend significantly on the presence of microfacies in limestone, and thus on the existence of pre-existing flaws. The results also showed that the pore size distribution affects the location of crystallization spots inside the tested rock. In a separate study, Park et al. 32 used µCT for studying igneous rock samples, scanning them before and after subjecting them to a laboratory-controlled freeze-thaw test. According to their study, frost susceptibility mainly depends on the porosity and tensile strength of the tested rocks. De Kock et al. 28 also worked with state-of-the-art µCT, studying fracture propagation and pore-scale dynamics as a response to repeated freezing and thawing. They observed progressive discontinuity opening during the freezing period, while during the thawing period it closed again. After a certain number of cycles, it stopped closing during melting and did not expand further during freezing. According to their hypothesis, this demonstrated the effect of a stable degree of saturation during the freeze-thaw experiment, because after a certain number of F-T cycles, sufficient space was created for the ice to grow indefinitely. After adding water to the system, crack growth continued due to frost wedging.
Continuous F-T cycling is capable of progressively generating more microcracks in the matrix of the rock material. Once a critical threshold is exceeded, microcrack growth will progress rapidly over a small number of cycles.
Material and methods
Initial rock fabric properties. The propagation of microcracks is a significant phenomenon in the weakening of brittle crystalline rocks with usually high mechanical strength; this is the main reason why we decided to work with a rock core of pyroxene-rich andesite (Fig. 1a), 51 mm ± 1 mm in length and 32 mm ± 1 mm in diameter, which was cut and machined from a specimen block collected from the Babina-Hanišberg quarry - BH(A) type. The rock fabric is one of the main factors controlling petrophysical properties and material behaviour during the weathering process. Mineral content analyses were performed by SEM (Scanning Electron Microscopy) on standard thin sections. The andesite from the Babina quarry (Hanišberg) is a coarse, amphibole-pyroxene porphyritic andesite. The main components are orthopyroxene, clinopyroxene, and plagioclase, with accessory minerals such as apatite or titanite (Fig. 1b).
Igneous rocks, which formed by cooling and subsequent crystallization from magma or lava, undergo considerable contraction during the cooling period. This gives rise to tensile forces strong enough to break the rock into several jointed blocks. Such contraction or shrinkage is generally accepted to be the cause of the vertical type of joints in granites, as well as the well-known columnar joints in andesites and other effusive rocks. These oriented joints often serve as preferential weakness planes in various mechanical weathering processes. Therefore, in our research, we focused on a sample with a visible transverse joint, parallel to the bedding of the studied andesite (Fig. 1a).
As was previously mentioned, rock fabric is an important factor that affects and controls the topologic and geometric properties of the pore system. Various researchers have reported that the petrophysical properties are important rock parameters that significantly affect the frost resistance of rocks. To quantify the changes in pore systems induced by repeated freeze-thaw cycles, it is necessary to provide detailed information about total porosity and pore size distribution. This is important not only because of the amount of water that is absorbed in the rock matrix and pore system, but also due to the possible amount of ice that will crystallize under given environmental conditions, and thus for the interpretation of the internal frost weathering damage mechanisms. In order to provide detailed information about the initial parameters, we applied standard destructive procedures 34,35 for the determination of bulk density, total porosity, and pore size distribution. Representative pore radii distribution was measured by standard mercury intrusion porosimetry (MIP). Based on the MIP data, we can state that the pore size distribution pattern of these andesites is bimodal, meaning that this lithological type is characterized by two maxima: one in larger capillary pores and one in pores with a small radius. The BH(A) andesites are characterized by a large number of pores smaller than 1 × 10¹ nm and a large number of pores larger than 1 × 10⁵ nm (Fig. 2).
Changes in the geometrical and topological parameters of the pore space after repeated freeze-thaw cycles can be quantified by non-destructive techniques, specifically helium pycnometry (effective porosity), spontaneous imbibition testing (pore interconnectivity), and the experimental rock pore structure identification method (pore size distribution). In essence, these techniques are very valuable, although not yet standardized: they are relatively quick to perform, cause no damage to the sample, and can be carried out repeatedly on the same specimens. Therefore, they are suitable for evaluating the effect of frost on a given type of porous material. Effective porosity was analyzed by the pycnometric method known as helium pycnometry. Helium pycnometers use helium to determine the particle density (or specific weight) of pulverized samples; here, a helium pycnometer was applied to determine the rock's effective porosity 36 . The measurement is based on the Archimedes principle, but water is substituted by inert, high-purity technical helium displaced by the sample volume V_x in the test chamber. After placing a cylinder-shaped rock sample into the chamber of known volume V_C, gas is let in until the required pressure is achieved. When the inlet is closed, gas penetrates all effective pores of the rock or other porous sample until the equilibrium pressure p1 is reached. After opening the vent to an additional chamber of known volume V_A, helium expands and the pressure drops until a new equilibrium pressure p2 is reached, following Boyle's law of gas expansion:

$$p_1 V_1 = p_2 V_2 \quad (5)$$

The volume of a solid object can be calculated from this equation as a function of the ratio p1:p2. A custom evaluation method using a precise calibration curve was developed. For rock sample testing, the calculated volume V_x represents the solid phase volume (without helium-accessible, i.e., He-effective, pores); however, closed pores are included in V_x. Afterwards, the volume of the effective pores is calculated as V_efHe = V − V_x, where V is the total (or "envelope") volume. Regularly-shaped cylindrical samples are used, and the volume V is calculated from their dimensions.
Before each sample measurement, the sample was intensively washed by helium flow. One measurement is interpreted from the mean of 10 steady ratios p1:p2, with maximum deviation ± 0.001.
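A minimal sketch of the Boyle's-law evaluation described above is given below; all chamber volumes and pressures are hypothetical, and the real instrument relies on a precise calibration curve rather than this idealized formula.

```python
# Minimal sketch of the helium-pycnometry evaluation: Boyle's law applied to the
# expansion step, p1*(Vc - Vx) = p2*(Vc - Vx + Va), solved for the solid volume Vx.
def solid_volume(p1, p2, Vc, Va):
    """Solid-phase volume Vx (closed pores included) from chamber volumes (cm^3)
    and the two equilibrium pressures (any consistent unit)."""
    return Vc - Va * p2 / (p1 - p2)

Vc, Va = 100.0, 80.0   # cm^3, test and expansion chamber volumes (hypothetical)
p1, p2 = 140.0, 60.0   # kPa, equilibrium pressures (hypothetical)
V_env = 41.0           # cm^3, envelope volume from sample dimensions (hypothetical)

Vx = solid_volume(p1, p2, Vc, Va)   # -> 40.0 cm^3
V_efHe = V_env - Vx                 # He-effective pore volume
print(f"Vx = {Vx:.2f} cm^3, effective porosity = {100 * V_efHe / V_env:.2f} %")
```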
Pore size distribution. Two different methods were used for the identification of the rock pore structure. As stated above, representative pore radii distribution was measured by standard mercury intrusion porosimetry according to STN 72 1011 35 , in addition to a newly-developed experimental rock pore structure identification method 27 . The new method is based on the assumption that water sorption (adsorption and absorption) into rock pores is controlled by the rock pore structure; thus, water sorption under controlled conditions reveals the rock pore structure. The controlled conditions are achieved by three different suction tests: 72 h water vapor adsorption at 98% RH, 48 h water absorption under atmospheric pressure, and 24 h vacuum water absorption. In this way, four different rock pore types can be identified according to their size and accessibility to water: (1) micropores and mesopores, (2) easily-accessible macropores, (3) partially accessible macropores, and (4) closed pores of any size, with the corresponding contents calculated using Eqs. (6)-(9). The pore size classification into micropores, mesopores, and macropores is the one by IUPAC (International Union of Pure and Applied Chemistry) 37 . The main advantage of this method is its non-destructive character and its ability to be applied repeatedly prior to, during, or after rock testing.
The 72 h water vapor adsorption test at 98% RH was performed to identify the content of micropores and mesopores in the rock samples. Prior to the adsorption test, the sample was dried in an air-circulating oven at 105 °C and weighed with an accuracy of ± 0.001 g. The dry sample was placed into an airtight chamber at 98% RH and constant temperature (21 °C). The 98% RH was maintained using a super-saturated solution of hydrated copper sulphate (CuSO₄·5H₂O) 38 . After 72 h, the sample was weighed without removing it from the climatic chamber. From the rock weight difference, the content of micropores and mesopores was calculated according to Eq. (6). The 48 h water absorption test, made according to STN EN 13755 39 , was used to identify macropores that were easily accessible to water. The 24 h water vacuum absorption test was further performed in order to identify macropores that were partially accessible to water. Finally, the content of closed pores was identified from the total rock porosity determined according to STN EN 1936 34 . In the calculations according to Eqs. (6)-(9), N_AD98 is the content of adsorbed water, N_48 is the content of absorbed water after 48 h of saturation under atmospheric pressure, N_V is the content of water saturated into the pores under vacuum, n is the total porosity in vol.%, ρ_w is the density of water, and ρ_d is the apparent density of the dry rock.
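Since the explicit forms of Eqs. (6)-(9) did not survive extraction, the bookkeeping below is only an assumed difference scheme implied by the test sequence; all numerical inputs are hypothetical except N_AD98, which echoes the value reported later in the Results.

```python
# A hedged sketch of the pore-type bookkeeping implied by the three suction tests.
# The difference scheme and all inputs marked "hypothetical" are assumptions.
RHO_W = 1.0      # g/cm^3, density of water
RHO_D = 2.6      # g/cm^3, apparent dry density (hypothetical)

N_AD98 = 0.569   # wt%, 72 h vapour adsorption
N_48   = 0.577   # wt%, 48 h atmospheric absorption (hypothetical)
N_V    = 1.077   # wt%, 24 h vacuum absorption (hypothetical)
n_tot  = 3.97    # vol%, total porosity (hypothetical)

micro_meso = N_AD98                       # Eq. (6), assumed
easy_macro = N_48 - N_AD98                # Eq. (7), assumed
part_macro = N_V - N_48                   # Eq. (8), assumed
closed = n_tot * RHO_W / RHO_D - N_V      # Eq. (9), assumed: total minus water-accessible
print(micro_meso, easy_macro, round(part_macro, 3), round(closed, 3))
```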
Pore connectivity by the spontaneous imbibition method. The method of spontaneous imbibition is a simple, nondestructive test procedure for determining the connectivity of rock pores, which is an important topological parameter of pore systems. During the test, one face of the sample is exposed to water, and the mass of water uptake is measured over time 25 . It exploits the analogy with diffusion, where, in homogeneous materials (without taking gravitational effects into account), the distance to the wetted front increases with the square root of time, l ~ t^0.5. In this work, a testing procedure modified from that of Hu et al. 25 was adopted to mitigate errors due to buoyancy force or evaporation, in that the specimen was tested in a larger water tank rather than the Petri dish used by Hu et al. 25 . The specimen was dried at 105 °C for 48 h and then wrapped in PE foil, except for the base of the core sample and a 1 mm side wall exposed to water, in order to avoid evaporation losses. The sample was then stored in a desiccator and tested suspended under a balance with automatic reading and recording of data every 1 s for a duration of 15 min. A glass water reservoir with dimensions of 202 × 122 × 125 mm, filled with distilled water to a height of 65 mm, was placed on a supporting jack to bring the rock specimen into contact with the water, and the sample was submerged to a depth of about 1 mm (Fig. 3a-c). The temperature was maintained at 23.0 ± 1 °C. Imbibition tests (SI) were carried out in triplicate on the sample, which was dried at 105 °C for 48 h between tests. Each test was analyzed by plotting cumulative imbibition height (mm) against the square root of time (min) on a log scale. The apparent slope of the linear regression curve C(I) provides an imbibition slope, thereby characterizing the speed of water imbibition into the sample. This relation, originally based on Handy's 40 model, is valid only for small samples where the effects of gravity are minimal, which is the case here. Two distinct imbibition slopes were determined separately for the time intervals 0.1-1 min and 1-10 min. Imbibition slopes for the fast 0.1-1 min interval were marked C(I)0f for the initial state and C(I)100f after 100 freeze-thaw cycles; slopes in the medium time range 1-10 min were marked C(I)0m and C(I)100m, respectively (Fig. 3d).

Non-destructive visualizations. Non-destructive visualizations of pore space and fracture propagation were performed on a high-energy micro-CT Phoenix v|tome|x L 240. The acquired images are based on the attenuation of X-rays passing through the investigated specimen. The design of an industrial µCT, unlike a clinical one, is based on a fixed X-ray tube and a fixed detector between which the sample rotates, and higher radiation energy (> 12.4 keV) is used; in our work, we used a voltage of 170 kV. With rock cores of 51 mm ± 1 mm in length and 32 mm ± 1 mm in diameter, the voxel size was equal to 34 µm, so we were able to resolve only macropores within the sample. The sample had to be scanned twice, before and after 100 F-T cycles, with the main goal of matching the 3D structural information of each scan. For one sample, a total of 2200 projections were registered. To reduce noise, 1 frame was taken from each projection with an exposure time of 500 ms.
The data were reconstructed using the phoenix datex 2 reconstruction program and then exported to a vgl format. VGStudio MAX 2.2 software was used for the final visualization of the pore space of the specimen with a transverse joint before and after F-T testing. The acquired images were processed using several 3D visualization techniques. Quantitative data, such as porosity and pore size distribution, were derived using these methods and compared with the above-mentioned methods (e.g., the spontaneous imbibition method and helium pycnometry). In the first step, it was necessary to distinguish the pore space from the matrix and mineral grains by binarizing the obtained images. For this purpose, thresholding was used, with the thresholding bounds selected based on visual feedback. Binarization transforms a gray-level image into a binary image and is used when the relevant information in the gray-level image corresponds to a specific gray-level interval: in the output binary image, all pixels with an initial gray-level value lying between the two bounds were set to 1, and all other pixels were set to 0. In addition, because the pore separation method is prone to errors in the presence of noise, we applied despeckle filtering to remove noise generated by the thresholding step. Image binarization was first performed on 2D ortho-images and then used to generate a 3D image, which was further processed. The interconnected pore space was separated from the binary images using the watershed segmentation algorithm. The principle of the watershed algorithm is to compute watershed splitting lines on a segmented 3-D image, which detects surfaces and separates agglomerated particles; these are then subtracted from the initial image 28 . The analysis then yielded the porosity and the pore size distribution of the pore systems. The results were processed in the form of logarithmic histograms and visualizations.
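The binarize-despeckle-watershed pipeline described above can be sketched as follows (assuming scikit-image and SciPy are available; the input volume, threshold choice, and marker rule are placeholders, not the settings actually used in VGStudio MAX):

```python
# A minimal sketch of binarization -> despeckling -> watershed pore separation.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects
from skimage.segmentation import watershed

volume = np.random.rand(64, 64, 64)              # placeholder grey-level volume

pores = volume < threshold_otsu(volume)          # binarize: pore space = 1
pores = remove_small_objects(pores, min_size=8)  # despeckle thresholding noise

# Watershed on the distance transform separates agglomerated pores.
dist = ndi.distance_transform_edt(pores)
markers, _ = ndi.label(dist > 0.5 * dist.max())
labels = watershed(-dist, markers, mask=pores)

porosity = pores.mean() * 100.0                  # vol% of voxels classified as pore
print(f"porosity = {porosity:.2f} %, {labels.max()} separated pores")
```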
Freeze-thaw test. After completion of the spontaneous imbibition testing, the specimen was unwrapped, dried at 105 °C for 24 h, weighed, and thereafter vacuum-saturated with distilled water for 48 h to obtain maximal water saturation. The sample was then weighed again, wrapped in PE foil to prevent evaporation, and placed inside the thermal chamber of the thermodilatometer VLAP 04. The VLAP 04, which is used for the frost damage test, can control the temperature in the range from − 17 to + 60 °C. A control thermocouple from the thermal chamber was placed along the central rotational axis of a cylindrical dummy sample made from the same material as the tested specimen, and the temperature was cycled from − 10 to + 10 °C. The specimen was subjected to 100 freeze-thaw cycles with a cooling rate of − 0.18 °C/min and a heating rate of 0.21 °C/min. The length change of the specimen was measured using an LVDT (linear variable differential transformer) sensor, HIRT-LVDT-T101 F, with an accuracy of ± 1 × 10⁻³ mm. The temperature inside the sample and the strain were constantly monitored and recorded at 1 min intervals, which allowed monitoring of the dilatometric behaviour of the specimen during the phase transition of water in the pores.
Results
Petrophysical properties. Porosity. The mineralogical composition and fabric (structure and texture) determine the rock's petrophysical properties. The structure of the pore system of andesite from Babiná is typical for extrusive crystalline igneous rocks. Their primary porosity is very low, and they are characterized by secondary porosity in the form of cracks, the extent of which depends on the time and extent of the temperature processes associated with the cooling of the lava.
The total amount of water adsorbed into the pore system is closely related to the porosity of the rock and was calculated as the degree of saturation S_r (%) before and after 100 F-T cycles with respect to the total water absorption under vacuum of the respective sample, according to the following equation:

$$S_r = \frac{N \rho_b}{n \rho_w} \times 100$$

where N is the sorptivity, ρ_b is the bulk density, n is the total porosity of the sample, and ρ_w is the density of water. S_r before cycling was 69.4% and increased to 98.02% after 100 F-T cycles. It might seem that after vacuum saturation the whole pore volume represented by the effective porosity is filled with water, but this is not the case. During vacuum saturation, only pores which are large enough for a water molecule to enter and which are somehow interconnected with the surface of the sample are saturated. During F-T cycling, the pore interconnection increased, which is why we measured a higher degree of water saturation after repeated vacuum saturation; the difference between S_r determined before and after 100 F-T cycles is therefore evidence of mechanical stresses within the sample.
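A numeric sketch of the saturation-degree relation reconstructed above; the bulk density and porosity inputs are hypothetical, chosen only to reproduce the order of the reported pre-cycling value.

```python
# Degree of saturation Sr = N * rho_b / (n * rho_w) * 100, as reconstructed above.
def saturation_degree(N, rho_b, n, rho_w=1.0):
    """N = sorptivity (wt%), rho_b = bulk density (g/cm^3), n = total porosity (vol%)."""
    return N * rho_b / (n * rho_w) * 100.0

# Hypothetical inputs giving roughly the reported pre-cycling value of ~69 %.
print(f"Sr = {saturation_degree(N=0.75, rho_b=2.55, n=2.75):.1f} %")
```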
Initial values of porosities and the degree of saturation, as well as their changes after F-T cycling can be found in Table 1.
Pore radii distribution by the indicative rock pore structure method. The andesite sample from Babina has a specific and complicated rock pore structure (Fig. 4b). Our results show that most of the pore volume consists of pores that are hardly accessible to water (N_void = 0.50 wt%), which are represented by macropores larger than 50 nm (Fig. 4b-B), and of pores containing adsorbed water (N_ads = 0.569 wt%) (Fig. 4b-F); this corresponds to the MIP results. The first pore type is represented by inkbottle-shaped macropores, or large macropores (cavities), surrounded by and communicating with the rock surface (the source of water) via micropores, mesopores, or much smaller macropores, which create an obstacle to water intake into the cavities. Water can enter these macropores only if the three-phase system is disturbed, because capillary suction does not release water from smaller pores into larger macropores, and the air cannot escape from the inkbottle macropores. Pores which are easily accessible to water are in fact practically absent in the BH(A) 4 andesite (N_bulk = 0.008 wt%) (Fig. 4b-A). Thus, we can state that the rock pore structure of the BH(A) 4 sample is made of a large number of blind and isolated pores, which are interconnected by micropores and mesopores. After 100 F-T cycles, the rock pore structure of the test sample changed significantly. A significant increase in the volume of pores easily accessible to water is most likely related to crack growth and an increase in the hydraulic conductivity of the sample, in terms of increasing pore interconnection. The volume of N_ads micro- and mesopores also increased slightly, as did the volume of pores hardly accessible to water. However, the volume of isolated N_c pores decreased significantly, by up to 0.446 wt% (Fig. 4a). This difference corresponds to the summed increment of N_bulk and N_void, which is 0.242 + 0.203 = 0.445 wt%. Theoretically, this may suggest that due to F-T cycling, the volume of water-isolated pores changed in equal proportion to the volume of N_bulk and N_void pores. Such a process is perhaps a consequence of subcritical crack growth through significant microcrack opening.
Pore connectivity. The spontaneous imbibition test carried out on the andesite sample, which had been subjected to 100 freeze-thaw cycles, showed an increase in the imbibition slope, indicating an increased rate of water uptake for both the fast and medium time intervals. In the fast timeframe, the average value of C(I)0f = 0.184 increased by 155.26% to an eventual 0.471 after F-T cycling. For the medium time scale, the imbibition slope increased even more, by 264.8%, from the initial C(I)0m = 0.194 to a final value of C(I)100m = 0.708 (Table 1). A considerable standard error resulting from the F-T cycling in andesite needs to be noted, since it is an igneous rock with extremely low porosity. F-T cycling either generated new cracks or widened already existing ones, forming preferential paths for water uptake; therefore, a higher apparent imbibition slope was observed in the tested andesite sample.
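The slope extraction described in the Methods can be sketched as a log-log regression over the two time windows; the imbibition series below is synthetic (ideal l ~ t^0.5 behaviour), not the measured data.

```python
# Minimal sketch of extracting imbibition slopes C(I) over the fast (0.1-1 min)
# and medium (1-10 min) windows from a cumulative-height time series.
import numpy as np

t_min = np.logspace(-1, 1, 50)                 # 0.1 .. 10 min
height = 0.2 * np.sqrt(t_min)                  # placeholder series: l ~ t^0.5

for lo, hi, label in [(0.1, 1.0, "fast"), (1.0, 10.0, "medium")]:
    m = (t_min >= lo) & (t_min <= hi)
    slope, _ = np.polyfit(np.log10(t_min[m]), np.log10(height[m]), 1)
    print(f"C(I) {label}: {slope:.3f}")        # ~0.5 for ideal capillary uptake
```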
Freeze-thaw tests. The strain behaviour resulting from ice crystallization in the water-saturated sample depends on various pore system characteristics, such as the water content in rock pores, rock fabric, and mineralogical composition. The strain and temperature behaviour during F-T cycling is plotted as strain versus time diagrams in Fig. 5 and divided into three characteristic zones (I, II, III), applying a modified division by Ruedrich and Siegesmund 14 . This allowed us to draw conclusions about the effective mechanisms of the deformation. Above the freezing point, the strain curve was in zone I. This process is not yet affected by ice crystallization in the rock pores. The contraction in zone I is followed by expansion of the specimen in zone II. The phenomenon of specimen expansion can be traced back to the initial crystallization of ice in zone II. Since crystallization is an exothermal process, it is associated with the release of energy in the form of latent heat. During crystallization (nucleation), the temperature remains constant, and after its completion, the temperature starts to decrease again. During ice crystallization, there is a ~1 nm thick liquid layer present between the ice crystal and the pore wall, through which the ice crystal exerts pressure on the pore wall 6 . The liquid layer sustains itself by disjoining van der Waals forces between the crystal and the pore wall, with a magnitude of several tens of MPa 41 . At the same time, due to thermodynamic imbalance, water is attracted to the forming ice front. At 0 °C, the water in the micro- and mesopores is supercooled and serves as a water reservoir for the forming ice front. The migration of water to larger pores, where ice crystallizes, causes hydraulic pressures on the pore walls (if the pores are small enough or the pore interconnection is low) and at the same time reduces pore pressures (negative capillary pressures) in smaller pores. This reduction results in shrinking of the material, which explains the strong contraction of the specimen in zone III. However, the ice crystals in the macropores continue to grow, generating further crystallization pressure on the pore walls. On the other hand, crystallization pressures can overcome negative capillary pressures only if the thermodynamic equilibrium is disturbed 13 . After 100 F-T cycles, a residual strain of 8 × 10⁻⁵ was observed (Fig. 5). This residual strain is probably caused by subcritical crack growth, a process that can lead to long-term deterioration of a material which normally exhibits relatively high mechanical strength.
Non-destructive visualizations. The 100 F-T cycles with temperature oscillations from +10 to − 10 °C resulted in the microstructural evolution of pore spaces and led to the development of fractures. Dimensional changes were obtained by measuring the equivalent diameter (ED) of each pore structure before and after F-T cycling; this parameter represents the diameter of a sphere with the same total volume as the measured object. The defect volume distribution as a function of position in the X-, Y-, and Z-directions is shown in Fig. 6. The total volume of detected pores was 127.29 mm³, corresponding to a total porosity of 0.73%. After 100 F-T cycles, the volume of detected pores increased to 153.89 mm³ and the total porosity changed to 0.89%. This porosity is much lower than the total porosity calculated by standard methods, indicating that most of the pore volume consists of pores smaller than 100 µm, which corresponds to the MIP results. Because of the voxel size of 34 µm, we were able to scan only pores with an effective diameter larger than 136 µm. On this basis, we can state that the porosity is mainly built up by pores smaller than 0.1 mm³ (Fig. 7). Table 2 shows the overall increase in five porosity categories according to their ED: pores > 3 mm, > 2 mm, > 1 mm, > 0.5 mm, and > 0.1 mm.
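The ED-based size categories of Table 2 follow directly from the sphere-equivalent definition given above; a small sketch with hypothetical pore volumes:

```python
# Equivalent diameter (ED) of a pore from its measured volume:
# ED is the diameter of a sphere of equal volume, ED = (6V/pi)^(1/3).
import numpy as np

def equivalent_diameter(volume_mm3):
    """ED in mm for a pore volume given in mm^3."""
    return (6.0 * volume_mm3 / np.pi) ** (1.0 / 3.0)

edges = [3.0, 2.0, 1.0, 0.5, 0.1]                # mm, category bounds from Table 2
vols = np.array([0.0005, 0.02, 0.7, 5.0, 20.0])  # mm^3, hypothetical pore volumes
eds = equivalent_diameter(vols)
for e in edges:
    print(f"ED > {e} mm: {int(np.sum(eds > e))} pores")
```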
Microcrack opening is one of the most important processes of rock disintegration, especially in andesitic rocks, since they normally exhibit good fabric cohesion. One single fracture running transversally to the sample with a total volume of 97.55 mm 3 was detected and visualized. After 100 F-T cycles, fracture volume increased to 128.22 mm 3 (Fig. 8).
The largest dimensional changes of the porous system were bound to, and located near, the position of the fracture. This provides complementary information about microcrack opening in this sample. The fracture grew perpendicular to the bedding of the andesite, and the pore structure changed accordingly.
Discussion
Changes in pore space properties induced by ice crystallization, as reported by various researchers, significantly affect the durability of rock material. According to Ruedrich et al. 42 and Ruedrich and Siegesmund 14 , the mineralogical composition and fabric components control the stress that develops in the pore space of the material. On the other hand, rock pore structure parameters, both geometric (porosity, pore size distribution, and shape of the pores) and topological (pore space interconnection), control the amount and distribution of moisture in the rock, and hence the possibility of ice crystallization. In this study, we focused on the behaviour of andesitic rock with a pre-existing crack under 100 cyclic F-T cycles. Based on the changes in petrophysical properties, the monitoring of strain behaviour and temperature development, and their comparison with non-destructive visualization using industrial µCT, we can outline a conclusion that may help clarify the degradation of rock with a pre-existing fracture during frost weathering. The tested sample of andesite from the Babina Quarry is a rock which exhibits good fabric cohesion. Its porosity is extremely low, and its rock pore structure system is specific and complicated. According to the MIP results, its pore size distribution pattern is bimodal, with a large number of macro- and mesopores, which are interconnected by micropores. Stresses which developed during ice crystallization led to dimensional changes in its pore structure. According to Sun and Scherer 43 , at lower temperatures ice crystallization takes place in smaller pores, because the probability of having a good nucleation surface decreases with pore volume. In our study, a relatively intense expansion started at about − 0.5 °C in zone II. This indicated the start of water crystallization within larger pores, which is more favourable according to equilibrium thermodynamics. Specimen expansion is probably induced by crystallization and hydraulic pressures inside the pore system. The growing crystals also draw unfrozen water towards them 4 ; this process is called cryosuction. The crystallization pressures are larger in smaller pores; however, for ice to penetrate smaller pores, the temperature needs to be lower. At − 0.5 °C, water in micropores remains liquid in a metastable, supercooled condition and supports crystal growth in larger pores. As water crystallizes first and rapidly in the larger pores of the system, the 9% volume expansion forces part of the water in these larger pores out into the smaller neighboring pores. Hence, if this expelled water cannot find a pore large enough, or the pore interconnection is too low to relieve this pressure, hydraulic pressures build up and, along with crystallization pressures, can exceed the cohesive strength of the material and cause cracks to extend. On the other hand, ice-crystallization-induced processes cause negative pore pressure in micropores, which means that the material shrinks instead of expanding in zone III. Freezing of crack water to ice resulted in crack opening and a subsequent fracture volume increase of 31%. This process progressively decreased the rock's cohesive strength and led to a residual strain of 8 × 10⁻⁵ after 100 F-T cycles.
A residual strain of 8 × 10⁻⁵ is not large enough to be macroscopically observable on the specimen, but it is sufficient for a relatively high increase in porosity and pore interconnection and for changes in the pore size distribution pattern of the rock. Such progressive deterioration of the properties of the rock material is called subcritical crack growth. In a recent review of subcritical crack growth mechanics, Eppes and Keanini 2 argue that climate-dependent subcritical microcrack expansion is potentially the process responsible for the growth of most cracks in surface and near-surface rocks. Subcritical crack growth is affected by many known, as well as anticipated, chemical and physical processes associated with climate influence. Therefore, climate change will alter the frequency and magnitude of weathering processes, and an up-to-date understanding of frost weathering is thus required to anticipate the future occurrence of processes which result in microcrack propagation and the resulting deterioration of building materials, natural stones, or freeze-thaw-triggered rockfalls 44 .
Conclusions
Imaging applied to multiple µCT images before and after 100 F-T cycles, performed on a vacuum-saturated sample, allows for non-destructive visualization of the rock pore structure evolution in the andesite.
In combination with the study of volume modification through changes in petrophysical properties, as well as the strain path behaviour and temperature development recorded by the specially constructed thermodilatometer VLAP 04 with two HIRT-LVDT sensors, we can draw conclusions regarding the effective mechanism of rock material deterioration in addition to commonly used standard methods. Based on our results, it can be stated that: • The tested andesite from Babiná is a rock with extremely low porosity and a specific pore size distribution pattern, with a large number of small capillary pores and micropores as well as a large amount of macropores. This corresponds to the results of the indicative rock pore structure method, which show that the rock pore structure of Babina andesite predominantly contains hardly-accessible macropores interconnected by micropores and mesopores, while part of the specimen's matrix contains a large amount of blind and isolated pores. Pore interconnection determined by the imbibition curve slope C(I) is also extremely low, but increased significantly after F-T cycling. • Non-destructive visualization by µCT showed only a slight increase in the macroporosity of the sample after 100 F-T cycles. On the other hand, significant fracture opening corresponds to a 31% increase of fracture volume. The largest dimensional changes of porous structures are also bound to the locations near the fracture. • The physical breakdown of rock necessarily stems from the propagation of fractures. Freeze-thaw induced cracking of a brittle-elastic solid like pyroxene-andesite is caused by ice crystallization and hydraulic pressure build-up, which led to rock fatigue failure. Subcritical cracking of the tested andesite resulted in a total residual strain of 8 × 10⁻⁵ recorded after 100 F-T cycles. | 9,202.2 | 2022-05-19T00:00:00.000 | [
"Geology"
] |