Impact of Trade Liberalization on Economic Growth in Japan: An Autoregressive Distributed Lag (ARDL) Model

The objective of this study is to identify the impact of trade liberalization on economic growth in Japan. Annual data from 1985 to 2016 are analysed using the Autoregressive Distributed Lag (ARDL) cointegration test and Vector Error Correction Model (VECM) based Granger causality. The unit root tests yielded mixed results, with the variables integrated at I(0) and I(1), so the analysis could proceed to the ARDL cointegration test. The results indicate long-run relationships between trade openness, investment, education, inflation and economic growth in Japan. In the long run, trade openness and investment have a significant positive effect on economic growth. Lastly, the VECM-based Granger causality test revealed several short-run causal relationships between the variables for Japan.

Introduction

Trade liberalization is the process of removing barriers and opening a country's economy to foreign investment and competition. According to Narayan and Smyth (2005), trade liberalization can refer to three aspects, namely a reduction in import barriers with export incentives unchanged; a shift in the composition of relative prices towards neutrality; and the substitution of cheaper for more expensive forms of protection. Over the past 70 years, Japan's economy has been built on a strong work ethic, mastery of high technology, a comparatively small defense allocation, and government-industry cooperation (Central Intelligence Agency's World Factbook [CIA], 2018). Japan is the fourth-largest industrialized, free-market economy in the world. Its economy is well known for its competitiveness and efficiency in export-oriented sectors, although productivity in services, agriculture, and distribution is lower than in other sectors. Japan had the second-highest gross domestic product (GDP) in the world during the 1970s, but at the beginning of the 1990s it fell into a ten-year recession, often called the "Lost Decade". This is because Japan experienced a speculative asset price bubble during a boom cycle that sent valuations soaring throughout the 1980s (Kuepper, 2018). Between 2011 and 2016, Japan's exports decreased at an annualized rate of -4.4%, from JPY 65,546.48 billion in 2011 to JPY 70,035.77 billion in 2016. Japan's imports totaled JPY 66,041.97 billion in 2016, a decrease of 15.77% compared with the previous year. As a result, Japan's economy has been growing at only a moderate rate, whereas the export-led growth theory suggests that it should grow at an accelerated rate. The economy of Japan may therefore yet recover from the Lost Decade crisis. However, academics remain sceptical about whether trade liberalization brings more positive or negative impacts to economic growth. According to Drozdz and Miskinis (2011), a positive effect of free trade on economic growth may encourage producers to expand their business into larger markets and help developing countries access capital and intermediate goods in the process of development. Conversely, if a country's imports are important raw materials for its production, the country will become more dependent on other countries' supplies and markets (United Nations Development Programme [UNDP], 2018).
Trade liberalization may also bring negative impacts to developing countries because of their unstable economies, even as the pressure to liberalize trade increases. For example, as stated by Freckleton (2007), trade liberalization had a negative effect on economic growth in Jamaica because it eroded price incentives; this shows that trade liberalization does not necessarily reduce the bias against imports and exports and is insufficient to resolve structural constraints such as weak industrial sectors, dependence on primary commodity exports, underdeveloped human resources, deficient technology, and inadequate infrastructure. Trade liberalization is therefore not guaranteed to have a positive impact on growth; it may also negatively affect the economic growth of developing countries. The aim of this study is to identify the impact of trade liberalization on economic growth in Japan by determining the relationship between trade openness, investment, human capital accumulation, inflation, and economic growth in Japan, and by examining the pre- and post-liberalization effects on Japan's economic growth.

Literature Review

The literature concludes that trade openness can have either a positive or a negative effect on GDP growth. Most researchers argue that trade liberalization has a positive effect on economic growth, although some report a negative effect of trade openness on economic growth in the long run. Onafowora and Owoye (1998) found a positive relationship between trade policies and economic growth using a VECM for 12 sub-Saharan African (SSA) countries over the period 1963 to 1993, and stressed the importance of export expansion and an outward-oriented trade policy in enhancing economic growth. After the initial phase of trade liberalization, imports in 42 developing countries increased faster than exports, and the overall balance of trade was in deficit (Parikh, 2004). The author also postulated that in the short to medium run trade liberalization enhances GDP growth, implying a significant relationship between trade openness and economic growth. Matadeen et al. (2011) found that, in the long run, openness stimulates growth; in the short run, a VECM-based Granger causality test indicated bi-directional causality between the trade liberalization proxy and economic growth, so trade liberalization proved to be an important ingredient for growth in Mauritius. On the other hand, Trejos and Barboza (2014), using a dynamic error correction model (ECM), found that trade liberalization was one of the major determinants of the growth rate of output per worker in the post-crisis period, whereas it was not a main determinant in the pre-crisis period. They also suggested that large-scale capital accumulation and the mobilization of labor will enhance economic growth. Another study found that real exports have a significantly positive impact on economic growth, but that trade openness distorts the economic growth of selected developing and least developed countries; the negative coefficient on the trade openness index reflects the existence of trade deficits (Shujaat, 2014). The impact of international trade on economic growth in Tanzania between 1970 and 2010 is positive and significant.
Thus, it was expected that removing trade barriers would improve the balance of payments and promote economic growth (Hamad, Burhan & Stabua, 2014). According to Pratibha and Preeti (2015), international trade and economic growth in China from 1980 to 2013 are cointegrated, with bi-directional causality. The VECM error-correction coefficient is statistically significant, negative and less than one in absolute value, indicating a well-behaved long-run relationship between trade openness and growth; increasing foreign trade has therefore made a positive contribution to GDP. The impact of trade openness on economic growth in 12 selected MENA countries is positive, as the balance of payments is in surplus (Hozouri, 2016). The author also found that economic growth is significantly and negatively correlated with changes in tariffs, and hence positively related to the volume of trade. Another study of the link between trade openness and economic growth in 87 selected OECD and developing countries argued that greater growth and higher economic performance lead to higher trade openness in those countries. Trade openness had a significantly positive coefficient, confirming that it is a good incentive for growth, as postulated by studies such as Zarra-Nezhad, Hosseinpour and Arman (2014), Jamilah, Zulkornain and Muzafar (2016), Keho (2017), and Idris, Yusop, Habibullah and Chin (2018).

Methodology

The main estimation technique in this study is a time series approach, since the study analyses the movement of the variables of interest over time. The sample consists of annual data from 1985 to 2016, a total of 32 observations. The dependent variable is GDP growth, and the independent variables are trade openness, investment, human capital accumulation and inflation. The core model used to investigate the effect of trade liberalization is based on an augmented aggregate production function:

Y_t = f(TO_t, INV_t, HC_t, INF_t),

where Y is gross domestic product, TO is trade openness, INV is investment, HC is human capital accumulation and INF is inflation. The core model can be written as an econometric model:

LGDP_t = β_0 + β_1 LTO_t + β_2 LINV_t + β_3 LHC_t + β_4 LINF_t + β_5 DUM_t + ε_t,

where LGDP_t, LTO_t, LINV_t, LHC_t and LINF_t are the logarithms of GDP growth per capita, the trade ratio to GDP, gross fixed capital formation, secondary school enrollment and the inflation rate, respectively; DUM_t is a dummy variable taking the value zero (0) for periods before the trade liberalization era and one (1) for periods after trade liberalization; β_0 is the constant term; β_1, β_2, β_3, β_4 and β_5 are the coefficients measuring the impact of trade openness, investment, human capital accumulation, inflation and the dummy variable on GDP growth, respectively; t is the time period (1, …, T); and ε_t is the stochastic error term. This study uses the Autoregressive Distributed Lag (ARDL) cointegration approach to determine the short-run and long-run relationship between trade liberalization and economic growth in Japan. Three types of tests are conducted, namely unit root tests, the ARDL cointegration test and the Vector Error Correction Model (VECM) based Granger causality test, to identify the relationship between trade liberalization and economic growth and the interrelationships among the explanatory variables. First, the stationarity of the time series variables is determined by unit root tests, since spurious regression arises when the series are non-stationary (Mahadeva & Robinson, 2004).
Therefore, the unit root test is a pre-condition of the cointegration test. In this study, unit root tests are used, namely the Augmented Dickey-Fuller (ADF) and Kwiatkowski-Phillips-Schmidt-Shin (KPSS) tests, to determine the order of integration of each variable. According to Nkoro and Uko (2016), the null hypothesis of the ADF test is that the time series has a unit root, i.e. is not stationary, while the null hypothesis of the KPSS test is that the series is stationary. The next step is the cointegration test, which indicates whether a long-run equilibrium relationship exists between trade openness, investment, human capital accumulation, inflation, the dummy variable and economic growth within a multivariate framework. To test for the existence of a long-run relationship among the variables, the ARDL bounds testing procedure is conducted using the following framework:

ΔLGDP_t = α_0 + Σ_{i=1}^{p} α_{1i} ΔLGDP_{t-i} + Σ_{i=0}^{p} α_{2i} ΔLTO_{t-i} + Σ_{i=0}^{p} α_{3i} ΔLINV_{t-i} + Σ_{i=0}^{p} α_{4i} ΔLHC_{t-i} + Σ_{i=0}^{p} α_{5i} ΔLINF_{t-i} + α_6 DUM_t + δ_1 LGDP_{t-1} + δ_2 LTO_{t-1} + δ_3 LINV_{t-1} + δ_4 LHC_{t-1} + δ_5 LINF_{t-1} + u_t,

where Δ is the first-difference operator and u_t is the error term. The ARDL cointegration test uses an overall F-statistic and a t-statistic on this regression. The null hypothesis of the F-test for the equation above is

H_0: δ_1 = δ_2 = δ_3 = δ_4 = δ_5 = 0 (no cointegration),

against the alternative that at least one δ differs from zero. The decision rule for the long-run relationship is as follows: if the F-statistic is greater than the upper-bound critical value, H_0 is rejected and a long-run relationship exists; if the F-statistic is less than the lower-bound critical value, H_0 cannot be rejected and no long-run relationship exists. In addition, the t-statistic is tested on the same equation. Once a long-run relationship is established, the long-run and short-run coefficients are estimated. The ARDL approach estimates (p + 1)^k regressions to obtain the optimal number of lags for each variable, where p is the maximum number of lags and k is the number of variables in the regression. Following Narayan and Smyth (2004), who also used annual data, the maximum number of lags in the ARDL model is set equal to two. Diagnostic tests are conducted to ensure the goodness of fit of the ARDL approach. This study uses the Granger causality test to determine the direction of the relationships among the variables while holding the others constant. To avoid misspecification, the Vector Error Correction Model (VECM) based Granger causality test is used; this approach is appropriate when a set of variables is found to have one or more cointegrating vectors (Granger, 1988). One advantage of the VECM-based Granger causality test is that it can distinguish between the long-run and short-run causal relationships among the variables in the equation. The significance of the F-statistic indicates short-run causality, while the error correction term, EC_{t-1}, indicates the long-run effect. A minimal illustrative sketch of the unit root testing step is given below.
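The original study reports its estimates from EViews; purely as an illustration, the unit root step described above can also be sketched in Python with statsmodels. The file name `japan.csv` and its column names are hypothetical placeholders rather than the study's actual data file, and the settings simply mirror the choices stated in the notes to Table 2 (Schwarz criterion for the ADF lag length, automatic Newey-West bandwidth for KPSS).

```python
# Illustrative sketch only: ADF and KPSS unit root tests as described in the text.
# File name and column names are hypothetical; replace with the actual data source.
import pandas as pd
from statsmodels.tsa.stattools import adfuller, kpss

data = pd.read_csv("japan.csv", index_col="year")          # annual data, 1985-2016
series_names = ["LGDP", "LTO", "LINV", "LHC", "LINF"]

for level in ("level", "first difference"):
    for name in series_names:
        x = data[name] if level == "level" else data[name].diff().dropna()

        # ADF: H0 = series has a unit root (non-stationary); lags chosen by SIC/BIC.
        adf_stat, adf_p, *_ = adfuller(x, regression="c", autolag="BIC")

        # KPSS: H0 = series is stationary; Newey-West bandwidth chosen automatically.
        kpss_stat, kpss_p, *_ = kpss(x, regression="c", nlags="auto")

        print(f"{name} ({level}): ADF p={adf_p:.3f}, KPSS p={kpss_p:.3f}")
```

The bounds-testing and ARDL estimation steps that follow were carried out in EViews in the original study; recent statsmodels releases also provide ARDL/UECM tooling, but no particular interface is assumed here.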
The VECM-based Granger causality test takes the following form (shown here for the LGDP equation; analogous equations hold for ΔLTO_t, ΔLINV_t, ΔLHC_t and ΔLINF_t):

ΔLGDP_t = ς_1 + Σ_{i=1}^{p} λ_{11i} ΔLGDP_{t-i} + Σ_{i=1}^{p} λ_{12i} ΔLTO_{t-i} + Σ_{i=1}^{p} λ_{13i} ΔLINV_{t-i} + Σ_{i=1}^{p} λ_{14i} ΔLHC_{t-i} + Σ_{i=1}^{p} λ_{15i} ΔLINF_{t-i} + λ_{16} DUM_t + φ_1 EC_{t-1} + ϖ_{1t},

where Δ is the first-difference operator; the ς and λ terms are coefficients to be estimated; ϖ_t is a serially independent random error with mean zero and finite covariance matrix; LGDP_t is the logarithm of GDP growth per capita in year t; LTO_t is the logarithm of the trade ratio to GDP in year t; LINV_t is the logarithm of gross fixed capital formation in year t; LHC_t is the logarithm of the secondary school enrollment rate in year t; LINF_t is the logarithm of the inflation rate in year t; DUM_t is the dummy variable for the periods before and after trade liberalization in year t; and EC_{t-1} is the error correction term. In each equation the dependent variable is regressed on past values of itself and of the other variables. The lag length (p) is determined by the Schwarz Bayesian Criterion. The existence of a cointegrating relationship among [LGDP_t, LTO_t, LINV_t, LHC_t, LINF_t, DUM_t] implies that there must be at least one direction of Granger causality, but it does not by itself reveal the direction of temporal causality between the variables.

Findings

This section discusses and interprets the findings of the EViews analysis. The following parts present the unit root tests, which include the Augmented Dickey-Fuller (ADF) test and the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) test. Notes to Table 2: * denotes the 10% significance level, ** denotes the 5% significance level, *** denotes the 1% significance level. The number in parentheses ( ) is the number of lags. Lag lengths for the ADF unit root test are based on the Schwarz Information Criterion, while the bandwidths for the KPSS test are based on the Newey-West bandwidth estimated using the Bartlett kernel. LGDP, LTO, LINV, LEDU, LINF and DUM refer to the logarithm of GDP growth per capita, the trade ratio to GDP, real gross fixed capital formation, the secondary school enrollment rate, the inflation rate, and the dummy variable, respectively. The empirical results of the ADF test in Table 2 show a mix of results at level and at first difference, with all variables statistically significant at the 5% significance level. The KPSS test, whose hypotheses are the reverse of the ADF test's, rejects the null hypothesis at level for all six variables. Therefore, the six variables are integrated at I(0) and I(1). Overall, the tests confirm that an order of integration can be established for all six variables for Japan. After the unit root tests, the analysis proceeds to the ARDL cointegration test to determine the long-run relationship between the dependent and independent variables. Since the calculated F-statistic is larger than the upper-bound critical value at the 1% significance level, all variables are cointegrated. The long-run ARDL results show that trade openness and investment have a positive impact on economic growth in Japan, with both variables statistically significant at the 1% significance level. Meanwhile, the dummy variable has a statistically significant negative effect on economic growth in the long run. Education and inflation have insignificant positive and negative impacts, respectively, on economic growth in the long run. Table 3 reports a calculated F-statistic of 4.3548, which is greater than the lower- and upper-bound critical values at the 2.5% significance level.
This shows that all the variables, LGDP, LTO, LINV, LEDU, LINF and DUM, are cointegrated. In this study, the Akaike Information Criterion (AIC) is used to determine the optimal lag length for the model; the optimal lag structure selected by AIC for the ARDL model is (3, 0, 0, 0, 0, 0). Table 5 presents the long-run ARDL results. The long-run regression results show that LTO and LINV have statistically significant positive impacts on LGDP, while DUM has a significant negative impact on LGDP in Japan. LINF and LEDU are not significant, with negative and positive impacts, respectively, on Japan's economic growth in the long run. The significant positive long-run effect of trade openness on economic growth is consistent with the finding of Nana and Barnes (2016). The result also indicates that an increase in trade openness might lead to a rise in exports, thus increasing economic growth in the long run. The statistically significant positive long-run effect of gross fixed capital formation on GDP growth is supported by Yavari and Mohseni (2012); a larger investment will boost aggregate demand and economic growth in the long run. Meanwhile, LEDU has a positive but insignificant impact on LGDP in the long run. This estimate is in line with Narayan and Smyth (2010), who used quarterly time series data from 1962 to 2000 for Fiji and found that education has the greatest positive effect on GDP in the long run. The insignificant negative relationship between LINF and LGDP was also found by Mireku, Agyei and Domeher (2017) for Ghana; it suggests that when the consumer price index (CPI) increases, consumption decreases and economic growth declines in the long run. Since education and inflation are insignificant, they do not appear to affect Japan's economic growth in the long run. In addition, the dummy variable has a statistically significant negative long-run effect on Japan's economic growth. This indicates that the JGFTA has a negative impact on economic growth in the long run, which may be because the post-JGFTA period covered by this study is short; the agreement may have a positive impact on economic growth in the future. From Table 6, the error correction term has a coefficient of -2.2052***, a standard error of 0.6092, a t-statistic of -3.6197 and a p-value of 0.0017. Note: *, ** and *** denote the 10%, 5% and 1% significance levels, respectively. D denotes the first-difference operator. The dependent variable is LGDP. Based on Table 6, the short-run ARDL regression results indicate that three of the variables, LTO, LINV and DUM, are statistically significant, at the 1% and 10% significance levels respectively. The coefficient of the ECM term in Table 6 is -2.2052; it carries a negative sign and is statistically significant at the 1% significance level, which is the preferred and consistent outcome for the short-run regression. The coefficient also implies that the speed of adjustment of the variables towards the long-run equilibrium is about 220.52% annually. Finally, the R-squared of the selected ARDL model is approximately 98.15%, which means the ARDL model fits well and about 98.15% of the variation in LGDP can be explained by LTO, LINV, LEDU, LINF and DUM.
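Before turning to the causality results in Table 7, the pairwise causality checks can be illustrated with a small Python sketch. It uses statsmodels' standard `grangercausalitytests` on the differenced series as a simplified stand-in for the paper's VECM-based test (which additionally conditions on the error correction term EC_{t-1}); the data file and column names are the same hypothetical placeholders as in the earlier sketch, and the maximum lag of two follows the lag choice stated in the methodology.

```python
# Simplified pairwise Granger causality on first-differenced series.
# Illustration only; the paper's test is VECM-based and also uses EC(t-1).
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

data = pd.read_csv("japan.csv", index_col="year")
diffed = data[["LGDP", "LTO", "LINV", "LHC", "LINF"]].diff().dropna()

pairs = [("LTO", "LGDP"), ("LGDP", "LTO"), ("LINV", "LGDP"), ("LGDP", "LINV")]
for cause, effect in pairs:
    # grangercausalitytests checks whether the second column Granger-causes the first.
    res = grangercausalitytests(diffed[[effect, cause]], maxlag=2, verbose=False)
    p_value = res[2][0]["ssr_ftest"][1]          # F-test p-value at lag 2
    print(f"H0: {cause} does not Granger-cause {effect}: p = {p_value:.4f}")
```

A full replication would instead estimate the VECM and test the joint significance of the lagged differences together with the error correction term, equation by equation.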
Note: *, ** and *** denote the rejection of the null hypothesis at the 10%, 5% and 1% significance levels, respectively, while the number in ( ) represents the p-value. Table 7 shows five bidirectional and three unidirectional Granger causality relationships in the short run according to the VECM-based Granger causality test. The results indicate a bidirectional relationship between LTO and LGDP in the short run, since both probabilities are less than 0.05 and H0 is rejected. Next, LINV Granger-causes LGDP and LGDP Granger-causes LINV in the short run. There are two further bidirectional Granger causality relationships, between LINV and LTO and between LTO and LINF; in other words, rejections at the 5% significance level occur between LTO, LINF and LINV. LINV also Granger-causes LINF and LINF Granger-causes LINV, a bidirectional relationship, as the probabilities are below the 5% significance level. Moreover, DUM Granger-causes LGDP, as the probability (0.0559) is less than 0.10 and H0 is rejected. The relationship between DUM and LTO is unidirectional, indicated by the rejection of H0 at a probability of less than 0.01, with DUM Granger-causing LTO. Furthermore, the probability (0.0416) for the direction from DUM to LINV is below 5%, so H0 is rejected and DUM Granger-causes LINV. Lastly, there is no causality between education and the other variables. In summary, the VECM-based Granger causality test reveals five bidirectional and three unidirectional Granger causality relationships in this study. The bidirectional Granger causality between LTO and LGDP is consistent with the finding of Pratibha and Preeti (2015). The other four bidirectional relationships run between LINV and LGDP, LINF and LTO, LINV and LTO, and LINV and LINF, where the probabilities are below the 5% significance level and the null hypothesis is rejected. The three unidirectional Granger causality relationships all run from DUM, to LGDP, LTO and LINV respectively. In addition, the VECM Granger causality results show no causality from education to the other variables. Table 8 shows that all the probabilities of the diagnostic tests are greater than 0.05, so the null hypotheses are not rejected; the model does not suffer from serial correlation, heteroscedasticity or functional misspecification, and the residuals are normally distributed. The cumulative sum of recursive residuals (CUSUM) and the cumulative sum of squares of recursive residuals (CUSUMQ) of the model do not exceed the critical limits at the 5% significance level, so the model appears to be stable.

Conclusion

This study aims to examine the effects of trade liberalization on economic growth in Japan. The relationship between trade openness, investment, human capital accumulation, inflation and economic growth is examined using the Autoregressive Distributed Lag (ARDL) approach. Prior to the estimation, unit root tests, namely the Augmented Dickey-Fuller test, the Phillips-Perron test and the Kwiatkowski-Phillips-Schmidt-Shin test, were conducted to check the stationarity of each variable. The stationarity test results indicate that all variables are integrated at I(1). The ARDL results indicate that trade has a positive relationship with economic growth in Japan.
From a policy perspective, the results indicate that trade openness Granger-causes economic growth. This study also confirms a positive long-run relationship between trade openness, investment and economic growth in Japan, and the short-run regression likewise shows a positive impact of trade liberalization and investment on economic growth. In addition, the dummy variable capturing the pre- and post-periods of the trade agreement between Japan and the Gulf Cooperation Council indicates a negative effect on economic growth in both periods. These results could be used as a guideline by trade participants such as governments, investors, policymakers, exporters and others seeking to enhance economic growth in Japan. In conclusion, this study finds a relationship between trade openness, capital stock, human capital accumulation, inflation and economic growth. The relationship between economic growth and these independent variables might differ if different data were used as proxies for certain variables. The limitations that may weaken the accuracy of the findings are that the study focuses on only one country and that there is a lack of prior research specifically on trade liberalization and economic growth in Japan. Future research could use panel data to compare Japan with other countries and add further important variables such as the labor force, exchange rates, and taxes on trade. Furthermore, the results and findings can give trade participants a better understanding of the importance of trade liberalization for economic growth, as a way to encourage producers to raise productivity through comparative advantage and increase aggregate economic output in Japan. Based on the results of this study, Japan will remain highly dependent on trade liberalization as a main engine of economic growth in the future.
Decomposing the misery index: A dynamic approach

Abstract

The misery index (the unweighted sum of unemployment and inflation rates) was probably the first attempt to develop a single statistic to measure the level of a population's economic malaise. In this letter, we develop a dynamic approach to decompose the misery index using two basic relations of modern macroeconomics: the expectations-augmented Phillips curve and Okun's law. Our reformulation of the misery index is closer in spirit to Okun's idea. However, we are able to offer an improved version of the index, mainly based on output and unemployment. Specifically, this new Okun's index measures the level of economic discomfort as a function of three key factors: (1) the misery index in the previous period; (2) the output gap in growth rate terms; and (3) cyclical unemployment. This dynamic approach differs substantially from the standard one utilised to develop the misery index, and allows us to obtain an index with five main interesting features: (1) it focuses on output, unemployment and inflation; (2) it considers only objective variables; (3) it allows a distinction between short-run and long-run phenomena; (4) it places more importance on output and unemployment rather than inflation; and (5) it weights recessions more than expansions.

ABOUT THE AUTHORS

Dr Ivan K. Cohen is an associate professor of economics and finance at Richmond-The American International University in London. His current research interests include financial economics and the economics of pension funds. Dr Fabrizio Ferretti is an assistant professor of economics at the University of Modena and Reggio Emilia. His current research interests include Keynesian economics and health economics. Dr Bryan McIntosh is a senior lecturer in health management at the University of Bradford. His current research interests include health economics, management and organizational behaviour.

PUBLIC INTEREST STATEMENT

The Great Recession has refocused the attention of macroeconomists on the determinants of business cycles, as well as on the consequences of recession on individual and community well-being. Originally proposed by Arthur Okun, the misery index (the unweighted sum of the unemployment and inflation rates) was probably the first attempt to develop a single statistic to measure the level of a population's "economic malaise". In this paper, we rewrite the misery index in order to improve its ability to track the state of health of the macroeconomy, without losing the clarity and conciseness of Okun's original intuition. Specifically, we develop a new approach in order to decompose the misery index into its main determinants. This "new misery index" focuses especially on unemployment and growth.

Introduction

Following the well-publicised financial crisis that began in 2007, many of the world's most advanced economies experienced one of the longest and deepest recessions recorded. In the USA, the Great Recession, as it has come to be known, officially began in December 2007 and ended in June 2009, and was the largest macroeconomic downturn since the Great Depression of the 1930s. This set of largely unpredicted and dramatic events refocused the attention of macroeconomists on the determinants of business cycles as well as on the consequences of recessions on individual and community well-being (Grusky, Western, & Wimer, 2011).
Also known as Okun's misery index, the "Economic Discomfort Index" (EDI) probably formed the first attempt to summarise a range of macroeconomic indicators into a single statistic in order to track the state of health of the macroeconomy during the business cycle. In its original version, the misery index combines two fundamental targets of macroeconomic policy (unemployment and inflation) in a basic aggregate disutility function. This function measures the level of economic discomfort as the unweighted sum of unemployment and inflation rates (Mankiw, 2010). Albeit remarkably simple, the intuition underlying the EDI has been developed in different useful ways (Blanchflower, Bell, Montagnoli, & Moro, 2013; Setterfield, 2009). In this letter, we offer a new approach to compute the misery index. Specifically, we attempt to rewrite the EDI by using two basic macroeconomic tools: the expectations-augmented Phillips curve and Okun's law. The aim of this work is to show a simple way to decompose the misery index, in order to improve Okun's original idea without losing its simplicity. The remainder of the paper is structured as follows. In Section 2, we briefly review the history of the EDI. In Section 3, we try to reformulate the misery index. In Section 4, we discuss some interesting properties of the new index. Finally, Section 5 concludes the paper.

A short history of the misery index

The EDI was invented by economist Arthur Okun¹ in the early 1970s, when the United States began experiencing a combination of both increasing unemployment and increasing inflation (the so-called "stagflation"). Because both inflation and unemployment impose significant costs, the index was suggested by Okun as a means of providing a simple yet objective measure of "economic malaise". A higher level of either of these variables has negative effects on national welfare. Therefore, the EDI can be considered as a reverse measure of economic well-being (Nessen, 2008). Calculated on either a quarterly or an annual basis, the EDI in period t (m_t) is simply the sum of the current unemployment rate (u_t) and the current inflation rate (π_t):

m_t = u_t + π_t, (1)

where π_t is measured by the rate of change of the consumer price index, and is expressed as an absolute value, recognising that deflation may be as harmful as inflation (Lovell & Tien, 2000). The index rapidly gained a degree of notoriety following a key article in The Wall Street Journal: "… a year like 1970 is difficult to sum up - you wish for one number that would tell all. Although it can be criticized as whimsically simplistic, there is such an index […]. Mr. Okun constructs a 'discomfort factor' for the economy. It is derived by simply lumping together the unemployment rate and the annual rate of change in consumer prices - apples and oranges, surely, but it is those two bitter fruits which feed much of our economic discontent […]. The higher this index, the greater the discomfort - we are less pained by inflation if the job market is jumping, and less sensitive to others' unemployment if a placid price level is widely enjoyed …" (Janseen, 1971). The index then received popular attention when used as a campaign tool, especially during the US presidential elections of the 1970s and 1980s. In particular, in his 1976 presidential campaign, Jimmy Carter referred to Okun's macroeconomic indicator as an index of "economic misery", using it to argue against the economic policies of presidential incumbent Gerald Ford.
The so-called misery index received further significant public attention and eventually became popular during the second 1980 presidential debate, when Governor Ronald Reagan wrongly attributed the index to Carter, using it to criticise the Carter administration's economic policy: "… when he was a candidate in 1976, President Carter invented a thing he called the misery index. He added the rate of unemployment and the rate of inflation, and it came, at that time, to 12.5 under President Ford. He said that no man with that size misery index has a right to seek re-election to the Presidency. Today, by his own decision, the misery index is in excess of 20, and I think this must suggest something." (Reagan, 1980) Since its formulation, the evolution of Okun's misery index over the prior presidential term has often been used to presage the election outcome (Susino, 2012) as well as to provide some information about the presidential approval rating (Kleykamp, 2003). At first glance, Okun's approach seems to be overly simplistic: it takes into account only two aspects of a country's economic performance and it weights the unemployment rate and the inflation rate equally. These criticisms can create the temptation to reject the index in toto, as a rough and excessive simplification. On the contrary, the EDI remains a useful basic tool for two main reasons. First, the misery index seems to provide a useful approximation of the influence of macroeconomic conditions on population well-being, as measured by specific indicators such as consumer sentiment (Lovell & Tien, 2000), the crime rate (Lean & Tang, 2009), the poverty rate (Lechman, 2009) and even the suicide rate (Yang & Lester, 1992), among others. Second, and more importantly, the misery index has turned out to be an insightful idea. Further research has extended the EDI along two, partially overlapping, paths. On the one hand, authors such as Barro (1999) and Hufbauer, Kim, and Rosen (2008) have attempted to improve the original index by including more indicators of the state of health of the macroeconomy (e.g. the GDP growth rate, the real long-term interest rate, house and share prices, and so forth). This idea of an "augmented misery index" has been further developed by adding (and weighting) new variables to obtain a full composite indicator of a country's macroeconomic performance (Setterfield, 2009). On the other hand, the EDI served as a starting point in applied research on the "macroeconomic loss function" (Mayer, 2003). Motivated by the misery index, the pioneering studies by Di Tella, MacCulloch, and Oswald (2001) and Welsh (2007), among others, investigated the relation between macroeconomic performance and subjective well-being in an attempt to develop a reliable social welfare function that might be used to evaluate the effects of shocks and policies on population well-being (Blanchflower et al., 2013).

An alternative approach to compute the misery index

A somewhat different use for the EDI is the analysis of the "optimal levels of inflation and unemployment" (Golden, Orescovich, & Ostafin, 1987, 1990; Yang, 1992; Zaleski, 1990). This approach involves a distinction between the actual and natural rates of unemployment (Wiseman, 1992). The attempt in what follows is to develop these insights by using the expectations-augmented Phillips curve and Okun's law. As is well known, the aggregate supply function can also be expressed as a relation between unanticipated inflation (i.e.
the difference between actual (π_t) and expected inflation (π^e)) and cyclical unemployment, as follows:

π_t − π^e = −α(u_t − u_n), (2)

where α is a constant that measures the change in π_t − π^e associated with a 1-unit change in the difference between actual (u_t) and natural unemployment (u_n) (Abel, Bernanke, & Croushore, 2008). When the rate of inflation is low and relatively stable, as in the case of today's US and many other high-income economies, the expected inflation rate may reasonably be approximated by the inflation rate in the previous period (π_{t−1}). Thus, the equation for the expectations-augmented Phillips curve becomes (Blanchard, 2011):

π_t − π_{t−1} = −α(u_t − u_n). (3)

Finally, by adding π_{t−1} to both sides of Equation 3, we obtain a simple expression for the inflation rate in period t:

π_t = π_{t−1} − α(u_t − u_n). (4)

In other words, given the parameter α, current inflation depends on past inflation and on the deviation of unemployment from its natural rate. This expression will replace π_t in the original misery index. Turning our attention from inflation to unemployment, we introduce the statistical relation between changes in unemployment and changes in output growth. This is actually another influential contribution of Okun (1962). Several slightly different equations connecting the behaviour of unemployment and GDP during the business cycle are commonly known as "Okun's law" (Knotek, 2007). For the purposes of this note, we utilise a gap version of this law, which relates the change in the unemployment rate from period t − 1 to period t (u_t − u_{t−1}) to the difference between actual (g_t) and potential (g*) output growth, as follows:

u_t − u_{t−1} = −β(g_t − g*), (5)

where the coefficient β measures how quickly deviations from the "normal" rate of growth are translated into changes in the unemployment rate (Blanchard, 2011). Again, if we add u_{t−1} to both sides of Equation 5, we obtain a new expression for the unemployment rate in period t, as follows:

u_t = u_{t−1} − β(g_t − g*), (6)

where u_t is a function of the past rate of unemployment, minus some fraction (β) of the difference between the growth rates of effective and potential output. We will use this expression to replace u_t in the original misery index.

A reformulation of Okun's misery index

By replacing both the inflation rate and the unemployment rate in the original misery index (Equation 1) with their expressions from Equations 4 and 6, respectively, we obtain a new formulation of Okun's misery index, as follows:

m_t = (u_{t−1} + π_{t−1}) − β(g_t − g*) − α(u_t − u_n), (7)

where the level of the population's economic malaise, or discomfort, now depends explicitly on the underlying forces that drive the behaviour of unemployment and inflation during the course of the business cycle. Let us consider, for instance, the US economy. Using the FRED (Federal Reserve Economic Data) database from the Federal Reserve Bank of St. Louis, we can easily compute both the original and the revised EDI². In a year like 2008, for example, unemployment was 5.5% and inflation was 4.1%. By putting these numbers into Equation 1, such conditions produce a misery index of 9.6%. Equation 7 allows us to decompose this result into its main determinants. Specifically, setting β = 0.40 and α = 0.73 (Blanchard, 2011), Equation 8 expresses the 2008 value of the index as

m_2008 = (u_2007 + π_2007) − 0.40(g_2008 − g*) − 0.73(u_2008 − u_n). (8)

In the same way, it is straightforward to calculate the level of economic discomfort in any one year (as shown in Table 1). The evolution of the original and the revised misery index in the US economy over the period 1953-2013 is depicted in Figure 1.
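To make the decomposition concrete, the following short Python sketch computes the original index (Equation 1) and the reformulated index (Equation 7) for a single year. Only the 2008 unemployment and inflation rates and the parameter values β = 0.40 and α = 0.73 are taken from the text; the lagged rates, growth gap and natural rate used in the second call are hypothetical placeholders, not the FRED figures, so the printed value is illustrative rather than a reproduction of Table 1.

```python
# Sketch of the misery-index decomposition in Equation 7.
# Parameter values beta=0.40 and alpha=0.73 follow the text (Blanchard, 2011);
# the lagged rates, growth gap and natural rate below are hypothetical inputs.

def original_edi(u_t: float, pi_t: float) -> float:
    """Equation 1: m_t = u_t + |pi_t|."""
    return u_t + abs(pi_t)

def revised_edi(u_prev: float, pi_prev: float,
                g_t: float, g_star: float,
                u_t: float, u_n: float,
                beta: float = 0.40, alpha: float = 0.73) -> float:
    """Equation 7: m_t = (u_{t-1} + pi_{t-1}) - beta*(g_t - g*) - alpha*(u_t - u_n)."""
    return (u_prev + pi_prev) - beta * (g_t - g_star) - alpha * (u_t - u_n)

# 2008 values from the text: unemployment 5.5%, inflation 4.1% -> m = 9.6.
print(original_edi(5.5, 4.1))                       # 9.6

# Hypothetical decomposition inputs (placeholders, not the FRED figures):
print(revised_edi(u_prev=4.6, pi_prev=2.9, g_t=-0.3, g_star=2.5,
                  u_t=5.5, u_n=5.0))
```

With the actual FRED series, the same function can be applied year by year, as in Table 1.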
According to Equation 7, for a given value of the parameters α and β, the level of m in period t is a function of three key factors, namely: (1) the original misery index in the previous period (i.e. the sum of the unemployment and inflation rates at time t − 1, u_{t−1} + π_{t−1}); (2) the output gap, in growth rate terms (i.e. the difference between the growth rates of actual and potential GDP, g_t − g*); and (3) cyclical unemployment (i.e. the difference between the actual rate and the natural rate, or non-accelerating inflation rate, of unemployment, u_t − u_n).

Some features of the "new EDI"

It is worth noting some interesting properties of this reformulation of the EDI. First, the new EDI takes into account the three essential phenomena first considered in verifying a country's macroeconomic conditions: output, unemployment and inflation. Second, given π_{t−1}, rising inflation only starts increasing the level of economic discomfort when the unemployment rate falls below its natural rate. That is, as measured by Equation 7, the output gap and cyclical unemployment are the crucial factors in determining the magnitude of economic misery. Third, the reformulated EDI distinguishes between the trend and the cycle components of both the rate of growth of GDP and the unemployment rate. In other words, it breaks up the short-run and long-run determinants of the population's economic malaise. Fourth, the weighting scheme for both the output gap and cyclical unemployment comes directly from the functioning of the economy, meaning that we are able to measure the parameters α and β by estimating Okun's law and the Phillips curve, respectively. Thus, there is no need to infer α and β by using subjective variables (e.g. individual opinions on personal happiness expressed in life satisfaction surveys). Fifth, and finally, since the growth rate of potential GDP is typically greater than one, the negative impact of recessions on a population's economic well-being is always stronger than the positive impact of expansions, ceteris paribus.

Conclusions

Business cycles are complex phenomena, able to influence economic well-being in several different and interrelated ways. There are, however, some key variables (such as unemployment and inflation rates and the rate of growth of GDP) that play a fundamental role in determining national welfare. That is why Okun's original idea has been found to be a useful application in economics and political sciences. This conceptual paper contributes to the literature on the misery index. Our approach, however, differs substantially from the standard one. Specifically, instead of incorporating new variables into the EDI or investigating the structure of individual preferences about inflation and unemployment, we rewrite the misery index by using the two basic relations of modern macroeconomics. This reformulation is closer in spirit to Okun's intuition, but offers an improved version of the misery index. In particular, regarding the effect of the macroeconomic conditions on a population's economic discomfort, this new misery index focuses on the output gap and cyclical unemployment, allows a distinction between short-run and long-run phenomena, places more importance on output and unemployment rather than inflation, is based only on objective variables and weights recessions more than expansions.
In a nutshell, reformulating the EDI by explicitly including the expectations-augmented Phillips curve and Okun's law is a fruitful way to improve Okun's original idea without any loss of clarity or conciseness.

1. Arthur Okun served on President Johnson's Council of Economic Advisers (CEA) and as Chairman of the CEA between 1968 and 1969 (Brookings Institution, 1980). Some of Okun's main contributions to modern macroeconomic theory and policy are now collected in Pechman (2004).
2. Table 1A in the Appendix contains a short description and some basic descriptive statistics of all variables included in Equation 7. The complete database is available as supplementary material.
International Journal of Swarm Intelligence and Evolutionary Computation

The future arrival of super intelligence and its impact on society raise numerous concerns. Grounded in the research hitherto elaborated by the field of machine ethics, this paper contemplates the challenge of formulating a code of ethics that regulates super intelligence's behavior. The first section discusses the need for this code of conduct and contends why it should be centered on ethics. The second section examines the various complexities of this endeavor. Indeed, the prospect of super intelligence, once alien to the realm of academia, has over the past decades become an increasingly popular focal point of study. The pace at which technology has evolved in the past and is currently evolving has led us to maintain that it will not be long until fully automated, super intelligent, non-human entities environ their creators. As a matter of fact, I.J. Good's renowned "Intelligence Explosion", whereby the fabrication of the first advanced, artificially intelligent entity will catalyze an indefinite progressive evolution of machine intelligence - otherwise considered to be the onset of what Ray Kurzweil regards as the period of 'Singularity' - is believed to be the root of this seemingly fictitious technology. As Anderson [2] explains it in his paper entitled Ethical Issues in Advanced Artificial Intelligence, "several authors have argued that there is a substantial chance that super intelligence may be created within a few decades, perhaps as a result of growing hardware performance and increased ability to implement algorithms and architectures similar to those used by human brains." And yet, for all their inherent mysticism, the means through which humanity will eventually procure super intelligence are naught but a minute, trivial bit of the full picture. Rather, we had better draw our attention towards the more pressing matter: the impact that super intelligence will have on our society. Disregarding the widespread notion of machines' suitability for the "three Ds" - that is, dull, dangerous, and dirty jobs - super intelligence's elevated computational power and ensuing proficiency in any task known to man will bring about a proliferation of machines whose roles in society will be infinitely more complex than they are now, exhibiting both a degree of dexterity that eclipses human capabilities and a will to reengineer themselves ad infinitum. It is indisputable, therefore, that super intelligence will surpass its creator. An effective method to guarantee super intelligence's harmless behavior is therefore needed. For this reason, academia's increasingly popular field of machine ethics has turned to investigating, discussing, and reflecting on the moral dimension of artificial intelligence and machines. Throughout this paper, I will use machine ethics' underpinning notions to explore the plausibility of developing a code of ethics for the future's super intelligent machines. To do so, I will first attend to the debate over whether machine ethics is, in fact, the most suitable approach to the development of behavioral guidelines that mitigate any potential risks that super intelligence might bring by having it ethically evaluate its possible courses of action.
Secondly, I will examine the viability of different practical approaches to controlling super intelligence, highlighting the complications of the attainment of less advanced, present artificial intelligence ethics, and subsequently outlining what I consider to be a theoretically feasible modus operandi. Finally, I will contemplate the differences and similarities between human and artificial ethical actors in order to further raise the question of who would ultimately have the last word given a contradiction between human ethics and super intelligence's ethics.

The Need for Machine Ethics in our Pursuit of Super Intelligence

The prospect of super intelligence is unquestionably attractive. However, the mere thought of coexisting with a lifeless entity infinitely more intelligent than any biological creature known to man is enough to spark distress in even the most fervent of its advocates. Capable of surpassing human achievement in practically any field or activity, the power of super intelligence must not only be regarded as an ideal source of widespread benefit to man but as a potential root of uncertainty and harm as well. Therefore, it comes as no surprise that the short answer to the seminal question "Why do we need machine ethics?" is, simply put, because it is in the ethical or unethical behavior of super intelligence that the prosperity or demise of our existence lies. Notwithstanding, let us explore a more thorough and convincing response. From a historical standpoint, the development of super intelligence might be looked upon as a marked parallelism to that of computers [3]. As noted by Asimov [4], although the exponential expansion of the computer industry throughout the second half of the 20th century and, more prominently, the first two decades of the 21st century has been accompanied by a myriad of societal benefits that have facilitated man's survival, the computerization of our culture has also been the root of numerous unpropitious trends such as, but not limited to, cyber-terrorism, child pornography, and the black market. Hence, discussions of the need for machine ethics understandably call for consideration of the negative impacts that futuristic developments entail, asserting that, without foresight, emerging technologies have come at a cost - a remark that becomes all the more critical when discussing super intelligence. Still, this line of research goes on to claim that, rather than being sufficient motives for the termination of our pursuit of non-human intelligence, these concerns underline the necessity of contemplating the risks that the materialization of such technologies supposes and, subsequently, the need for our collective effort to ensure their mitigation. As a matter of fact, it is these very preoccupations which form the bedrock for machine ethics, as the field seeks to develop a sense of action that might allow autonomous beings to not only refrain from acting unethically but also have the inherent will to consistently act ethically. Therefore, the field must progress on par with, if not lie at the crux of, technological advancement. And yet, opposition to the development of machine ethics still remains passionately adamant. Arguably, it doesn't take a profound instruction on the inner workings of machines to understand how electrical systems work.
In the most fundamental sense, therefore, an antagonist to the field of machine ethics might claim that since any electrical machine has an absolute dependence on the flow of electricity through its circuits, rather than trouble ourselves with the philosophical quandaries that obscure the attribution of moral sense to these non-human beings, the technological development of super intelligence should be the sole focus of our attention, for, in the event of its misdemeanor, turning off a switch will suffice to prevent any potential injury from being inflicted on human beings. But this is a rather shortsighted claim, considering the latent ramifications of an antagonistic form of super intelligence: although it is true that these beings could be turned off with a single switch, the extent of their future involvement in society will make them so imperative to the adequate functioning of our societal structure that simply "turning them off" would practically amount to suicide [5]. In other instances, by resorting to naught but a vague apprehension of its very definition, dissentients might assert that, given the insurmountable brainpower of super intelligence, scientific furtherance should not pay much heed to the ethical virtues of the technology, but rather to its actual creation, for not only will it inherently strive to do good (assuming that the definition of good is clear - herein lies another problem, which will be explained in more depth later on) but it is also through the delegation of important decisions to this entity that social benefit can be maximized. Notwithstanding, as Bergman [5] elegantly points out: "The option to defer many decisions to the super intelligence does not mean that we can afford to be complacent in how we construct the super intelligence. On the contrary, the setting up of initial conditions, and in particular the selection of a top-level goal for the super intelligence, is of the utmost importance. Our entire future may hinge on how we solve these problems." And yet, it is machine ethics' extensive contemplation of the difficulties entailed by choosing an ethical theory which both suits our society's needs and is commensurate with our expectations of moral machine behavior that most adversaries of the field disregard. As a result, aware of the potential harm that artificial intelligence and super intelligence might cause, machine ethics' opponents like Bostrom [6] have devised what is referred to as "Safety Engineering". As is subtly implied by its name, this emerging field seeks to formulate pathways leading to safe artificial intelligence, autonomous machines, and ensuing super intelligence by incorporating the recognition of the need for "safe machines" into the field of engineering. Approaching the problem of autonomous systems' correct behavior from a more empirical standpoint, safety engineering discards the deliberation on the ethical dimension of non-human intelligence and favors instead practical experiments in environments that permit the adequate control of these forms of advanced technology, allowing for the study of their behavior. A set of guidelines governing the means to ensure proper machine behavior would therefore ensue.
There are two rebuttals to this perspective. First and foremost, considering the extensive number of variables that pertain to a single action in the real world, we cannot possibly expect that the study of machine behavior in a controlled environment will suffice to adequately understand the resulting consequences of such behavior when confronted with the outside world. Furthermore, even if this limited study could actually manage to fully grasp all the consequences of a single action, it seems rather implausible that a team of human programmers would be capable of taking them all into account when programming the machine's response to a given situation. Noting with concern that interaction with the outside world is filled with these decision-making processes, we can conclude that safety engineering's approach to correct machine behavior seems non-viable. Secondly, safety engineering falls short of understanding the full extent of machine ethics' purpose. While the former perceives artificial intelligence and autonomous machines as mere tools, the latter bears in mind that it is in our best interest to cooperate with them. Therein lies the bright line distinguishing safety engineering's pursuit of preventing unethical behavior in autonomous, intelligent systems from machine ethics' campaign to motivate these systems to act ethically. Unlike safety engineering, the cornerstone of machine ethics is not to prevent super intelligence from having unethical thoughts, but rather to make super intelligence think ethically, so that all of its mental processes, whatever they may be, will be permeated by a will to help, respect, and value humanity. Hence, when their evolution reaches the point at which they edit and engineer themselves, we will know not that their ethical dimension remains unedited, but rather that this ongoing evolution is, in and of itself, ethical. On the other hand, machine ethics provides some convincing arguments for its pursuit. There are strong reasons to believe that machines - and thus super intelligence - would amount to a better ethical actor than man himself. According to research by Bostrom [7], this is due to three fundamental reasons: first, machines have greater computational power, which facilitates the prediction of the consequences of actions and therefore makes ethical decisions more accurate; second, human beings display a tendency towards bias when making ethical decisions, usually favoring those close to them, while machines do not; and third, whether or not due to their markedly inferior processing speed, human beings are likely to fail to consider all the possible actions that might be taken in a given situation. Other advantages of incorporating ethics into super intelligent systems include their capacity to carry out an action repeatedly and competently at high speeds, as well as their ability to share information among themselves at an equally efficient rate. Perhaps most importantly, unlike human beings, machines are adept at making decisions unemotionally, which "means that they can strictly follow rules, whereas humans tend to favor themselves and let emotions get in the way of clear thinking. Thus, machines might even be better suited to ethical decision-making than human beings [8]". Furthermore, as Gips [9] points out, the inherent detachment entailed by the consideration of human virtues, ethics, and morals in machine ethics will enable us to understand ourselves more profoundly.
In other words, by attempting to formalize our ethical behavior and make our own morality the subject of this field's study, not only will we sow the seeds of a brighter, super intelligence-encompassing future, but we will also reap the fundamental benefit of further comprehending what it means to act, think, and exist ethically. In conclusion, therefore, machine ethics should not only be regarded as the preferable means to approach the challenge of ensuring advanced artificial intelligence's adequate, safe, and beneficial behavior, but it should also be seen as a theoretical-practical venture to break down and formalize the ethical dimension of human beings. Ergo, the complete answer to the question "Why do we need machine ethics?" may very well be: because it is the field which uses the knowledge of the ethical self hitherto developed by our species in order to analyze what constitutes correctness in our present society, so that it can help ensure the safe, fruitful propitiousness of our seemingly unrealistic, technologically-dependent future [10]. The Approach towards the Ethics of super intelligence The endeavor to control super intelligence by instilling in it a sense of morality that will dictate its behavior is not an exclusive competence of machine ethics. As a matter of fact, there is substantial literature on the topic that adopts an approach unlike that of this emerging scientific-philosophical field. An example of an alternative approach is that of Bostrom [7]. In his book, entitled Superintelligence: Paths, Dangers, Strategies, he puts forth the concept of a "box" in which super intelligence can be contained, which, he notes, would render the powerful entity inside the enclosure harmless by isolating it from any contact with the outside world save a single, controlled communication channel with scientists. Furthermore, Bostrom argues, the controlled environment would allow scientists to determine the super intelligence's knowledge of our real world. Notwithstanding its undoubted effectiveness, however, this method of control is, to my mind, rather futile, for although it mitigates the dangers of super intelligence, it does so at the cost of forfeiting its potential to aid human beings in the search for a solution to global issues such as hunger, poverty, and inequality, inter alia. In light of this unsuitability, Hall [11] proposes yet another method of control which happens to be slightly more akin to that sought by machine ethics. Coining the term "motivational control", the author suggests giving this advanced form of artificial intelligence a sound, beneficial, ultimate goal whose achievement should be the supreme objective of each and every one of super intelligence's actions. As he explains it himself, "Its top goal should be Friendliness. How exactly friendliness should be understood and how it should be implemented, and how the amity should be apportioned between different people and nonhuman creatures is a matter that merits further consideration" [12]. According to the author, because a sound, rational agent whose ultimate goal is X would not adopt a new goal Y if, in doing so, it would contradict its pursuit of X, super intelligence would refrain from acting in such a way that contradicts its friendliness towards humanity. Despite appearing detached from any ethical considerations, this method of control's alarming lack of clarity and clear need for further consideration -to which Bostrom himself alludes -is, in fact, nothing short of a desperate call for machine ethics. 
The ethical dimension of non-human intelligence, therefore, is central to the complex consideration of super intelligence's reliability when interacting with the real world. Even so, some of machine ethics' approaches to the control of super intelligence -strongly resembling science fiction -seem disproportionately implausible at this point in time. Namely, it has been put forth that, taking into consideration the power that super intelligence will provide us with and the developments that the field of neuroscience will achieve in the future, it should not be ludicrous to contemplate the possibility of scanning a human brain and incorporating that scan into an artificial neural network. The artificial intelligence would thus possess the ethical thoughts of the human being. On the other hand, one could propose that, given its superior intellect, super intelligence could be taught ethical virtues, as it is done with young children -a Turing Child approach of sorts. Notwithstanding, not only are both of these propositions currently unviable, but they also entail super intelligence's arrival prior to the development of its ethics. Their pursuit would therefore result in the potential risk of creating an unsafe entity that may either trick us into believing that it is learning to act ethically when in truth it is not, or it might just blatantly refuse to adopt the ethical behavior we seek to impose on it -in which case its potential will cease to be exploitable, unless we are willing to risk the consequences. Hence, in its pursuit of super ethics, machine ethics must first address the more tangible issue of artificial intelligence's ethics, for it is in the hands of this upcoming human-level intelligence that the creation of a safe, rational, and benevolent super intelligence largely lies. To address this concern, it must first be noted that these forms of intelligence need to undergo a pivotal transition from being implicit ethical agents who are programmed to act ethically (or at least avoid acting unethically) to explicit ethical agents -autonomous entities capable of reasoning appropriately in the face of an ethical dilemma and of making a justified decision [13]. In venturing into the exploration of the means by which artificial intelligence's ethics -and the ensuing super ethics -can be attained, it is critical to first address the fundamental question "can machines think?". According to Searle [14], the answer is, simply put, no: in his paper, Minds, Brains, and Programs, the author uses the famous example of the Chinese Room to disprove the claim that machines can understand what they are being told, maintaining instead that their computational processes are naught but a set of rules being followed but not comprehended. Therefore, he concludes, while the machine appears to understand, in truth, it does not. While the point of this paper is not to discuss this aspect of machine behavior, I will try to refute Searle's argument as succinctly as possible. In essence, the answer to the question of whether machines are capable of thought boils down to the definition that "thought" is given. From my point of view, thinking is the process through which human beings process information by using knowledge that has been acquired previously. Human beings understand that eating food entails chewing because they have learnt this from experience. 
Through methods like deep learning, artificial intelligence is capable of processing data, altering its algorithms on a trial-and-error basis, and processing new data using these new algorithms, only to repeat these steps indefinitely and continuously hone its performance. Could a machine then not relate eating with chewing? Would this, then, not be considered thinking? As a matter of fact, do human beings not learn through rules? Does a child not learn to speak, read, and write, amongst countless other things, through rules? Furthermore, there is vagueness in how we define thinking as an actual state. How can we prove somebody is actually thinking? The best proof we can ascribe to thought is a behavior that demonstrates it. Would a machine which behaves as if it is thinking not be considered to be thinking then? The underlying notion on which Searle's argument rests is that the different parts that make up the 'Chinese Room' -the human in charge of translating the input, the book containing the rules of translation, etc. -do not individually understand Chinese. Instead, they are merely gears that work in unison to give this impression. And yet, is this not akin to how our brain digests sensory input? Let us briefly examine, for instance, the act of listening. While our ears are capable of picking up and processing sound waves, it would be misguided to assert that they understand speech. In a similar fashion, it would be erroneous to ascribe this ability to the neurons within our brains that process the sensory input from our ears by transmitting electrochemical signals. The list could go on indefinitely, but the bottom line is this: when listening or undertaking almost any mental process, human beings display understanding, or thought. Yet because a single ear or a piece of our brain would not suffice to mentally digest speech, it is fair to claim that it is our system, and not each one of its individual components, which is exhibiting this behavior. Equally so, it is not the duty of the human in charge of translating the Chinese input, or that of the book containing the translation rules, to understand or think about the Chinese symbols being interpreted. Rather, the comprehension of the content is the product of their collaboration: the system -or, equivalently, the Chinese Room as a whole. Evidently, a more thorough discussion on this matter is required, but given the space available in this paper, this will have to suffice. Another concern relevant to the consideration of the practical approach to machine ethics is artificial intelligence and automated systems' limited capacity to take their surroundings into account. This awareness must transcend mere hardware-based recognition of real-world elements around them and incorporate a deeper, more profound understanding of the consequences of their actions in a real-world scenario. The complication underlying this aspect of machine ethics is that it is not easy to clearly formalize and compute the fundamental effects of actions in these scenarios. Put differently, it is nothing short of difficult to clearly define key words in ethical considerations such as "good", "bad", "beneficial", "detrimental", etc., and then make a machine comprehend their meaning. In addition, even if we did manage to achieve this latter goal, we cannot be sure that we possess an adequate conception of what these words constitute. 
Not only is this meaning obscured by basic considerations such as conflicts between ethical theories, but the multiplicity of cultures around the globe and the subsequent variations in what they individually consider to be right and wrong further complicate the quest of granting these terms a computationally applicable meaning. The dimension of this computational problem is made clearer by contemplating how specific the "ethical laws" designed to ensure that machines apply their understanding of such key words need to be. It is clearly not the same to tell a self-driving car to "stop at red lights" as it is to tell it to "do not cause harm" [15]. As a matter of fact, the laws hitherto formulated by our legal systems lack a clarity that is essential to dictating an automated computer's behavior [16]. Although one might argue that super intelligence entails an inherent comprehension of the world as a whole, we must understand that artificial intelligence, as its precursor and co-creator, does not excel at this to such a high degree. Therefore, how to guarantee that intelligent machines understand key words that characterize actions as "positive" or "negative" and thus act in such a manner that maximizes the wellbeing of humans remains a subject in need of study. A last point of contention in the exploration of applicable machine ethics is the question of whether or not emotions should be an integral part of ethical machines. Despite classical literature's opposition to the presence of emotions in the process of rational decision-making, more recent research on the topic labels emotions as an element necessary for making these choices. Personally, I side with the classical perspective on the matter. While it might be true that emotions play an important role in our decisions, it is precisely emotions, and the bias they lead to, which characterize humans as the inferior ethical actors. Nevertheless, this does not mean that forms of non-human intelligence should not understand emotions, for this is a critical aspect of the evaluation of an action's consequences. That is, in spite of the fact that it is necessary that machines are capable of understanding the emotions that a human being might feel as a result of a certain action being performed, it would be detrimental to the attainment of an ethical artificial intelligence if emotions actually affected how the decision-making process is carried out. Put differently, it is imperative that an understanding of emotions is taken into account in the ethical calculations carried out by ethical machines, not that a machine's emotions -if at all existent -determine if or how the ethical calculations will take place. And yet, the greatest problem faced by machine ethicists continues to be the determination of the best ethical theory to incorporate in the automated systems. Much of this field's literature concurs that there is no single one which can be considered to be absolutely correct. This is arguably due to the fact that all ethical theories and their discussions are subject to the controversial issues described above, and it is therefore no easy task to choose or formulate a single theory that satisfies them all. Notwithstanding, the most practical approaches towards the creation of an ethical artificial intelligence have been governed by two theories, namely utilitarianism and action-based ethics. The former's appeal lies in the fact that it provides a simple method to compute and determine the correctness of an action. 
By subtracting the pain caused to a person from the pleasure that person receives, the machine could easily make a choice when faced with an ethical dilemma. Furthermore, because the information a machine would require to make its calculations is virtually the same as that required by a human being, the formalization of this ethical theory is a relatively straightforward task. According to Anderson et al. [2], however, utilitarianism cannot be considered an ethical theory appropriate for the challenge faced by machine ethics: not only can it violate people's rights, since it is capable of justifying blatantly immoral actions (enslaving the few, for instance, for the benefit of the many), but it also fails to take our notion of justice into account, for it judges actions based on their consequences as opposed to what is just -what people deserve. Action-based ethics, on the other hand, evaluates the morality of the action in itself. As is the case with W.D. Ross' prima facie duties [17,18] -essentially a set of variables that must be taken into account when considering an ethical action -this method of calculation allows the actor to extend his ethical scope beyond the consideration of the pain or pleasure caused by an action and evaluate instead the justifiability of the action itself. Because there is no absolute duty, it is the ethical actor's responsibility to give each duty a specific weight depending on the situation. This makes the ethical theory far more applicable, since it is malleable enough to be used by automated systems in different environments. A demonstration of the application of action-based ethics carried out by Anderson [3] involved the programming of a system that would require the user to assign the different weights to each duty for a single action. Subsequently, through a series of computations, the program would determine whether the action should be taken or not. According to the researchers, this program could be further enhanced by taking into account the effects of these duties on the different individuals impacted by the action. Furthermore, it was pointed out that the software could potentially be allowed to attempt to make the ethical decisions on its own by assigning weights to the duties autonomously [19]. The researchers could then compare the computer's results with what they considered to be ethical, and "teach" the machine what the correct weights should be. The machine would then relate the correct weights to the specific characteristics of that particular case. As a result, through this process of trial and error, the system would learn to assign the weights in a way that is considered to be ethical for a specific situation, and progressively become better at it. Although this method is an effective and controlled means of formalizing a "human" approach to ethical decision-making, it restricts the "correct" assignment of the weights to the judgment of the researchers. Machines operating in a real-world context, however, would be faced with a whole host of situations where the assignment of the weights requires knowledge that transcends the scope of the scientists' expertise. In these cases, the development of the ethical program would greatly benefit from the input of experts in the different fields of ethical machines' application. As it is explained by LaChat [15]: 
"For example, one computer program seeks to capture the medical diagnostic ability of a certain physician who has the reputation as one of the best diagnosticians in the world. The computer programmer working with him tries to break this procedure down into a series of logical steps of what to the physician was an irreducible intuition of how to go about doing it. With a lot of prodding, however, the diagnostician was soon able to break these intuitions down into their logical steps [20,21]. Perhaps this is true with all "intuitive" thinking, or is it? If we assume that ethics is a reasonable, cognitive undertaking, we are prone to formalize it in a series of rules, not exceptionless rules but something like W. D. Ross's list of prima facie obligations: a list of rules, any one of which might be binding in a particular situation." In order to further facilitate the formalization of human beings' approach to ethical decision-making so as to make it computational, then, it would also be ideal to merge this approach to action-based ethics with the concept of casuistry. Based on the idea of comparison between cases, casuistry proposes that ethical decision-making be addressed by contrasting different situations and their characteristics in order to relatively decide what the best course of action is for a specific case. By drawing an exhaustive analogy between 16th-Century Jesuit Matteo Ricci's Memory Palace, where the storage of memory is facilitated through the mental simulation of a palace with numerous rooms and the attachment of that which one wishes to remember to those rooms and the items contained within them, Searle describes casuistry as the modus operandi of approaching an ethical decision by juxtaposing the case at hand with other ethical situations of the same nature and subsequently comparing their individual characteristics, or circumstances that define them. Put differently, in relation to Ricci's mental edifice, casuistry would amount to walking around the palace's rooms, referring to the ethical decisions or situations, and contrasting their interiors, or particular features/characteristics. As a result, the cardinal perquisite of implementing a casuist approach, Jonsen explains, is that "the ultimate view of the case and its appropriate resolution comes, not from a single principle, nor from a dominant theory, but from the converging impression made by all of the relevant facts and arguments that appear in each of those spaces" [22]. Hence, by adopting a casuist procedure, the machine could potentially be exposed to millions of situations where a human being makes a decision regarding an ethical dilemma that is believed to be morally correct by ethicists. This information could then be processed through refined methods at which machines are progressively excelling such as deep learning. This would facilitate the evaluation of the factors involved in a situation immensely, for instead of having a programmer manually compute all the possible variables that are involved in a single case, the machine could learn to draw patterns between the situations and thus learn to recognize these variables or features in previously unseen scenarios. Anderson [3] and Wallach et al. [23] system, for instance, could learn to form patterns relating the appearance of certain factors in different ethical situations and the weights assigned to each one of Ross' duties for those situations. 
This way, the presence or absence of one of these variables could translate into a more accurate assignment of weights. Such pattern recognition, sprouting from casuistry, would also greatly simplify machines' understanding of emotions. By having human beings label the emotions present in different situations and having the machine compare multiple scenarios, the system would be better able to grasp the causes that sparked those emotions and therefore act in a way that maximizes wellbeing. Through the fundamental methodology of comparison that casuistry proposes, therefore, not only would the scope of ethical machines' learning be widened significantly, drawing conclusions from a myriad of real-life cases as opposed to a narrow research database, but it would also facilitate machines' grasp of situational factors that human beings subconsciously account for, or even overlook, when making ethical decisions. I concur with Anderson et al. [2] insofar as the integration of ethical machines in society is concerned. As it is proposed in their paper, entitled Towards Machine Ethics [23,24]: "We suggest, first, designing machines to serve as ethical advisors, machines well-versed in ethical theory and its application to dilemmas specific to a given domain that offer advice concerning the ethical dimensions of these dilemmas as they arise. The next step might be adding an ethical dimension to machines that already serve in areas that have ethical ramifications, such as medicine and business, by providing them with a means to warn when some ethical transgression appears imminent. These steps could lead to fully autonomous machines with an ethical dimension that consider the ethical impact of their decisions before taking action." In essence, in suggesting that machines first advise human beings by processing data pertaining to an ethical circumstance and then coming up with a plausible course of action, the authors are essentially alluding to an augmented cognition of sorts [24]. This approach bears a strong resemblance to the decision support systems discussed by David Martinez in his paper entitled Architecture for Machine Learning Techniques to Enable Augmented Cognition in the Context of Decision Support Systems. As Martinez [19] explains, "The field of augmented cognition facilitates reaching insight after a significant amount of processing is done in the front-end of the decision support system," whose "objective is to drive, via a human-machine interaction, to the shortest decision time with the right amount of data volume." In other words, the main objectives of decision support systems are collecting and processing data in order to facilitate its understanding, developing models of human cognition that can be extrapolated to machine learning, and providing assistance in decision-making. To do so, the author points out, these artificial advisors first acquire data from the external world through multiple sensors or machine-to-machine communication. The data is then grouped into the appropriate categories, and analyzed through various computational processes in order to transform information into knowledge. Finally, a probabilistic measurement offers possible courses of action to the user and provides numerical estimates of their consequences. If at this point the user feels the decision support system is lacking information, she may ask for more data. This is the underlying basis for the ethical advisor to which Anderson [3] alludes. 
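The following is a minimal, hypothetical sketch of such an advisory loop, written only to make the pipeline just described (acquire, categorize, estimate, recommend, accept feedback) concrete; the class and method names are invented for illustration and are not drawn from Martinez's architecture or Anderson's advisor.

```python
# Hypothetical sketch of an ethical decision-support loop: the advisor only
# recommends; the human user decides and can correct the advisor's estimates.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Recommendation:
    action: str
    estimated_benefit: float  # crude probabilistic estimate of the action's outcome

@dataclass
class EthicalAdvisor:
    # Each estimator maps raw situation data to a benefit estimate for one action.
    estimators: Dict[str, Callable[[dict], float]]
    corrections: List[tuple] = field(default_factory=list)

    def advise(self, situation: dict) -> List[Recommendation]:
        """Acquire and categorize the situation, then rank candidate actions."""
        recs = [Recommendation(a, est(situation)) for a, est in self.estimators.items()]
        return sorted(recs, key=lambda r: r.estimated_benefit, reverse=True)

    def feedback(self, situation: dict, chosen_action: str) -> None:
        """Store the user's corrective choice for later (supervised) re-training."""
        self.corrections.append((situation, chosen_action))

if __name__ == "__main__":
    advisor = EthicalAdvisor(estimators={
        "warn_patient": lambda s: 0.7 * s.get("risk", 0.0),
        "stay_silent":  lambda s: 0.2 * (1.0 - s.get("risk", 0.0)),
    })
    situation = {"risk": 0.8}          # illustrative input from sensors or records
    for rec in advisor.advise(situation):
        print(rec.action, round(rec.estimated_benefit, 2))
    advisor.feedback(situation, "warn_patient")   # the human keeps the last word
```

The point of the sketch is only the division of labor it encodes: the machine processes data and proposes, the user disposes, and the stored corrections feed the supervised learning discussed next.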
And yet, the utility of modeling the first ethical machines as decision support systems capable of augmenting and learning from human cognition lies in the Human-Machine Interaction (HMI) that these systems involve. As highlighted by Martinez [19], the corrective feedback the user provides to the machine is critical in order to make probabilistically reached decisions more accurate and to minimize false positives and false negatives. This supervised learning would play a key role in the improvement of machines' ethical decision-making. Furthermore, as ethical machines become more autonomous, their understanding of human cognition and behavior should also deepen. Therefore, it would be ideal to integrate a degree of collaboration between the user and the ethical advisor in Anderson et al.'s gradual approach. As noted by Miller and Ju [20], there are many benefits to reap from the cooperation between human beings and automated systems. While the former excel at handling novel situations, the latter are superior when it comes to executing preset actions given a determined set of inputs. For this to work, it would be imperative that the user be predisposed to act ethically and abide by the predefined moral standards that we seek to make machines understand. It would also be necessary that an effective communication between the machine and the user be established, whereby the automated system can understand human beings' mental approach to ethical decision-making. Not only would this allow the automated system to gain a "powerful extra dimension of capability", according to Miller and Ju [20], but it would further allow the machine to learn what the user takes into account when facing ethical decisions. Moreover, as Miller and Ju [20] explain, both the user and the machine must possess a clear notion of each other's roles in this cooperation: "The necessity for the computer to hold a model of the [user], and for the [user] to hold a model of the computer presents a design challenge - designing understandable systems and feedback mechanisms so that the two entities can truly share control. With sensors and machine intelligence enhancing the capabilities of the [user], and backstopping human failings, and with human intelligence expanding the capabilities of the automated systems, the two can be considered to extend or expand each other's capabilities." Were these prerequisites to be met satisfactorily, such collaboration could potentially improve human beings' ethical decision-making capabilities in the short run thanks to the provision of relevant data and, in the long run, enhance machines' understanding of human beings' ethical notions, facilitating their supervised learning of ethics. By adopting Anderson's gradual approach in the integration of ethical machines that abide by action-based, casuist machine ethics, and first structuring these automated systems as decision support systems intended to enhance human cognition, not only would humans be exempt from having to judge machines for their actions, since it would ultimately be humans who would be making the decisions, but this gradual process of integration would also grant us more control with regard to the real-world scenarios that machines are exposed to and translate into short- and long-term benefits. 
This controlled exposure, then, would further enable us to collect data on ethical machines' behavior in real-life contexts, allowing us to hone the ethics of artificial intelligence and, in the future, its infinitely more powerful successor. The Implications of an Ethical super intelligence The attainment of an ethical super intelligence capable of perceiving the world in a manner akin, if not superior, to that of its creator for the very purpose of giving him counsel is, in truth, a rather disturbing thought. And yet, from a historical standpoint, such a pivotal cataclysm seems all but predictable: throughout its existence, humanity has not been obedient to a single entity, or held the word of a single entity to be true, but has instead progressively transitioned from the worship of one entity to another, gradually detaching itself from the realm of the ethereal and moving on to that of the physical -while the ancient Romans worshiped their Gods and granted them responsibility for the occurrences of the world around them, and the Renaissance bequeathed this accountability to man himself, it now appears as if it is the oncoming technological era of the Singularity which will bestow this power to man's creation: technology. Now, as if the Roman Gods had created man for the sole purpose of yielding them their will, man is at the brink of a revolutionary epoch in which it will be advised by the product of his intelligence. Super intelligence, however, is going to evolve. Whatever entity it is we manage to contrive will enhance itself exponentially. Merely thinking that, at some point or another, we will be advised by an entity too complex for even us -its creator -to understand is unquestionably frightening. Will it actually behave ethically, then? We have hitherto addressed the query of how to make super intelligence as ethically right as possible. And yet, what would happen if this human ethics-bred superethics turns out to be 'righter' than its creator's ethics? Put differently, what if superethics and human ethics turn out to disagree? Who would have the last word? Man, or machine? In order to properly address this question, we must first scrutinize the similarities between human beings and super intelligence. To do this, I will try to refute different claims aimed at differentiating them. In doing so, it is not my purpose to claim that machine is man's equal, but rather to further raise the question of whether the creator and its creature truly are blatantly distinguishable. Referred to as the "Bright line argument" by Moor [21] in his paper entitled The Nature, Importance, and Difficulty of Machine Ethics, this claim states that only full ethical agents can be regarded as ethical agents -agents capable of making reasonable, justified ethical decisions. However, as the author himself goes on to explain, this assertion is misguided for two fundamental reasons: First and foremost, it implies a disregard for other lesser types of ethical agents, such as implicit (an autopilot system on a plane that has been programmed to take its passengers to the correct destination) and explicit (a machine capable of making a choice when faced with a controversial ethical dilemma) agents. Albeit not as proficient in resembling a human's ethical decisionmaking process, these agents nonetheless clearly display a form of ethical behavior that must not be undermined. 
Secondly, in response to the allegation that machines can never be full ethical agents because consciousness, intentionality, and free will - the key characteristics of full ethical agents, or human beings - are beyond them, Moor contends that, even though non-human intelligence might currently fall short of exhibiting these traits, there is no empirical evidence to rule out that it may come to possess them at some point in the future. Furthermore, I would dare affirm that super intelligence will, in fact, possess these features. From machine ethics' theoretical standpoint, the foundations for this seemingly illusory ethical accomplishment are presently being laid: consciousness will be given to machines because, albeit computational, the incorporation of ethical programs in these systems will grant them an awareness of the consequences of their actions; intentionality is the very groundwork of machine ethics' theory, for at the very crux of the field's research lies the objective of having machines intend to minimize harm and maximize wellbeing; and lastly, these automated systems will also be furnished with free will, for they will choose how to act in every situation. Admittedly, their choice will be limited to a set of ethically-sound alternatives, but they are being given a choice nonetheless. Moreover, as LaChat [15] puts it, "If free will is real in some sense, there is again no reason to believe that it might not be an emergent property of a sophisticated level of technical organization, just as it might be asserted to arise through a slow maturation process in humans. I should also add that not all AI experts are convinced an AI could not attain free will." Another argument intended to highlight the differences between human beings and machines is that of their supposedly different learning processes. Specifically, the argument states that human beings and machines cannot be regarded as equivalent ethical actors given the dissimilarity in the way in which they grasp ethics. While the former largely learn "moral rules by osmosis, internalizing them not unlike the rules of grammar of their native language, structuring every act as unconsciously as our inbuilt grammar structures our sentences", the latter would just require a chip containing an ethical program in order to operate ethically. Therefore, the argument goes, machines do not possess the profound understanding of the world around them that is imperative for adequate ethical decision-making. This latter claim -notwithstanding the truthfulness of Hall's previous assertion -can be refuted with the following observations: firstly, modern machine-learning algorithms, such as deep learning, enable machines to learn from the analysis of previous experience. Through a cyclical procedure of trial and error not unlike that proposed in the previous section, involving the combination of action-based theories, casuistry, and corrective feedback, machines could theoretically learn and be taught to act as an ethical human being would. Therefore, maintaining that ethical machines would not possess a gradually honed perception of their surroundings is erroneous. The fact that a machine's learning process would incorporate the in-depth scrutiny of millions of diverse scenarios is clear proof of the contrary, and could further support the claim that these systems' perception would be superior to that of human beings. 
As a rebuttal to this reflection, it would be tempting to assert that humans, unlike machines, are aware of contextual factors that transcend beyond mere evaluations of their physical surroundings and englobe traditional and cultural beliefs that have a potential impact on ethical reasoning. In other words, as it is put by psychologist Lawrence Kohlberg, "situational factors are extremely important in moral action," for in many cases peer group and institutional shared norms may be moral or nonmoral in their content." Hence, one might contend, machines will never attain the moral reasoning that is characteristic of human beings. My response to this claim is simple, and not unlike that of Moor, which was presented earlier: there is no way to prove that this will not be plausible at some point in the future. As a matter of fact, alluding to the action-based, casuistry-guided, HMI-driven ethical approach outlined earlier, it seems conceivable that artificial intelligence could eventually learn to distinguish these cultural trends and take them into consideration when choosing an ethical course of action. Furthermore, bearing in mind the computational power super intelligence is deemed to possess, it is all the more believable to asseverate that it will excel at doing so. And yet, it remains an insurmountable truth that a machine will never truly be man's equal. Although the ethical behavior of the former might bear a strong resemblance to the latter's -as I have tried to point out in the previous paragraphs -I do remain an adamant proponent of Luzac's publication entitled Man More than a Machine (1752), which stresses the differences between both creatures by dispelling any claims that might assert otherwise [18]. Indeed, there are in fact notable dissimilarities between human beings and artificially-intelligent entities, as it was explained in the first section: machines, unlike their counterpart, are exempted from being misguidedly swayed by emotions when making ethical decisions. Whilst a program comparable to that which was proposed previously would grant machines a comprehension of the emotions relevant to the evaluation of an action's impact, this fundamental understanding is central only to the computational process carried out by the machine, not the structure of the process itself. For these same reasons, machines are exclusively capable of overcoming the forces of self-interest and common sense. Furthermore, machines are not subject to the Law of Conscious Realization, whereby moral action precedes and catalyzes moral thought. This translates both into man's arguably innate tendency towards moral, ethically-correct action versus machines' increased reliability as far as ethical behavior is concerned, for the implausibility of the latter to set action before thought ensures that ethically-adequate thought will be followed by equally suitable behavior. Lastly, a stark difference between human beings and an ethically-correct super intelligence lies in the degree of awareness and therefore accuracy that the machine would manage to attain as a result of its computational power. Hence, coupled with an adequate action-governing, ethical program, the awareness of this elevated number of variables when carrying out the decision-making process implies a pronounced superiority of super ethics over more rudimentary human ethics. It would appear, then, that not only are human beings and machines utterly clashing, but the latter's ethical dimension appears to be superior to that of the former. 
In other words, although super ethics is unquestionably different from human ethics, it does, at least mildly, come across as the better form of ethical reasoning. And yet, does this then mean that man is inexorably bound to listen to super ethics, holding its mathematically-wrought counsel in unparalleled regard? Put differently, if the ancient Romans were to be told by our infinitely different, more evolved and arguably more knowledgeable, modern society that slavery, being ethically unjustifiable, should be abolished in its entirety, ought the Romans to pay heed to our advice, or turn a blind eye to our counsel, adamantly convinced of the ascendancy of their knowledge? The point I seek to make with this analysis is not that machines are superior to man, nor is it that, in consequence, human ethics should be subordinate to super ethics. Rather, my intention is to underline the question of who would ultimately be right by pointing to the flaws inherent in the seemingly obvious yet misleading answer that machines are created by man, and thus it is man who determines what is ultimately right. In lieu of this evasive retort, this examination proposes that the question be considered further through studies of the parallels and dissimilarities between human beings' and machines' ethical reasoning. In any case, supplementary analysis is urgently needed, for although the correct answer to the unsettling question "Should machines have the last word?" remains unclear, the potential reverberations of the wrong one augur nothing but the onset of a somber, apocalyptic calamity. Conclusion The seemingly fantastical thought of attaining a viable, adequate code of ethics for a futuristic super intelligence is, in conclusion, not entirely surreal. Rather, by discussing the plausibility of materializing these ethics in an artificial form of lesser intelligence, it seems that machine ethics has laid the groundwork that may enable us to deliberate on this subject. While the exact practical approach is yet to be determined, future research must not fail to be mindful of the various complex requirements that have been outlined in this paper, for the adequacy of such systems depends on whether or not they are met satisfactorily. Notwithstanding, in our pursuit of the correct ethical program that will govern the actions of a super intelligent entity, we must not wander away from the equally pressing considerations of what implications such a machine, or its possible disagreement with man, might entail. The path towards the attainment of a feasible, safe machine ethics is a long and winding one, and although academia's efforts have granted us the knowledge to tread it steadily, there is still much to be done. The furtherance of research governing the practical application of ethical theories in machine ethics is in order. In spite of the fact that some authors claim that materializing such ethical software without first agreeing on a single, correct ethical theory is ill-advised, I disagree. I sincerely believe that the development of digital software designed to make a machine "think ethically" will not only enable us to expand our understanding of advanced artificial intelligence's computational interpretation of the real world -hence contributing to the arduous development of appropriate computational structures -but it will also allow us to assay the suitability of different ethical theories, or even varied combinations of them. 
In advocating the progression of such a hands-on approach to machine ethics, however, by no means is it my purpose to discredit its less practical counterpart. Quite the contrary, I hold the philosophical deliberations of machine ethics in the highest regard, for they lie at the core of the field's purpose. At the same time, however, I am of the opinion that, given the fact that the practical and theoretical dimensions of machine ethics are intrinsically intertwined, a reciprocal collaboration between the two would be greatly beneficial to the field as a whole. To conclude, in order for the benefits of super intelligence to be reaped by society, it is imperative that its code of ethics be developed in parallel to super intelligence itself. And yet, what if super intelligence never materializes? Little of the effort put into this scientific-philosophical endeavor will be lost. To quote LaChat [15]: "To the contrary, the failure (…) might eventually bring us to the brink of a mysticism that has, at least, been partially 'tested.' Would it be more mysterious to find intelligent life elsewhere in the universe or to find after unimaginable aeons that we are unique and alone?" The more menacing question, however, is: what if super intelligence materializes before we manage to formulate its code of ethics? The answer, I am afraid, deserves not an elaborate academic discussion but rather the eeriest of science fiction novels, and while I am no prolific creative writer, my best guess is that it ends with a metallic, mathematically-palpitating automaton, as cold-blooded as Isaac Asimov's Dr. Susan Calvin, prosaically reciting "Veni, vidi, vici" while the lifeless remnant of its creator's existence silently cries "et tu, Brute?".
Artificial Intelligence as a Service for Immoral Content Detection and Eradication Social media has come to be regarded as an active global medium because of how seamlessly it has bound people together during COVID-19. Networking applications such as Facebook, Twitter, WhatsApp, WeChat, and others come with a variety of capabilities and are well-known for low-cost, quick, and effective communication. Because of the seclusion and travel constraints caused by COVID-19, concerns such as low physical involvement in many everyday activities have arisen. Depending on their information, knowledge, nature, experience, and way of behaving, different human beings respond differently to any scenario. As the number of Internet subscribers grows, inappropriate material has become a major concern, and the world's most prestigious and trustworthy organizations are keenly interested in conducting practical research in this field. This research contributes to using Artificial Intelligence as a Service (AIaaS) for preventing the spread of immoral content. Like software as a service (SaaS) and infrastructure as a service (IaaS), AIaaS for immoral content detection and eradication can use effective cloud computing models to leverage this service; it is highly adaptable and dynamic. AIaaS-based immoral content detection is mostly effective for optimizing the outcomes based on big data training samples. Immoral content is identified through semantic and sentiment evaluation, and content is divided into immoral, cyberbullying, and dislike components. The suggested paper's main concern is the polarity of immoral content, which can be processed using an AI-based optimization approach to control content proliferation. To carry out the classification and statistical analysis, support vector machine (SVM), decision tree, and Naive Bayes classifiers are employed. Introduction By connecting people, the Internet has not only aided folks around the world; ultimately, it has also allowed a huge number of users to express their perspectives. It is now commonly described as a global village; [1] insisted that although the various software applications have their own procedures, the public's voice is nevertheless heard as users share their views. Shah [2] noted major changes in people's lives. There are various factors at play, and COVID-19 has removed human beings from social life, leaving many people only distantly connected to society. COVID-19 has badly reshaped the lives of people, and now nearly every research domain is actively exploring its consequences by various means. Shan [3] observed that this has pushed humans to express their opinions on the Internet, because the most effective way to stay engaged is to remain connected. The most popular social media platforms are YouTube, Facebook, TikTok, Twitter, etc. Omar et al. [4] found that community-based software utilities encompass such well-known systems. Table 1 suggests that the number of users of every type of social network has increased over the last decade. Contemporary research indicates that billions of users are connected through social media organizations. Because of the pandemic, users prefer the Internet to live participation. Huynh [5] elaborated that the number of users has grown enormously. Several platforms provide specialized communication services because physical presence is not obligatory. Furthermore, because of the hurdles of personal participation, people give priority to the Internet owing to its availability, ease of use, and fast response. 
However, it is sometimes difficult to find relevant information among the results the Internet provides. Another problem with these social platforms is that their remote use is not always productive: a certain group of people is invariably responsible for creating trouble, humiliating others, and wasting their time. Researchers such as [6] have elaborated on how the use of these platforms creates trouble for a large number of users, and the growth of online communities has made the problem more likely on such boards. The problem is crucial because a user can broadcast a message to the rest of the users after simply signing into the system, and a consumer of the software product can attract the attention of other users by initiating a hot topic and drawing several users into it. Comments can be used in two ways: constructively, for learning and problem solving; or in a manipulated and exaggerated fashion, involving racism, extremism, political dispute, or other specific objectives. Researchers such as [7][8][9] exemplify this line of work, and [10] quantified the linguistic behavior of the communication process. The techniques of communication are verbal exchange and nonverbal communication. Communication is effective if its primary purpose is met, that is, if the receiver understands exactly what the sender wants to say; verbal communication is ineffective when the sender says something the receiver does not understand. As virtual communication lacks physical engagement, there is a lack of body language, such as tone of voice, eye contact, and other cues, which can lead to misleading impressions. Hu [11] said that such false impressions create conflicts, and aggressive and abusive language follows from them. Various elements are involved, including a lack of context, and these differences make messages hard to understand, creating doubts and many other problems. When individuals want to win an argument in the comments, they can go to any length in the exchange; these are the types of individuals who do not hesitate to harass others. Often, annoyed participants provoke others with their remarks, scripts, and responses on shared media. Further study is consistent with that of the academician [12]. Harassment is identified by offensive stated material. Human isolation is increasingly reflected in immoral content, which is a serious problem and one that is growing daily. Immoral content is produced by specific groups of individuals and reflects a shared mentality tied to a particular period, gender, reputation, education, and religion. It is a problem that a large number of social media users are dealing with. Almost every social platform provides some sort of venue for reporting or avoiding immoral content [13]. Reporting a person may result in a warning or, in the worst-case scenario, a permanent ban. Toxic information can be quite hazardous to even the most innocent minds. The large population browsing the net includes many children who are new to the worldwide village; they are still studying and forming their picture of the world. A few researchers [14] are still looking into such content, while some users are housewives who spend their leisure time on these websites. The problem of immoral content is being managed by social media web-based systems in a thoroughly professional manner, and a dedicated mechanism is in place to deal with this type of information. 
The assessment committee and the concerned personnel are constantly on the lookout for complaints and inquiries. The problematic content displayed on the Internet can be filtered using an information technology architecture. Handling abusive language is not as simple as a piece of cake: sorting out exploitative material is a difficult task, especially when it needs to be extracted from large volumes of data. Abusive language is of several kinds, sentence structures are made up of a variety of expressions, and the nature of hazardous words in such sequences necessitates a handful of particular strategies. Table 1 illustrates that the number of Internet users is steadily increasing, and with a larger population there are more opportunities to obtain diverse measures. Governmental concerns have been raised, and strategies are being explored in every impacted district around the world to develop new mechanisms to restore normalcy and control the coronavirus. Because the coronavirus has boosted web traffic, determining the nature of circulating text is crucial: the reach of web material can be managed, and false content can be eliminated. Savelev et al. [15] observed, with regard to rapid data sharing and spread, that the web is the main medium through which people associate and the same data is shared among different users. Sometimes the concern is not about reliability; rather, people share content spontaneously to keep their loved ones updated about ongoing events. Boksova et al. [16] noted that individuals can be classified as web users and non-users. Gao et al. [17] produced an intensive report on sentiment analysis and the use of social media content. Social content is predominantly analyzed through sentiment analysis, and its different strategies are applied to deceptive content as well. There are various methods, such as determining the distinctiveness of senders, messages, and recipients. Krzewniak [18] reported that web-based users have access to all data on the Internet; however, this might put users' information in grave danger. There has been past research on how scientists have used data mining processes to characterize data: rule-based frameworks, unsupervised learning procedures, supervised learning strategies, and other well-known approaches are examples of this work. As shown in Table 1, the number of social network users is increasing every year. Gianfredi et al. [19] and Table 2 show the most common activities [20] and the extent to which they were carried out by individuals according to the global information media status report; the table shows how frequently people use the Internet for various activities in their daily lives, and the web is considered the backbone by the majority of them. Considering these characteristics to be of top priority, the suggested research paper identifies two common issues. The first is the detection of deceptive content on social media (using AI classifiers). The second is the assignment of categories to untrustworthy content based on its intensity, using artificial intelligence, so that the worst class can be singled out for special handling. The premise of the study is that the web itself cannot separate moral from exploitative content, and true and false assertions cannot be distinguished [21]. If questionable data is stopped from propagating and its popularity is diminished, there are opportunities to make the web more dependable. 
Likewise, there are opportunities to defuse inflammatory situations and steer discussion toward calmer ones, since consistently popular content is not necessarily dependable content either. Artificial intelligence is now so widespread across applications that it is used not only for conventional programming but also as a policymaker and a tool. Intelligent frameworks (K. Ghosh, 2019) for web-based media are fundamental these days for providing quality service. Related Work The major purpose of the research is to create text-mining algorithms for detecting immoral content on social networking sites [22]. Elareshi et al. [23] have discussed and evaluated several areas of text-based aspect mining in depth. Text mining is the process of analyzing content using a variety of machine learning algorithms established by experts [24]. Supervised learning, unsupervised learning, rule-based learning, pattern-based learning, and so on are common learning methodologies. As a result, there are three categories of literature on this subject. Artificial Intelligence as a Service (AIaaS). Artificial intelligence is a concept that refers to the scientific studies and practices aimed at improving the efficiency with which machines make decisions. The term "intelligence" encompasses a wide range of concepts, including factors such as addressing a problem in a short amount of time, solving the problem correctly, providing the best answer, and so on, according to the experts [25]. Computers are used to solve complex problems efficiently. Machine learning, with or without the assistance of a person, strives to solve problems through computation and to produce proper outcomes (Jiwon Kang). Several machine learning techniques exist that combine great computational capability with low computing expense. Li et al. [26] reported that supervised, unsupervised, and reinforcement learning are among the machine learning approaches employed. Shah and Li [27] addressed AI's effects on jobs and society as well as the management and strategic challenges associated with AI [25]. They all use different approaches; however, the end goal is the same: to find the most effective solution. AI is utilized as a third party because it strives to deliver better, improved reasoning. The computed results are employed as a third party, and in many research and technology domains the data is communicated and evaluated using these results. Machine learning and artificial intelligence are employed on a variety of platforms; it is for this reason that this is known as third-party evaluation. Artificial intelligence and machine learning have a big impact on information and communication systems in general. The machine is programmed to calculate the solution automatically; as a result, the ideal solution is not only quick but also offers precise results [28]. The more capable a system is of delivering accurate results, the more it deserves to be called a smart system [29]. It also offers the capability of halting processing in the event of an error, or of autonomously rectifying processing before output without contacting an external entity. The term "agents" is used to refer to intelligent systems. Agent decision-making has been a prominent issue in research for years, and now it can be offered as a service to run a variety of systems. Several analysts [30] devised various ways to incorporate AI into their projects to improve the quality and efficiency of their work. 
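To make the "as a service" aspect concrete, the following is a minimal, hypothetical sketch of how a pretrained immoral-content classifier could be exposed as a cloud endpoint that any platform can call. The model file name, the endpoint path, and the label set are illustrative assumptions and are not taken from the systems cited above.

```python
# Hypothetical sketch of "AI as a Service": a pretrained immoral-content
# classifier exposed behind a small HTTP endpoint, as a cloud service might be.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("immoral_content_classifier.joblib")  # e.g. a TF-IDF + SVM pipeline

@app.route("/classify", methods=["POST"])
def classify():
    payload = request.get_json(force=True)
    texts = payload.get("texts", [])
    labels = model.predict(texts)   # e.g. "immoral", "cyberbullying", "dislike", "clean"
    return jsonify({"labels": list(map(str, labels))})

if __name__ == "__main__":
    # Any client (social platform, moderation dashboard) can POST text and
    # receive labels, which is the essence of consuming the model as a service.
    app.run(host="0.0.0.0", port=8080)
```

The design point is simply that the intelligence lives behind a service boundary: the consuming platform never trains or hosts the model, it only sends content and receives decisions.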
Real-time frameworks are those that initiate their actions at a specific interval or timeframe. Social media content is derived from a variety of sources. The majority of the design is built on a distributed framework layout. The cloud underpins the all-encompassing interconnection of workstations in various regions, linked through organized topologies, and ensures error-free data transfer. Such systems are well prepared so that sending data from one source to a recipient several kilometers away takes only a few seconds. The Internet of Things (IoT) is a connected example of a socially connected distributed system. Gianfredi et al. [19] discussed the data generated by a variety of devices and disseminated through social media applications. Because the approach of Yang et al. [31] resembles mimicking human insight in information interpretation and applying an intelligent decision-making framework, it can be used as a supplement to counter social media's aggressive content. An intelligent treatment of social media content does not mean responding to harsh content with equal harshness; rather, violent content should be identified and that tainted material removed before it propagates to other systems. Here, we use AIaaS for immoral content detection and eradication, and effective cloud computing models can be used to leverage this service. It is highly adaptable and dynamic. AIaaS-based immoral content detection is mostly effective for optimizing the outcomes based on big-data training samples. Machine Learning. Text mining techniques, which are applied under the umbrella of machine learning, help identify immoral content. Machine learning for the polarity of medically related textual material is likewise based on strategies that apply algorithms in a systematic manner by teaching a computer to decide for itself. Unsupervised, supervised, and reinforcement learning are used to apply machine learning-based text mining strategies. Working with labeled datasets, supervised learning-based trained models [32] emphasized text mining in the biomedical dataset. The outcomes that are already known are included in supervised learning of the data. The results are evaluated using a supervised learning-based model. The data set was used for prediction by using a support vector machine (SVM) and a decision tree. One study targeted churn and developed a neural network for audience-related textual material with the help of [33]. However, the findings of linear regression on a survey-based dataset, a well-known supervised learning approach for determining immoral text, have also been impressive. The goal of this study is to use supervised learning to identify the outcomes from the data. The application of supervised learning to understand the use of the SVM algorithm was originally done by [34]. It was multilingual, employing seven languages, as well as a variety of bootstrap sentiment analysis methodologies; a progressive B4MSA polarity classification was also utilized. Forman et al. [35] and Heri [36] introduced the detection of negative consequences of publicly publishing humiliating content on social media. Other multilingual sentiment assessment models employed were SENTIPOLC'14, SemEval'15-16, and TASS'15. They determined the harshness and disrespectfulness of the words. For classification-based tasks, the results were favorable. Big Data, A Source of Unethical Content. Social media platforms are the sources of revenue gains for various regions.
Various small and medium financial industries are now applying technology as a mandatory component of their procedures. Shan et al. [37] emphasized the vitality of platforms for data reform for the future growth of a region. The volume of data has kept increasing during the last decade; however, the pandemic since 2019 has isolated the social activities of mankind. Technically, domestic isolation has raised behavioral complications. The content generated during a pandemic is more polluted with a combination of less ethical and more unethical statements. People found no other way to communicate. With online social networking applications, where it is recommended to carry out all activities while staying at home, people have no alternative ways to establish social networking activities [38]. Only the online social media platforms enable them to meet new people virtually. Platforms such as Twitter, Facebook, Netflix, Yahoo, WhatsApp, WeChat, and similar applications can help one to find new people and discuss ideas fearlessly. Other research [39] illustrated the suffering caused by greater use of gadgets, especially by youngsters. Big data is saturated with the impact of poor communication, inappropriate words, and lack of trust, and it is particularly saturated with the impression of cyberbullying. That work used a classification approach for unethical text determination. Most of the platforms acknowledge that there are larger proportions of immoral content. The proposed study emphasizes the Twitter data set because of its diversity within the same domain, its availability, and its rapid user interactions. Datasets. The dataset is made up of Twitter, Kaggle, and survey-based information. There was no uniformity in the columns, information format, or layout because the data was acquired from a variety of sources. Although this unstructured data had previously been in text format, it was not yet appropriate for the next stage of processing. To advance to the next level, the authors converted the heterogeneous data into homogeneous data with a standardized form. Twitter also offers an API (Twitter Stream API) that may be used to retrieve data from the website, such as tweets, comments, and likes, but only with authentication. In social networks, the detection of abusive content, cyberbullying, and harassment is typically framed as a classification problem. The homogeneity of the data set was not the only concern. There was also the issue of multiclass imbalanced datasets, which resulted in a skewed distribution of cases across classes. Using oversampling, this problem was minimized as far as possible. RapidMiner is used to process the records to simulate models quickly and accurately. Model. The primary goal of the proposed study is to determine which records contain immoral content. The aggregated records have been reduced to a set of documents containing 13,000 tuples and seven features. AI's utility here is machine learning. After consolidation, the data units were in a homogeneous state. This data was now refined enough to be used as input to the model. The flowchart of the AIaaS-based model for immoral content detection and eradication is shown in Figure 1. The figure indicates that there are two sections of the proposed system. The top part of the flowchart shows the processing of data from the initial stage until the identification of unethical content. The process continues for all the data chunks.
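As a rough illustration of the consolidation and oversampling step described above, the sketch below merges several sources into one homogeneous table and then balances the classes by naive replication. The column names ("text", "label") and the file names are assumptions made for this example; the original study consolidated Twitter, Kaggle, and survey data into 13,000 tuples and used oversampling in RapidMiner rather than this exact code.

```python
# Minimal sketch of consolidating heterogeneous sources and oversampling minority classes.
import pandas as pd

# 1) Consolidate heterogeneous sources into one homogeneous frame.
frames = [
    pd.read_csv("twitter_sample.csv"),   # hypothetical exports of the three sources
    pd.read_csv("kaggle_sample.csv"),
    pd.read_csv("survey_sample.csv"),
]
data = pd.concat(frames, ignore_index=True)[["text", "label"]].dropna()

# 2) Naive random oversampling: replicate minority-class rows until every class
#    matches the size of the largest class (one simple way to reduce imbalance).
largest = data["label"].value_counts().max()
balanced = pd.concat(
    [grp.sample(largest, replace=True, random_state=0) for _, grp in data.groupby("label")],
    ignore_index=True,
)
print(balanced["label"].value_counts())
```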
It is clear that if the data set is big enough, the methodology can be applied to data segments until the entire data is examined. The results are stored, and the same logic is applied as a service-based architecture. It can feasibly be achieved with a cloud architecture. The dedicated unethical content identification cache can process the text with segment-wise results calculation until the condition is stable. Both types of analysis, i.e., semantic and sentiment analysis, are possible in the above model. Data Preprocessing. The preprocessing of records is the initial stage in the suggested paradigm. The data collection is heterogeneous, with little homogeneity and a large number of data elements. Preprocessing of data is usually the first stage for any model. It is used to bring the data into a more homogeneous state. The resulting data is then of high quality. Data preparation is required to remove noise from the data and to improve the accuracy of the results obtained once the purified records are used for model training. There were numerous abnormalities in our dataset, including incomplete records with missing values, incorrect values, and special data types for a variety of attributes. The information gathered from a variety of sources was also partly incorrect. It is critical not to remove any useful information from the content while preparing the data. Anomalies were present in the data sets used in our research. Preprocessing and data cleansing often delete the missing, abnormal, or incorrect values. However, in the proposed study, preprocessing is applied to the data items that contain a minimum of null values. The missing values were replaced with the most probable or nearest estimated values. The impure data was found to be approximately 0.002%, which is quite a small proportion. The problems of overfitting and underfitting were carefully observed so that the data quality would remain consistent. The data after preprocessing is quality-oriented. Opinion Mining NLP. The data set is ideal for linguistic research. The findings that are reached following the data analysis are referred to as features. The two most important types of features are determined in this study. The data is a collection of social behavior statistics. In this study, two categories of features are specifically examined. They are called sentiment features and semantic features. In the model for detecting abusive language, feature extraction is crucial. These features will aid in the detection of abusive phrases and context-based abuse. Preprocessed data aids in the extraction of particular elements, such as sentiment features, semantic features, unigram features, and pattern features, to detect abuse and subtypes, such as aggression, dislike, misbehavior, cyberbullying, and vulgar language in the material. The sentiment feature determines whether a tweet or remark carries sentiment, whereas the semantic feature aids in the detection of contextual abuse through the usage of a specific letter, symbol, or word in the tweet. Semantic Analysis. Semantic analysis is used to determine the relationship between sentences. It can distinguish the sentence's class, that is, the type of sentence that is employed, which means the clear theme of the context is expressed in terms of semantic analysis [40]. In particular, it is a comment on the expression's context.
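The two feature families just described can be illustrated with a toy extraction routine: a sentiment score from a small polarity lexicon and a few semantic, context-based flags. The word lists, symbols, and thresholds below are placeholder assumptions for illustration, not the lexicons or feature definitions used in the study.

```python
# Illustrative sketch of sentiment and semantic feature extraction from a tweet.
import re

POSITIVE = {"good", "great", "love", "thanks"}
NEGATIVE = {"hate", "stupid", "ugly", "trash"}
ABUSE_MARKERS = {"!!!", "@", "#"}  # symbols that may signal targeted, heated posts

def sentiment_feature(text: str) -> int:
    """Crude polarity: positive minus negative lexicon hits."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

def semantic_features(text: str) -> dict:
    """Context flags: direct mentions, shouting, and marker symbols."""
    return {
        "mentions_user": "@" in text,
        "all_caps_words": sum(w.isupper() and len(w) > 2 for w in text.split()),
        "marker_count": sum(text.count(m) for m in ABUSE_MARKERS),
    }

tweet = "@someone you are UGLY and stupid!!!"
print(sentiment_feature(tweet))   # negative score for this example
print(semantic_features(tweet))   # mention present, one shouted word, several markers
```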
The planned research also assesses whether the statement is straightforward or contains some hidden meaning. In semantic analysis, the selection of words and closures is critical. It is used for approximation analysis in machine learning. The theme of predicate information analysis is used to complete it. The proposition is a collection of predicates and quantifiers. These assertions are used to complete a sentence's structure. A minimum of one item of information should be included in each proposition. They result in variables, which are then utilized to form a variety of functions that provide useful data to the systems. When analyzing a sentence for semantic analysis, the same logic is employed. In semantic analysis, letters, symbols, and quantifiers are used. Sentiment Analysis. Another feature is sentiment evaluation, which is the determination of the sentence's polarity. It is utilized for opinion mining, with the ability to view text in three ways: positively, negatively, or neutrally. It can tell whether the text is polarised in a very positive, positive, neutral, negative, or very negative way. Doaa Mohey El-Din Mohamed Hussein (2021) identified issues in analyzing social media content (Shambhavi Dinakar). The texts utilized for sentiment analysis include a variety of languages (Rastislav Krchnavy) and negative and slang phrases, hashtags, and emoticons (Dr. Pappu Rajan). Many research scholars have extended this kind of view for natural language processing, including microblog textual content analysis (Fotis Aisopos). Table 3 gives the tuple counts for datasets 1, 2, and 3 (4,458 tuples in dataset 1); the category of each kind of content is decided by sentiment and semantic analysis. Hence, the classification of subcategories is made viable to further apply the model. Feature Extraction. The data set consisted of 12 features. The key function of feature extraction is to determine the most influential features that take part in result generation. In the chosen data set, the features that mark content as immoral, cyberbullying, or dislike text were text, content, category, and . . ... In particular, the available data sets have distinct features called text and result, the two principal fields selected. Optimization and Training. This section covers the model optimization and training for the dataset so that the results can be precise and valid. Classification. Additionally, the dataset is divided into training and test sets to train a model for detecting abusive language. As the model is trained using 70% of the data, the data set is partitioned into training and test sets. The proposed research performs classification using the supervised machine learning method [41]. The labels of the data sets are already known in this case. Classification is arranged such that the results on the training set can be compared with the outcomes on the test data. There are a variety of classifiers [42]. Classification techniques are important even on encrypted data [43] and [44]. The accuracy measurement determines the overall performance of the model. The three classes that are the outcome of categorization are immoral, cyberbullying, and hatred [45]. The classification process is divided into three stages and is carried out over three classes. It incorporates binary classification, which divides the records into two categories.
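Before turning to the binary and ternary stages, the following is a rough sketch of the supervised classification step just described: a 70/30 split and the three classifiers named in this study (Naive Bayes, SVM, and Decision Tree) trained on TF-IDF text features. The TF-IDF representation, the toy examples, and the label names are assumptions made for the illustration, not the authors' exact configuration.

```python
# Minimal sketch: 70/30 split and three classifiers on TF-IDF features.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

texts = ["you are a disgrace", "have a nice day", "nobody likes you", "great work team"]
labels = ["unethical", "ethical", "unethical", "ethical"]  # toy stand-in data

X = TfidfVectorizer().fit_transform(texts)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.3, random_state=42, stratify=labels
)

for name, clf in [
    ("Naive Bayes", MultinomialNB()),
    ("SVM", LinearSVC()),
    ("Decision Tree", DecisionTreeClassifier(random_state=42)),
]:
    clf.fit(X_train, y_train)
    print(name, accuracy_score(y_test, clf.predict(X_test)))
```

On a realistically sized dataset the same loop reports the per-classifier accuracies that the paper compares, and the label set can be widened from two classes to the ternary and multivalued schemes discussed next.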
The outcome of binary categorization distinguishes the two major groups. Binary categorization has the advantage of displaying the straightforward distribution of records into two main groupings. Ternary classification is the next step after categorizing the records into binary labeled results. As the proposed study identifies three classes, namely immoral, cyberbullying, and dislike, the ternary classification is used. The research completes its classification at this level; however, if it needs to be extended to more than three classes, a multivalued classification is used. The three most common and popular classifiers used in this study were Naive Bayes, SVM, and Decision Tree, which were used to divide the content into categories such as aggression, misbehaving, dislike, cyberbullying, vulgar message, and ordinary. In Table 4, the categorization results are discussed. The ordinary class represents natural language: if a tweet or remark is not harsh, it will be listed here. By tuning these factors, the optimal parameters will help improve the overall performance of the classifiers. The accuracy of each classifier is displayed one by one for each class in the results. This accuracy shows how precisely the abusive language can be discovered in the content. The exponential growth of big data might increase the share of abusive and unethical content compared to ethical content [46]. The proportion of neutral content is much less than the other components. Thus, in many respects people behave more unethically over the Internet. It may be because of reasons such as their identity being unknown and them being remotely available, or because they can access various platforms free of cost and they have freedom of speech. In any case, there are more traces of non-valuable text. Result and Discussion The binary classification process produces two main classes and has a 91.2 percent accuracy rate. The third category is considered neutral. Hence, if one runs the classification model on these three classes, one gets an accuracy of 85.70%. Table 5 indicates the accuracy of the proposed approach. It means the identification of unethical content via machine learning algorithms is promising, particularly for datasets of larger size. (Tuple counts per dataset: Data set 1: 2,517 and 1,941, total 4,458; Data set 2: 1,375 and 1,433, total 2,808; Data set 3: 1,341 and 2,307, total 3,648; overall 5,233 and 5,681, for a total of 10,914 tuples.) The sentiment analysis and semantic analysis evaluations provide noticeable results. This approach can be applied to larger data sets by selecting segments of huge chunks of data, and the repetition of the process at various intervals can be deterministic for the data available. Content Oriented AI as a Service. After passing through the model's elements, the outcomes disclose three key parameters. The statistical output can again be used as a source for other analyses, for example in hospitality [47] or for banking transactions [48]. For content display processing, the results obtained after sentiment and semantic analysis are critical. Instead of being displayed before readers, items that are more infected with traces of the immoral, cyberbullying, and dislike classes with high accuracy (meaning they are more corrupt) can be blocked. Low-accuracy communications may still pose a risk when content is displayed on the Internet. The most severe class column in Table 1 has a high F1 score. In any event, the recall, precision, and F1 rating factors would no longer be prioritized. AI Influential Content Control.
The content with positive polarity is ideal for displaying in front of readers. Negative polarity and a poor F1 score are examples of material with negative impact. Conversely, if such content is displayed, it may harm the users. After the model has been applied, the cumulative results can be separated into different labels and the time stamp can be lowered. If the content is very obnoxious, it may be prohibited at an early stage. The dreadful effect can be mitigated in this way. This method is ideal for websites that have a great reputation and only provide excellent service. These social media platforms often enjoy high popularity. Conclusion The suggested investigation was carried out to find and remove immoral content from social media networks. Misbehaving, cyberbullying, and the use of immoral language in a statement constitute unethical content. Textual content mining is done using a supervised learning approach. To obtain reliable results, unethical content identification is done first. Then, based on these results, the content containing illegal text can be blocked. The use of a multiclass imbalanced dataset is refined with resampling, undersampling, and oversampling techniques. Then, sentiment and semantic analysis methods are applied to find the severity of immoral content. Decision Tree, SVM, and Naive Bayes are used for classification. The content polarity and unethical sensitivity are determined. The negative content is restricted from social media display. The feasibility of this study is extremely important for better text-based decision-making. The findings can support new policies for decision-making, social content delivery and display, as well as the permissibility and prohibition of writing on reputable and genuine websites. The proposed study's social advantages are measured in terms of the amount of content that can be exhibited, regional characteristics of the community, and more. Data Availability The data used to support the findings of this study are available from the corresponding author upon request.
7,245
2022-01-17T00:00:00.000
[ "Computer Science" ]
Molecular characteristics of the multi‐functional FAO enzyme ACAD9 illustrate the importance of FADH2/NADH ratios for mitochondrial ROS formation A decade ago I postulated that ROS formation in mitochondria was influenced by different FADH2/NADH (F/N) ratios of catabolic substrates. Thus, fatty acid oxidation (FAO) would give higher ROS formation than glucose oxidation. Both the emergence of peroxisomes and neurons not using FAO, could be explained thus. ROS formation in NADH:ubiquinone oxidoreductase (Complex I) comes about by reverse electron transport (RET) due to high QH2 levels, and scarcity of its electron‐acceptor (Q) during FAO. The then new, unexpected, finding of an FAO enzyme, ACAD9, being involved in complex I biogenesis, hinted at connections in line with the hypothesis. Recent findings about ACAD9's role in regulation of respiration fit with predictions the model makes: cementing connections between ROS production and F/N ratios. I describe how ACAD9 might be central to reversing the oxidative damage in complex I resulting from FAO. This seems to involve two distinct, but intimately connected, ACAD9 characteristics: (i) its upregulation of complex I biogenesis, and (ii) releasing FADH2, with possible conversion into FMN, the crucial prosthetic group of complex I. Also see the video abstract here: https://youtu.be/N7AT_HBNumg also fit with older experimental work, in which purified mature ACAD9 demonstrated activity with various long-chain unsaturated acyl-CoAs as substrates. [5] In the meantime, a lot of follow-up research has culminated in recent, impressive, research papers, which delineate many of the molecular characteristics of this multi-functional enzyme, allowing it to play the roles in mitochondrial metabolic regulation it does. [6,7] However, the interpretation of these latest findings leaves something to be desired. Giachin et al., show that losing its FAD-cofactor ('deflavination') induces the FAO enzyme ACAD9 to switch from a role in FAO to becoming a crucial part of the mitochondrial complex I assembly (MCIA) machinery. The authors only state that their findings ''suggest a unique molecular mechanism for coordinating the regulation of the FAO and OXPHOS pathways to ensure an efficient energy production'' , [6] and do not go beyond that. But just simply describing the complete oxidation of FAs in mitochondria already shows this to be a highly superficial way of interpreting their results. Acyl-CoA-Dehydrogenases, such as ACAD9, catalyse the first step of a recurrent, cyclical, pathway in which every chain of FA's is shortened by 2 carbons at the time, resulting in one FADH 2 (formed during this first step), an NADH, and an acetyl-CoA. The further oxidation of acetyl-CoA in the 8-step TCA-cycle will, in turn, generate another FADH 2 , coming from succinate oxidation by complex II, and 3(!) additional NADH molecules. Thus, high complex I activity is also essential during FAO, and from that perspective, no switch is needed. A more enlightening way of looking at things is that upon extensive oxidative damage to complex I during FAO, a programme can be activated to repair complex I, while at the same time shutting down further FAO. This is exactly what follows the repurposing of ACAD9. In that light, the fact that the major site of ROS-induced oxidative damage is located in the FMN containing, NADH binding, ''N-module'' of complex I, is also telling. Possibly, FMN could be derived from the ''ejected'' FAD and repurposed as well. 
[8][9][10] Further details and experimental results supporting ideas along these lines are discussed below. An overview of the two repurposing pathways in the context of FAO-related oxidative damage is given in Figure 1. Figure 1: From ROS to restoration (highly schematized). After ROS damage in Complex I, due to high QH 2 levels and reverse electron transport (RET) during beta oxidation, repair is needed. (A) High F/N ratios with insufficient electron acceptor (Q) for Complex I; ROS formation at Complex I via RET (*). RET depends on high QH 2 levels and a high Δp (indicated). (B) ROS reduction by lessening FA oxidation (because release of FAD from ACAD9 inhibits initiation of β-oxidation and destabilizes its dimerization, with possible implications for the multifunctional FAO complex and OXPHOS supercomplexes; see main text). Possible restoration of Complex I activity by two separate routes: (1) ACAD9 (minus FAD) involvement in mitochondrial complex I assembly (MCIA) complex formation; (2) hydrolysing the released FAD cofactor to FMN for use in Complex I. For details see the main text. IMS - intermembrane space; Complex I [59] purple, Complex II light green. ACAD9 (shown as a monomer) dark green, ECSIT pink, NDUFAF1 grey (together forming the core subunits of MCIA). Ubiquinone/ubiquinol (Q) red, electron flow black arrows. ROS-generating site of complex I upon RET: the FMN-containing site (IF). [52] Q binding site at IQ. F = FADH 2 oxidising complex, I = NADH dehydrogenase complex. Extended and adapted from [23]. Chemical structures of the co-factors (oxidised forms, see absence of hydrogens at N 1 and N 5 ) incorporated under open source licence. Complexes not to scale. ROS generation in complex III is not indicated. [25] But first, some extra background. BACKGROUND I: ROS, THE ENEMY INSIDE The respiratory ETC allows redox reactions in which a large part of the energy present in ''high energy'' electrons from different food sources is stored in a proton motive force (PMF; Δp) across mitochondrial inner membranes. Complex electron transfer reactions along the chain are directly coupled to proton ''pumping'' across the inner membrane. [11,12] An important part of the explanation for the extraordinary levels of ATP that can be generated in this way lies in the extreme subdivision of the reaction from start to finish and the use of molecular oxygen (O 2 ) as the final electron acceptor. This last redox reaction is catalysed by the ultimate complex of the chain, Cytochrome c oxidase, forming water; see [13] and references therein. However, this process allows reverse reactions to occur. On top of that, O 2 can act as a double-edged sword through occasional premature reactions with some of the reduced centres of the ETC, giving rise to internal superoxide anions and other reactive oxygen species (ROS), which are all highly damaging to the eukaryotic cell, [14,15] as they can initiate detrimental reaction cascades involving almost every major group of biological molecules. Eukaryotic evolution has been heavily influenced by internal, mitochondrial, ROS formation, which has given rise to a host of mechanisms to both suppress ROS formation and repair the damage done. This constant co-evolution with internal ROS formation is sometimes reflected in surprising ways. As an example, low amounts of ROS will induce efficient antioxidant responses. They would thus have beneficial effects.
The observation of health improving, low-level, ROS induction is thus explained by the so-called ''mitohormesis'' concept. [16][17][18] An extensive overview of the different ROS forms and their oxidative activities can be found in [19], so only a brief outline is given here. The most reactive species is the hydroxyl radical (OH.), which does not seem to be formed directly in the ETC much. Instead, amongst others, complex I generates superoxide anions (O 2 -. ). Such superoxide anions are, for the most part, rapidly converted to a relatively stable ROS form: hydrogen peroxide (H 2 O 2 ), in a reaction catalysed by superoxide dismutase. However, this specific type of ROS easily passes membrane barriers, allowing it to function as a long-range signalling molecule. Of course, its long-range effects can also be highly detrimental. [20] Although differences of opinion still exist, nowadays most researchers in the field think that some of the largest contributions to mitochondrial ROS formation come from Complex I, on the matrix side in the NADH binding, FMN containing N-module, [10,14,21] and ubiquinol cytochrome c oxidoreductase (Complex III), in this case on the intermembrane side. [14,22] It should be stated that many of the findings with regard to pinpointing the elusive sources of mitochondrial ROS formation come from studies that have to manipulate the experimental set-up to such extents that their results have been questioned when considering real life. [14,22] As the Δp contains a lot of chemical potential energy, this can be converted to large amounts of ATP by ATP-synthase (complex V). The potential energy of ATP is released upon hydrolysis, [12] and its highly efficient synthesis in mitochondria enables the many costly eukaryotic cellular processes. As we will see below, internal ROS formation is always intimately associated with such highly efficient ATP generation, but it is especially a problem for metazoans geared for maximal output at high Δp (e.g., compare human and yeast mitochondria [23]). States in which both the QH 2 /Q ratio and Δp are high can easily give ROS formation in complex I. [14,18,21,24] This is one of the reasons why Δp can also be dissipated as heat by uncoupling agents/proteins, thus influencing ROS formation. Of note, lowering Δp generally means less ROS formation, but a low Δp does not simply equal a low amount of ROS production. BACKGROUND II: FADH 2 /NADH (F/N) RATIOS AND ROS FORMATION The redox state of the central electron carrier in the first part of the ETC, the ubiquinone/ubiquinol (Q/QH 2 ) pool, reflects the F/N ratio of the substrate being oxidized (see Figure 1A). Complete breakdown of a glucose molecule (using glycolysis with the aspartate/malate shuttle to import NADH, mitochondrial breakdown of pyruvate by oxidative decarboxylation and the TCA cycle) will generate 2 FADH 2 and 10 NADH molecules, resulting in an FADH 2 /NADH (F/N) ratio of 0.2. With mitochondrial oxidation of (almost completely) saturated FAs, involving an ACAD -ETF/ETF:QO complex (see Figure 1A), much higher F/N ratios (approaching 0.5 as the FAs become longer) will be generated. [1,18] Such high F/N ratios (especially when confronted with a large number of reducing equivalents in the form of NADH) would translate into acceptor problems for Complex I. On top of this, reverse electron transport (RET), from a combination of raised membrane potential (Δp) and high QH 2 /Q ratios, might ensue. By also taking the Q-cycle of complex III into account, high F/N ratios might be expected to lead to ROS formation by that complex as well. The relevant models are described in.
[1,25] BACKGROUND III: OBSERVATIONS LINKING F/N RATIOS AND ROS FORMATION Quite a lot of observations (in)directly link F/N ratios and ROS formation (especially in Complex I), at cellular, but also at higher order levels. Severe oxidative stress inside eukaryotes could explain why FAO started to occur in a new cellular organelle, the peroxisomes. Generating NADH without FADH 2 for the ETC (the electrons instead ending up at H 2 O 2 , with catalase returning that compound to water and molecular oxygen) is likely the oldest role of peroxisomes. [1,[26][27][28] Thus, they could have evolved to lessen the total amount of FAO in mitochondria, lowering overall F/N ratios, together with another eukaryotic innovation, carnitine, controlling the overall rate of mitochondrial FAO, and oxidative damage. [1,18,29,30] Peroxisomal FAO is almost completely copied from the endosymbiont, except, as expected, for the step involving FAD/FADH 2 . [31] Also, peroxisomes can be formed from ERderived and mitochondrial pre-peroxisomes. [32,33] Interestingly, in the trade-off between efficient ATP generation and ROS formation, our mitochondria only allow partial peroxisomal breakdown of very-long chain FAs (the ones with the highest F/N ratios). [1,23] Indirect evidence is also found in studies of supercomplex formation, mitochondrial uncoupling proteins (UCPs), and the strictly carnitine dependent mitochondrial import of FAs. [18,21,29] Catabolism of substrates characterised by high F/N ratios (e.g., FAs and succinate) is used by animal cells to differentiate and/or respond to a changing metabolic environment: in such cases ROS formation plays an indispensable signalling role. [18,21,34,35] Often UCPs are involved. FAO upregulates UCPs both in activity and in number. By allowing protons to return to the matrix, they lower Δp, and thus lessen RET and ROS formation. UCP2 is upregulated in response to FAs (by PPAR transcription factors), as well as by high QH 2 /Q and ROS. [18] UCP2 and UCP3 transcription is co-activated with Glycerol-3-Phosphate Dehydrogenase expression (which leads to high F/N and QH 2 /Q ratios). [23,25,36,37] The catabolism of succinate induces UCP1 in brown (mitochondria rich) adipose tissue. [35] Specific aspects of neuronal metabolism also make sense in light of the hypothesis. For instance: surprisingly, neurons, though consuming huge amounts of ATP do not use FAs as a catabolic substrate, and strictly prefer glucose/lactate (F/N of 0.2). This can be understood, invoking the model, because of the extreme sensitivity of neurons to oxidative damage, especially in complex I [38] (and references therein). It is also reflected in the preponderance of complex I containing supercomplexes, mostly absent in astrocytes. [39] Only upon prolonged starvation can maximally half of the total neuronal energy consumption be supplied by ketone bodies (with F/N ratios only going up slightly). Of note, acetoacetate (lower in energy content, and with a higher F/N ratio) constitutes only 20% of the ketone body supply, beta-hydroxybutyrate making up the rest. [29,40] HOW DOES ACAD9 FIT IN: FILLING IN THE DETAILS We are now equipped to interpret the recent findings surrounding ACAD9 in much more detail. New publications shed light on several unexpected aspects of its regulation. [6,7,10] As mentioned, the central player involved in complex I biogenesis is the so-called mitochondrial complex I assembly (MCIA) complex. In MCIA, ECSIT (see below) fulfils an important function. 
[41] The careful recent studies by Giachin and acyl-CoA substrates. [5] Follow-up research with VLCAD-deficient fibroblasts showed that ACAD9 is involved in the breakdown of both oleic and palmitic acid in vivo. [3] Later studies showed that ACAD9 is hardly expressed in fibroblasts, and that cells that do have high expression levels can be significantly impaired in FAO by ACAD9 knockout. A clear correlation between higher residual ACAD9 dehydrogenase activity and less severe phenotype in ACAD9-compromized patients could also be demonstrated. [4] Of note, the long-chain (mono) unsaturated FAs that ACAD9 oxidizes are exactly the main FAs found in our diets and the most abundant components of our fat stores. Last, but not least, we should not forget that FAO takes place on multi-enzyme complexes already present in the bacterial ancestors of our mitochondria. [42,43] There are even indications for physical interactions between these complexes and the ETC itself. [44] Thus, one could easily envisage (flavinated) ACAD9 also playing a role in the maintenance of these functional structures. Without FAD, its prosthetic group, ACAD9 becomes a ''card carrying member'' of MCIA. This fact explains its high expression in neurons. [4,45] These highly ROS-susceptible cells have to forego FAO, but need a high capability of repairing oxidatively damaged complex I, implying a constant need for MCIA. [6,7,18,38] But what happens with that prosthetic group? COULD FAD INDEED BE REPURPOSED AS WELL? Above I indicated that the best way to interpret all the experimental data is by stressing the mutually exclusive nature of the ACAD9 molecule: it is either a (strongly rate-limiting?) dehydrogenase functioning in FAO or a necessary component of MCIA involved in the biogenesis of complex I. The main switch is best understood when we consider, for example, hepatocytes, cells that highly express ACAD9 [45] and also can use FAO to cover their normally high-energy needs. An imbalance between ATP production and its use might lead to a high Δp, combined with high F/N and QH 2 /Q ratios, resulting in RET and oxidative damage around the FMN containing, NADH binding, ''N-module'' of complex I (see Figure 1A). Repurposing of ACAD9 allows the simultaneous shutdown of FAO and activation of complex I restoration. When we look at the latest insights in the dynamics of the N-module and its flavin binding site, as well as the mitochondrial import and further processing of flavins in the organelle (nicely reviewed in, [10] ) an exciting possibility arises. FMN could be derived from the ''ejected'' FAD and repurposed as well. [8][9][10] There are promising candidates for the role of a hydrolysing FAD -FMN converter inside mitochondria. [46,47] This would constitute the most literal form of lowering the F/N ratio: converting the FAD of ACAD9 into the NADH recognising FMN of complex I (see Figure 1B). HOW DOES ECSIT FIT IN: FILLING IN THE DETAILS It is somewhat surprising that the researchers studying the versatile role of ACAD9 in both FAO and the MCIA complex stress ''metabolic efficiency'' , but are silent on its function as a ROS suppressor and restoration enzyme, given another core MCIA constituent described above: ECSIT. ECSIT-ROS connections have been found in the context of immunology. ECSIT got its name (Evolutionarily Conserved Signalling intermediate in Toll pathway), from the fact that bacterial activation of Toll receptors allows downstream binding of TRAF6 to ECSIT. 
This binding then increases mitochondrial ROS production to kill bacterial pathogens. [48,49] Interestingly, ECSIT-deleted macrophages display high mitochondrial ROS production preventing further induction by Toll receptors. [50] The discovery that this cytosolic signalling protein could also localise to mitochondria and interact, amongst others, with chaperone NDUFAF1 to function in complex I biogenesis constituted a major breakthrough, shedding light on other observations and making links with variations in ROS production logical (as complex I is an important ROS producer). Thus, the availability, molecular conformation, and location of the three core subunits of MCIA are key determinants of mitochondrial ROS formation. DISCUSSION AND FUTURE RESEARCH Since I proposed that mitochondrial F/N ratios and internal ROS formation could be considered major determinants in eukaryotic evolution, [1] several new findings turned out to be supportive of the model. For instance, in the case of the evolution of peroxisomes: peroxisomal FA oxidation is mostly derived from the mitochondrial ancestor, [31] and peroxisomal biogenesis can be physically linked to mitochondria. [32] The arrival of peroxisomes can thus be conceived of as an instance of symbiogenesis, the position that tries to understand eukaryogenesis as a series of mutual adaptations of archaeal ''host'' and bacterial endosymbiont. [23,31,32,51] As mentioned above, high F/N ratios can also come about in other conditions, for example, when succinate has accumulated during ischaemia in mammalian tissue and indeed becomes responsible for RET induced mitochondrial ROS production during reperfusion, [24,52] or upon use of the Glycerol-3-Phosphate Dehydrogenase shuttle. [53] Just as in the case of FAO, UCP action is enhanced. [18] Upregulation of UCP2 and UCP3 and activation of the Glycerol-3-Phosphate Dehydrogenase by T3 go hand in hand. [36,37] Though overall all these findings are consistent with the hypothesis, layers of adaptations to internal ROS formation make the interpretation of experimental results complicated. Let me illustrate possible pitfalls with one final example, by discussing the effects of the Glycerol-3-Phosphate shunt in neutrophils, as very recently described. [54] The experiments show that neutrophils use the glycerol 3-phosphate pathway upon glycolysis, to maintain polarised mitochondria (under hypoxia!) and produce ROS, which, in turn, stabilises HIF-1α. Using HIF-1α as a readout for ROS formation, they show (in Figure 1 of their publication) that ROS seems to be coming from both complex I and III (using rotenone and antimycin A as inhibitors of each, respectively). This seems to fit perfectly with the predictions of the F/N ratio hypothesis. [25] But, when using oxaloacetate as a competitive inhibitor of complex II, a stark increase in HIF-1α stabilisation is observed. As one of the sources of ''FADH 2 -linked'' electrons, complex II inhibition seems to be behaving unexpectedly here. However, the ''layers of adaptation'' might be contributing to the observed effects. In a recent broad-ranging, highly worthwhile, review, the multiple-layered effects of succinate (e.g., increased upon inhibition of complex II) are listed. Succinate is eloquently described as a mitochondrial coenzyme Q (and thus, F/N ratio) redox sentinel, [55] and in the review we can find that succinate inhibits cytosolic HIF-alpha prolyl hydroxylases, thus stabilising HIF-1α via a non-ROS pathway. 
[56] This example makes it abundantly clear that direct, fully convincing, evidence for the F/N hypothesis will be hard to obtain. Maybe studying ROS production by proteobacteria upon shifts between FAO and aerobic glycolysis would be illuminating. Is there significant ROS formation? Is it bigger when going from glycolysis to FAO than vice versa? What complexes are involved? In conclusion, all these experimental results, though highly suggestive, only support the importance of the F/N concept indirectly. The latest insights regarding the dynamic nature of ACAD9 function, however, seem to me a rather strong illustration of the lowering of F/N ratios as a way of both suppressing and repairing ROS-related damage. Thus, the concept may indeed turn out to be a highly enlightening one when it comes to understanding the mechanics of mitochondrial ROS formation, and its possible role during eukaryotic evolution. Though the switch in the role of ACAD9 itself is now very well documented, the repurposing of its prosthetic group is still hypothetical. Labelling experiments could quickly give us the answer as regards to a possible physiological relevance. Dynamic mitochondrial supercomplex formation seems clearly linked with respiratory efficiency and ROS formation, [39,57,58] but insight with regard to the involvement of the FAO machinery is still somewhat lacking. [44] And how about the possible role of ACAD9 in such higher order structures? The complicated, multi-layered, and subtle nature of mitochondrial ROS formation keeps on posing daunting experimental challenges. Probably for years to come. CONFLICT OF INTEREST The authordeclares no conflict of interest. DATA AVAILABILITY STATEMENT Data sharing is not applicable to this article as no new data were created or analyzed in this study.
4,944.8
2022-06-16T00:00:00.000
[ "Biology" ]
Design of a high temperature superconducting magnet for a single silicon crystal growth system This paper presents a study on the design of a high-temperature superconducting (HTS) magnet for a Czochralski single silicon-crystal growth system by evaluating the temperature and flow distributions of silicon melt at the cusp magnetic field. A two-dimensional finite element method (FEM) simulation model was built to determine the effects of the magnetic field on the temperature and flow distributions in the silicon melt. The characteristics of the HTS magnet were analyzed using a three-dimensional FEM model. The HTS magnet was designed using 2G HTS wire and the magnet was validated through FEM simulation. The simulation results showed that the melt convection was significantly suppressed by the Lorentz force, and that the temperature distribution was uniform in the silicon melt under the cusp magnetic field. The shape of the HTS magnet was determined as a magnet ring with a magnetic flux density of 0.35 T at the center of the crucible bottom. The fundamental design specifications and the data obtained from this study can be applied to the development of a real silicon-crystal growth system. Introduction Czochralski (Cz) technology is widely used as a single silicon-crystal growth method, in which a crucible is used to hold the melt from which a crystal is grown. To improve the quality of the crystals, the static magnetic fields of the external magnet around the crucible are used, which are well known for suppressing of the melt convection and the temperature fluctuations when the magnetic field strength is 0.1 -0.5 T [1]- [4]. There are three types of magnetic field used in the Cz method: horizontal magnetic field, vertical magnetic field and cusp magnetic field. The cusp magnetic field, in which the free surface of the melt is centered between two opposite fields generated by two magnets, has the advantages of both horizontal and vertical magnetic fields [3]- [6]. However, to increase crucible size and crystal diameter, the application of a static magnetic field of sufficient strength requires large coil systems that consume a substantial amount of electric power [2]. Thus, the superconducting magnet technique is an attractive solution for reducing the dimensions and energy consumption of crystal growth systems. Nowadays, low-temperature superconducting magnets (NbTi) for single silicon-crystal growth have already been commercialized and widely used. However, high-temperature superconducting (HTS) magnets will operate efficiently and reliably on magnet quench due to their high critical temperature and high capacity, which can lead to a large temperature margin [7]- [8]. In this paper, a high-temperature superconducting (HTS) magnet was proposed for a 300 mm siliconcrystal growth system, and the temperature and flow distributions of a silicon melt within the cusp magnetic field were analyzed. Based on the physical parameters of the Cz crystal growth system, a twodimensional (2D) axisymmetric finite element method (FEM) simulation model was built. The velocity field and temperature distribution in the silicon melt were analyzed for two cases: one with no magnetic field and one with cusp magnetic field. Then, the magnetic field strength and shape were determined to design the HTS magnets using 2G HTS wire with an expected operating temperature of 30 K or less. The metal was insulated using stainless steel tape to provide quench protection and to improve thermal conduction. 
A characteristic analysis of the magnet was conducted using a three-dimensional (3D) FEM simulation. It was found that melt convection was significantly suppressed by the Lorentz force and that the temperature gradient of the silicon melt was significantly reduced under the cusp magnetic field. The magnetic flux density at the center of the crucible bottom was 0.35 T. The operating current of the magnet was 344 A with four double pancake coils (DPCs). The total wire length was 11.5 km. The fundamental design specifications and the data obtained from this study can be applied to the development of a real silicon-crystal growth system. Magnetic fields in a Cz silicon-crystal growth system 2.1. Configuration of a 300 mm single silicon-crystal growth system In an industrial Cz single silicon growth process, a crystal is grown from molten silicon contained in a silica crucible as shown in figure 1. The melt flow in a crucible is determined based on the material properties of the silicon melt, as shown in table 1. Table 2 shows the parameters of the Cz single silicon-crystal growth system, which were chosen to model the melt flow with a crucible having a diameter of 900 mm. The crystal has a diameter of 300 mm. The silicon melt has relatively high electric conductivity, allowing the melt flow in the crucible to be affected by electromagnetic fields [9]-[11]. Table 2. Parameters of the Cz single silicon-crystal growth system Magnetic field effects on the silicon melt Because the silicon melt has high conductivity, the magnetic field has a considerable influence on melt convection through the Lorentz force $\vec{F}_L$, which is described as $\vec{F}_L = \vec{J} \times \vec{B}$, where $\vec{J}$ is the induced current density, determined by $\vec{J} = \sigma(\vec{u} \times \vec{B})$, in which $\sigma$ is the electrical conductivity of the melt, $\vec{u}$ is the melt velocity field, and $\vec{B}$ is the magnetic induction. The Lorentz force opposes the direction of the melt flow. Normal axisymmetric flow is driven by buoyancy, electromagnetic force and heat transfer inside the crucible wall. Assuming that the fluid is incompressible and that the Boussinesq approximation is valid, the time-averaged momentum equation is described as in [2], [12]-[14] by $\rho_0 \left( \partial \vec{u}/\partial t + (\vec{u} \cdot \nabla)\vec{u} \right) = -\nabla p + \mu \nabla^2 \vec{u} - \rho_0 \beta (T - T_0)\vec{g} + \vec{F}_L$, where $p$ is the pressure, $\rho_0$ is the reference density, $\mu$ is the dynamic viscosity, $\beta$ is the coefficient of volume expansion, $\vec{g}$ is the gravitational acceleration, $T$ is the temperature of the silicon melt, and $T_0$ is the reference temperature. The mass conservation and energy transport equations are $\nabla \cdot \vec{u} = 0$ and $\rho_0 c_p \left( \partial T/\partial t + \vec{u} \cdot \nabla T \right) = \nabla \cdot (k \nabla T)$, where $c_p$ is the specific heat and $k$ is the thermal conductivity. In this paper, the simulation results for a cusp magnetic field revealed a coil configuration in which the vertical component of the magnetic field equaled zero at the melt free surface. The distribution of the field inside the melt was very inhomogeneous. For simplicity, the field was modeled with solenoidal analytical expressions, and the cusp magnetic field as a linear field [2] of the general form $\vec{B}(r, z) = B_0 \left( (r/R)\,\vec{e}_r - (z/H)\,\vec{e}_z \right)$, where $R$ and $H$ are the radius and height of the melt in the crucible, respectively. The corresponding unit vectors $\vec{e}_r$ and $\vec{e}_z$ are described in figure 1. 2D FEM simulation model Based on the parameters of the Cz crystal growth system given in tables 1 and 2, a 2D axisymmetric FEM simulation model was built as shown in figure 2. The temperature and flow distributions in the silicon melt were analyzed without a magnetic field and with a cusp magnetic field. The crystal rotation rate was 15 rpm, and the crucible counter-rotation rate was 5 rpm. The crystal diameter was 300 mm, and the diameter of the crucible inside wall was 900 mm. The melt height was 280 mm.
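To give a feel for the magnitude of the electromagnetic braking term in the equations above, the short sketch below evaluates $\vec{J} = \sigma(\vec{u} \times \vec{B})$ and $\vec{F}_L = \vec{J} \times \vec{B}$ for representative order-of-magnitude values. The conductivity, velocity, and field values are illustrative assumptions (chosen near the 0.158 m/s and 0.35 T figures reported later), not the paper's simulation inputs.

```python
# Back-of-the-envelope evaluation of the Lorentz (braking) force density in the melt.
import numpy as np

sigma = 1.2e6                    # electrical conductivity of liquid silicon, S/m (approximate)
u = np.array([0.15, 0.0, 0.0])   # melt velocity, m/s (order of the reported 0.158 m/s)
B = np.array([0.0, 0.0, 0.35])   # local magnetic induction, T (cusp-field level near the crucible bottom)

J = sigma * np.cross(u, B)       # induced current density, A/m^2
F_L = np.cross(J, B)             # Lorentz force per unit volume, N/m^3

print("J   =", J)
print("F_L =", F_L)              # points against u: the braking effect on the melt flow
```

The resulting force density is directed opposite to the velocity, which is the damping mechanism that suppresses the melt convection in the simulations.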
The temperature of the crucible wall heated by AC induction heater was 1,685 K, and the temperature of the crystal surface in contact with the liquid silicon was 1,400 K. The operating conditions of this analysis were recognized as the most appropriate for the Cz crystal growth process [1]- [3]. Figure 2. FEM simulation model and the magnetic flux density results for a cusp magnetic field The shape, size, and the location of the HTS magnet were determined by the magnetic flux density at points at the center of the free melt surface, ; the free melt surface in the crucible wall, ; and the centre of the crucible bottom, , respectively. FEM analysis results and discussions The comparisons of the velocity field and temperature distributions in the crucible without a magnetic field and with a cusp magnetic field in the 2D FEM simulation results are shown in figures 3 and 4, respectively. For the cusp magnetic field, the melt convection in the crucible wall and the upward melt flow under the crystal were suppressed, while mixing of the melt under the crystal remained strong because the silica dissolution in the crucible wall was reduced. The maximum velocity of the melt flow under the contact surface between the crystal and the free melt surface was 0.158 m/s. Compared to the simulation with no magnetic field case, the temperature distribution in the melt was more homogeneous and the temperature gradient was significantly reduced. No magnetic field Cusp magnetic field Figure 6 shows the design process for the HTS magnet for a Cz 300 mm single silicon-crystal growth system. First, the design targets of the HTS magnet were determined. Here, the authors designed the magnet based on the cusp field configuration and selected a ring-shaped coil. The magnet was cooled below 30 K using a cryogenic conduction cooling method, and the target magnetic flux density was 0.35 T at the center of the crucible bottom. Second, the distance from the free melt to the magnet and the size of the magnet rings (inner and outer radius) were decided. Third, the number of turns and number of DPCs were determined. The coil length and cost were considered. Finally, the critical current of the HTS magnet was estimated and the operating current was chosen. The characteristic analysis of the magnets was conducted using FEM simulation. Determination of specifications of the HTS magnet To estimate the critical current of the HTS magnet, a 3D FEM simulation model was designed. The perpendicular magnetic flux density at an operating current of 1 A was 0.00797 T. The 2G YBCO HTS wires used in the design were manufactured by SuNam company, and were 12 mm wide and 0.22 mm thick. The critical current was 600 A at a temperature of 77 K. The perpendicular magnetic flux density of the magnet was compared to the critical current characteristics curve as shown in figure 7 to determine the critical current of the HTS magnet [15]. At 30 K, the critical current was 430 A, and the target operating current was 344 A which is 80 % of the critical current. Figure 8 shows the configuration of the HTS magnet in the Cz 300 mm single siliconcrystal growth system. The ring-shaped DPCs were applied to the magnet, and four DPCs made up the magnet rings. The inner and outer radii of the HTS magnet were 810 mm and 900 mm, respectively; the distance from the free melt surface to the coils was 400 mm, and the number of turns of one single package coil (SPC) was 500. Table 3 provides detailed specifications for the HTS magnet. Table 3. 
Confirmation of the design results A 3D FEM simulation model was built to analyze the characteristics of the HTS magnet and to confirm the design results, as shown in figure 9. The magnetic flux densities at the top of the crucible edge and at the center of the crucible bottom were 0.3 T and 0.35 T, respectively. The magnetic flux density results satisfied the initial goal of the study. Conclusion This paper presents the results of designing an HTS magnet for a single silicon-crystal growth system. The authors analyzed the effects of static magnetic fields on the silicon melt in a single silicon-crystal growth system. The magnetic field strength and shape were determined for the HTS magnet design, and the characteristics of the designed magnet were analyzed using FEM simulations to determine its specifications. Ring-shaped DPCs were applied to the HTS magnet, and the operating current of the HTS magnet with four DPCs was 344 A. The number of turns of one SPC was 500, with a total wire length of 11.5 km. The melt convection was significantly suppressed by the Lorentz force, and the temperature distribution was uniform in the silicon melt under the cusp magnetic field. The magnetic flux density of 0.35 T at the center of the crucible bottom was achieved. The basic design specifications and data from this study can be effectively applied to the development of real silicon-crystal growth systems.
2,758
2018-07-01T00:00:00.000
[ "Engineering", "Physics", "Materials Science" ]
De novo assembly of potential linear artificial chromosome constructs capped with expansive telomeric repeats Background Artificial chromosomes (ACs) are a promising next-generation vector for genetic engineering. The most common methods for developing AC constructs are to clone and combine centromeric DNA and telomeric DNA fragments into a single large DNA construct. The AC constructs developed from such methods will contain very short telomeric DNA fragments because telomeric repeats can not be stably maintained in Escherichia coli. Results We report a novel approach to assemble AC constructs that are capped with long telomeric DNA. We designed a plasmid vector that can be combined with a bacterial artificial chromosome (BAC) clone containing centromeric DNA sequences from a target plant species. The recombined clone can be used as the centromeric DNA backbone of the AC constructs. We also developed two plasmid vectors containing short arrays of plant telomeric DNA. These vectors can be used to generate expanded arrays of telomeric DNA up to several kilobases. The centromeric DNA backbone can be ligated with the telomeric DNA fragments to generate AC constructs consisting of a large centromeric DNA fragment capped with expansive telomeric DNA at both ends. Conclusions We successfully developed a procedure that circumvents the problem of cloning and maintaining long arrays of telomeric DNA sequences that are not stable in E. coli. Our procedure allows development of AC constructs in different eukaryotic species that are capped with long and designed sizes of telomeric DNA fragments. Introduction Artificial chromosomes (ACs) were first developed in budding yeast Saccharomyces cerevisiae through the cloning and assembling of three DNA elements: the centromere, telomeres and origins of replication [1]. The success of yeast artificial chromosomes (YACs) was a driving force for the development of artificial chromosomes in multicellular eukaryotes. Human artificial chromosomes (HACs) and plant artificial chromosomes (PACs) can not only provide important tools for studying chromosome structure and function, but also hold great potential as next generation vectors for human gene therapy and plant genetic engineering [2][3][4]. Development of both HACs and PACs have been reported after a decade long effort involving many laboratories [5][6][7][8][9]. Several different techniques have been developed to assemble AC constructs in mammalian and plant species. Most of these techniques have focused on combining centromeric DNA with telomeric DNA fragments. Origins of replication are poorly defined in higher eukaryotes but presumably exist throughout their genomes [10]. Thus, the centromeric and telomeric DNA used in AC constructs may contain sequence motives required for DNA replication. The most common approach for developing artificial chromosomes is using a cloned centromeric DNA fragment as the backbone of the constructs. YACs or bacterial artificial chromosomes (BACs) containing centromeric DNA were commonly used in construct development [2]. Subsequently, telomeric DNA fragments are added to the ends of the YAC or BAC insert [6,7,9]. This results in a DNA molecule containing a large centromeric DNA fragment capped with telomeric DNA from the targeted animal or plant species. One of the main shortfalls in the current approaches of HAC and PAC assembly is the very short telomeric DNA fragments included in the constructs. 
Satellite repeats, including telomeric repeats, cannot be stably maintained in E. coli. Thus, if the HAC/PAC construct, or the part of the construct containing the telomeric DNA fragments, is propagated in E. coli, any long arrays of telomeric DNA may be partially or significantly deleted or rearranged. Because of this problem, the telomeric DNA of previously reported HAC/PAC constructs was in all cases much shorter than the telomeres of native chromosomes, which can reduce the efficiency of artificial chromosome formation and affect the stability of the resulting minichromosomes [11,12]. We sought a new strategy to circumvent this problem. Here we report the development of two telomeric DNA vectors that can be used to generate long telomeric DNA fragments up to several kilobases. We also developed a vector that can be combined with BAC clones containing large centromeric DNA inserts. The cloned centromeric DNA can subsequently be recombined with expansive telomeric DNA, resulting in an in vitro system for the production of AC constructs. This AC assembly system allows the generation of AC constructs capped with telomeric DNA of different sizes. The technique can be applied in different plant as well as animal species.

Development of a vector as a centromeric DNA backbone
We first developed the pLL-EH vector (Figure 1A). This vector (12,012 bp) consists of two DNA fragments. The first fragment (6,212 bp) was isolated from the BAC pBeloBAC11 [13] by double digestion with PciI and SalI (Figure 1A). This fragment contains all the genes required for stable propagation and maintenance of large DNA fragments in E. coli. The second fragment was synthesized and contains a number of restriction sites for cloning and recombination. A hygromycin resistance gene (Hpt) and a reporter gene, Egfp, were also inserted into this fragment (Figure 1A). The attP1 site can be used for in vitro site-specific recombination with the attB1 site from the telomeric DNA vectors. The lox71 and the C31 attB1 sites can be used to insert additional DNA sequences into the vector or into future potential artificial chromosomes of transgenic plants. A BAC clone containing centromeric DNA can be ligated with pLL-EH to form a pLL-EHC vector (Figure 1C). Vector pLL-EHC, containing centromeric DNA, will represent the centromeric DNA backbone of the AC construct. We used a rice centromeric BAC, 38J12 (Figure 1B), to develop our model pLL-EHC clone. BAC 38J12 contains an ~140-kb insert derived from the centromere of rice chromosome 8 (Cen8) [14]. The insert of this BAC spans the entire ~65-kb CentO centromeric satellite repeat array associated with rice Cen8. An ~110-kb FseI fragment of the insert, which spans the CentO repeat array, was released from 38J12 and ligated with the FseI-digested pLL-EH vector to generate the pLL-EHC vector (Figure 1). This clone, now considered a centromeric seed clone, will subsequently be combined with telomeric DNA to form AC constructs.

Development of two seed telomeric DNA vectors
It is well documented that long telomeric repeat arrays cannot be stably maintained in E. coli [15]. Our strategy was to clone a short telomeric DNA fragment into seed telomeric DNA vectors. The seed vectors are used as templates to amplify long telomeric DNA fragments, which will then be ligated directly to the centromeric DNA backbone by in vitro site-specific recombination.
A thermostable DNA polymerase from Thermococcus litoralis (Vent DNA polymerase) was used to generate long telomeric repeats from a short synthetic template/primer following a previously published protocol [16]. We cloned a 340-bp telomeric repeat fragment into the pGEM-T Easy vector. To develop a plasmid that contains a telomeric repeat segment flanked by the appropriate restriction and recombination sites required for future DNA manipulation, the telomeric DNA fragment originally cloned into the pGEM-T Easy vector was subcloned into plasmid pTLT (see Materials and Methods). This resulted in two seed telomere vectors: pLL-TBS and pLL-TSB (Figure 2). Propagation of pLL-TBS and pLL-TSB in E. coli strain Top10 resulted in partial deletions of the 340-bp telomeric DNA fragments. The stabilized pLL-TBS and pLL-TSB plasmids contained only ~120 bp of the telomeric repeats.

Generation of long "back-to-back" telomeric DNA fragments
The two seed telomeric DNA vectors contain a homing endonuclease I-SceI site, an attB1 site, and two BsgI sites (Figure 2). The arrangement of the sites differs between the two vectors: BsgI/I-SceI/attB1/MCS/Telo/BsgI for pLL-TBS, and BsgI/attB1/I-SceI/MCS/Telo/BsgI for pLL-TSB (Figure 2). This arrangement causes the excised long telomeric DNA fragments derived from the two seed vectors to align in opposite orientations, so that, when later ligated to the ends of the centromeric DNA fragment, they produce a linear molecule consisting of a central centromeric DNA element flanked by opposing telomeric repeats (Figure 2). Digesting the pLL-TBS and pLL-TSB vectors with BsgI released the short telomeric DNA inserts, including the attB1 site (Figure 2). The released DNA fragments were used as templates to generate long telomeric DNA fragments by unidirectional replication. This amplification step was accomplished using Vent DNA polymerase, which can catalyze short repeat expansion [16]. DNA fragments in the range of 2-10 kb were readily amplified using this approach (data not shown). The amplified telomeric DNA fragments were size-fractionated via gel excision to generate telomeric DNA of varying lengths (Figure 3A). The 5'-(TTTAGGG)n-3' and 3'-(TTTAGGG)n-5' DNA fragments of 2 to 5 kb in size were digested with I-SceI and ligated to form back-to-back telomeric DNA (Figure 2, Figure 3B). Because the homing endonuclease I-SceI recognizes asymmetric sites and the I-SceI sites on the pLL-TBS and pLL-TSB seed vectors are arranged in opposite orientations, a telomeric DNA fragment derived from pLL-TBS will only ligate with a fragment derived from pLL-TSB. Thus, the resultant back-to-back telomeric DNA will include two 2 to 5-kb synthetic telomeric DNA fragments in opposite orientations, one I-SceI site, and one attB1 site (Figure 2).

Development and characterization of linear AC constructs
The back-to-back telomeric DNA molecules were recombined with the pLL-EHC plasmid containing centromeric DNA to generate linear AC constructs. The recombination was accomplished through the attB1 site in the telomeric DNA molecule and the attP1 site in the pLL-EHC plasmid (Figure 2). This recombination resulted in a linear molecule consisting of the centromeric DNA fragment derived from pLL-EHC capped with expansive telomeric DNA at both ends. The attP1 and attB1 sites were converted into attL1 and attR1 sites after the recombination (Figure 2). The resulting linear molecules can be used directly for plant transformation, with the Hpt gene used as the plant selection marker.
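To make the size bookkeeping of the constructs described above explicit, the following is a minimal, illustrative Python sketch (not part of the original work). The component sizes — the 7-bp TTTAGGG repeat unit, the 12,012-bp pLL-EH backbone, the ~110-kb centromeric FseI fragment and the 2 to 5-kb telomeric arms — are taken from the text; simply summing them is our approximation and ignores the few bases gained or lost at the restriction and att recombination sites.

```python
# Illustrative size bookkeeping for the AC constructs described above (not part
# of the original work). Component sizes are taken from the text; simple
# addition ignores the few bases gained or lost at restriction/att sites.
REPEAT_BP = 7            # one TTTAGGG telomeric repeat unit
PLL_EH_BP = 12_012       # pLL-EH vector backbone
CEN_INSERT_BP = 110_000  # ~110-kb FseI fragment from the rice centromeric BAC 38J12

def repeats_in(arm_bp: int) -> int:
    """Approximate number of TTTAGGG units in a telomeric arm of a given size."""
    return arm_bp // REPEAT_BP

for arm_bp in (2_000, 5_000):  # the 2 to 5-kb arms used for back-to-back ligation
    total_bp = PLL_EH_BP + CEN_INSERT_BP + 2 * arm_bp
    print(f"{arm_bp // 1000}-kb arms: ~{repeats_in(arm_bp)} repeats per arm, "
          f"construct ~{total_bp / 1000:.0f} kb")

# For comparison, the ~120-bp repeat tract that survives propagation of the
# seed vectors in E. coli corresponds to only ~17 repeat units.
print("repeats in a 120-bp tract:", repeats_in(120))
```

Run as a script, this prints the approximate repeat count per arm and the overall construct size for the two arm-length extremes, which is useful context for the junction and sizing analyses that follow.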
To confirm the recombination between the back-to-back telomeric DNA fragment and the pLL-EHC plasmid, specific PCR primers were designed from the junction regions based on the backbone sequences of plasmids pLL-EHC, pLL-TBS, and pLL-TSB (Figure 4). The linear AC constructs were isolated by pulsed-field gel electrophoresis (PFGE). PCR amplification using the junction-specific primers resulted in DNA fragments matching the expected sizes (Figure 4). The amplified PCR fragments were also confirmed by sequencing analysis (data not shown). We also developed linear constructs consisting of centromeric DNA capped with telomeric DNA at only one of the two ends. The single junction associated with these constructs was also confirmed by PCR analysis (Figure 4). Southern blot hybridization analysis showed that only AC constructs with telomeric DNA attached at one or both ends hybridized to both telomeric and centromeric DNA probes (Figure 5).

Cytological visualization of linear AC constructs
We used a DNA fiber-fluorescence in situ hybridization (fiber-FISH) technique to visualize the AC constructs resulting from the ligations between pLL-EHC and 4 to 8-kb back-to-back telomeric DNA fragments. The recombined DNA was directly spread on poly-lysine-coated glass slides and hybridized with a telomeric DNA probe (red) and a pLL-EHC probe (green). Linear DNA molecules hybridized with both probes were consistently detected using DNA samples from different ligation experiments (Figure 6). Non-recombined and circular pLL-EHC molecules were also observed. Some linear molecules showed no telomeric DNA signals or a signal at only one of the two ends. However, this is likely due to the resolution limitations of the fiber-FISH technique, in which DNA sequences as short as a few kilobases are often not detected as consistently as longer DNA fragments.

Discussion
Methods for generating either de novo artificial chromosomes or engineered minichromosomes can be grouped into two broad categories: the "top-down" and the "bottom-up" approaches. The "top-down" approach uses telomeric DNA to truncate a native chromosome, thereby generating minichromosomes [17]. Selectable markers can be inserted into such minichromosomes, which can eventually be engineered into autonomous vectors highly similar to artificial chromosomes [18]. This "top-down" approach has been demonstrated in maize by truncating the supernumerary B chromosome [8]. While truncation of any normal chromosome in a diploid plant species may be lethal, the truncation of a normal chromosome can be achieved in a tetraploid genetic background of the targeted diploid species [8]. Nevertheless, it remains to be seen whether such an approach can be applied to a broad set of plant species. The "bottom-up" approach assembles AC constructs in vitro using cloned centromeric and telomeric DNA, followed by transformation of the target plant with the constructs [9]. The centromeric and telomeric DNA composition of AC constructs can significantly affect the efficiency of artificial chromosome formation [2,19]. Linear centromeric DNA constructs capped with telomeric DNA generated HACs efficiently. However, a severe reduction in HAC formation, coupled with an increase in integration events into human chromosomes, was observed when the same constructs were not capped with telomeric DNA [11]. HACs that acquired long telomeres during in vitro propagation were more stable in mitosis than those with short telomeres [12].
These results indicate that adding long telomeric DNA to linear AC constructs has a major impact on the efficiency of artificial chromosome formation and the stability of the resulting minichromosomes. The telomeric DNA may protect the ends of the linear constructs in a manner similar to the telomeres of normal chromosomes. Nevertheless, no specific study has been devoted to establishing a relationship between the size of the telomeric ends of AC constructs and the efficiency of artificial chromosome formation. The average telomere length of human fibroblasts varies from 5 to 10 kb [20]. However, BAC-based HAC constructs were capped by only 0.8 to 1.2 kb of telomeric DNA [19,[21][22][23]. Similarly, only a 239-bp telomeric DNA fragment was included in the BAC-based PAC constructs in maize [9]. These previous reports did not specify why such short telomeric DNA fragments were included in the HAC/PAC constructs. However, it is well known that satellite DNA, such as telomeric repeats [15], is generally not stable in bacterial plasmids, including BAC vectors [24]. Thus, even if BAC-based HAC/PAC constructs containing long telomeric DNA fragments can be developed, it remains unknown whether such constructs can be maintained in E. coli. It is also interesting to note that putative de novo artificial chromosomes were recovered in only 7 of the 450 transformation events using maize AC constructs capped with very short telomeric DNA [9]. It has been well documented that progressive shortening of telomeric DNA can lead to the loss of telomere function and the fusion of chromosomes [25]. Thus, it is reasonable to hypothesize that the amount of telomeric sequence capping AC constructs is important for the efficient formation of artificial chromosomes. We demonstrate that the instability of telomeric repeats can be circumvented by assembling the AC constructs using expansive, rather than cloned, telomeric DNA. Since most plant species contain the same type of telomeric DNA [(TTTAGGG)n] [26], the pLL-TBS and pLL-TSB vectors can be used for telomeric DNA amplification and AC construct development in most plant species. In addition, telomeres normally terminate in a 3' single-stranded G-rich overhang. This telomeric 3'-overhang is important for the formation of the T-loop, which is believed to protect chromosome ends from being recognized as broken DNA [27]. None of the previously reported AC constructs were capped with telomeric DNA carrying this 3'-overhang structure, owing to the means by which the telomeric DNA was cloned. Telomeric DNA fragments amplified from pLL-TBS and pLL-TSB do not undergo the cloning process and can therefore be modified to add this 3'-overhang structure before ligation with the centromeric DNA to produce linearized AC constructs. Although AC constructs capped with a sufficient length of telomeric DNA can be generated, PAC research will continue to face the major challenge of delivering such large constructs into plant cells. Currently, microprojectile bombardment is the most popular method for delivering such large constructs. Biolistic transformation of AC constructs will result in chromosomal integration and chromosome truncation events [9,28]. It will be interesting to investigate whether application of AC constructs with expansive telomeric DNA will increase or decrease the frequency of such events.

Development of backbone plasmids
The backbone of the pLL-EHC vector was constructed by linking two DNA fragments.
The first fragment (6,212 bp) was isolated from the pBeloBAC11 vector [13] by digestion with SalI and PciI (Figure 1A). The second fragment was produced by multiple rounds of oligonucleotide extension using six different primers. This fragment contains one I-SceI site, two I-CeuI sites, a lox71 site, an attP1 site, and a multiple cloning site (MCS) consisting of 16 unique restriction sites. The vector pLL-FF was developed by ligating the shared SalI and PciI sites between the synthetic fragment and the pBeloBAC11-derived fragment. A HindIII fragment containing the Egfp gene, isolated from the pK7GWIWG2D(II) vector (Invitrogen, Carlsbad, California) [29], was inserted into pLL-FF to create pLL-E. Finally, a BstXI-digested PCR fragment containing the Hpt gene, a plant selectable marker, from pHZWG7 [29], and a synthetic C31 attP1 site were inserted into PI-PspI-digested pLL-E to create pLL-EH (Figure 1). An ~110-kb fragment from the rice BAC OSJNBa0038J12 was then ligated into the FseI site of pLL-EH to yield pLL-EHC.

To generate a telomeric DNA fragment, a PCR reaction was performed using a synthetic (TTTAGGG)11 DNA fragment as a template and a telomeric 25-mer ((TTTAGGG)3TTTA) as a primer. The reaction was driven by Vent polymerase (1 U) (New England Biolabs, Ipswich, Massachusetts) in a buffer containing 20 mM Tris-HCl (pH 8.8), 10 mM (NH4)2SO4, 10 mM KCl, 2 mM MgSO4, 0.1% Triton X-100 and 1 mM dNTPs. The concentration of each telomeric DNA fragment was 1 μM and the reaction volume was 5 μl. The amplified telomeric DNA fragments were cloned into the pGEM-T Easy vector (Promega, Madison, Wisconsin). One recombinant clone containing a 340-bp telomeric DNA fragment, pGEM-TT, was selected and confirmed by direct sequence analysis. The insert of the pGEM-TT plasmid was released by NdeI and XbaI double digestion, blunt-ended using T4 polymerase (New England Biolabs), and subcloned into a PstI-digested and blunt-ended pTLT plasmid. The pTLT plasmid is a modified pGEM-T Easy vector containing two additional BsgI sites. This final clone was named pTLT-R11. A PCR-based approach was used to insert the I-SceI and attB1 sites into the pTLT-R11 plasmid. The following primers, which contain the attB1 and I-SceI sites, were used in the PCR with pTLT-R11 as a template:
TBS5' TTAGTCTCGAGACAAGTTTGTACAAAAAAGCAGGCTCTGCATGCCCTAAATCACTAGTGAATTCG;
TBS3' TACTTCTCGAGACAAGTTTGTACAAAAAAGCAGGCTTGGTCTAGACCAAGATATCCTTGGC;
TSB5' TTAGTCTCGAGTAGGGATAACAGGGTAATCTGCATGCCCTAAATCACTAGTGAATTCG;
TSB3' TACTTCTCGAGTAGGGATAACAGGGTAATTGGTCTAGACCAAGATATCCTTGGC.
The PCR fragments were digested with XhoI and self-ligated to yield the pLL-TBS and pLL-TSB plasmids.

Synthesis of back-to-back telomeric DNA fragments
To generate long telomeric DNA fragments, the short telomeric DNA inserts were released from the pLL-TBS and pLL-TSB plasmids by digestion with BsgI. Unidirectional telomeric DNA extension was performed using a 5'-(tTTACCC)12-3' oligonucleotide. The oligonucleotides and the released plasmid inserts were mixed at a 1:2 ratio in a 100 μl PCR reaction containing 50 mM Tris-HCl pH 9.1, 16 mM (NH4)2SO4, 3.5 mM MgCl2, 150 μg/ml bovine serum albumin (BSA), 250 μM dNTPs, Klentaq (5 U) (Clontech, Mountain View, California), and Pfu polymerase (0.03 U) (Stratagene, La Jolla, California). The extended DNA fragments were purified and treated with Mung bean nuclease at 30°C for 30 min to remove any single-stranded DNA.
The DNA fragments were then treated with calf intestinal alkaline phosphatase (New England Biolabs) at 37°C for 60 min to remove the phosphate groups, ensuring that one DNA fragment from pLL-TBS and one from pLL-TSB would be ligated in a back-to-back orientation. The extended telomeric DNA fragments were separated on a 0.7% low-melting agarose gel. Electrophoresis was performed overnight at 37 V. DNA fragments of 2 to 10 kb were excised from the gels. The telomeric DNA was purified from the agarose and concentrated using a Microcon YM-50 spin column (Amicon, Houston, Texas) according to the manufacturer's instructions. Equal amounts of the size-fractionated telomeric DNA derived from pLL-TBS and pLL-TSB were mixed and digested with I-SceI in a total volume of 200 μl for 3 h. The homing endonuclease was heat-inactivated, and ATP (Epicentre, Madison, Wisconsin) was added to a final concentration of 1 mM. The telomeric DNA was ligated overnight at room temperature using T4 DNA ligase.

Assembly of AC constructs
For the attB1 × attP1 recombination reaction, 500 ng of the attP1-containing pLL-EHC plasmid DNA and 100 ng of the attB1-containing back-to-back telomeric DNA fragments were mixed with 4 μl each of 5× BP clonase buffer and BP Clonase™ Enzyme Mix (Invitrogen), and adjusted to 20 μl with TE buffer. The mixture was allowed to react at 25°C for 16 h. After the recombination reaction, the enzymes were inactivated by treatment with Proteinase K for 10 min at 37°C. Similar recombination reactions were also performed using the pLL-EHC vector and expansive telomeric DNA fragments derived from either pLL-TBS or pLL-TSB alone. This resulted in a polarized capping of the linear centromeric DNA molecules with telomeric DNA at only one of the two ends. The assembled linear AC constructs were separated by PFGE. The DNA band corresponding to the expected size of the linear AC constructs was excised from the gel and placed into 0.5× TBE. The electro-eluted DNA was then dialyzed into ddH2O and concentrated using a Microcon YM-100 spin column (Amicon). The purified DNA fragments were used as templates for PCR analysis (see below). We developed a pLL-BKE plasmid as a control for artificial chromosome confirmation analysis. The pLL-BKE plasmid is a modified pLL-E vector carrying an attB1 site instead of an attP1 site. Recombination between pLL-EHC and a linearized pLL-BKE (~10 kb) resulted in a linear molecule that is not capped with telomeric DNA.

Southern blot hybridization and fiber-FISH
Southern hybridization of DNA was performed using the digoxigenin (DIG) detection system (Boehringer Mannheim BV, Almere, The Netherlands). The probes used in the Southern hybridization were synthesized and DIG-labeled by random priming with the alkali-labile form of DIG-11-dUTP, using the pRCS2 plasmid DNA insert containing the CentO repeats [30] and the NotI/NheI-digested 140-bp telomere DNA fragment from pTLT-R11 as templates. Hybridizations were carried out under the conditions recommended by the manufacturer. Fiber-FISH analysis of the assembled AC constructs was performed using published protocols [31]. An appropriate amount of target DNA, resulting from ligations between pLL-EHC and 4 to 8-kb back-to-back-oriented telomeric DNA, was dropped directly onto a poly-lysine-coated glass slide and an 18 × 18 cover glass was carefully placed on top of the DNA drop. Slides were hybridized with a telomeric DNA probe and a pLL-EHC plasmid probe.
The signals were detected following standard fiber-FISH procedures [32].
Directed Self-Assembly of Polystyrene Nanospheres by Direct Laser-Writing Lithography

In this work, we performed a systematic study of the effect of the geometry of pre-patterned templates and of the spin-coating conditions on the self-assembly of colloidal nanospheres. To achieve this goal, large-scale templates of different sizes and shapes were generated by direct laser-writing lithography over square-millimetre areas. When the nanospheres are deposited over patterned templates, their ordering dynamics exhibits an inverse trend with respect to that observed for the maximisation of the correlation length ξ on a flat surface. Furthermore, the self-assembly process was found to be strongly dependent on the height (H) of the template sidewalls. In particular, we observed that, when H is 0.6 times the nanosphere diameter and the spinning speed is 2500 rpm, the formation of a confined and well-ordered monolayer is promoted. To unveil the generation of defects inside the templates, a systematic assessment of the directed self-assembly quality was performed with a novel method based on Delaunay triangulation. As a result of this study, we found that, under the best deposition conditions, the self-assembly process leads to well-ordered monolayers extending for tens of micrometres within the linear templates, with 96.2% of the domains aligned with the template sidewalls.

Introduction
Nanosphere lithography (NSL) is a manufacturing technique based on the self-assembly (SA) of colloidal spheres [1]. Monodisperse suspensions of polystyrene (PS) nanospheres (NSs) deposited on a substrate form colloidal crystals consisting of single or multiple layers exhibiting hexagonal close-packed (HCP) symmetry. In the last decades, NSL has gained increasing attention in nanotechnology due to the possibility of realising several periodic patterns over large areas and at reasonable cost, including photonic structures [2] and devices for nanoelectronics [3] and plasmonics [4]. However, the SA process exhibits intrinsic variability, resulting in the generation of lattice defects and the formation of multiple domains. These irregularities hinder advanced applications in which precise spatial positioning of the nanostructures is required. In this context, an experimental procedure with stable output is needed for the fabrication of well-ordered single domains with controlled size and regular shape. An interesting solution to overcome this limitation is the use of substrate modifications to aid the formation of single-layered crystals of NSs. This approach, also called directed self-assembly (DSA), has been successfully proposed for other self-assembling systems, such as block copolymers (BCPs), and has received great attention so far due to its wide applicability in key technological sectors such as microelectronics [5][6][7]. The substrate can be modified either by chemical [8,9] or by topographic templates generated prior to the SA process [10]. In the latter case, the bottom-up SA process is directed by the presence of confining structures such as linear or circular gratings, defined by conventional top-down lithographic approaches [11]. The geometrical dimensions of the topographic templates can be tailored to be commensurate with the characteristic dimensions of the SA material (e.g., the diameter of the NSs or the center-to-center distance for BCPs).
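As a concrete illustration of this commensurability requirement, the short Python sketch below checks how the template dimensions used later in this work (4.75 µm hexagon diagonal, 3 µm linear template width, 250 nm sphere diameter) relate to the sphere size. It is illustrative bookkeeping only, not the authors' design procedure, and treating the close-packed row spacing as D·√3/2 is our own simplification.

```python
# Rough commensurability check (illustrative only, not the authors' design
# procedure) of the template dimensions used later in this work against the
# 250 nm sphere diameter reported in the Materials description.
import math

D = 250.0                          # nanosphere diameter, nm
ROW_PITCH = D * math.sqrt(3) / 2   # spacing between close-packed rows in an HCP monolayer (our simplification)

for name, size_nm in (("hexagonal template diagonal", 4750.0),
                      ("linear template width", 3000.0)):
    print(f"{name}: {size_nm / D:.1f} sphere diameters, "
          f"~{size_nm / ROW_PITCH:.1f} close-packed rows")

# Both dimensions are integer multiples of the sphere diameter (19 and 12),
# consistent with templates designed to confine an integer number of spheres.
```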
The development of DSA processes applied to NSL has so far been mainly dedicated to the confinement of a few NSs [12][13][14] or to achieving size separation of polydisperse NSs [15]. The present work aims to extend the DSA process over large areas, to allow the formation of single-grain domains highly oriented inside pre-patterned templates over an area of several square millimetres. To meet this objective, direct laser-writing (DLW) lithography and reactive ion etching (RIE) were combined to fabricate micrometric templates with different shapes and sizes. The deposition of the NSs in the templates was performed by spin coating, and the dynamic parameters were varied starting from the insights of our previous work [16]. In particular, we investigate the formation of the NS monolayer through the analysis of the confinement and ordering processes. The former was carried out by means of atomic force microscopy (AFM) and scanning electron microscopy (SEM), whereas the NS ordering was evaluated through an image-processing method measuring the domain orientation. These analyses contribute to increasing the repeatability of NSL and to expanding its applicability, through DSA, to address the needs of novel devices for photonics [17], chemical sensing [18,19], data storage [20], and optoelectronics [21].

Direct Laser-Writing Patterning
The DLW lithography (Heidelberg µPG101 laser writer, Heidelberg, Germany) was performed on polished silicon wafers (MEMC Electronic Materials, Novara, Italy) covered by a thermal oxide layer with thickness ranging between 50 nm and 200 nm. An optical resist (AZ 1505, Merck Performance Materials GmbH, Darmstadt, Germany) was deposited over the SiO2 substrate (Figure 1a) and exposed with a laser beam (λ = 375 nm, diameter of 800 nm and intensity of 10 mW). The resist was afterwards developed for 40 s in a 1:1 solution of the developer (AZ Developer, Merck Performance Materials GmbH) and H2O. The resulting pattern left the SiO2 layer exposed, as shown in Figure 1b. The template sizes were designed to confine an integer number of NSs, including hexagonal templates with a diagonal length of 4.75 µm and linear ones with a width of 3 µm and a length of 200 µm; Figure 1 reports only the hexagonal configuration as an example.

Template Fabrication
The DLW pattern was transferred to the oxide layer by a reactive ion etching (RIE) process (Figure 1c). The chemically reactive plasma was obtained by mixing CHF3 and Ar with a flow ratio of 54 sccm to 29 sccm. The plasma was generated at a residual pressure of 180 Pa and an applied RF power of 300 W, with a typical reflected power of 25 W. Under these operating conditions, the etching rate on SiO2 was 10 nm min−1 and the etching time was selected to reach the different depths. After the etching step, the excess resist was removed with acetone and the final patterned substrate was characterised by a non-contact 3D surface profiler (Sensofar S Neox, Barcelona, Spain) (Figure 1e) and a field emission gun (FEG) SEM (FEI Inspect-F, Hillsboro, OR, USA) (Figure 1f).

Nanospheres Deposition
Among the NS deposition methods that have been proposed so far, such as doctor blading [22] and Langmuir-Blodgett coating [23], in this work we used the spin-coating technique. This choice was motivated by the aim of developing protocols that promote the applicability of DSA processing in industrial nanomanufacturing already relying on this method.
The patterned substrates were cleaned in an ultrasonic bath of acetone and isopropyl alcohol. The surface was treated with O2 plasma for 6 minutes at 40 W and a residual pressure of 3 Pa to make it hydrophilic. The PS NSs were synthesised by emulsion polymerisation of styrene using sodium dodecyl sulfate as the surfactant and potassium persulfate as the initiator [19]. The NSs had a diameter of (250 ± 4) nm and presented negative charges at the surface, due to the decomposition of the initiator, thus stabilising the aqueous suspension against aggregation. We drop-coated all the samples with 60 µL of the suspension and spread it by spin coating (WS-400B-6NPP/LITE, Laurell Technologies, North Wales, PA, USA) in two steps. In the first step, we set the speed and acceleration to 500 rpm and 410 rpm/s, respectively, and the duration to 10 s. In the second step, we modified the spinning speed to test the confinement process while keeping the duration at 30 s. An illustration of the result is shown in Figure 1d.

SEM Characterisation and Image Processing
The characterisation of the NS self-assembly inside the templates was performed by systematic analysis of the SEM micrographs. SEM imaging was performed at V = 10 kV, in planar configuration, at the optimum working distance of 10 mm and a magnification of 10,000. For a quantitative analysis of the DSA process, we processed the images by means of a MATLAB routine which operates by recognising the NSs inside the templates and mapping the lattice according to a Delaunay triangulation. It then identifies deviations from the ideal HCP lattice by counting the number of nearest neighbours of each particle. The orientation of each HCP unit cell is extracted with an angular resolution of 1° within the range of possible crystal orientations between −30° and 30°. A complete description of the operating principle of the software is reported in reference [16].

Atomic Force Microscopy Characterisation
The surface topography of the soft NS material was investigated by means of atomic force microscopy (Bruker Corp. INNOVA microscope) using etched Si probes (Bruker RTESPA-300, Billerica, MA, USA) with a nominal spring constant of 40 N m−1 and a tip radius of 8 nm. The measurements were performed in tapping mode with a resonance frequency of 230 kHz and a scanning rate of 0.5 Hz. The analysis of the AFM micrographs was carried out with the freeware Gwyddion. The plane inclination was corrected by fitting a plane through three points on the optically flat SiO2 mesas and by setting the zero of the height scale at the same level.

Nanospheres Ordering
The deposition of NSs over the patterned substrates was performed by spin coating. We set the spinning speed and acceleration to 1250 rpm and 410 rpm/s, in agreement with our previous experiments focused on the maximisation of the degree of order, expressed in terms of the correlation length ξ [16], on flat unpatterned substrates. Figure 2a shows a low-magnification SEM micrograph of both flat and patterned areas of the substrate. On the flat portion of the sample, the formation of large grains is preserved, as highlighted in Figure 2b by the overlaid colour map. Each coloured region corresponds to a grain, or domain, in which the orientation of the HCP lattice is uniform, whereas it varies randomly in the neighbouring domains separated by grain boundaries.
However, the same spinning conditions were found to be inadequate for the SA inside the templates, leading to the accumulation of NSs in multiple layers, as reported in Figure 2c. This preliminary result highlights the differences between the SA induced on a flat substrate and that inside the templates. The SA process has been described in the literature as the result of capillary forces acting between two adjacent NSs, responsible for the hexagonal packing. In the presence of a geometrical constraint, such capillary forces also act across the edges of the templates, which introduces a perturbation of the conventional SA process [13,24,25]. To quantify the effect of the perturbation on the long-range ordering and to optimise the confinement of the NSs, we prepared a new set of samples by varying both the height of the sidewalls and the spinning conditions. In particular, the spinning speed was set to 1250 rpm, 2000 rpm or 2500 rpm, whereas the selected heights H were 50 nm (H = 0.2·D), 100 nm (H = 0.4·D), 150 nm (H = 0.6·D) and 200 nm (H = 0.8·D). The maximum value of H (i.e., 200 nm) was chosen below the NS diameter, since an excessive height would result in a physical barrier promoting stratification into multiple layers. Figure 3 reports a tabular comparison of the SEM micrographs of the colloidal crystal, where the sidewall height and the spinning speed are varied along the columns and rows, respectively. The structures were patterned with a hexagonal shape because of its similarity to the characteristic packing symmetry of the NSs. In templates with H = 0.2·D (i.e., a depth of 50 nm), shown in Figure 3a, the NSs self-assemble into monolayers irrespective of the spinning speed. However, under these conditions, the orientation of the domains is not influenced by the presence of the template, as testified by the formation of grains with the same orientation across the edges. For this reason, these conditions are not suitable for NS confinement and the corresponding images are coloured in orange. On the contrary, in templates with H = 0.4·D and H = 0.6·D (Figure 3b,c), the arrangement of the NSs shows a marked dependence on the spinning parameters. For depositions performed at 1250 rpm, we observed the formation of multiple layers inside the templates (red images in Figure 3), preventing the lithographic use of the confined NSs. Such an issue can be solved by increasing the spinning speed to 2000 rpm or 2500 rpm. In this case, the NSs arrange in a single layer confined inside the templates and, despite the presence of residual NSs on the mesas between adjacent templates, no domains are continuously ordered across the edges. Under these conditions, the formation of the monolayer is facilitated and visibly influenced by the presence of the templates; the corresponding micrographs are coloured in green in Figure 3. Finally, when deposited in templates with H = 0.8·D (i.e., a depth of 200 nm), the NSs accumulate in multiple layers independently of the spin-coating speed, so that these conditions are not suitable for lithographic purposes (SEM images coloured in red in Figure 3d). In light of these results, the structures with H/D ratios of 0.2 and 0.8 appear to be either too shallow or too deep to produce proper confinement of the NSs. On the other hand, the structures with H equal to 0.4 or 0.6 times the NS diameter promote the formation of confined and ordered monolayers at 2000 rpm and 2500 rpm. So far, the selection of the optimal self-assembly parameters has been based on a qualitative analysis of the SEM images.
To establish the efficiency of the DSA of NSs inside the hexagonal templates in a more rigorous way, the ordering process should be assessed quantitatively. To this end, the SEM micrographs were processed with a user-defined image-processing routine based on Delaunay triangulation, which measures the orientation of the HCP domains. The software recognises the domains and classifies them according to their rotation in the angular range between −30° and 30°. The analysis was conducted on the hexagonal templates with H/D ratios of 0.4 and 0.6, highlighted in green in Figure 3. The results are collected in Figure 4, which reports the normalised distributions of the orientation of the confined monolayer under the different geometrical and dynamic conditions. These angular distributions are centred on 0°, indicating alignment with the template edges, while slight deviations in the orientation broaden the distributions. These can be quantified by calculating the integral of the curve, which gives the percentage of domains falling within a given orientation range. In the hexagonal templates with H/D ratios of 0.4 and 0.6, a spinning speed of 2000 rpm leads to 43.5% and 56.1% of the domains having orientations between −10° and 10°, as shown in Figure 4a,b, respectively. In the graphs in Figure 4c,d, the percentage of aligned domains increases to 54.8% and 68.3% when the spin-coating speed is set to 2500 rpm for H = 0.4·D and H = 0.6·D, respectively. This quantitative result clearly shows that templates with H = 0.6·D induce a better ordering of the NSs when deposited at high spinning speed. The confinement process was also tested inside linear templates, chosen for their simple realisation by DLW lithography, using the same optimal spinning conditions. The SEM micrographs reported in Figure 5a-d show the outcome of the SA process in the linear templates. Similarly to what was observed for the hexagonal templates, the micrographs coloured in orange (Figure 5a) and red (Figure 5d) correspond to conditions unsuitable for DSA. Conversely, the templates with H/D ratios of 0.4 and 0.6 promote the formation of a confined self-assembled monolayer, as shown in Figure 5b,c. Also in this case, the SEM micrographs were processed by Delaunay triangulation to evaluate the ordering process in terms of the domain orientation. The results of this analysis, reported in the graphs in Figure 5e,f, show angular distributions centred on 0° with narrow peaks including 89% of the domains in the range from −10° to 10° for H = 0.4·D. This percentage rises to 96.2% inside structures with H = 0.6·D. According to this result, the linear templates impose a stronger orientation constraint than the hexagonal ones, as they present a regular shape and a uniform width along their length, as visible in Figure 5. The hexagonal structures, on the other hand, present some rounded features that may explain the lower quality of the ordering process. Moreover, the dimensions of the templates may differ from the pattern design, causing incommensurability and the generation of defects in the colloidal lattice.

Nanospheres Confinement
Although the optimisation of the geometry and of the process parameters has led to a good result in terms of NS ordering within the templates, the confinement process can be further investigated by considering the defectivity at the edge of the templates and the presence of residual nanospheres on the mesas, observed in Figures 3 and 5.
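Before turning to the confinement analysis, the Delaunay-based orientation measurement used above can be made concrete with a minimal Python sketch. It is not the authors' MATLAB routine: the sphere centres are assumed to have been extracted from an SEM image beforehand, the 250 nm diameter and the ±10° alignment window are taken from the text, and the 1.3·D bond-length cut-off is our own illustrative choice.

```python
# Minimal sketch of a Delaunay-based orientation analysis (not the authors'
# MATLAB routine). Sphere centres are assumed to have been extracted from an
# SEM image beforehand; `centres` is an (N, 2) array in nanometres.
import numpy as np
from scipy.spatial import Delaunay

def hcp_orientation_stats(centres, diameter=250.0, bond_tol=1.3):
    """Return coordination numbers, defect indices, folded bond angles and the aligned fraction."""
    tri = Delaunay(centres)
    neighbours = {i: set() for i in range(len(centres))}
    for simplex in tri.simplices:
        for a in simplex:
            for b in simplex:
                # Keep only edges close to one sphere diameter (true lattice bonds).
                if a != b and np.linalg.norm(centres[a] - centres[b]) < bond_tol * diameter:
                    neighbours[a].add(b)

    coordination = np.array([len(neighbours[i]) for i in range(len(centres))])
    defects = np.where(coordination != 6)[0]   # a perfect HCP monolayer has 6 neighbours

    # Bond orientations folded into the 60-degree-periodic range [-30, 30),
    # the same angular range used for the distributions in Figures 4 and 5.
    angles = []
    for i, nbrs in neighbours.items():
        for j in nbrs:
            dx, dy = centres[j] - centres[i]
            angles.append((np.degrees(np.arctan2(dy, dx)) + 30.0) % 60.0 - 30.0)
    angles = np.array(angles)

    # Fraction of bonds within +/-10 degrees of the template edge (assumed along 0 degrees).
    aligned_fraction = float(np.mean(np.abs(angles) <= 10.0))
    return coordination, defects, angles, aligned_fraction
```

Coordination numbers different from six flag lattice defects, while the folded bond angles yield the orientation histograms from which the percentages quoted above can be computed.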
We performed an AFM analysis of the confinement process, focusing our attention on the height profile of the confined nanospheres in the two studied morphologies (i.e., hexagonal and linear) with H = 0.6·D. Figure 6a reports an AFM map acquired on a hexagonal template. The height profile in Figure 6b indicates that, when good confinement is achieved, the NSs are perfectly aligned inside the hexagonal structure and exceed the mesa by ∆conf = (96 ± 4) nm. This value is close to the one expected for H = 0.6·D, namely the difference between the sphere diameter and the sidewall height. The AFM map acquired in proximity of a defect (Figure 6c) and the corresponding height profile (Figure 6d) show an irregular arrangement of the NSs: nanosphere #2, closest to the confining wall, is found at a level ∆hex = (167 ± 1) nm above the mesa, whereas NSs #3 and #4 are correctly confined at the level ∆conf = (90 ± 5) nm. Figure 7a,b reports the AFM micrograph acquired on a linear template and the corresponding height profile, respectively. When the NSs are well confined inside the template (e.g., the NSs labelled #4 and #5), they lie at the same level, with ∆conf = (86 ± 2) nm. Approaching the sidewalls, the height of the nanospheres increases, and NS #3 is separated from the top of the mesa by ∆lin = (136 ± 2) nm. From this analysis, we observed that the top of well-confined nanospheres lies at a level ∆conf above the mesa, approximately equal to the difference between the diameter D and the sidewall height H. When the separation exceeded this quantity, as for ∆hex and ∆lin larger than ∆conf, we observed the onset of a defect and the accumulation of unconfined NSs on the mesas. Given that ∆lin was lower than the corresponding ∆hex, the linear templates offered a better confinement of the nanospheres than the hexagonal structures. In both templates, the observed distortions from the HCP symmetry can have several causes, including local defectivity in the lithographic template, incommensurability of the graphoepitaxy structures and polydispersity of the nanospheres. These defects can be largely reduced by improving the combination of DLW lithography and RIE to obtain high regularity of the templates and fidelity to the pattern design. A possible strategy to limit the accumulation of excess NSs could be to graft hydrophobic polymer chains on the surface of the mesas. Despite some local defectivity, the use of DLW lithography and RIE makes it simple to tailor templates with the H/D ratio fixed at 0.6 to confine NSs of different dimensions, as shown in Figure 8a,c for NSs with diameters of 200 nm and 400 nm, respectively. One common application of NSL is the realisation of triangular metallic nanoparticles as substrates for surface-enhanced Raman spectroscopy (SERS), thanks to the possibility of tuning their geometrical features to match different excitation wavelengths [26]. DSA-NSL constitutes a versatile solution to improve the uniformity and reproducibility in the fabrication of such substrates, and thereby their spectroscopic response, as it can be employed in the production of these and other metallic arrays with regular orientation and a high degree of order, as shown, for example, in Figure 8b,d.
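As a quick numerical cross-check of this height-profile interpretation (a well-confined sphere should protrude above the mesa by roughly D − H), the short sketch below tabulates the expected protrusion for the four sidewall heights studied here, together with the RIE time implied by the 10 nm min−1 etch rate quoted in the template-fabrication section. It is illustrative arithmetic only, not part of the original analysis.

```python
# Illustrative arithmetic only: expected protrusion of a well-confined 250 nm
# sphere above the mesa (delta_conf ~ D - H) for the sidewall heights studied
# above, plus the RIE time implied by the 10 nm/min etch rate quoted earlier.
D = 250.0                   # sphere diameter, nm
ETCH_RATE = 10.0            # SiO2 etch rate, nm per minute

for H in (50.0, 100.0, 150.0, 200.0):       # sidewall heights, H = 0.2*D ... 0.8*D
    delta_conf = D - H                       # expected height of a confined sphere above the mesa
    etch_time = H / ETCH_RATE                # minutes of RIE needed to reach depth H
    print(f"H = {H:.0f} nm (H/D = {H / D:.1f}): "
          f"expected delta_conf = {delta_conf:.0f} nm, etch time = {etch_time:.0f} min")

# For H = 0.6*D (150 nm) the expected protrusion is 100 nm, in line with the
# measured (96 +/- 4) nm; separations well above D - H (e.g. 167 nm or 136 nm)
# flag spheres that are not properly confined.
```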
Conclusions
In this work, we investigated the confinement and ordering of self-assembling NSs by changing the deposition parameters and the height of the confining walls in templates of two different shapes. The most appropriate conditions for DSA-NSL were identified by a systematic SEM analysis combined with the evaluation of the HCP orientation by image processing and with atomic force microscopy measurements. A high spinning speed of 2500 rpm was found to be necessary to let the NSs overcome the physical barriers of the templates. A sidewall height H of 0.6 times the NS diameter was found to provide proper confinement conditions. DSA-NSL inside linear templates, under the stated geometrical and dynamic conditions, resulted in a confined monolayer with 96.2% of the domains aligned with the template. The knowledge of the DSA process and the control over the geometry through DLW lithography and RIE allow the SA of colloidal NSs to be directed so as to obtain single-grain crystals with uniform orientation and regular shape over large areas. The optimised fabrication protocol could extend the versatility of DSA-NSL to applications requiring different geometries. The linear structures, for example, can be employed to confine the nanostructures in microfluidic channels for multiplexed analysis [27]. Moreover, hexagonal and circular structures with micrometric sizes can serve for site-specific incubation of different analytes in sensing applications, where the templates are easily recognised by optical microscopy to locate the area of analysis.

Funding: The project 16ENV07 Aeromet has received funding from the EMPIR programme co-financed by the Participating States and from the European Union's Horizon 2020 research and innovation programme.
GNC architecture for autonomous robotic capture of a non-cooperative target: preliminary concept design

Recent studies of the space debris population in low Earth orbit (LEO) have concluded that certain regions have already reached a critical density of objects. This will eventually lead to a cascading process called the Kessler syndrome. The time may have come to seriously consider active debris removal (ADR) missions as the only viable way of preserving the space environment for future generations. Among all objects in the current environment, the SL-8 (Kosmos 3M second stages) rocket bodies (R/Bs) are some of the most suitable targets for future robotic ADR missions. However, to date, an autonomous relative navigation to and capture of a non-cooperative target has never been performed. Therefore, there is a need for more advanced, autonomous and modular systems that can cope with uncontrolled, tumbling objects. The guidance, navigation and control (GNC) system is one of the most critical ones. The main objective of this paper is to present a preliminary concept of a modular GNC architecture that should enable a safe and fuel-efficient capture of a known but uncooperative target, such as a Kosmos 3M R/B. In particular, the concept was developed having in mind the most critical part of an ADR mission, i.e. close range proximity operations, and state-of-the-art algorithms in the field of autonomous rendezvous and docking. In the end, a brief description of the hardware in the loop (HIL) testing facility intended to validate the presented concept is given. (This document is an accepted version of the manuscript available online at https://doi.org/10.1016/j.asr.2015.05.018. © 2015. This manuscript version is made available under the CC-BY-NC-ND 4.0 license. Corresponding author: Marko Jankovic.)

Introduction
The launch of the first artificial satellite, Sputnik-1, a sphere of 58 cm in diameter and a mass of 84 kg, in 1957 marked the beginning of human space exploration. However, it also marked the birth of non-functional, man-made, Earth-orbiting objects denoted as space debris. Since then, there have been more than 4900 launches, which placed around 6600 satellites in orbit. Almost half of them are still in orbit and the total mass of intact space hardware is around 6300 t. However, those numbers do not include fragmented objects; taking those into account, the number of objects is even higher. Indeed, the total number of objects tracked routinely by the United States Space Surveillance Network (US SSN) is around 23,000, for objects larger than 5-10 cm in low Earth orbit (LEO) and 30 cm-1 m in geostationary Earth orbit (GEO) (Space Debris Office, 2013; Wormnes et al., 2013). The population of non-traceable particles is estimated to be approximately 500,000 units for particles between 1-10 cm, and more than 100 million for those smaller than 1 cm (Orbital Debris Program Office, 2012). About 66 % of the cataloged objects originate from more than 200 recorded in-orbit fragmentation events, the majority of which were in-orbit explosions. Another 28 % of the cataloged objects are decommissioned satellites, spent upper stages and other related objects. Operational satellites represent only 6 % of the total figure (Wormnes et al., 2013; Liou, 2011a). Two recent collision events have, however, contributed on their own to more than half of the objects in the region below 1000 km, thus raising the public awareness of the space debris issue.
The first event was the Chinese anti-satellite weapon (ASAT) test on the Fengyun-1C (FY-1C) weather satellite, which occurred in 2007 at an altitude of 862 km. The second was the unintentional collision between the defunct Russian satellite Kosmos 2251 and the operational US satellite Iridium 33, which occurred in 2009 at an altitude of 789 km. This last event in particular has confirmed the concern of the international scientific community about the onset, in LEO, of a self-sustaining, cascading process known as the "Kessler syndrome" (Johnson, 2008, 2009; Liou, 2011a). This syndrome, first predicted by Kessler and Cour-Palais (1978), denotes a phenomenon in which the number of objects is expected to increase exponentially due to mutual collisions between the objects, creating a belt of debris around the Earth (Kessler and Cour-Palais, 1978). The LEO region is particularly susceptible to this phenomenon since it contains more than 40 % of the total in-orbit mass (i.e. around 2500 t). More specifically, the majority of that mass is concentrated at altitudes around 600, 800 and 1000 km (see Figure 1). 97 % of that mass is represented by rocket bodies (R/Bs) and spacecraft (S/Cs). The latter are mainly concentrated in the 600 km region, while the former are mainly in the 800 and 1000 km regions (Liou, 2011a). To mitigate this phenomenon, various national and international organizations have issued a set of non-binding space debris mitigation guidelines aimed, among others, at (Committee on the Peaceful Uses of Outer Space, 2014):
1. reducing the amount of space debris created during nominal operations;
2. minimizing potential break-ups and collisions;
3. limiting the presence of non-operational satellites and rocket bodies.
Nevertheless, recent studies have shown that those mitigation measures are not enough to stabilize the current space debris environment. In fact, Liou et al. (2010) concluded in their study that the number of in-orbit objects bigger than 10 cm is expected to rise by 75 % in the next 200 years even with 90 % compliance with post-mission disposal measures and no future in-orbit explosions (see Figure 2). The assumed launch rate was that of the previous years. Moreover, even in a "no-future-launches" scenario, the population of space debris is expected to grow in LEO over the next 200 years. This means that in certain orbital regions the critical density of objects has been reached and an active removal of in-orbit mass has to be considered in order to stabilize the space debris environment. The active removal of only five objects per year, if started in 2020 and coupled with 90 % implementation of the mitigation measures, should be enough to keep the number of objects comparable to that of 2011. In order to reduce the space debris population in LEO to the number it had prior to the two most recent break-up events, the removal of more objects per year should be considered (Liou et al., 2010; Liou, 2011a). The concept of active debris removal (ADR) has been around for some time, especially the one involving orbital robotics, due to its similarity to on-orbit servicing (OOS). The latter has its origins in the early 1980s, after the successful usage of the Space Shuttle remote manipulator in the STS-2 mission (Yoshida and Wilcox, 2008). Despite this, the idea of ADR never took off due to the tremendous costs and the legal and technical issues related to it. Moreover, until recently it has not been possible to quantify the real benefit of an ADR mission (Liou, 2011a).
Rendezvousing with and capturing large uncooperative objects is not an easy task. In fact, until today it has not been performed without humans in the loop. Naasz et al. (2010) state in a paper that: "...no spacecraft has ever performed autonomous capture of a non-cooperative vehicle, and a full 6 degrees of freedom (DOF) relative navigation sensing to non-cooperative vehicle has only been shown to a limited extent." Autonomy is required in particular in the final phases of the approach of the chaser vehicle (chaser) to the target vehicle (target), due to the limited reaction time available to deal with anomalies and/or communication problems that might occur (Nolet and Miller, 2007). Automated rendezvous and docking is nowadays state of the art in space technology (see for example (Personne et al., 2006)), but if ADR is going to be performed routinely, new technological challenges need to be tackled. Most of them are related to the fact that a typical target is not sufficiently equipped for capture. Thus, it does not have reflectors, markers or radio beacons that could ease the determination of its relative position and attitude. Moreover, no grappling features are usually available, making the capture of the target even more complicated. Finally, the target might have some sort of tumbling motion (intended in this paper as the target's rotation around at least one axis with an angular rate between 1 deg/s and 18 deg/s (Matsumoto et al., 2002)), which poses strict requirements on trajectory safety, due to the increased possibility of collision of the chaser with rotating appendages of the target. It is worth noting that in this paper, automation and autonomy are treated as two different terms. They both indicate processes that can be executed without any human intervention. Automation involves software/hardware processes that substitute a manual routine by following predetermined step-by-step sequences; however, they could still require human intervention to solve contingencies and unexpected behaviors. Autonomy, on the other hand, implies a more capable system that is able to perform actions and make decisions independently from ground control, thus trying to emulate human processes rather than replacing them with pre-programmed sequences (Truszkowski et al., 2010). Most of those technological challenges are somehow related to the GNC system, making it one of the most critical parts of the whole chaser spacecraft. Given its importance, not only in ADR but in all space missions, there has been a great deal of fundamental research in this area. However, it is very difficult to select a single study that could readily solve, at least to our best knowledge, all the phases related to a robotic capture of an uncooperative target. In fact, taking into account all the phases related to close range proximity operations is a difficult task. Different phases (e.g. fly-around, pose estimation, approach, manipulator deployment, grasping and stabilization of the compound) have different problems and considerations. Thus, most researchers tend to concentrate on just one phase or one part of the GNC system (e.g. navigation or control). Only a small body of work has been dedicated to the GNC architecture as a whole. Moreover, we have not been able to find, until now, research dedicated specifically to the development of a GNC architecture for the robotic removal of upper stages. To fill this gap and support future ADR missions, DFKI, within the initial training network (ITN) Stardust, has been committed since November 2013 to studying close range navigation and manipulation of uncooperative targets.
Within that context, the following paper presents a preliminary concept design of a GNC architecture that should enable the autonomous robotic capture of uncooperative upper stages. The novelty presented here consists in identifying the challenges and critical aspects of such an architecture, and in presenting a series of state-of-the-art algorithms that could populate it. Moreover, a comprehensive description of current trends in GNC for autonomous rendezvous and docking missions is also provided, in the hope that it can serve as a stepping-stone for the development of future GNC architectures for robotic ADR missions. It is worth noting that in this paper only the close range rendezvous phase of an ADR mission is considered. However, a quick overview of all mission phases is illustrated for the sake of completeness. Furthermore, the target is assumed to be uncooperative although well known a priori. The content of this paper is organized as follows. At first, a comprehensive description of the state of the art in the field of autonomous rendezvous and docking/capture is presented in Section 2. Particular attention is given to past missions, GNC architectures, and algorithms. Next, in Section 3, the envisioned ADR mission scenario is illustrated. A specific target is defined and the major characteristics of the chaser are outlined. A preliminary concept of the GNC architecture is then presented in Section 4. The various modules composing the architecture are described and a selection of algorithms that could be integrated within the individual modules is presented. A brief description of the robotics module, along with its interaction with the GNC architecture of the spacecraft, is also given. The last part of this section is dedicated to a brief presentation of the hardware in the loop (HIL) testing facility intended to be used to validate the adequacy of the presented GNC architecture. The last section, i.e. Section 5, is dedicated to the conclusions and the future road map that will further improve the envisioned concept of the GNC architecture.

Autonomous rendezvous and docking: background and related work
Autonomous rendezvous and docking (ARVD) between two spacecraft is not yet a routine operation, especially when one of the spacecraft is non-cooperative. Nevertheless, given that it involves areas of research such as pose estimation, spacecraft control and path planning, there has been a fair amount of work on those topics in the last few decades. Moreover, quite a few missions have been able to accomplish some sort of autonomous rendezvous and proximity operations in the past, and more are planned for the near future in order to bridge the existing gap. Thus, this section gives a brief overview of some of the most relevant past and future missions dealing with autonomous rendezvous (ARV), as well as of some of the theoretical work done in the mentioned research areas. The overview does not claim to be complete, but it is, in our opinion, quite representative of the ARV landscape.

Past and future missions
The first ever in-orbit rendezvous occurred on December 15th, 1965, when astronauts Walter Schirra and Thomas Stafford aligned their spacecraft, Gemini VI, with Gemini VII, piloted by James A. Lovell and commanded by Frank Borman.
This initial achievement was quickly overrun several months later, on March 16th, 1966, when the first-ever successful orbital rendezvous and docking was performed by astronauts Neil Armstrong and Dave Scott. During it they rendezvoused and docked their Gemini VIII spacecraft with the Agena target vehicle. These successes, along with the objective to favor manned space flight, towards the goal of going to the Moon, marked heavily the automated capabilities of United States (US) spacecrafts. At least until the last two decades as it will be described further on (Woffinden and Geller, 2007). Russians, on the other hand, pursued from the start an automated approach to the space flight, relegating the onboard crew to monitoring the operations and intervening only in cases of emergency. This has led to a first-ever automated rendezvous and docking (RVD) between two unmanned, robotic spacecrafts named Kosmos 186 (chaser) and Kosmos 188 (target), on October 30, 1967. The automated rendezvous system responsible for this success was the Igla radar system. The success of this first mission was then repeated multiple times and in 1968 Russia finally confirmed its path towards automation in space and the building of their space station as a steppingstone towards deep space exploration. A more advanced Russian automated spacecraft, used even today to ferry cargo to the International Space Station (ISS), is the Progress vehicle. It was introduced in 1978 and is equipped with the Kurs rendezvous radar system. This system is still considered to be the current standard of automatic rendezvous systems despite its weight and power consumption (Woffinden and Geller, 2007;Nolet and Miller, 2007). To overcome the cumbersome and aging design of previous automatic navigation systems, recent experimental missions have been performed mainly by the Japan and US authorities towards the goal of autonomous close proximity operations. The first mission in line is the Japanese Engineering Test Satellite (ETS)-VII. It was launched in November 1998 and developed by the National Space Development Agency of Japan (NASDA, currently JAXA) as a demonstration mission of some of the technologies for the H-II Transfer Vehicle (HTV), in particular of advanced ARVD techniques and unmanned orbital operations. It was the first-ever mission with an unmanned spacecraft having a robotic manipulator onboard and the first to perform an ARVD between unmanned spacecrafts. The space segment of the mission consisted of two spacecrafts, the chaser, named Hikoboshi, and the target, named Orihime (Woffinden and Geller, 2007;Nolet and Miller, 2007;Yoshida and Wilcox, 2008). To date, it can be considered as "the most complex successful technological demonstration of a service mission" (Hirzinger et al., 2009). However, the target spacecraft was cooperative and even then the mission experienced an attitude anomaly during one of ARVD maneuvers. This, forced the ground control to reconfigure the Rendezvous Flight Software (RVFS) to recover from the anomaly and accomplish the task (Nolet and Miller, 2007). The Experimental Satellite System-10 (XSS-10) was the first US mission to demonstrate basic autonomous proximity operations capabilities around a resident space object 8 (RSO). Particularly, the mission objectives were to perform: an autonomous navigation around an RSO on a preplanned course, semi-autonomous proximity operation maneuvers and an inspection of the RSO. 
The 31 kg spacecraft was developed by the US Air Force Research Laboratory (AFRL) and the chosen RSO was the Delta II stage that released the spacecraft into the orbit. The mission was performed in 2003. All primary mission objectives were met although minor problems were encountered during the mission. The most relevant was the connection dropout with the satellite during its closest approach to the RSO. This way the closest distance to it could not be measured and the close-in images of the target could not be downloaded (Davis et al., 2003;Nolet and Miller, 2007). The Experimental Satellite System-11 (XSS-11) was the successor of the previously mentioned spacecraft. It was developed by the Lockheed Martin Space Systems and commissioned by the US AFRL. It was launched in 2005 with the objective to verify the GNC system for a safe and autonomous rendezvous and close proximity operations with multiple space objects. The spacecraft was a microsatellite class vehicle having around 100/145 kg of dry/wet mass. It was equipped with a scanning light detection and ranging (LIDAR) sensor for relative range and angle measurements. The spacecraft was planned to perform maneuvers in complete autonomy by relying on its onboard planner. By the fall of 2005, the spacecraft had successfully performed more than 20 rendezvous maneuvers with its Minotaur 4th stage rocket body and several other close proximity operations (Woffinden and Geller, 2007). The nominal duration of the mission was stated to be 12-18 months with subsequent de-orbiting of the spacecraft, but according to the EoPortal (2007) the spacecraft was still in orbit on February 2007. To our knowledge further information about the mission was not made public. The National Aeronautics and Space Administration (NASA) agency launched its own ARV mission, the Demonstration of Autonomous Rendezvous Technology (DART) just few days after the launch of XSS-11, on April 15th, 2005. The objective of the mission was to demonstrate the US capability of completely autonomous rendezvous. The mission was slated to last only 24 h, during which the DART spacecraft had to autonomously track and rendezvous, within 5 m, with the specially designed target vehicle, the Multiple Paths Beyond-Line-of-Sight Communication (MUBLCOM) satellite. The relative position and orientation was to be determined with advanced video guidance sensor (AVGS). Also in this case the target was cooperative. After the successful orbit insertion and first phases of the rendezvous, the mission failed about 11 h into the mission, due to navigation errors and consequent excessive usage of the fuel. The DART spacecraft eventually collided with the MUBLCOM satellite without even the spacecraft being aware of the collision, given that the AVGS sensor never came into the usage (Woffinden and Geller, 2007). Another relevant US mission in line is the Orbital Express (OE) developed by the US Defense Advanced Research Projects Agency (DARPA) and launched in March 2007. The duration of the mission was 90 days during which the OE needed to demonstrate several key technologies intended to validate the capabilities of autonomous approach, rendezvous, capture and on-orbit servicing (OOS) of a target spacecraft by means of a robotic manipulator. 
The space segment consisted of two spacecrafts: a servicing satellite, the Autonomous Space Transport & Robotic Operations (ASTRO) vehicle equipped with a 3 m long manipulator and a satellite being serviced, a proto-type of a modular Next Generation serviceable Satellite (NEXTSat). Unlike the ETS-VII mission performed 10 years before, OE had to demonstrate a higher degree of autonomy in all tasks. ASTRO was equipped with several different navigation sensors and imaging software that enabled observation of the target regardless of lighting conditions, range and background (Woffinden and Geller, 2007;Nolet and Miller, 2007;Yoshida, 2009). The mission was successful although the servicer did experience some anomalies, one of which even threatened to end the mission at the day one. The anomaly was related to the flight software that commanded the reaction wheel "backwards" thus preventing the system from achieving a safe sun-pointing attitude. The situation was promptly discovered and solved with a software update issued by the ground control. Another anomaly worth mentioning was the primary sensor computer central processing unit (CPU) fault that ASTRO encountered during the 30-meter ARV scenario. The anomaly triggered an abort command and it took the ground control 8 days to solve the problem (Defense Industry Daily, 2007;Kennedy, 2008;Wright, 2011). Based on the mentioned missions it is therefore possible to note that almost every mission did experience some sort of malfunction that required a promptly intervention from the ground control. This underlines that autonomous rendezvous and proximity operations without humans in the loop is not yet mature enough (Pavone and Starek, 2014). Thus, much work needs still to be done to raise the technological readiness level that will eventually enable routine ARVD. One of the future missions planning on raising this technological level is the Deutsche Orbitale Servicing Mission (DEOS). The mission is currently in the definition phase and it is being developed by DLR and Airbus Defence and Space (as a prime contractor). According to current information, the mission should be ready for launch in 2018 (Airbus Defence & Space, 2012). The main mission objective is the in-orbit demonstration of technologies and techniques needed for unmanned autonomous and tele-operated on-orbit servicing of an uncooperative target. In particular, the mission will demonstrate all different phases of an autonomous rendezvous and docking/capture (ARVD/C) mission with increasing complexity (Rupp et al., 2009). The servicing spacecraft will have a 3 m robotic manipulator with 7 degrees of freedom (DOF), a docking and berthing mechanism. The client spacecraft should exhibit a grappling fixture and also a docking and berthing mechanism. The client will be designed to perform different attitude maneuvers in order to simulate a behavior of a non-cooperative, tumbling client satellite (Sellmaier et al., 2010). State of the art of RV control architectures The standard control architecture traditionally used for automated rendezvous and docking of vehicles, such as the Automated Transfer Vehicle (ATV), HTV or Progress vehicle, has been illustrated by Fehse (2003) in his book entitled Automated Rendezvous and Docking of Spacecraft. The architecture is divided in several modules interconnected between them showing simply the levels of authority. 
Those modules are 9 : the automatic failure detection, isolation and recovery (FDIR), the automatic mission and vehicle management (MVM) and the GNC. The ground control, as expected, plays in this architecture an important part given that it only has the authority to perform collision avoidance maneuvers (CAM) and impart commands to the rendezvous control system. Nevertheless, capturing a non-cooperative, tumbling target could require some degree of autonomy which is the motivation of the presented research. Nolet and Miller (2007) presented a control architecture developed for a nanosatellite platform SPHERES 10 to demonstrate a series of autonomous docking and formation flight experiments onboard the ISS. The presented architecture is an extended version of Fehse's. It takes into account an autonomous approach and thus grants the onboard computer the authority and capability to perform decisions and in particular to perform a CAM through the FDIR module in case of anomalies. Moreover, in this case the communication with the ground control is assumed to be intermittent or even nonexistent. However, Nolet and Miller (2007) consider that the target vehicle is able to communicate its states to the chaser while tumbling. Our assumption is that the target is not only tumbling but is also non-cooperative meaning that the chaser has to estimate on its own the relative position and attitude of the target prior to its capture. Moreover, Nolet and Miller (2007) consider only the docking scenario while we tackle the capture and manipulation of the target by means of a manipulator. Furthermore, our architecture should eventually include also some of the state-of-the-art GNC algorithms that are missing in the one developed by Nolet and Miller (2007). Nevertheless, given its proven and validated design, we have considered it as a basis for our own GNC architecture. More recently, Sommer and Ahrns (2013) presented a GNC concept for rendezvous and capture, by means of a lightweight manipulator, of a small spacecraft. The methodology of their research relies heavily on the consolidated experience of the ATV thus excluding some of the cutting edge algorithms and techniques. For example, the relative pose estimation is done by using a template matching technique, an iterative closest point algorithm (ICP) and a Kalman filter. The control of the attitude and position in close range is done through a configurable proportional-integral-derivative (PID) controller. In far/mid range the pose control is done simply by comparison of the reference and actual states of the spacecraft. No information is given regarding the guidance algorithm used. The role of the ground control is not explicitly mentioned in the research although it should be expected to be similar to the one of an ATV mission. State of the art of GNC algorithms The idea of an unmanned robotic spacecraft capable of capturing and servicing other malfunctioning spacecrafts dates back in early 1980s after the successful usage of the Space Shuttle remote manipulator system, in the STS-2 mission. Several manned on-orbit servicing missions followed to repair and deploy malfunctioning satellites (such as Anik-B, Intelsat 6 and Hubble telescope), but a completely autonomous, unmanned mission has yet to become reality despite the demonstration missions mentioned at the beginning of the section (Yoshida and Wilcox, 2008). 
Nevertheless, there has been over the years a tremendous amount of theoretical research dealing with individual areas of ARVD/C missions, especially in the context of guidance, navigation and control. Flores-Abad et al. (2014) have provided an exhaustive review of space robotics technologies for on-orbit servicing. Based on their work we present hereafter some of the state of the art research in the navigation and guidance fields. A description of some of the state-of-the-art control algorithms follows. Starting with the algorithms for the estimation of the pose of a target, Hillenbrand and Lampariello (2005) proposed a method for estimating not only the pose and angular velocity of a free-floating target but also its center of mass and inertia tensor by using range data and a least square method. Tzschichholz et al. (2011) presented an algorithm for spacecraft pose estimation and motion prediction based on rotation-and scale-invariant features using a photonic mixer device (PMD) camera. Aghili et al. (2011) made a study of a fault-tolerant method for pose estimation of space objects using Neptec's Laser Camera System (LCS), Kalman filter (KF) and an iterative closest point (ICP) algorithm in a closed-loop configuration. Regarding guidance techniques for proximity operations, Flores-Abad et al. (2014) mentions the following works: Matsumoto et al. (2003) proposed two methods for safe approach to an uncontrolled rotating spacecraft: a passive fly-by and an optimized trajectory. Ma et al. (2012) optimized the approach trajectory of a chaser to a tumbling target such that the relative motion between the two is zero by minimizing the approach time and fuel. Pontryagin's Maximum Principle was used for the optimization process. A method using mixed-integer linear programing (MILP) or alternatively only linear programming (LP) for generating on-line fuel-efficient and safe trajectories was developed by Breger and How (2008). Concerning the control algorithms, there have been a wide variety of researchers tackling both linear and nonlinear control methodologies. Luo et al. (2014) have provided in their research paper a good overview of current modern control methods based on fuzzy logic (Karr and Freeman, 1997), neural networks (Youmans and Lutze, 1998) and simulated annealing algorithms (Luo and Tang, 2005). The State-dependent Riccati equation (SDRE) approach recently seems to attract a lot of research in this field as proven by the works of Çimen (2010); Lee and Pernicka (2010); Di Mauro (2013). The Linear quadratic tracking controller (LQT) and linear quadratic regulator (LQR), based on linear systems, have been studied and proposed by Lee and Pernicka (2010) and Arantes and Martins-Filho (2014). Reference mission The reference mission selected for this research is a robotic ADR mission aiming at capturing and de-orbiting several targets, all of the same type. This approach has several advantages over the single object mission. Namely, the reduced research and development effort (R&D) for the whole mission and the overall cost. Following is a more detailed description of the selected target object, the robotic chaser spacecraft and the overall mission profile. The description is more focused on proximity operations since our aim is just to give a context to the architecture that is described in the next section. 
Target object According to the US Space Track catalog (https://www.space-track.org) there are in orbit, at the time of writing (August 26th, 2014), 3974 intact payloads, 1998 intact rocket bodies and 11157 tracked space debris objects. If the objective of future ADR missions is to stabilize the space debris environment by limiting the number of fragments arising from accidental collisions, targets of those missions should be mainly large intact objects from the most crowded regions. This means focusing the ADR efforts towards targets that exhibit the highest product of collision probability and mass. The objects on highly eccentric GEO-transfer orbits should however be excluded from this metric given their limited presence in LEO (Liou, 2011a). According to the above mentioned ranking method, Liou (2011a) has identified the top 500 targets that should be first tackled by any future debris removal mission in order to stabilize the space debris environment. The prograde region, and in particular the h = 950 ± 100 km, i = 82 ± 1 deg band (where h indicates the orbital altitude and i its inclination), is especially interesting due to the fact that it is dominated mainly by several well-known RSOs (e. g. SL-3 R/Bs - Vostok second stages, SL-8 R/Bs - Kosmos 3M second stages, SL-16 R/Bs - Zenit second stages, etc.) (Liou, 2011a; DeLuca et al., 2013). Additional issues that need to be taken into consideration during the selection of the target are those of a legal nature. In fact, according to international law, the launching state retains the jurisdiction of the launched object perpetually. Thus, to remove an RSO, an approval from its legal owner is needed (DeLuca et al., 2013). In general, rocket bodies are considered to be less confidential than spacecrafts, which is why their removal should pose fewer legal problems. Furthermore, due to their design, they are considered to be sturdier. Moreover, they generally do not possess appendages and their attitude motion in LEO can be expected to be stable, with low angular rates (according to the literature, a few degrees per second), after only a few years in space (Praly et al., 2012). All those considerations have led us to consider the Russian Kosmos 3M second stages (see Figure 3, source: http://goo.gl/sUTltm) as the target objects of our research. Around 300 are currently present in orbit (DeLuca et al., 2013). One particular R/B was chosen as the initial target of the reference mission due to its orbital characteristics and the date of its launch. The selected R/B is identified in the US Space Track catalog with the ID 1975-074B and its essential orbital parameters are listed hereafter. The authors were not able to retrieve any data concerning the attitude motion of the chosen target. However, according to studies performed by Praly et al. (2012), it is acceptable to consider that, given its age, its angular rate should be low and in any case no more than a few degrees per second. Chaser spacecraft The chaser spacecraft is a robotic system, similar to the one described by Castronuovo (2011), carrying onboard four de-orbiting kits, a suite of sensors for ARVD/C and two robotic manipulators. One will be used for the capture of the target while the other will be used for the attachment of a de-orbiting kit to it. The complete system architecture is out of the scope of the present paper; thus, only a general overview of the chaser system will be given for the sake of completeness.
The choice of de-orbiting kits and in particular of hybrid propulsion modules (HPM), such as those described by DeLuca et al. (2013), was made based on the requirements of a controlled reentry and very high levels of thrust needed for de-orbiting the target. Hybrid propulsion modules were a specific choice given their compact design, high specific impulse, throttling and re-ignition capabilities. The latter two characteristics are in particular the advantage of such modules over the ones based on the solid state propellant. Their disadvantage is a lack of space experience that however should be overcome in coming years given the potential and benefits they showed over the last decades. The high level system architecture of a chaser for an ADR mission is a difficult trade-off, given the number of variables that need to be taken into account in the optimization process. This difficulty is underlined in a paper written by Bonnal et al. (2013) focusing on recent progress and trends for the ADR. The authors stress that currently there is a confusion on how an optimal system architecture of a chaser should look like. Nevertheless, they have identified as a most promising solution a 4-5 t chaser spacecraft carrying at least five de-orbiting kits that should be launched on an Ariane 5 class launch vehicle along with four other identical spacecrafts, each targeting contemporaneously different orbital regions . Based on the above mentioned result, we assume a chaser spacecraft of the same class whose characteristics and exact configuration are to be defined in a future research. Nonetheless, the feasibility of the usage of robotic manipulators for the envisioned mission should be out of question as outlined by recent papers (Castronuovo, 2011;Bonnal et al., 2013). The capture of the target is assumed to be performed using only one manipulator having as an end-effector a capture mechanism that employs directional (Hawkes et al., 2013) or electro adhesives (Tellez et al., 2011;DeLuca et al., 2013). The use of these capture mechanisms allows relaxing the requirements on the tracking of a capture point, since no specific feature on the target has to be grasped. The spacecraft itself is supposed to be a three axis stabilized vehicle able to perform orbital maneuvers, by means of a cluster of bi-propellant main engines, i. e. orbit control thrusters (OCT). Other actuators of the spacecraft control system (SCS) are a cluster of reaction wheels (RWs), a three-axis magnetic torquer (for desatuartion of RWs), and bi-propellant attitude control thrusters (ACT). The latter are supposed to be ON/OFF thrusters and to have a thruster level of around 200 N. RWs are taken into account in order to reduce fuel requirements given the expected long duration of the mission and dynamical coupling that will occur between the manipulators and the base spacecraft. Note however that in order to make the GNC architecture as generic as possible, the SCS is considered as a black box capable of controlling the spacecraft in all 6 DOF. Thus, specific algorithms for controlling the SCS hardware components will be neglected at the time of writing. This restriction will be evaluated further on and eventually removed from the GNC architecture. Similarly to the configuration defined by Sommer and Ahrns (2013) in their concept, the sensors system of the spacecraft is composed of the typical suite of sensors for an attitude determination and control system (ADCS), plus a suite of sensors needed for the relative navigation purposes. 
The first suite of sensors is imagined to be composed of a coarse Earth/Sun sensor, a magnetometer, star sensors, gyroscopes and GPS receivers. The suite of sensors for the ARVD/C is envisioned to be composed instead of: an infrared (IR) camera and an optical camera for the far range phase 19 (i. e. for distances from 5 − 1 km) of the RV, of a scanning LIDAR for the close range approach (i. e. for distances from 1 km − 50 m) of the RV and for the pose estimation (in 3D mode and distances from 50 − 3 m) of the target and finally of a stereo camera system for the capturing and manipulation phases. Mission scenario In our mission scenario a single chaser spacecraft is expected to autonomously rendezvous and capture, in sequence, four Kosmos 3M rocket bodies and deorbit them. In order to perform this task it needs to successfully accomplish several major mission phases. Those phases are (Fehse, 2003): launch, phasing, far range rendezvous or homing, close range rendezvous and mating (or more specifically capture in our case). This paper focuses on the last two phases, but, for the sake of completeness, the description of the whole mission scenario is presented in what follows. Phasing with the target After the launch, the chaser is assumed to be injected into an initial near circular orbit that is in the same orbital plane as the one of the target, but is lower in altitude, as illustrated in Figure 4. During this initial phase, the chaser will be placed few tens of kilometers below and behind the target. This way the chaser will be in an orbit well below the sphere of uncertainty of approximately 1-2 km in diameter, assumed to be surrounding the target (DeLuca et al., 2013). At this point, after the successful initialization of the chaser spacecraft, the phasing maneuver is initiated in order to reduce the phasing angle between the two vehicles. To achieve this, the altitude of the chaser's orbit is gradually raised until the rendezvous gate, which is assumed to be around 3 km below and 5 km behind the target (see Figure 5). These steps, visible in Figure 5, consist in a series of Hohmann transfers and drift times. This approach offers several advantages over a direct injection into the target's orbit. First, the passive collision avoidance safety is guaranteed at all time given that, even in the case of chaser's complete control inability, the spacecraft will only drift below the target indefinitely. Second, the Hohmann transfers are generally the most fuel efficient orbital transfers in LEO, which makes this approach very fuel efficient. Third, the timing of Hohmann maneuvers and the duration of drift times can be appropriately tuned to meet specific mission requirements (Barbee et al., 2010). Absolute navigation sensors (such as the GPS receivers) are generally used in this first rendezvous phase. Autonomy is not needed in this phase since the commands are generally sent directly from the ground control. Far range rendezvous After reaching the rendezvous gate (see Figure 5), the far range rendezvous phase is preformed to bring the chaser in the immediate vicinity of the target and create the conditions for close range rendezvous or final approach. This phase consists respectively of a homing and closing rendezvous maneuvers, as illustrated in Figure 6 and Figure 7. In these two phases, only relative navigation is performed using the onboard sensors such as an optical or IR camera and/or a LIDAR. 
To switch between those sensors at least one intermediate hold point (at a distance of approximately 250-100 m) is necessary. At the start of the homing maneuver, the bearing angles (azimuth and elevation) are the most important parameters, but, as the distance between the two reduces, the relative distance and velocity gain more and more prominence (Castronuovo, 2011). Two possible profiles for the far range rendezvous phase have been identified in this paper as the most suitable for this kind of mission, given their inherent passive safety and, to some extent, fuel-efficiency. The first one, illustrated in Figure 6, is similar to the approach strategy of the European ATV. It consists at first of a Hohmann transfer, to bring the chaser to the same altitude as the target, but around 1 km behind it (P 1 in Figure 6). From this point a series of radial boost transfers, with waiting (station keeping) points, follows, to place the chaser in the immediate proximity of the target, about 50 m from it (P 2 in Figure 6). The waiting points are to be used for the switch-over of navigation sensors and re-evaluation of the relative distance between the two objects. When the chaser is at point P 2 , the first pose estimation of the target's motion is performed. A fly around maneuver, using a radial boost, is performed for inspection of the target. Finally, the chaser returns to the initial point (P 1 in Figure 6) where it performs another pose estimation of the target, before starting the final approach. The second method, illustrated in Figure 7, consists instead of two Hohmann transfers and some drift times that bring the chaser 250 m below and behind the target (P 1 in Figure 7). The advantages of this method are the same as those described for the phasing strategy. In this approach, the drift times are used for the switch-over of navigation sensors and re-evaluation of the relative distance between the two objects, just as the waiting points were used in the first strategy. The schedule of the Hohmann transfers and the amount of drift times can be appropriately tuned to ensure the convergence and accuracy of the navigation filter (Barbee et al., 2010). From point P 1 , a free drift of the chaser is allowed until the in-track distance between the two is zero (P 2 in Figure 7). During this phase a final estimation of the target's position is performed from below. An inspection of the target vehicle could also be performed given the relative vicinity of the two vehicles. Once the position has been estimated and the in-track distance is zero, a maneuver using Clohessy-Wiltshire targeting (C-W targeting) is performed to place the chaser on a safety ellipse (SE) around the target. The safety ellipse, illustrated in Figure 8, is an out of plane relative elliptical trajectory of the chaser around the target, that is fixed with respect to the target and never crosses its V-bar (see Figure 8) (Barbee et al., 2010). The projection of the ellipse onto the radial, cross-track plane (R-bar/H-bar or y/z plane visible in Figure 8) should be a circle of 50 m, thus guaranteeing the minimum distance between the two vehicles. The advantage of this approach lies in the possibility to appropriately design the SE to reach desirable illumination conditions required for the inspection, while guaranteeing at the same time the passive safety of the trajectory. The pose estimation of the target is to be performed while the chaser moves on the SE.
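To make the C-W targeting step mentioned above more concrete, the following minimal Python sketch computes the first impulse of a two-impulse transfer from the Clohessy-Wiltshire solution, under the usual assumptions of a circular target orbit and close proximity. The function names and the numerical values are ours and purely illustrative; they are not taken from the cited works.

```python
import numpy as np

def cw_matrices(n, t):
    """Clohessy-Wiltshire state transition sub-blocks for a circular target
    orbit with mean motion n [rad/s] and propagation time t [s].
    Axes: x radial, y along-track, z cross-track."""
    s, c = np.sin(n * t), np.cos(n * t)
    Prr = np.array([[4 - 3 * c, 0, 0],
                    [6 * (s - n * t), 1, 0],
                    [0, 0, c]])
    Prv = np.array([[s / n, 2 * (1 - c) / n, 0],
                    [-2 * (1 - c) / n, (4 * s - 3 * n * t) / n, 0],
                    [0, 0, s / n]])
    return Prr, Prv

def cw_two_impulse(r0, v0, rf, n, tof):
    """First impulse of a two-impulse C-W transfer from (r0, v0) to rf in tof."""
    Prr, Prv = cw_matrices(n, tof)
    v0_req = np.linalg.solve(Prv, rf - Prr @ r0)   # velocity needed after the burn
    return v0_req - v0                             # delta-v of the first burn

# Illustrative numbers only: ~950 km LEO (n ~ 1.01e-3 rad/s), hop from 250 m
# below and behind the target to a 50 m hold point in 15 minutes.
n = 1.01e-3
dv1 = cw_two_impulse(np.array([-250.0, -250.0, 0.0]), np.zeros(3),
                     np.array([0.0, -50.0, 0.0]), n, tof=900.0)
print(f"first impulse [m/s]: {dv1}")
```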
Close range rendezvous In both previously mentioned far range rendezvous strategies, the final approach phase begins with the acquisition of the capture axis (see the illustration on the right of Figure 6 and Figure 7). The approach trajectory will vary according to the closing method chosen and the requirements of the robotic capture mechanisms. However, in any case it shall guarantee passive safety and, to some extent, fuel efficiency. The capture axis will generally be the main axis of rotation of the target body. Independently of the previous approach strategy, once the capture axis has been reached, the maneuver will consist either of: a) a straight line trajectory, consisting of a series of hold points and constant rate motion within a predefined corridor (illustrated in Figure 6 and Figure 7) or b) an optimized trajectory that limits as much as possible the active safety requirement and is fuel-efficient. The final selection of one of the two depends greatly upon the requirements of the robotic capture mechanism that will be defined in future studies. Nevertheless, in both cases the capture approach lasts until the berthing box is reached or the conditions for the capture are met. The berthing box is defined essentially as a volume within which the chaser must stay in order to create the conditions necessary for the capture of the target vehicle; for a more detailed description please refer to (Fehse, 2003). In case something goes wrong a CAM is to be performed autonomously by the chaser. It is paramount that the autonomous pose estimation of the target is constantly updated during the capture phase to know at every moment the exact relative position and attitude of the target (exact within the limits of the pose estimation sensor precision). Once the berthing box has been reached the chaser needs to actively synchronize, within the required boundaries of the capture mechanism, its attitude motion with that of the target. Moreover, the chaser needs to actively maintain its position within the moving berthing box given the natural drifts that would otherwise occur in just a few minutes. The capture and manipulation Finally, the chaser deploys its robotic capturing mechanism and captures the target. After the attenuation of the shock and residual velocities, the rigid connection between the two spacecrafts is achieved. Transferred angular momentum, from the target to the chaser, is dissipated and the compound is stabilized. At this point the second manipulator will detach an HPM de-orbiting kit from the chaser and firmly attach it to the target (the definition of the HPM de-orbiting kit is out of the scope of the current paper; for further information please refer to the research performed by DeLuca et al. (2013)). A preferable attachment position is the main engine of the R/B, given its mechanical properties and alignment with the center of mass of the R/B. The envisioned attachment could use either an expandable umbrella mechanism (Castronuovo, 2011) or the so called corkscrew system (DeLuca et al., 2013) or even a clamp mechanism that would rigidly secure the de-orbiting kit to the main engine of the R/B. Afterwards, the chaser will reorient the composite system in the right direction and retreat to a safe location while the de-orbiting maneuver is performed. Subsequently, the chaser is free to perform the described sequence again, in order to reach and de-orbit the next object in the sequence. It is worth noting that the description of the chaser's robotic capture mechanism is intentionally vague given that its definition will be the scope of our future studies.
GNC concept The guidance, navigation and control system has to: a) process the information coming from sensors, b) plan the execution of appropriate maneuvers and c) perform them. Based on the above, the GNC architecture has been defined in this paper as "an abstract description of the entities of a GNC system and the relationship between those entities" (Nolet and Miller, 2007). Modularity is seen as one of the key features of this architecture given that it is envisioned to be built using the open source Robot Construction Kit (ROCK) software framework (http://rock-robotics.org/stable/index.html), specifically developed for robotic systems and with modularity in mind. The ROCK provides a wide variety of tools necessary to develop and test robotic systems for many applications. Particularly, it contains a multitude of ready-to-use drivers and modules, and can be easily extended by adding new components, facilitating the development of the GNC architecture and its implementation onto robotic hardware for testing purposes. Moreover, its open source nature is seen as another advantage, since the developed architecture should be easily accessible and modifiable by the scientific community. Development of the architecture on other platforms is not excluded, given the early stage of the research, but the ROCK framework is, at the time of writing, the chosen platform. Figure 9 illustrates the envisioned architecture. It consists of several software modules, each responsible for a particular function within the GNC system. As in the research of Nolet and Miller (2007), each software module is a set of algorithms capable of executing a particular task. Those modules and their principal tasks are: 1. navigation module: performing the pose estimation of the target 2. guidance module: performing trajectory planning towards the capture axis and ultimately towards the target with safety and fuel-efficiency in mind 3. control module: performing the execution of maneuvers according to the guidance function and suppression of external disturbances. Their more in depth description is illustrated further on in this section. Particular attention is devoted to the individuation and characterization of algorithms chosen for populating the relative modules. Their quantitative evaluation and modes of implementation within the GNC architecture are left for future research. For the sake of completeness, the robotics module, although not explicitly part of the GNC architecture (see Figure 9), will be described only briefly in this section, due to the initial stage of our research in this area. A future paper will be dedicated entirely to this topic and its integration with the spacecraft's GNC architecture. The MVM and FDIR modules (visible in Figure 9) are not considered at this stage of research although we are well aware of their importance in an autonomous system like this one. This is especially true in the last few meters of the close approach, when CAM capabilities of the chaser spacecraft are usually a requirement. It is worth noting that the architecture is built having in mind the most critical phase of an ADR mission, which is the close range rendezvous. The target of the mission, as described in previous sections, is assumed to be an uncooperative but known (in shape and approximate attitude) Kosmos 3M R/B.
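To make the modular decomposition of Figure 9 concrete, the following schematic Python sketch shows how the three GNC modules could be wired together in a periodic cycle. The class and method names are ours and purely illustrative; they are not ROCK component interfaces, and the module bodies are placeholders.

```python
from dataclasses import dataclass

@dataclass
class TargetState:
    position: tuple       # relative position estimate [m]
    attitude: tuple       # relative attitude estimate (e.g. quaternion)
    angular_rate: tuple   # estimated target angular rate [rad/s]

class NavigationModule:
    def estimate(self, lidar_cloud, imu_data) -> TargetState:
        """Pose estimation of the target from sensor data (placeholder)."""
        ...

class GuidanceModule:
    def plan(self, target: TargetState, chaser_state):
        """Reference trajectory towards the capture axis (placeholder)."""
        ...

class ControlModule:
    def command(self, reference, chaser_state):
        """Actuator commands tracking the reference (placeholder)."""
        ...

def gnc_cycle(nav, gui, ctl, sensors, chaser_state):
    # One GNC cycle: navigation -> guidance -> control, as in Figure 9.
    target = nav.estimate(sensors["lidar"], sensors["imu"])
    reference = gui.plan(target, chaser_state)
    return ctl.command(reference, chaser_state)
```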
Navigation module The task of the navigation module is to use a filter to: a) process the information about the states of the chaser and target vehicles and b) propagate this information in time by using the model of the spacecrafts' dynamics and information about the imparted commands. This information is then made available to the control and guidance modules for further processing (Fehse, 2003). The sensors used by this module are: a LIDAR (in 3D mode), to generate a 3D point cloud of the target, and inertial measurement units (IMUs) (i. e. gyros and/or magnetometers), to eliminate the ambiguity between pure rotation and translation (Kervendal et al., 2013). The reason for choosing a LIDAR is that this active sensor has already been used successfully in space; it is relatively insensitive to illumination conditions, and it is usable over a wide range of distances. Moreover, a working unit is present, as of the time of writing, in DFKI's facilities, which means that it could be used for real testing of the developed algorithms. The disadvantages of using such a sensor are the power consumption and the required minimum distance between the chaser and the target. Thus, these issues must be taken into account when defining the required onboard power and the characteristics of the robotic capture mechanism. Regarding the filter algorithms, a thorough survey of possible nonlinear attitude estimation algorithms has been provided by Crassidis et al. (2007). Based on the literature research, we have selected for this module the extended Kalman filter (EKF) and the unscented Kalman filter (UKF). The desirable features of such an algorithm are: fast convergence, robustness and stability in the whole state space of the mission (Nolet and Miller, 2007). The EKF has been used quite extensively in the aerospace industry in the last few decades, given the right balance it offers between computational requirements and performance. Moreover, it has the same mathematical scheme as the traditional KF, but it has the advantage of being usable in nonlinear systems, such as ours. All this made it a good candidate as the baseline filter technique. However, the convergence and robustness of the algorithm are not a priori guaranteed as in the case of the KF. Additionally, the need to calculate the Jacobian functions might prove difficult, if the functions of the dynamics or measurements prove not to be differentiable (Nolet and Miller, 2007). Hereafter, we present a generic EKF algorithm omitting some theoretical considerations. More detail on the presented algorithm can be found in (Wan and van der Merwe, 2002). The basic idea behind the EKF is to estimate the state of a discrete-time nonlinear dynamic system that can be described with (Wan and van der Merwe, 2002; LaViola, 2003):

$$x_{k+1} = F(x_k, u_k, v_k), \qquad y_k = H(x_k, n_k) \qquad (1)$$

where $x_k$ is the unobserved state of the system, $u_k$ is a known control input vector, $y_k$ is the observed measurement signal, $v_k$ is the process noise and $n_k$ is the observation noise. The system dynamic models, represented by the functions $F$ and $H$, are assumed to be known. The EKF, like all Kalman filters, is a recursive process which uses the dynamic model of the system to make an estimate of its current state and correct it using measurement updates. With this in mind, the explicit equations of a generic EKF algorithm are as follows (Wan and van der Merwe, 2002). For $k \in \{1, \ldots, \infty\}$, the time update equations of the extended Kalman filter are:

$$\hat{x}_k^- = F(\hat{x}_{k-1}, u_k, \bar{v}), \qquad P_{x_k}^- = A_{k-1} P_{x_{k-1}} A_{k-1}^T + B_k R_v B_k^T$$

and the measurement update equations are:

$$K_k = P_{x_k}^- C_k^T \left( C_k P_{x_k}^- C_k^T + D_k R_n D_k^T \right)^{-1}$$
$$\hat{x}_k = \hat{x}_k^- + K_k \left( y_k - H(\hat{x}_k^-, \bar{n}) \right), \qquad P_{x_k} = \left( I - K_k C_k \right) P_{x_k}^-$$

where $A_k \triangleq \partial F / \partial x \,\big|_{\hat{x}_k}$, $B_k \triangleq \partial F / \partial v \,\big|_{\bar{v}}$, $C_k \triangleq \partial H / \partial x \,\big|_{\hat{x}_k^-}$ and $D_k \triangleq \partial H / \partial n \,\big|_{\bar{n}}$ are the Jacobians of the system models; $R_v$ and $R_n$ are the covariances of $v_k$ and $n_k$, respectively; $x_0$ is the initial state of the system; $P_{x_0}$ is the expected initial state error; $\hat{x}_k^-$ is the optimal prediction (i.e. prior mean) of $x_k$; $P_{x_k}^-$ is the prediction of the covariance of $x_k$; $K_k$ represents the optimal gain term at the step $k$ and $I$ is the identity matrix. $\bar{n}$ and $\bar{v}$ are the values of the noise means and are equal to $E[n]$ and $E[v]$, respectively. The superscript $(-)$ indicates a value prior to a state update and $(\wedge)$ indicates an estimated value.
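As an illustration of how the above recursion could be prototyped within the navigation module, the following minimal Python/NumPy sketch implements a generic EKF with additive noise. The dynamics and measurement functions, their Jacobians, the noise covariances and the toy relative-range model are placeholders of our own choosing, not taken from the cited works.

```python
import numpy as np

class ExtendedKalmanFilter:
    """Minimal EKF sketch with additive process/measurement noise.
    f(x, u): propagation function, F_jac(x, u): its Jacobian w.r.t. x.
    h(x): measurement function, H_jac(x): its Jacobian w.r.t. x.
    Q, R: process and measurement noise covariances (placeholders)."""

    def __init__(self, f, F_jac, h, H_jac, Q, R, x0, P0):
        self.f, self.F_jac, self.h, self.H_jac = f, F_jac, h, H_jac
        self.Q, self.R = Q, R
        self.x, self.P = x0.copy(), P0.copy()

    def predict(self, u):
        # Time update: propagate the state estimate and its covariance.
        A = self.F_jac(self.x, u)
        self.x = self.f(self.x, u)
        self.P = A @ self.P @ A.T + self.Q
        return self.x

    def update(self, y):
        # Measurement update: correct the prediction with the observation.
        C = self.H_jac(self.x)
        S = C @ self.P @ C.T + self.R            # innovation covariance
        K = self.P @ C.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ (y - self.h(self.x))
        self.P = (np.eye(len(self.x)) - K @ C) @ self.P
        return self.x

# Hypothetical 1D constant-velocity relative-range model, only for illustration.
dt = 0.1
f = lambda x, u: np.array([x[0] + dt * x[1], x[1]])
F = lambda x, u: np.array([[1.0, dt], [0.0, 1.0]])
h = lambda x: x[:1]                              # range-only measurement
H = lambda x: np.array([[1.0, 0.0]])
ekf = ExtendedKalmanFilter(f, F, h, H, Q=1e-4 * np.eye(2), R=np.array([[0.01]]),
                           x0=np.array([100.0, -0.5]), P0=np.eye(2))
ekf.predict(None)
ekf.update(np.array([99.9]))
```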
The UKF, on the other hand, has not yet been used in space, but, given that it does not require the computation of the Jacobian functions, it has the advantage of ease of implementation over the EKF. Moreover, with respect to the EKF it should present: lower error, faster convergence and higher-order expansions. Its disadvantage is that it requires as much as twice the computational load (Nolet and Miller, 2007; Crassidis et al., 2007). The basic idea behind the UKF is to use a deterministic sampling approach to capture the mean and covariance estimates with a minimal set of points, instead of linearizing a nonlinear function using Jacobian matrices (LaViola, 2003). Hereafter, we present a generic UKF algorithm omitting some theoretical considerations, as we did in the case of the EKF. More detail on the presented algorithm can be found in (Wan and van der Merwe, 2002). A generic UKF algorithm can be described by the following (Wan and van der Merwe, 2002). For $k \in \{1, \ldots, \infty\}$, calculate the sigma points:

$$\mathcal{X}_{k-1}^a = \left[ \hat{x}_{k-1}^a \quad \hat{x}_{k-1}^a + \gamma\sqrt{P_{k-1}^a} \quad \hat{x}_{k-1}^a - \gamma\sqrt{P_{k-1}^a} \right]$$

The time update equations are:

$$\mathcal{X}_{k|k-1}^x = F\!\left(\mathcal{X}_{k-1}^x, u_{k-1}, \mathcal{X}_{k-1}^v\right), \qquad \hat{x}_k^- = \sum_{i=0}^{2L} W_i^{(m)} \mathcal{X}_{i,k|k-1}^x$$
$$P_k^- = \sum_{i=0}^{2L} W_i^{(c)} \left(\mathcal{X}_{i,k|k-1}^x - \hat{x}_k^-\right)\left(\mathcal{X}_{i,k|k-1}^x - \hat{x}_k^-\right)^T$$
$$\mathcal{Y}_{k|k-1} = H\!\left(\mathcal{X}_{k|k-1}^x, \mathcal{X}_{k-1}^n\right), \qquad \hat{y}_k^- = \sum_{i=0}^{2L} W_i^{(m)} \mathcal{Y}_{i,k|k-1}$$

The measurement update equations are:

$$P_{\tilde{y}_k\tilde{y}_k} = \sum_{i=0}^{2L} W_i^{(c)} \left(\mathcal{Y}_{i,k|k-1} - \hat{y}_k^-\right)\left(\mathcal{Y}_{i,k|k-1} - \hat{y}_k^-\right)^T, \qquad P_{x_k y_k} = \sum_{i=0}^{2L} W_i^{(c)} \left(\mathcal{X}_{i,k|k-1} - \hat{x}_k^-\right)\left(\mathcal{Y}_{i,k|k-1} - \hat{y}_k^-\right)^T$$
$$K_k = P_{x_k y_k} P_{\tilde{y}_k\tilde{y}_k}^{-1}, \qquad \hat{x}_k = \hat{x}_k^- + K_k \tilde{y}_k, \qquad P_k = P_k^- - K_k P_{\tilde{y}_k\tilde{y}_k} K_k^T$$

where $x^a = [x^T \; v^T \; n^T]^T$, $\mathcal{X}^a = [(\mathcal{X}^x)^T \; (\mathcal{X}^v)^T \; (\mathcal{X}^n)^T]^T$, $\gamma = \sqrt{L + \lambda}$; $\lambda$ is a composite scaling parameter; $L$ is the dimension of the augmented state; $R_v$ is the process noise covariance; $R_n$ is the measurement noise covariance; $W_i$ are weights as expressed in (Wan and van der Merwe, 2002); $\tilde{y}_k = y_k - \hat{y}_k^-$; and $\mathcal{X}$ is a matrix of $2L + 1$ sigma vectors $\mathcal{X}_i$ as defined in (Wan and van der Merwe, 2002). In order to use the mentioned algorithms, the single pose estimation of the target needs to be calculated. This is done in our case by using the open source Point Cloud Library (PCL). The C++ library is already integrated into the ROCK framework and contains all the state of the art algorithms for 3D point cloud processing. The only hurdle that we think could appear is the significant amount of resources that such a library generally requires given its terrestrial nature. Thus, a quantitative evaluation of the required resources of the library for our purposes is a next logical step to assess its usability on a robotic spacecraft such as ours. However, this approach, if viable, would significantly speed up the implementation of the navigation module and ultimately of the whole architecture.
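Complementing the EKF sketch above, the following Python sketch shows the sigma-point mechanics of the UKF described earlier in this module. For brevity it implements the simpler additive-noise (non-augmented) variant rather than the augmented-state formulation given in the equations; the functions f and h and all numerical parameters are placeholders of our own choosing.

```python
import numpy as np

def unscented_filter_step(f, h, Q, R, x, P, u, y, alpha=1e-3, beta=2.0, kappa=0.0):
    """One predict/update cycle of a UKF with additive noise (sketch).
    f(x, u): propagation function, h(x): measurement function (placeholders)."""
    L = x.size
    lam = alpha**2 * (L + kappa) - L
    gamma = np.sqrt(L + lam)

    # Weights for the mean (Wm) and covariance (Wc) recombinations.
    Wm = np.full(2 * L + 1, 1.0 / (2 * (L + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (L + lam)
    Wc[0] = lam / (L + lam) + (1 - alpha**2 + beta)

    # Sigma points around the current estimate.
    S = gamma * np.linalg.cholesky(P)
    sigmas = np.column_stack([x, x[:, None] + S, x[:, None] - S]).T  # (2L+1, L)

    # Time update: propagate sigma points and recombine.
    Xp = np.array([f(s, u) for s in sigmas])
    x_pred = Wm @ Xp
    P_pred = Q + sum(Wc[i] * np.outer(Xp[i] - x_pred, Xp[i] - x_pred)
                     for i in range(2 * L + 1))

    # Measurement update.
    Yp = np.array([h(s) for s in Xp])
    y_pred = Wm @ Yp
    Pyy = R + sum(Wc[i] * np.outer(Yp[i] - y_pred, Yp[i] - y_pred)
                  for i in range(2 * L + 1))
    Pxy = sum(Wc[i] * np.outer(Xp[i] - x_pred, Yp[i] - y_pred)
              for i in range(2 * L + 1))
    K = Pxy @ np.linalg.inv(Pyy)
    x_new = x_pred + K @ (y - y_pred)
    P_new = P_pred - K @ Pyy @ K.T
    return x_new, P_new
```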
Guidance module The task of the guidance module is to define, in time, a set of nominal values that will be used by the control module as a reference for the required maneuvers (Fehse, 2003). More specifically, the guidance function has to perform a set of actions that depend on the mission phase (Fehse, 2003). In the case of an ADR mission the most critical task of the guidance module is the path planning of the final approach trajectory. The criticality of this trajectory is principally given by the stringent safety requirement (in particular the passive safety requirement) that such a phase of the RV involves. Nevertheless, the safety is not the only desirable feature that such a trajectory should have. Other features that should be considered by the relative path planning algorithm are the propellant consumption, the robustness to perturbations, the plume impingement and the line of sight. Moreover, the low computational capabilities of space qualified computers limit quite heavily the number of algorithms that could be practically applied. Thus, this additional requirement should also be taken into consideration during the search for the best possible path planning algorithm (Nolet and Miller, 2007). Numerous methods have been developed over the years to solve this optimization problem, as mentioned in Subsection 2.3. The ones we selected for this concept based on the literature research are the inbound decelerating glideslope algorithm and the MILP-based path planning algorithm developed by Breger and How (2008). The glideslope algorithm, similarly to the EKF described in the previous subsection, has been extensively used in space for real time trajectory planning. The reason behind this lies in its simplicity, robustness and low computational requirements (Nolet and Miller, 2007; Hablani et al., 2002). Moreover, it has been successfully used for autonomous docking of SPHERES microsatellites (Nolet and Miller, 2007). Thus, the inbound decelerating glideslope algorithm was the clear choice for a baseline of our guidance module. The algorithm is especially suited for straight line approaches given that it calculates the velocity profile in the phase plane, using a finite number of thruster commands. This makes it a hybrid algorithm incorporating also a velocity control that, if paired with another control algorithm, could be used directly for planning and executing the approach along and transverse to the capture axis. In addition, it does incorporate some sort of plume impingement feature, given that it reduces the amount of thrust towards the end of the trajectory. Nonetheless, it does not account for the propellant consumption or passive safety of the trajectory (Nolet and Miller, 2007), which was the motivation for the selection of another, more advanced algorithm that could ultimately solve the above mentioned optimization problem. The generic mathematical expression of the inbound decelerating glideslope algorithm can be described with the following equation (Nolet and Miller, 2007):

$$\dot{\rho} = a\rho + \dot{\rho}_T$$

where $\rho$ is the linear distance between the chaser and target, $\dot{\rho}$ is the approach velocity, $\dot{\rho}_T$ is the desired arrival velocity and $a$ is the glideslope ($< 0$). The solution of the previous differential equation is (Nolet and Miller, 2007):

$$\rho(t) = \rho_0 e^{at} + \frac{\dot{\rho}_T}{a}\left(e^{at} - 1\right)$$

The total time of the maneuver that the spacecraft employs to go from a $\rho_0$ to 0 can be calculated with the following expression (Nolet and Miller, 2007):

$$T = \frac{1}{a}\ln\!\left(\frac{\dot{\rho}_T}{a\rho_0 + \dot{\rho}_T}\right)$$

More detail on the presented algorithm can be found in (Hablani et al., 2002).
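A minimal numerical sketch of the decelerating glideslope profile, built directly from the equations above, is given below. The glideslope is taken as a = (rhod0 - rhodT) / rho0, which yields the stated initial and arrival rates; the numbers are illustrative only and do not correspond to any specific mission requirement.

```python
import numpy as np

def glideslope_profile(rho0, rhod0, rhodT, n_pulses):
    """Inbound decelerating glideslope, per the equations above (sketch).
    rho0    : initial range to the target [m]
    rhod0   : initial (negative) approach rate [m/s]
    rhodT   : desired (negative) arrival rate [m/s]
    n_pulses: number of thruster firings along the approach
    Returns firing times, ranges and commanded range rates."""
    a = (rhod0 - rhodT) / rho0                            # glideslope (< 0)
    T = (1.0 / a) * np.log(rhodT / (a * rho0 + rhodT))    # total maneuver time
    t = np.linspace(0.0, T, n_pulses + 1)
    rho = rho0 * np.exp(a * t) + (rhodT / a) * (np.exp(a * t) - 1.0)
    rho_dot_cmd = a * rho + rhodT                         # commanded approach rate
    return t, rho, rho_dot_cmd

# Illustrative numbers only: 50 m approach, -0.5 m/s initial, -0.05 m/s arrival.
t, rho, v = glideslope_profile(50.0, -0.5, -0.05, 10)
print(f"maneuver time: {t[-1]:.1f} s, final range: {rho[-1]:.2f} m")
```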
The MILP-based path planning algorithm selected as the advanced algorithm for the guidance module is the one developed by Breger and How (2008). The following is its generic formulation omitting some theoretical considerations. For more information please refer to (Breger and How, 2008). The linearized dynamics of a chaser spacecraft being in a state $x_k$ at time $k$ can be written as (Breger and How, 2008):

$$x_{k+1} = A_d x_k + B_d u_k$$

where $A_d$ and $B_d$ are the state transition matrix and the discrete input matrix for a single time step, and $u_k$ is the input vector at the step $k$. The state of the spacecraft at any future step $k$ can be described by (Breger and How, 2008):

$$x_k = A_d^{\,k}\, x_0 + \Gamma_k\, \mathbf{u}_{0:k-1}$$

where $\Gamma_k$ is the discrete convolution matrix and $\mathbf{u}_{0:k-1}$ the stacked vector of the inputs applied up to step $k-1$. To solve the optimization problem, a cost function that penalizes exclusively the fuel usage is used (Breger and How, 2008):

$$J = \sum_{k=0}^{N-1} \left\| u_k \right\|_1$$

where the 1-norm cost is used to take into account the fuel expenditure. This way the optimization problem can be formed to optimize the control input command and, at the same time, constrain the states of the system (Breger and How, 2008). With this in mind the selected MILP algorithm consists of the optimization of the following (Breger and How, 2008):

$$\min_{u} \; J = \sum_{k=0}^{N-1} \|u_k\|_1 \qquad (31)$$
$$\text{s.t.} \quad u_k^{min} \le u_k \le u_k^{max}, \quad \forall\, k \in [0,\, N-1] \qquad (32)$$
$$A_k^{LOS}\, x_k \le b_k^{LOS}, \quad \forall\, k \in [1,\, N] \qquad (33)$$
$$A_N^{Term}\, x_N \le b_N^{Term} \qquad (34)$$
$$x_k^{FT} \notin \mathcal{T}_k, \quad \forall\, k \in [T,\, N+S], \; \forall\, T \in \mathcal{F} \qquad (35)$$

where Equation 32 constrains directly the input at each time step between the vector bounds $u_k^{min}$ and $u_k^{max}$; Equation 33 describes the requirements of a line-of-sight (LOS) (with respect to the target satellite); $A_k^{LOS}$ and $b_k^{LOS}$ describe the states within the LOS cone at step $k$; Equation 34 describes, through the state terms $A_N^{Term}$ and $b_N^{Term}$, the terminal constraint at the end of the planning horizon that the spacecraft must achieve for a safe docking; and finally Equation 35 defines the safety horizon, i. e. the period of time after a failure during which both spacecrafts are guaranteed not to collide. In the latter equation $x_k^{FT}$ is a chaser state at some step $k < N$, in the planning horizon, after an occurred failure at a step $T$ and is defined by Breger and How (2008) as:

$$x_k^{FT} = A_d^{\,k-T}\, x_T \qquad (36)$$

In Equation 35, $\mathcal{T}_k$ defines the set of position states occupied by the target, $S$ is the number of steps the safety horizon lasts after the end of the nominal trajectory and $\mathcal{F}$ is the set of every potential failure time at which the system must guarantee collision avoidance even during a GNC system shutdown (Breger and How, 2008). Starting from this algorithm it is possible to expand it even further to guarantee a longer safety horizon and at the same time prevent failure trajectories from drifting away from the target. To achieve this, an invariant formulation of the algorithm must be used. This is done by constraining the state of the chaser in the failure trajectory at some step $k$ to be the same one full orbit after $k$. Mathematically this is done by adding another constraint to the algorithm described by Equations 31-35 (Breger and How, 2008):

$$x_{k+N_0}^{FT} = x_k^{FT} \qquad (37)$$

where $N_0$ is the number of steps in an orbit. The state of the chaser is propagated forward using a linear state transition matrix. With this formulation all failure orbits are guaranteed to be invariant with respect to the target, which means that if the invariance constraints are properly imposed, all failure trajectories will result in circular trajectories relative to the target at no fuel expenditure. When compared with a strict V-bar straight line approach, the fuel savings that Breger and How (2008) were able to obtain in their case study with the invariant formulation of the algorithm were significant, around 9 times. However, solving this type of algorithm requires quite intensive calculations, which makes its real-time implementation very difficult. A solution to this problem could be to use a linear programming (LP) formulation that allows the reduction of the computational load by a factor of 150, while reducing the fuel optimality only by a factor of two. This was done by Breger and How (2008) by using the convex safety formulation, as opposed to the non-convex safety formulation mentioned until now. The latter requires the chaser to remain outside a collision avoidance region while the former constrains the failure trajectories to a region known not to contain the target (Breger and How, 2008). Mathematically this is achieved by adding to the algorithm described until this point (see Equations 31-37) the convex safety constraints (Breger and How, 2008):

$$H_y\, x_k^{FT} \le y_{min}, \quad \forall\, k, \; \forall\, T \in \mathcal{F} \qquad (38)$$

where $y_{min}$ is the maximum in-track position of the spacecraft and $H_y$ is a row vector that extracts the scalar in-track component.
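To illustrate the kind of fuel-optimal planning problem discussed above, the following Python sketch solves a much-simplified, convex stand-in for the Breger and How formulation: a 1-norm fuel minimization with only a terminal equality constraint, over a single-axis double-integrator model instead of the relative-orbit dynamics, and without the LOS, safety-horizon and invariance constraints. The 1-norm is handled with the standard split into non-negative variables so that the problem can be solved as an LP with SciPy; all names and numbers are ours and purely illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def min_fuel_lp(x0, x_target, N, dt, u_max):
    """Sketch: 1-norm fuel-minimal acceleration sequence driving a single-axis
    double integrator from x0 to x_target in N steps.
    The 1-norm is handled by splitting u = u_plus - u_minus with u_plus, u_minus >= 0."""
    A = np.array([[1.0, dt], [0.0, 1.0]])        # state transition (position, velocity)
    B = np.array([[0.5 * dt**2], [dt]])          # discrete input matrix

    # Terminal equality constraint: A^N x0 + sum_k A^(N-1-k) B u_k = x_target
    G = np.hstack([np.linalg.matrix_power(A, N - 1 - k) @ B for k in range(N)])
    A_eq = np.hstack([G, -G])                    # columns for u_plus and u_minus
    b_eq = x_target - np.linalg.matrix_power(A, N) @ x0

    c = np.ones(2 * N)                           # minimize sum(u_plus + u_minus)
    res = linprog(c, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, u_max)] * (2 * N), method="highs")
    u = res.x[:N] - res.x[N:]                    # recover the signed accelerations
    return u, res.fun

# Illustrative numbers only: close a 50 m gap and arrive at rest in 100 steps of 1 s.
u, fuel = min_fuel_lp(np.array([-50.0, 0.0]), np.array([0.0, 0.0]),
                      N=100, dt=1.0, u_max=0.05)
print(f"total commanded |u|: {fuel:.3f}")
```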
Control module The task of the control function is to generate appropriate commands (i. e. control forces and torques) to achieve the nominal attitude and trajectory, according to the discrepancies of the actual state vector from the desired one. Additionally, it has to ensure the stability of the vehicle (Fehse, 2003). In the homing and closing phases this is done by controlling separately the attitude and the trajectory by using open loop maneuvers based on the initial and final relative states (Luo et al., 2014; Fehse, 2003). As the distance between the two objects reduces, the accuracy requirements become more stringent and closed loop control must be employed (Fehse, 2003). This is particularly true for the final approach phase, where the maximum relative distance is only 50 m in our mission scenario. A single-input-single-output (SISO) control system can be used to control separately the translation and the rotation of the chaser until very few meters from the target, given that the coupling between the two is relatively small. In close proximity, however, the mentioned motions are coupled and a multiple-input-multiple-output (MIMO) control should be considered. This requirement is nevertheless less stringent in a case such as ours where the chaser has to acquire a berthing box and not a particular docking port (Fehse, 2003). These considerations indicate that for the last few meters the control module requires an advanced multi-variable controller. Other desirable features of the controller are stability, robustness and fuel efficiency (Nolet and Miller, 2007). For this purpose a great deal of research has been performed, as mentioned in Subsection 2.3. Based on that research we have selected in particular two controllers: the proportional-integral-derivative (PID) and the LQR. The PID was chosen to represent a baseline for the control module, given its proven usage in space, low computational requirements and general robustness (Nolet and Miller, 2007). The LQR, on the other hand, was selected as a more advanced controller capable of dealing with the optimization process (Nolet and Miller, 2007). The PID is a well known, commonly used controller which has a proven space heritage. It does not solve the optimization problem, but, in comparison to the others, is easier to implement and could potentially deliver a higher degree of accuracy (Nolet and Miller, 2007). The "textbook" version of the continuous-time PID controller can be represented by (Haugen, 2010):

$$u(t) = u_0 + K_p\, e(t) + \frac{K_p}{T_i}\int_0^t e(\tau)\, d\tau + K_p T_d\, \frac{de(t)}{dt} \qquad (39)$$

where $u_0$ is the control bias or manual control value to be tuned accordingly, $u$ is the control command output, $K_p$, $T_i$ and $T_d$ are the proportional gain, integral time and derivative time, respectively, and $e$ is the controller error defined as:

$$e(t) = r(t) - y(t)$$

where $r$ is the reference and $y$ is the measured process value. However, given the discrete nature of the GNC architecture, this standard form is not suitable to be implemented in it. For this we need a discrete-time expression of Equation 39. Following the discretization implemented in (Haugen, 2010), the expression of the discrete-time PID controller is as follows (Haugen, 2010):

$$u(t_k) = u_0 + K_p\, e(t_k) + \frac{K_p T_s}{T_i}\sum_{i=1}^{k} e(t_i) + \frac{K_p T_d}{T_s}\left[e(t_k) - e(t_{k-1})\right]$$

where $u(t_k)$ is the control command at the step $t_k$ and $T_s$ is the time step or sampling interval (i.e. typically 0.1 s in commercial controllers).
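A minimal Python sketch of the discretized form above is given below. The gains, sampling time and bias are placeholders to be tuned for the actual 6-DOF control problem, and a flight implementation would additionally need anti-windup and output saturation, which are omitted here.

```python
class DiscretePID:
    """Minimal discrete-time PID per the discretized form above (sketch)."""

    def __init__(self, Kp, Ti, Td, Ts, u0=0.0):
        self.Kp, self.Ti, self.Td, self.Ts, self.u0 = Kp, Ti, Td, Ts, u0
        self.e_sum = 0.0      # running sum of the error (integral term)
        self.e_prev = 0.0     # previous error (derivative term)

    def step(self, reference, measurement):
        e = reference - measurement
        self.e_sum += e
        u = (self.u0
             + self.Kp * e
             + (self.Kp * self.Ts / self.Ti) * self.e_sum
             + (self.Kp * self.Td / self.Ts) * (e - self.e_prev))
        self.e_prev = e
        return u

# Illustrative use: regulate a single relative-position channel at 10 Hz.
pid = DiscretePID(Kp=0.8, Ti=20.0, Td=2.0, Ts=0.1)
u_cmd = pid.step(reference=0.0, measurement=-0.4)
```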
LQRs, on the other hand, have never been used in space despite being a class of well established algorithms in the control community. However, they have been used on more than one occasion for theoretical studies of close range proximity operations, given their design flexibility and inherent ability to optimize cost functions. They have the same computational requirements as the PID, but are more difficult to implement (Nolet and Miller, 2007). Hereafter, a generic form of a typical LQR algorithm is presented omitting some theoretical considerations. More detail on the presented algorithm can be found in (Stengel, 1986; Nolet and Miller, 2007). Let us first define a discrete time-variant controllable system as follows (Nolet and Miller, 2007):

$$x_{k+1} = A_k x_k + B_k u_k$$

where $x_k$ is a state vector at time $t_k$, $A_k$ and $B_k$ are the dynamic and control matrices, respectively, and $u_k$ is a control input vector. With that in mind, the optimization problem consists in finding the control gain matrix $K_k$ that minimizes the quadratic cost function, $J$, associated with the state and control inputs, over a finite horizon of steps, $N$ (Nolet and Miller, 2007):

$$J = \frac{1}{2}\, x_N^T Q_f\, x_N + \frac{1}{2}\sum_{k=1}^{N-1}\left( x_k^T Q\, x_k + u_k^T R\, u_k \right)$$

where $Q$ and $Q_f$ are positive semi-definite state weight matrices and $R$ is a positive definite control weight matrix. The optimal control input has the following form (Nolet and Miller, 2007):

$$u_k = -K_k x_k$$

where the optimal gain matrix, which solves the problem, is calculated using:

$$K_k = \left( R + B_k^T P_{k+1} B_k \right)^{-1} B_k^T P_{k+1} A_k$$

and $P_k$ satisfies the algebraic Riccati equation

$$P_k = Q + A_k^T P_{k+1} A_k - A_k^T P_{k+1} B_k \left( R + B_k^T P_{k+1} B_k \right)^{-1} B_k^T P_{k+1} A_k$$

The recursion process for $P$ is initiated with the following equation (Nolet and Miller, 2007):

$$P_N = Q_f$$

and is solved backward in time (i.e. from k = N, . . . , 1). It is worth noting that the weight matrices $Q$, $Q_f$ and $R$ are to be determined and tuned appropriately by the user to meet the required behavior of the controller (Nolet and Miller, 2007). Their nonlinear counterparts, the SDRE controllers, appear even more attractive, given their ability to account for perturbations and the nonlinear relative dynamics of an ADR mission. The only disadvantage is that they generally require significant computational power. One approach to solve this was developed by Di Mauro (2013), but given the novelty of the approach we have to perform further in-depth research and quantitative analysis to rule in or out this intriguing nonlinear technique. Robotics module The tasks of the robotics module are essentially to: 1. control the capture of a tumbling target, by means of a robotic manipulator 2. stabilize the compound (i. e. chaser plus target), while limiting the transfer of the angular momentum from the target to the chaser. These tasks are readily solved on the ground; in space, however, the control problem arises from the fact that any motion of the manipulator exerts reaction effects on the mounting spacecraft. This leads to a series of constraints that a control architecture of a free-floating robotic spacecraft must take into consideration during the operation of its manipulator. The most prominent are (Ellery, 2004): 1. generalized Jacobian is required to derive the orientation of the spacecraft 2. robot kinematics are affected by dynamic properties of both the spacecraft and the manipulator 3. dynamic singularities, function of both robot kinematics and dynamic properties of the manipulator and the spacecraft, occur in the workspace of the robot 4.
joint angle configuration is path dependent due to the non-holonomic redundancy. Moreover, due to the very small magnitude of the existing dissipative forces in orbit, the control architecture of the robotic spacecraft must limit the impact forces and torques transmitted to the target body during the contact phase. At the same time, the control architecture must optimize the configuration of the robotic spacecraft during the pre-impact phase to counteract the angular momentum of the target spacecraft once the latter is safely grasped (i.e. during the post-impact phase). Up until now, there has been a vast amount of literature covering the various phases of the robotic capture of an uncooperative target but, just as in the case of the GNC, it is difficult to choose one method which could readily solve the entire problem. Moreover, most of the studies concentrate on individual operations (Yoshida et al., 2006) without considering the whole control problem of the capture process. Furthermore, most of the proposed methods are developed having in mind only the limited resources of an on-board computer. Thus, they frequently do not guarantee the feasibility of a planned trajectory and do not exploit the nonlinear nature of the robot kinematics to optimize the grasping of a tumbling target (Lampariello and Hirzinger, 2013). The proposed control architecture, illustrated in Figure 10, is divided into two modules: an onboard (i.e. on-line) module and an on-ground (i.e. off-line) module. The latter consists of a target motion simulation and prediction module along with a motion planner based on learning algorithms. The former instead resides within the robotics module, outside the GNC architecture (see Figure 9), in order to enhance the computational efficiency of the onboard computer (Ellery, 2004). It uses the calculated off-line solution as an initial guess for the trajectory generation and control of the robotic arm in real time. The reason behind this division lies in the computational requirements of the motion planner, which cannot be performed in a reasonable time with the computational power of today's onboard computers. Moreover, it is worth noting that this computationally intensive task has to be performed just once, given the dynamic properties of the robotic chaser spacecraft and the target's geometry. Thus, it makes more sense to do it on the ground and upload the result to the spacecraft before the capture maneuver. The described control architecture uses coordinated manipulator/spacecraft motion control, known in terrestrial robotics as full-body control, to optimize the whole configuration of the robotic spacecraft during the grasping task and to limit the angular momentum transmitted from the target body to the base spacecraft after the grasping task. (Figure 10: Concept of the robotic control architecture.) The off-line motion planner will be based on machine learning (the selection of a particular learning algorithm is still a work in progress, so no specific algorithm is mentioned in this paper) and will be dedicated to the one-time, off-line identification of (a) the most suitable grasping point, (b) the workspace analysis and (c) the reachability optimization. Two possible methods were identified for this calculation: (a) the black-box approach and (b) parametrization. In the first case, the learning algorithm selects and evolves on its own the most suitable outputs for the optimization process, starting from the current states of the chaser and target. However, it should be noted that this approach is only feasible for relatively simple optimization problems or when the search space is well defined.
Outside these boundaries the algorithm could simply fail to perform the optimization process. The second approach instead relies on an operator to select the outputs to be optimized and to parametrize them so that they can then be evolved by a learning algorithm. This approach assures the desired performance of the optimized configuration, although the first method appears the more interesting one, since it could give rise to new and unexpected configurations. Nevertheless, the expertise and knowledge of an operator cannot be replaced by the first method. Thus, the best solution would be to use the black-box approach as the starting point of a further optimization process based on the parametrization of the selected outputs. The expected advantages of the proposed control architecture over the existing methods would be: 1. manipulator and base spacecraft motions would be constrained; 2. dynamic singularities would be avoided; 3. dynamic coupling would be used to facilitate the capture maneuver; 4. the angular momentum of the whole system would be significantly limited. Validation The development of a GNC architecture is, however, just one piece of the puzzle, given that the developed architecture will need to be appropriately tested. Difficulties in the computer rendering of the space scene, as observed by the chaser's navigation sensors (e.g. LIDAR, IR and optical cameras), as well as in modeling the dynamics of the coupled system, might make this an arduous task. Thus, initial software testing must be followed by hardware-in-the-loop (HIL) testing in order to assess the adequacy of the developed architecture. A system capable of performing such a task is the HIL simulation system for orbital rendezvous maneuvers at DFKI-RIC, which was developed in the INVERITAS project (Paul et al., 2014). The facility is located in a 24 m long, 12 m wide and 10 m high hall. It uses a cable robot system able to move a chaser platform of up to 150 kg in three dimensions, with one rotational axis. One industrial robotic arm is used for the movements of a target vehicle. Both systems move the chaser and the target according to a real-time software simulation of orbital dynamics, so that the relative movement of both objects inside the facility matches the movements that would occur in orbit. The system can simulate an approach of up to 16.5 m inside the available operational space. The lighting system simulates the in-orbit illumination conditions, eliminating the need for computer rendering of the space scene. Conclusion and future work Up until recently the space debris issue was seen as something straight out of science fiction. Today, thanks to two recent unfortunate collision events (one of which was intentional) and to in-depth studies, the space debris issue has gained more visibility. Nevertheless, the problem remains and, if we do not act quickly, access to space as we know it today could become a thing of the past. Thus, active removal of intact hardware has to be performed routinely over the next few hundred years if we are going to stabilize the space debris environment. In order to do this, space technologies need to make a significant leap forward. Most of those technologies are related to the GNC system and to the ability of a chaser spacecraft to autonomously detect, approach and capture a target.
Within this context, this paper presents a preliminary design of a GNC architecture envisioned specifically to tackle the ADR problem by means of a robotic system. Current state-of-the-art architectures are either envisioned for automatic systems, with humans in the loop, or they lack the ability to deal with the robotic capture phase. The GNC architecture presented here should fill that gap by including state-of-the-art algorithms and a robotics module. Moreover, its modular structure, based on the open-source ROCK framework, should enable the scientific community to quickly and easily modify the architecture to its own needs, once completed. Given the preliminary status of the concept, further quantitative evaluation of the selected algorithms will be performed in order to define the final structure of the concept in the near future. Moreover, the possibility of incorporating the desirable MVM and FDIR capabilities into the architecture will also be evaluated. The development of the robotics module is already one of our research goals and will be illustrated in more depth in a future paper.
18,279.8
2016-04-15T00:00:00.000
[ "Engineering", "Environmental Science", "Physics" ]
Field-enhanced magnetic moment in ellipsoidal nano-hematite Bulk hematite is a canted antiferromagnet at room temperature and displays weak magnetic coercivity above the Morin transition temperature T M ∼ 262 K. Below T M, hematite displays traditional antiferromagnetic behavior, with no net magnetic moment or magnetic hysteresis. Here, we report that ellipsoidal nanocrystals of hematite (ENH) display a significant field-enhanced magnetic moment (FEMM) upon being poled by a magnetic field. This poled moment displays a giant coercive field of nearly 6000 Oe at low temperature. Atomic resolution transmission electron microscopy indicates that the nanocrystals are single crystalline, and that the surfaces are bulk-terminated. The apical terminations include the <001> sets of planes, which are implicated in possible formation of FM-arrangements near the surface. We tentatively suggest that FEMM in ENH could also arise from uncompensated surface spins or a shell of ordered spins oriented and pinned near the surface by a magnetic field. The gradual loss of magnetic moment with increasing temperature could arise as a result of competition between surface pinning energy, and kT. The large coercive field points toward possible applications for ENH in digital magnetic recording. Bulk single crystal hematite (α-Fe 2 O 3 ) is a canted antiferromagnet (c-AFM) below its Néel transition temperature, T N ∼ 950 K. Spin canting away from the basal plane produces a weak spontaneous magnetic moment, a small magnetic hysteresis and a coercive field of 0.33 T at room temperature. Below the Morin transition T M ∼ 262 K, this weak moment becomes fully suppressed due to a transition from canted antiferromagnetic (c-AFM) order at high temperature to traditional antiferromagnetic order (AFM) at low temperature. At 4 K, well below T M , hematite displays no net moment, together with zero coercive field and an absence of magnetic hysteresis. Previous studies of nano-sized hematite reveal that the Morin transition becomes suppressed with decreasing particle size [18]. The coercive field in hematite (both bulk and nano) decreases to zero below T M , revealing no magnetic hysteresis at low temperature. In this paper, we report that a spontaneous field-enhanced magnetic moment (FEMM) and a giant magnetic hysteresis can develop in ellipsoidal nanocrystals of hematite (ENH) upon exposure to an external magnetic field. We further find that the coercive field increases to large values with decreasing temperature. High resolution electron microscopy studies reveal several unusual crystal surface terminations which could be implicated in the formation of unusual magnetic ground states and structures at or near the surface of ellipsoidal nano-hematite. Materials and methods Samples of ENH were synthesized by a 'forced hydrolysis' method in which the selective binding of phosphate ions along the [2-1-4] crystal planes leads to the formation of ellipsoidal nanoparticles from a solution of iron chloride [19,20]. A solution was made by vigorously shaking 4.8 mM (1.3 g) of iron chloride hexahydrate (FeCl 3 .6H 2 O, Aldrich) and 0.1 mM (12 mg) of sodium dihydrogen phosphate (NaH 2 PO 4 , Aldrich) in 1 liter of de-ionized water (MilliQ plant with 0.22 micron filter yielding 18.2 M-Ω-cm conductivity). The solution was placed in a preheated oven for 120 h at 98°C and subsequently furnace-cooled to room temperature. 
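As a quick arithmetic check of the quoted reagent quantities, the short sketch below converts the stated molar amounts for the 1 litre batch into masses; the molar masses are standard literature values and the variable names are ours.

```python
# Check of the quoted reagent masses for the 1 L forced-hydrolysis batch.
M_FECL3_6H2O = 270.30   # g/mol, FeCl3.6H2O (standard value)
M_NAH2PO4 = 119.98      # g/mol, NaH2PO4 (standard value)

m_iron_chloride = 4.8e-3 * M_FECL3_6H2O   # 4.8 mmol -> about 1.30 g (quoted: 1.3 g)
m_phosphate = 0.1e-3 * M_NAH2PO4          # 0.1 mmol -> about 12.0 mg (quoted: 12 mg)
print(f"FeCl3.6H2O: {m_iron_chloride:.2f} g, NaH2PO4: {m_phosphate * 1e3:.1f} mg")
```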
The resulting colloidal suspension, containing nanocrystals of hematite, was washed by repeated centrifugation at 10 000 rpm and re-dispersion in deionized water using an ultrasonic bath. This process was repeated four times, yielding sediments of hematite nanocrystals of ellipsoidal shape. Powder x-ray diffraction (XRD) was performed on a Scintag XDS 2000 diffractometer. Rietveld refinement was performed using the graphical user interface EXPGUI [21]. Nanocrystal size and morphology were investigated using a Hitachi H-9000NAR transmission electron microscope (TEM) operating at 300 keV. Magnetic properties were studied by varying temperature (2-300 K) and magnetic field (0-9 T) in a physical properties measurement system (PPMS) by Quantum Design, Inc. For the magnetic measurements reported here, the powder sample was fixed in an epoxy resin (Bisphenol A diglycidyl ether resin, ITW-Devcon) in order to minimize the physical movement of the nanoparticles while performing magnetic measurements in field. The magnetic measurements reported in this letter are from one sample placed within the PPMS cryostat; however, the results are reproducible over a number of batches of samples. For the sample reported here, 11 mg of nanocrystals were evenly mixed using a non-magnetic pick into 27 mg of epoxy. The resin added a minor diamagnetic component to our magnetic property measurements. Results and analysis The XRD pattern shown in figure 1 confirms that our sample is high-purity hematite (α-Fe2O3). A careful search was performed, along with Rietveld refinement, to eliminate possible signs of the known oxides and oxyhydroxides of iron (magnetite, maghemite, goethite, akaganeite, lepidocrocite and feroxyhyte). No evidence of secondary phases was found in our samples. Lattice parameters derived from Rietveld refinement yield a = b = 5.030 42(15) Å and c = 13.7931 Å. Results from bright-field TEM and high-resolution TEM (HRTEM) are shown in figure 2. Images were filtered with an aperture of radius 9.1 nm−1 in order to remove noise beyond the lattice resolution of the microscope. Amplitude-contrast bright-field images indicate that the nanocrystals have a uniform distribution of shape and size (figure 2(a)). The crystals are ellipsoidal, with a length of 70 ± 11 nm and a width of 40 ± 5 nm. Note the presence of at least two facets at each end; these are further analyzed from high-resolution transmission electron microscopy (HR-TEM) and discussed in the following section. Figures 3 and 4 show data from magnetization measurements in variable temperature and magnetic field. In figure 3(b) (inset, top left), we observe a weak coercive field of ∼40 Oe in the magnetic hysteresis (M versus H loop) measured at 300 K, consistent with the fact that hematite is a canted antiferromagnet at this temperature. However, the hysteresis loop at 4 K performed upon zero-field cooling (ZFC) from 300 K shows a remarkably high coercive field (coercivity) of ∼6000 Oe. The same figure shows that the coercive field measured in ZFC samples increases with decreasing temperature. It is also evident in figure 3(c) (inset, lower right) that the coercivity increases rapidly with increasing maximum field reached during a magnetic hysteresis measurement. Figure 4 shows the temperature dependence of the magnetization of our ENH sample with differing histories of applied magnetic field. Lines 1, 2, 3 and 4 are plots of magnetization as a function of temperature during warming in a field of 1000 Oe (arrows indicate warming or cooling).
As shown by the arrow, line 5 was measured during cooling in a field of 5 T. It is clear from line 1 that the sample displays a Morin transition upon cooling in zero field. The moment at low temperature is zero within measurable limits, indicating a transition from c-AFM to AFM. The transition is broadened due to particle size effects, consistent with previously reported results for small particles of hematite [24]. As shown in line 2, the Morin transition is suppressed when the sample is cooled in a low field of 1000 Oe. Remarkably, line 3 shows a rise of magnetic moment with decreasing temperature when the sample is cooled in zero field but exposed to a poling field of 5 T at 4 K before being measured in 1000 Oe during warming. This field-enhanced magnetic moment, or 'FEMM' behavior, is further enhanced when the sample is cooled in a high magnetic field of 5 T, as shown in line 4. FEMM behavior is also observed when the sample is cooled in a high magnetic field (line 5) during measurement. However, the moments are saturated and do not show an activated behavior. Discussion Reflections from powder XRD of our ellipsoidal nano-hematite, shown in figure 1, are consistent with hematite (ICDD PDF #72-0469). Rietveld refinement and careful examination of the reflections along with known reflections from a number of possible secondary phases such as other oxides and hydroxides confirms that we have hematite of very high phase purity. The absence of both secondary phases and secondary crystal structures was carefully confirmed using XRD, TEM and magnetization data. High-resolution TEM shows that the nanocrystals are homogenous, with no formation of secondary crystal structures within the bulk or near the surface of the nanocrystal. Figure 2(a) shows a high resolution image of the faceted end of one such nanocrystal, noise filtered for spacings smaller than the 0.11 nm lattice resolution. Detailed examination of a number of crystals at high resolution shows that the hematite lattice extends all the way to the surfaces of individual crystals. A larger flat facet on the left side of the image is terminated by a <104> type surface. This appears to be a dominant surface found near the tapered edges at the tip of nearly all of the nanocrystals, and is observable in all of our bright field images of type shown in figure 2(b). The shorter facet at the tip of the particles is terminated by the basal plane of the hexagonal structure; the long axis corresponds to the <001> axis. Figure 2(c) shows a digital diffractogram confirming that each crystallite can be treated as a single crystal. This is also consistent with the uniform contrast observed in the bright field images and from the continuous lattice fringes in HRTEM. A systematic observation of images in the bright field and HRTEM convinces us that the nanocrystals have a uniform hematite crystalline structure without any additional detectable crystalline phase. In figure 3, we note that the observed coercive field at 300 K in ellipsoidal nano-hematite is lower than the coercive field of 0.33 T measured in bulk hematite [25]. This is consistent with the observation that the coercive field is small in small particles: a ∼3000 Oe coercive field is observed in larger pseudocubic nanocrystals (∼350 nm) of hematite [26]. The interesting result here is that the hysteresis loop at 4 K in figure 3 is larger than that at 300 K, consistent with FEMM behavior noted above and discussed below. 
Further, it displays a remarkably large coercive field of ∼6000 Oe which rises with decreasing temperature. This is not consistent with hematite, which is antiferromagnetic below the Morin transition. It is also clear from line 1 in figure 4 that our samples of ellipsoidal nano-hematite undergo a Morin transition as expected for small particles of hematite and thus ought to show no net magnetic moment, and zero coercive field, at low temperature. Both figures 3 and 4 are consistent with a field-enhanced magnetic moment, or FEMM, in which a net magnetic moment in ENH remains pinned upon being exposed to a high magnetic field. (Figure 4 caption, measurement protocols for the numbered curves: [1] zero-field cooled and measured in a field of 1000 Oe; [2] cooled and measured in a field of 1000 Oe; [3] zero-field cooled, 'poled' at 4 K with a 5 T field, and measured in a field of 1000 Oe; [4] cooled in a field of 5 T and measured in a field of 1000 Oe; [5] 'poled' at 300 K with a 5 T field, then cooled and measured in a field of …) In our hysteresis loops in figure 3, our ENH sample becomes exposed to high magnetic field before sweeping back through zero. This is consistent with figure 4, in which the sample develops a net spontaneous magnetic moment upon exposure to high magnetic field. The moment is enhanced at lower temperature and is also better 'pinned', as evidenced by the increasing coercive field with decreasing temperature. Magnetic properties of the many oxides of iron have been extensively characterized in the bulk and, to some extent, in nanoparticulate form [25][26][27][28][29]. For crystal sizes below 100 nm, iron oxides of different size and shape display unexpected and fascinating magnetic and structural characteristics. Such behavior has been examined from several different viewpoints, mostly classified as 'particle size' effects. Theoretical studies, for the most part, have examined size effects on the relative free energy of the surface of a nanocrystal [30]. In addition, phase transitions in critical phenomena requiring long-range order (leading to ferromagnetic or antiferromagnetic ground states) become compromised when the lattice size is smaller than a critical length scale (or an order parameter) and is unable to sustain long-range order [31]. Magnetic properties of nano-materials have also been examined based on another broad class of effects, sometimes referred to as 'core-shell' type behavior. In its most widely studied form, core-shell structure implies a variance in crystal structure or chemical phase between the bulk and the surface of the nanocrystal; this may or may not be intended during the growth of the nanocrystal. Core-shell type behavior can also exist when the crystal lattice of the nanocrystal extends uniformly from its bulk to its surface. Theoretical investigations indicate that unusual magnetic order can nucleate near the surface of the nanocrystal due to constraints of size and shape in the nano-lattice. This can yield unusual magnetic properties due to a 'magnetic core-shell' structure [32]. Finally, unusual magnetic behavior can arise from purely 'surface' effects, or lattice terminations revealed on the surfaces of nanocrystals which are not usually found in bulk crystals. We now briefly discuss possible mechanisms for the observed FEMM behavior in our samples of ellipsoidal nano-hematite. First-principles density functional calculations indicate that stable local spin configurations of Fe2O3 (0001) are dependent upon the number of Fe bilayers near the hetero-interface [33].
A local ferromagnetic (FM) up-up spin structure is found to be energetically favorable for a single bilayer, whereas an up-up, down-down AFM structure is favorable in the case of four bilayers. FM behavior of magnetic spins near the surface of Fe2O3, especially in view of the observations of several <001> sets of planes exposed near the apex of the ellipsoid, would explain the observations in figures 3 and 4. Wang et al. indicate that an O3-terminated (0001) Fe2O3 surface yields an unusual electronic structure with a noticeable presence of states from the subsurface Fe layer [34]. Spin states such as these could either remain pinned at specific surface terminations, or exist in a core-shell type magnetic structure with FM-like order near the surface and AFM-like order in the bulk of the nanocrystal [35]. FEMM behavior, and a large coercive field, could arise from a core-shell type arrangement of the magnetic lattice in which a possible bias between the surface shell and the bulk can be induced and poled by an external magnetic field. We therefore conjecture that the field-enhanced magnetic moment behavior, reminiscent of ferromagnetism, could arise from surface spins oriented upon exposure to a magnetic field and pinned at or near the surface of ellipsoidal Fe2O3 nanocrystals. The coercive field observed in magnetic hysteresis is very high, opening up the potential for the application of ellipsoidal nano-hematite in digital magnetic recording. Conclusion We report a field-enhanced magnetic moment (FEMM) in ellipsoidal nano-hematite (ENH) below the Morin transition. This moment increases with decreasing temperature, correlated with an increase of the coercive field to nearly 6000 Oe. Although the observation of a net magnetic moment, and a giant coercive field, is counter-intuitive in an antiferromagnetic material, our observations possibly arise from uncompensated spins pinned at the surface of ENH samples. The giant coercive field observed in our magnetic hysteresis measurements provides a basis for potential applications of ellipsoidal nano-hematite in digital magnetic recording technologies.
3,567.2
2014-06-16T00:00:00.000
[ "Physics", "Materials Science" ]
A new perspective on permafrost boundaries in France during the Last Glacial Maximum During the Last Glacial Maximum (LGM), a very cold and dry period around 26.5-19 kyr BP, permafrost was widespread across Europe. In this work, we explore the possible benefit of using regional climate model data to improve the permafrost representation in France, decipher how the atmospheric circulation affects the permafrost boundaries in the models, and test the role of ground thermal contraction cracking in wedge development during the LGM. With these aims, criteria for possible thermal contraction cracking of the ground are applied to climate model data for the first time. Our results show that the permafrost extent and ground cracking regions deviate from proxy evidence when the simulated large-scale circulation in both global and regional climate models favours prevailing westerly winds. A colder and, with regard to proxy data, more realistic version of the LGM climate is achieved given more frequent easterly wind conditions. Given the appropriate forcing, an added value of the regional climate model simulation can be achieved in representing permafrost and ground thermal contraction cracking. Furthermore, the model data provide evidence that thermal contraction cracking occurred in Europe during the LGM in a wide latitudinal band south of the probable permafrost border, in agreement with field data analysis. This enables the reconsideration of the role of sand-wedge casts to identify past permafrost regions. During the Last Glacial Maximum (LGM; Clark et al., 2009; Mix et al., 2001), corresponding to around 26.5-19 kyr BP, huge ice sheets covered large parts of the Northern Hemisphere, modifying the surface albedo and orography (Hughes et al., 2015; Ullman et al., 2014), and enhanced sea ice cover modified heat fluxes between the ocean and atmosphere (Flückiger et al., 2008). During the coldest phase of the LGM, the sea level was about 130 m lower than today (Lambeck et al., 2014) and the greenhouse gas concentrations were at a historical minimum with values less than half of present-day concentrations (Clark et al., 2009; Monnin et al., 2001). Lower greenhouse gas concentrations favoured the growth of C4 over C3 plants (Prentice and Harrison, 2009), although only C3 plants have actually been identified in European loess (Hatté et al., 1998, 2001). Globally, this hampered the development of trees (Woillez et al., 2011), resulting in less-productive terrestrial ecosystems and more open vegetation (Bartlein et al., 2011). Ultimately, this induced easily erodible soils, whose contribution to the dust cycle increased (Prospero et al., 2002; Ray and Adams, 2001). These boundary conditions and forcing led to a substantially different climate than today. In general, the LGM was a colder, drier, and windier period in Earth's history compared with the recent climate (e.g. Annan and Hargreaves, 2013; Bartlein et al., 2011; Löfverström et al., 2014). The global and annual mean surface air temperatures were about 4 °C colder than today, with differences reaching up to 14 °C close to the LGM ice sheets in areas such as central Europe (e.g. Annan and Hargreaves, 2013; Bartlein et al., 2011; Clark et al., 2009; Pfahl et al., 2015; Ludwig et al., 2017). The atmospheric circulation in the North Atlantic region varied considerably from the current conditions, mainly due to the direct influence of the altered topography by ice sheets (Justino and Peltier, 2005; Merz et al., 2015).
A planetary large-scale atmospheric wave with an amplitude much larger than today was induced, with a deep trough downstream of the Laurentide ice sheet. This led to a generally more zonal orientation of the North Atlantic jet stream (Löfverström et al., 2014). Additionally, the jet was enhanced and its position was shifted southward (e.g. Li and Battisti, 2008;Merz et al., 2015;Pausata et al., 2011). The storm track during the LGM evolved accordingly (e.g. Löfverström et al., 2014;Ludwig et al., 2016;Raible et al., 2021), and extreme cyclones were more intense and characterised by less precipitation (Pinto and Ludwig, 2020). Thus, cyclones were able to trigger more frequent dust storms during the LGM Pinto and Ludwig, 2020;Sima et al., 2009). Besides these dust storms, easterly winds induced by an anticyclone over the Fennoscandian ice sheet (FIS) were another important factor for the deposition of loess in central and western Europe Schaffernicht et al., 2020;Stevens et al., 2020) as well as westerly to north-westerly winds (e.g. Renssen et al., 2007;Schwan, 1986Schwan, , 1988. At the same time, adjacent areas south of the FIS were widely affected by permafrost (Kitover et al., 2013;Levavasseur et al., 2011;Saito et al., 2013;Vandenberghe et al., 2014;Washburn, 1979). The past permafrost distribution is usually inferred from the occurrence of a variety of fossil periglacial features, among which ice-wedge pseudomorphs are the most reliable and widespread (e.g. Bertran et al., 2014;Huijzer and Isarin, 1997;Péwé, 1966;Vandenberghe, 1983;Vandenberghe et al., 2014). Ice wedges develop within perennially frozen ground, when the temperature drops quickly and the ground experiences thermal contraction cracking. Annual frost cracks that reach downward into the permafrost are a few millimetres wide. They get filled with snowmelt that freezes into ice veins. Repeated cracking over years at the same location adds ice veins that constitute ice wedges (e.g. Harry and Gozdzik, 1988;Murton, 2013). Ice-wedge pseudomorphs observed from the LGM in Europe were formed when the ice melted and the cavities were filled by collapsing soil materials. Today, ice wedges are mostly active in continuous permafrost environments (Fortier and Allard, 2005;Kokelj et al., 2014;Matsuoka et al., 2018;Péwé, 1966). Open cracks may also be filled with wind-blown sand, which gives rise to sand wedges, or by both ice and sand, which gives rise to composite wedges. Active sand wedges are currently primarily found in areas characterised by continuous permafrost and limited snow and vegetation cover (i.e. the polar deserts), and with local sources of aeolian sediments, such as in Antarctica (Bockheim et al., 2009;Levy et al., 2008;Murton et al., 2000;Péwé, 1959). Ground cracking is often restricted to the active layer (i.e. the surface layer subjected to seasonal freezing and thawing) in the areas underlain by "warm" permafrost (i.e. at a temperature close to 0 • C) and south of the permafrost border. Thin cracks develop and are referred to as seasonal frost cracks. However, Wolfe et al. (2018) showed that large shallow sand wedges can also develop in Canada in areas with deep seasonal ground freezing (i.e. without perennially frozen ground) in mineral soils close to dune fields, which provide abundant sand to fill the cracks. Thermal contraction cracking of the ground is the causal factor that leads to ice (or sand) wedge growth. 
Ecological factors such as type of vegetation cover and thick snow cover often limit thermal contraction cracking, as they may prevent the cooling of the ground. This is the case in current densely vegetated areas that insulate the ground and trap snow (e.g. shrub tundra and taiga; Kokelj et al., 2014;Mackay and Burn, 2002). Conversely, cracking can occur at low frequency in mid-latitude, cool temperate regions in grounds devoid of tall vegetation and snow, particularly in roads and airport runways (Barosh, 2000;Okkonen et al., 2020;Washburn, 1963). Many attempts at reconstructing the past permafrost distribution in Europe using field proxies have been performed during the last decades. Based on the assumption that both active ice wedges and sand wedges are associated with continuous permafrost and possibly with widespread discontinuous permafrost (Burn, 1990;Romanovskij, 1973), some of the earliest reconstitutions, as reported by Vandenberghe et al. (2014), proposed that Europe was affected by permafrost as far south as 43.5 • N. However, a detailed analysis of periglacial features in France by Andrieux et al. (2016bAndrieux et al. ( , 2018 demonstrated that typical ice-wedge pseudomorphs are exclusively found north of 47.5 • N, whereas sand-wedge casts occur at lower latitude at the periphery of aeolian sand sheets. A correlation between wedge depth and latitude has also been highlighted, which strongly suggests that the southernmost shallow sand wedges developed in regions where perennial ice could not form, i.e. without permafrost or with sporadic permafrost. A similar pattern has also been highlighted in China by Vandenberghe et al. (2019). The sand wedges reach up to 1 m wide in south-west France near 45 • N in the periphery of cover sands. Optically stimulated luminescence dating of the sand fill by Andrieux et al. (2018) demonstrated that these large epigenetic sand wedges resulted from repeated periods of growth throughout the Last Glacial. Multiple attempts have also been performed to infer the LGM permafrost occurrence from climate model data. Liu and Jiang (2016b) considered both direct and indirect methods. The simplest indirect method is based on the modelled mean annual air temperature (MAAT). Threshold values for permafrost occurrence were adapted according to ground texture (Vandenberghe et al., 2012). However, this method only provides a rough estimate of permafrost extension, as a variety of other factors are known to impact ground temperatures, including water content, vegetation, and snow cover. Particularly, the insulating effects of snow and vegetation cover may be responsible for an offset of up to 6 • C between the MAAT and the mean annual ground surface temperature (MAGST). On the other hand, variations in ground thermal conductivity (depending on texture and water content) may result in an offset of 2 • C between the MAGST and the temperature at the top of permafrost (TTOP) (e.g. Smith and Riseborough, 2002;Throop et al., 2012). A refined indirect method to derive permafrost occurrence from climate model data is the use of the surface frost index (SFI, Nelson and Outcalt, 1987), which corresponds to the ratio between frost and thaw penetration depths and takes the effects of snow in account. The SFI has been used in several studies, with only minor changes to the original method. For example, monthly model output was used instead of summing up daily air temperatures (e.g. Frauenfeld et al., 2007;Liu and Jiang, 2016b). 
Slater and Lawrence (2013) weighted the snow depth for each month to consider snow accumulation effects, whereas Stendel and Christensen (2002) replaced the surface air temperature with the temperature of their deepest simulated ground layer (5.7 m deep) to investigate permafrost degradation due to current global warming. The latter authors pointed out the advantage of taking simulated ground temperatures, where insulation effects of snow and vegetation cover are explicitly taken into account by the models, and rendered empirical approaches redundant. For the direct method, the modelled ground temperatures below 0 • C are used to diagnose permafrost. The studies differ slightly with respect to the depth of the considered ground temperatures (e.g. Liu and Jiang, 2016a, b;Saito et al., 2013;Slater and Lawrence, 2013). Studies investigating the permafrost limits during the LGM using global climate simulations have so far failed to appropriately reproduce the permafrost extent as reconstructed from field proxies (e.g. Andrieux et al., 2016b;Levavasseur et al., 2011;Ludwig et al., 2017). However, there is evidence for improvements when using the data from regional climate simulations (e.g. Ludwig et al., 2017Ludwig et al., , 2019. The aim of this study is (1) to explore the possible benefit of using regional climate model data to improve the permafrost representation over France, (2) to decipher how the atmospheric circulation affect the permafrost boundaries in the models and finally, (3) to test the role of ground thermal contraction cracking in wedge development during the LGM. In Sect. 2, we introduce the adaptions made to the regional climate model to be compliant with LGM boundary conditions and describe the global simulations that provide the initial and boundary conditions. Further, we give an overview of the different methods used to derive the LGM permafrost distribution in France. In Sect. 3, we describe the general characteristics and differences of the LGM climate based on the global and regional climate model data and present the permafrost and ground cracking distribution based on regional climate model data. Finally, we discuss and summarise the results in Sect. 4. Data and methods In this study, LGM simulations of two global climate models, namely MPI-ESM-P (MPI -Max Planck Institute; Jungclaus et al., 2013;Stevens et al., 2013) and AWI-ESM (AWI -Alfred Wegener Institute; Sidorenko et al., 2015;Lohmann et al., 2020), are dynamically downscaled with the Weather Research and Forecasting model (WRF; Skamarock et al., 2008). Both global models share the same atmospheric component ECHAM6 but different modules for the ocean. The MPIOM (Marsland et al., 2003) is coupled within the MPI-ESM-P, forming the well-established global climate model that took part in several Coupled Model Intercomparison Project (CMIP) phases. In the AWI-ESM, the FESOM ocean model (Wang et al., 2014) featuring an unstructured mesh as well as a multi-resolution approach is used with a relatively high resolution of less than 30 km north of 50 • N. The atmospheric grid applied in the MPI and AWI experiments is T63 (roughly 1.9 • spatially) with 47 unevenly distributed vertical levels. 
The simulations follow either the Paleoclimate Modelling Intercomparison Project Phase 3 (PMIP3) pro-tocol (MPI; Braconnot et al., 2012; https://wiki.lsce.ipsl.fr/ pmip3/doku.php/pmip3:design:21k:final, last access: 6 December 2021) or the PMIP4 protocol (AWI; Kageyama et al., 2017), where the boundary conditions (solar constant, orbital parameters, greenhouse gases) are set according to the best estimate of the LGM boundary conditions. The AWI-ESM has been used in the recent CMIP6/PMIP4 intercomparisons (Brierley et al., 2020;Keeble et al., 2021;Kageyama et al., 2021) and was applied for the LGM (Lohmann et al., 2020). The ice sheet provided for PMIP3/CMIP5 LGM experiments is a blended product obtained by averaging three different ice sheet reconstructions: ICE-6G v2.0 (Peltier et al., 2015), MOCA (Tarasov and Peltier, 2003), and ANU (Lambeck et al., 2002). In contrast, the LGM topography in the AWI experiment is configured based on the ICE6G reconstruction (Peltier et al., 2015). For the recent climate, the pre-industrial period (PI), corresponding to roughly 1850, is used as a reference. The simulations again follow the PMIP3 (MPI; Taylor et al., 2012) or PMIP4 protocol (AWI; Eyring et al., 2016). To account for model uncertainties, outputs from these global LGM simulations are used to drive the regional WRF simulations. The atmospheric boundary conditions are updated every 6 h, and sea surface temperature (SST) and sea ice cover are updated daily. Apart from the different forcing, the set up of the two regional simulations is identical. The coastlines, ice sheet extent, trace gas conditions, and orbital parameters are adapted to LGM values according to the PMIP3 protocol (Ludwig et al., 2017). Modifications to the Alpine ice sheet are implemented according to Seguinot et al. (2018). Land use and vegetation cover is taken from the CLIMAP data set (CLIMAP Project Members, 1984). An overview of the parameterisation schemes used in the WRF simulations is given in Table 1. Most important for the representation of the ground characteristics is the parameterisation of the land surface, for which we used the unified Noah land surface model (Tewari et al., 2004). Based on 19 different soil types, various ground parameters (e.g. ground thermal conductivity) are set and used for the calculations of ground temperatures and moisture for each grid point. More details can be found in studies such as Chen and Dudhia (2001) and Niu et al. (2011) as well as references therein. The first model domain covers large parts of Europe with a horizontal resolution of 50 km (see Fig. 1) and 35 vertical layers up to 150 hPa. The integration time step is 240 s. The second, nested domain covers southern parts of the FIS, the Alps, and France, where the latter represents the target region to assess the LGM permafrost limits in this study. Here, the horizontal resolution is 12.5 km and the integration time step is 48 s. The soil is separated into four layers, with representative depths of 5, 25, 70, and 150 cm. A total of 32 years are simulated for each global forcing simulation. The first 2 years are used as a spin-up phase and are excluded from further analysis. Thus, it is ensured that the atmosphere and soil properties and processes are in equilibrium. The permafrost distribution is derived from climate model data using the three different methods described in Sect. 1. For MAAT, the 2 m air temperature is considered. 
Threshold values were derived from data compiled from studies in current Arctic regions, where continuous permafrost is inferred for MAATs < −8 ± 2 °C, whereas discontinuous permafrost requires MAATs < −4 ± 2 °C (e.g. Smith and Riseborough, 2002; Vandenberghe et al., 2012). The surface frost index (SFI) is based on the annual freezing and thawing degree-days (DDF and DDT, respectively), which refer to the sums of daily air temperatures below and above 0 °C, respectively. An SFI between 0.5 and 0.6 indicates sporadic permafrost, between 0.6 and 0.67 indicates discontinuous permafrost, and above 0.67 indicates continuous permafrost (e.g. Nelson and Outcalt, 1987; Stendel and Christensen, 2002). For this indirect method, we use ground temperatures of the third layer at 78 and 70 cm for the global and regional simulations, respectively. With the direct method, permafrost is inferred when ground temperatures are at or below 0 °C. Beyond the permafrost indices, ground cracking is assumed to be possible when two conditions derived from fieldwork by Matsuoka et al. (2018) are fulfilled simultaneously: a daily mean soil temperature below −5 °C at a depth of 1 m and a temperature gradient in the upper metre of the ground below −7 °C m−1. These minimum values might represent shallow cracking within the active layer or seasonally frozen layer and can be compared against the sand-wedge distribution. Conditions for intensive and deep thermal contraction cracking (T_100 = −10 °C and G_AL = −10 °C m−1) are tested with regard to the ice-wedge pseudomorph distribution in France. Due to the higher ice content and higher organic carbon content of the ground, these values do not necessarily correspond exactly to those of France during the Pleistocene. We use the third soil layer again, with depths of 78 cm in the global simulations and of 70 cm in the regional simulations. To evaluate the model simulations, the distribution of ice-wedge pseudomorphs and sand wedges after Andrieux et al. (2016b) and Isarin et al. (1998) is considered. Global boundary conditions In this section, we present the large-scale characteristics of the LGM climate derived from the global climate model data that are used for dynamical downscaling, in comparison with the respective PI simulations. It is important to investigate the climatic mean state and possible biases of the global projections in order to be able to interpret the regional simulations accurately. Both global models simulate colder annual mean SSTs under LGM than under PI conditions (see Fig. 2a and b). For the MPI model, a limited area with enhanced SSTs is simulated over the North Atlantic. This does not match proxy data (MARGO Project Members, 2009) and is a known issue for this and other PMIP3 models (e.g. Wang et al., 2013; Ludwig et al., 2016, 2017). The AWI simulation does not show this warm anomaly over the North Atlantic and the SSTs are generally colder. In the Arctic Ocean, the SSTs in the AWI simulation are considerably higher than in the MPI simulation. This can be explained by the sea ice cover, which is lower in the AWI LGM simulation. The analysis of wind speed at 300 hPa gives insights into the jet stream structure and strength, which are dominant factors of the atmospheric large-scale circulation over the North Atlantic/European region. In agreement with Li and Battisti (2008), both models show a stronger jet under LGM conditions compared with the simulations under PI conditions (see Fig. 2c and d).
This is particularly the case over the North Atlantic, south-eastward of the Laurentide ice sheet, where the annual mean wind speed is up to 14 m s −1 higher for the LGM. On the other hand, the wind speed on both the southern and northern flanks of the jet stream is actually 2-4 m s −1 weaker during the LGM, indicating a more constrained large-scale flow. Even though the wind anomalies are quite similar for both global climate models (GCMs), the actual structure is dissimilar: while the jet is less constrained and deflected to the north for the MPI simulations, reaching Europe at the latitude of Ireland, the jet stream in the AWI GCM reaches Europe at the latitude of the Iberian Peninsula and France and extends farther into the continent. In general, the simulated winds speeds at 300 hPa in the AWI model are weaker compared with the MPI model (not shown). The zonal structure of the wind speed anomalies identified for the AWI simulations is more similar to the ensemble mean of CMIP5 models (e.g. Ludwig et al., 2016) than the MPI anomaly pattern. Climate of the regional simulations Based on the GCM simulations, we obtain two different variants of the regional LGM climate in western Europe. The results are shown primarily for the larger domain of the regional simulations, as the climate of the high-resolution simulations yields a similar structure. The annual mean 2 m air temperature is considerably lower in the WRF-MPI than in the WRF-AWI simulation (see Fig. 3). The biggest differences are identified near the ice sheet margin -almost 10 • C in the respective annual means. The sign and pattern of the differences are visible in all seasons, but winter air temperatures clearly diverge most. For the summer, air temperatures in both models are more similar to each other. These differences can be partly attributed to the snow cover: except for summer, almost the entire region is covered by snow in WRF-MPI, even though a snow height of several metres is only reached over the FIS and the Alpine region (see Fig. S3 in the Supplement). WRF-AWI shows markedly higher snow accumulation over the ice sheets with differences of more than 20 m compared with the WRF-MPI simulation but generally shows less snow cover in southern and central Europe. Differences amount to 20 % less snow cover in WRF-AWI in the respective annual means and to 40 % in both spring and winter. In summer, only the ice sheets are snow covered in both simulations; thus, the differences are negligible. Nevertheless, more precipitation is simulated over Europe in the WRF-AWI simulation (see Fig. 4). High precipitation amounts are either orographically induced, as for precipitation over the Alps and over the FIS, or they are associated with the moisture availability of the North Atlantic. The absolute annual mean wind field and the associated differences are depicted in Fig. 5. Both simulations show strongest winds south of the FIS in the respective annual and winter means, although with a notably enhanced pattern in WRF-MPI, where this also holds for each season. These winds are easterlies/north-easterlies. In contrast, westerly winds from the North Atlantic are stronger in WRF-AWI and, thus, transport heat and moisture towards Europe. During winter, the westerly winds are directed towards the centre of the domain in WRF-AWI, whereas the winds have a more south-western component in WRF-MPI and are directed towards the outside of the domain. 
In summer, both the WRF-MPI and WRF-AWI simulations are characterised by westerly winds from the North Atlantic. Again, winds from the FIS are blowing south- and south-eastwards, but the summer wind speeds are consistently weaker than in winter for both simulations. These wind fields are induced by the large-scale circulation in the global forcing simulations. In fact, the northerly and easterly components predominantly occur in the MPI simulation (Ludwig et al., 2016), whereas southerly and westerly components occur more often in the AWI simulation. This is in accordance with the jet structure in both global simulations. As the influence of the ice sheet is higher in the (global and regional) MPI simulations, this is consistent with a partially drier and generally colder climate in western Europe during the LGM. (Figure 3. Distribution of 2 m air temperature in annual (a-c) and seasonal winter (d-f) and summer (g-i) means as simulated with the regional WRF model with MPI forcing (a, d, and g) and with AWI forcing (b, e, and h) as well as their differences (c, f, and i). The black line shows the LGM coastline, and the pink line denotes the LGM ice sheet.) Permafrost and ground cracking distribution The permafrost distribution of the global and regional simulations based on the SFI is depicted in Fig. 6. The permafrost extent based on the AWI-ESM and WRF-AWI simulations does not reach farther south than the ice sheet, apart from the Alps in WRF-AWI. A modest increase in the permafrost area is simulated by the global MPI simulation. Here, continuous permafrost is still limited to the ice sheet, but sporadic permafrost is slightly more widespread. The WRF-MPI simulation shows a larger permafrost extent. In eastern Europe, the distribution of ice-wedge pseudomorphs strictly overlaps that of modelled continuous permafrost in the selected layer with a depth of 70 cm. In western Europe, field evidence for permafrost exceeds the modelled sporadic permafrost to the south. The conditions for discontinuous and sporadic permafrost are rarely fulfilled in all simulations. The results of the direct method (see Fig. S4 in the Supplement) using long-term mean annual soil temperatures agree with the permafrost extent based on the SFI. However, the different types of permafrost cannot be distinguished by this method, leading to a permafrost line that corresponds to that of the sporadic permafrost based on the SFI. Permafrost estimations based on MAAT are limited to the permanent ice areas during the LGM in all four simulations (see Fig. S5 in the Supplement). Despite the different regional climates, the reconstructed permafrost boundaries in this study closely resemble each other for MAAT. The regional climate model simulations show some additional permafrost areas, which are related to higher orography, especially in the Alps, and, in WRF-MPI, also in the Pyrenees and the Massif Central (see Fig. 1 and Fig. S2 in the Supplement). These mountainous areas are not adequately resolved in the global forcing simulations because of the coarse horizontal grid spacing. (Figure 4. Distribution of total precipitation in annual (a-c) and seasonal winter (d-f) and summer (g-i) means as simulated with the regional WRF model with MPI forcing (a, d, and g) and with AWI forcing (b, e, and h) as well as their differences (c, f, and i). The black line shows the LGM coastline, and the pink line denotes the LGM ice sheet.) Conditions for thermal contraction cracking after Matsuoka et al. (2018) have been tested based on the global and regional climate model data.
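To make the two diagnostics used here concrete, the sketch below evaluates the SFI-based permafrost classes and counts the days on which the cracking criteria are met for a single grid cell. The SFI formula shown is the basic frost-number form of Nelson and Outcalt (1987) without the snow adjustment, the sign convention of the near-surface temperature gradient is an assumption, and the input series are invented for illustration; the implementation used in this study may differ in these details.

```python
import numpy as np

def surface_frost_index(daily_tair):
    """Basic frost-number form: SFI = sqrt(DDF) / (sqrt(DDF) + sqrt(DDT)),
    where DDF/DDT are the annual freezing/thawing degree-day sums (deg C day).
    The snow-adjusted SFI of Nelson and Outcalt (1987) would modify DDF."""
    ddf = -daily_tair[daily_tair < 0.0].sum()
    ddt = daily_tair[daily_tair > 0.0].sum()
    return np.sqrt(ddf) / (np.sqrt(ddf) + np.sqrt(ddt))

def permafrost_class(sfi):
    # Class boundaries as given in the text (Nelson and Outcalt, 1987;
    # Stendel and Christensen, 2002).
    if sfi > 0.67:
        return "continuous"
    if sfi > 0.6:
        return "discontinuous"
    if sfi > 0.5:
        return "sporadic"
    return "no permafrost"

def cracking_days(t_surface, t_1m, deep=False):
    """Days fulfilling the Matsuoka et al. (2018) criteria: daily mean soil
    temperature at 1 m below -5 deg C (deep: -10 deg C) and a temperature
    gradient in the upper metre below -7 deg C/m (deep: -10 deg C/m).
    The gradient is taken as (T_surface - T_1m) / 1 m, i.e. negative when the
    surface is colder than the ground at 1 m depth (assumed convention)."""
    t_thr, g_thr = (-10.0, -10.0) if deep else (-5.0, -7.0)
    gradient = (t_surface - t_1m) / 1.0
    return int(np.sum((t_1m < t_thr) & (gradient < g_thr)))

# Invented daily series for one grid cell and one model year (illustration only).
rng = np.random.default_rng(0)
days = np.arange(365)
tair = -6.0 + 10.0 * np.sin(2.0 * np.pi * days / 365.0) + rng.normal(0.0, 2.0, 365)
t_surface = tair                # crude stand-in for ground surface temperature
t_1m = 0.3 * tair - 3.0         # crude stand-in for soil temperature at 1 m depth

print(permafrost_class(surface_frost_index(tair)),
      cracking_days(t_surface, t_1m),
      cracking_days(t_surface, t_1m, deep=True))
```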
Examples of how the soil temperature and the gradient develop over 2 consecutive years in France (locations A and B in Fig. 1) are shown in Fig. 7. Time series of the entire simulation periods for these locations can be found in Fig. S6. The two minimum criteria, (a) a ground temperature at −1 m below −5 °C and (b) a temperature gradient in the upper metre of ground below −7 °C m−1, are fulfilled when both curves reach below the depicted reference line. In the WRF-MPI simulation (Fig. 7a, c), this is the case several times in both years and locations, but it is not the case in the WRF-AWI simulation. For each grid cell, the number of days per year when the thermal contraction cracking criteria (Matsuoka et al., 2018) are fulfilled is translated into heat maps for each simulation. The results of the minimum conditions for (shallow) cracking are shown in Fig. 8, and the results of the conditions for intensive and deep cracking are shown in Fig. S7 in the Supplement. While the permafrost area is much smaller in the global models than in their respective regional counterparts, the opposite is the case for the thermal contraction cracking areas. The global AWI simulation almost meets the boundaries of sand-wedge occurrence. According to the global MPI simulation, thermal contraction cracking would have been possible as far south as the Iberian Peninsula, where no field evidence for it has been found so far. This can be associated with the lower resolution of the global simulations. Here, the Pyrenees are not resolved adequately in the model and do not act as a natural barrier for cold air arriving from the north, which can, thus, reach farther south in the GCMs (see Fig. 1 and Fig. S2 in the Supplement). As for the permafrost distribution, the possible thermal contraction cracking occurrence is also poorly represented in WRF-AWI and is not able to explain the occurrence of wedges in middle and southern France. By contrast, the WRF-MPI simulation agrees well with proxy evidence. Apart from two sand wedges in the lower Rhône valley, the conditions for thermal contraction cracking are found in the simulation in the area where the features are found. This spatial coherence is further improved in the high-resolution simulation (see Fig. 8e), which can be primarily attributed to a more highly resolved orography (see Fig. 1b). (Figure 5. Distribution of 10 m wind speed in annual (a-c) and seasonal winter (d-f) and summer (g-i) means as simulated with the regional WRF model with MPI forcing (a, d, and g) and with AWI forcing (b, e, and h) as well as their differences (c, f, and i). The black line shows the LGM coastline, and the pink line denotes the LGM ice sheet.) Moreover, the conditions for deep ground cracking are represented best in the regional WRF-MPI simulation. The heat maps show that those conditions did not occur in south-western France, which is in agreement with the field data. In this area, the sand-wedge casts do not exceed a depth of 2 m and ice-wedge pseudomorphs are not mapped at all (Andrieux et al., 2016b). Summary and discussion In this study, we explore the benefit of using regional climate model data for the delimitation of the LGM permafrost distribution in comparison with field proxies in France. The main findings can be summarised as follows: 1. The SFI is suitable to infer LGM permafrost from climate model data.
The results based on the SFI are supported by the direct method, as the boundaries between permafrost occurrence and absence, as indicated by the SFI, fully match the permafrost border derived from the annual mean ground temperature. Among the models used, the SFI-based permafrost extent of the regional WRF-MPI simulation best agrees with proxy data and is clearly improved compared with its global counterpart. 2. Thermal contraction cracking may have occurred much farther south than the simulated permafrost limits, in a context of low and sparse vegetation. The southern extent of sand wedges and that of ice-wedge pseudomorphs in France, as delineated by Andrieux et al. (2016b), fit well with the boundaries of LGM thermal contraction cracking derived from the regional WRF-MPI simulation based on the criteria for shallow and for deep cracking after Matsuoka et al. (2018), respectively. In contrast, the global MPI simulation does not resolve orographic features (e.g. the Pyrenees and the Rhône Valley) sufficiently, leading to a possible southward airflow transporting cold air across France to Spain, and allows ground cracking to occur at excessively low latitudes. 3. The obtained estimates for the possible location of permafrost are consistent with the hypothesis proposed by Andrieux et al. (2016b, 2018), who suggest that sand wedges did not exclusively form in permafrost areas during the LGM but also developed within deep seasonally frozen ground. Contrary to what occurs today in large Arctic areas underlain by permafrost, where ground insulation by dense vegetation (shrub tundra, taiga) and snow cover prevents ground cracking and limits the growth of ice wedges (existing ice wedges that have formed in relation to different climatic or ecological conditions do not melt but are dormant), ice-wedge growth in permafrost areas in France during the LGM was rapid because thermal conditions leading to ground cracking occurred with high frequency. Large ice wedges (which after thawing developed into recognisable pseudomorphs) would have formed in permafrost where it was cold enough in winter to crack. Simulations show that periods of winter ground temperatures below −10 °C at 1 m depth could occur in the discontinuous and sporadic permafrost zone, suggesting that thermal contraction cracks were possibly not restricted to the active layer but could propagate into the permafrost in these areas, leading to the development of ice wedges. The regional WRF-MPI simulations best match the proxy-based permafrost reconstruction. The agreement with the proxies is better in eastern Europe, even though the availability of field data remains scarce in that region compared with western Europe. The presence of ice-wedge pseudomorphs in northern France actually shows that permafrost must have extended at least 150 km farther south than the simulations suggest. The consideration of two global models enables the quantification of uncertainties associated with the large-scale flow under LGM conditions. The global MPI and AWI simulations differ in their atmospheric flow and jet structure. In the AWI, the westerly flow dominates, so that moisture and heat are transported from the North Atlantic towards Europe.
This large-scale circulation is in good agreement with the multimodel mean of the CMIP5/PMIP3 and CMIP6/PMIP4 models, whereas the MPI simulation exhibits a more northward jet stream and suggests a stronger ice sheet influence through prevailing north- and north-easterly winds (Kageyama et al., 2021; Ludwig et al., 2016). Considering that the regional WRF-MPI simulation is largely in agreement with proxy evidence for both the permafrost and ground cracking extent, we assume that the large-scale circulation of the LGM is reflected more accurately in this simulation. For wind and air pressure, only indirect proxy evidence currently exists, e.g. the reconstruction of easterly wind directions from sediments across the European loess belt (Dietrich and Seelos, 2010; Krauß et al., 2016; Römer et al., 2016). Because of the drier conditions with less vegetation and higher wind speeds, dust events occurred frequently during the LGM. This is reflected by the thick loess deposits in western and central Europe, which form the European loess belt (e.g. Lehmkuhl et al., 2016). Recent studies similarly support the hypothesis that, besides individual cyclone events, easterly winds induced by a semi-permanent anticyclone over the FIS were an important component of the glacial dust cycle (e.g. Raible et al., 2021; Schaffernicht et al., 2020; Stevens et al., 2020). Overall, the new regional climate simulations largely reconcile the field data and enable the reconsideration of the significance of ice-wedge pseudomorphs and sand-wedge casts for understanding past climate variations. Field data still suggest a wider extension of permafrost in western Europe than shown by the simulations; however, analysing the southern extent of thermal contraction cracking completes the picture. (Figure caption fragment: panels for the first domain of the regional WRF-MPI (c) and WRF-AWI (d) simulations, and for the second domain in WRF-MPI (e) and in WRF-AWI (f); ice-wedge pseudomorphs and sand wedges from Andrieux et al. (2016) are highlighted using cyan and red triangles respectively, only when located in France; the black line is the LGM coastline, and the grey line denotes the LGM ice sheet.) Various factors may account for a remaining gap between proxy and model data. These factors include the following: 1. The ground thermal conductivities used in the models may not be perfectly adequate. For fine-grained soils such as loess (in which many ice-wedge pseudomorphs have been reported), this could lead to a slightly colder ground temperature, although this effect is assumed to have been minor. 2. Snow depth and snowpack properties (e.g. Royer et al., 2021) are very sensitive factors for permafrost, and some snow processes are not considered in the models. This may explain some of the discrepancies between field data and simulations. Snow sweeping by the wind at some sites, especially on plateaus, may have led to local permafrost development. However, it should be mentioned that pseudomorphs have been described in the Last Glacial floodplains in the Paris Basin (e.g. Bertran et al., 2018), i.e. in places that are favourable to snow accumulation a priori. 3. Data from loess sections in northern France (Antoine et al., 2003) and Germany (Meszner et al., 2013) show that the main phases of ice-wedge development occurred between 30 and 24 ka. This period, called the Last Permafrost Maximum (LPM, Vandenberghe et al., 2014), covers short and very cold events, which resulted in a wider permafrost extension than during the LGM sensu stricto. 
However, boundary conditions for the simulations are only known accurately at 21 ka. To conclude, the combination of the well-established permafrost index SFI and the criteria for thermal contraction cracking by Matsuoka et al. (2018), both based on regional climate model data, provides new possibilities for the estimation of the permafrost extent and the interpretation of ice and sand wedges, especially for palaeoclimate applications. In this context, the use of regional climate model simulations with a highly resolved orography is clearly beneficial (e.g. Ludwig et al., 2019) and should be considered for regions other than western Europe. Author contributions. PL, PB, and JGP designed the concept of the study. PL adjusted the WRF model for LGM applications. KHS performed the regional simulations with the WRF model, analysed the data, and created the figures. XS and GL provided data from the global AWI simulations. PB provided the proxy data. PB and PA contributed to the discussion on the interpretation of proxy data. KHS wrote the first draft of the paper. All authors contributed to discussions and revised the final article. Competing interests. The contact author has declared that neither they nor their co-authors have any competing interests. Disclaimer. Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Acknowledgements. Kim H. Stadelmaier and Joaquim G. Pinto thank the AXA Research Fund for support. Patrick Ludwig is supported by the Helmholtz Climate Initiative REKLIM (regional climate change; https://www.reklim.de/en, last access: 6 December 2021). Kim H. Stadelmaier, Patrick Ludwig, Xiaoxu Shi, and Gerrit Lohmann thank the German Climate Computing Centre (DKRZ, Hamburg) for providing computing resources. This study is a contribution to the PALEOLINK project (http://pastglobalchanges.org/science/wg/2k-network/projects/paleolink/intro, last access: 6 December 2021) within the PAGES 2k Network; it is also a contribution to the PalMod and PACMEDY projects funded by the BMBF. Financial support. This research has been supported by the AXA Research Fund (https://axa-research.org/en/project/joaquim-pinto, last access: 6 December 2021) and the Helmholtz-Gemeinschaft (Climate Initiative REKLIM grant). The article processing charges for this open-access publication were covered by the Karlsruhe Institute of Technology (KIT). Review statement. This paper was edited by Alberto Reyes and reviewed by Jef Vandenberghe and one anonymous referee.
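To make the thermal contraction cracking criteria used in this study concrete, the following minimal Python sketch counts, for a single grid cell, the number of days per year on which both minimum conditions for shallow cracking are met: ground temperature at 1 m depth below −5 °C and a temperature gradient in the uppermost metre below −7 °C m⁻¹ (after Matsuoka et al., 2018). The input arrays, variable names and synthetic data are illustrative assumptions and are not the actual WRF post-processing pipeline.

```python
import numpy as np

def cracking_days_per_year(t_surface, t_1m, years,
                           temp_threshold=-5.0, gradient_threshold=-7.0):
    """Count days per year fulfilling the minimum cracking criteria.

    t_surface, t_1m : daily mean ground temperature (deg C) at the surface
                      and at 1 m depth (illustrative inputs).
    years           : array of the same length giving the year of each day.
    Criteria (after Matsuoka et al., 2018, as used in the text):
      (a) ground temperature at 1 m depth below -5 deg C, and
      (b) temperature gradient in the uppermost metre steeper than
          -7 deg C per metre (surface much colder than the ground at 1 m).
    """
    t_surface = np.asarray(t_surface, dtype=float)
    t_1m = np.asarray(t_1m, dtype=float)
    years = np.asarray(years)

    gradient = t_surface - t_1m            # deg C per metre over the top 1 m
    fulfilled = (t_1m < temp_threshold) & (gradient < gradient_threshold)

    return {int(y): int(fulfilled[years == y].sum()) for y in np.unique(years)}

# Minimal usage example with synthetic data for two "years" of 365 days each
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    days = 2 * 365
    years = np.repeat([1, 2], 365)
    t_1m = -6 + 8 * np.sin(np.linspace(0, 4 * np.pi, days)) + rng.normal(0, 1, days)
    t_surface = t_1m - 8 + rng.normal(0, 2, days)   # colder surface in winter
    print(cracking_days_per_year(t_surface, t_1m, years))
```

Applied to every grid cell of a simulation, such per-year counts are the quantity that the heat maps in Fig. 8 summarise.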
8,962.8
2021-06-30T00:00:00.000
[ "Environmental Science", "Geology" ]
A Proximal Alternating Direction Method of Multipliers with a Substitution Procedure In this paper, we consider the separable convex programming problem with linear constraints. Its objective function is the sum of m individual blocks with nonoverlapping variables, and each block consists of two functions: one is smooth convex and the other is convex. For the general case m ≥ 3, we present a gradient-based alternating direction method of multipliers with a substitution. For the proposed algorithm, we prove its convergence via the analytic framework of contractive-type methods and derive a worst-case O(1/t) convergence rate in a nonergodic sense. Finally, some preliminary numerical results are reported to support the efficiency of the proposed algorithm. Introduction In this paper, we consider the following convex minimization model with linear constraints and a separable objective function: min { Σ_{i=1}^{m} [ f_i(x_i) + g_i(x_i) ] : Σ_{i=1}^{m} A_i x_i = b, x_i ∈ X_i, i = 1, …, m }, (1) where f_i : R^{n_i} → R ∪ {+∞} (i = 1, …, m) are closed proper convex functions and g_i : R^{n_i} → R (i = 1, …, m) are smooth convex functions, X_i ⊆ R^{n_i} (i = 1, …, m) are closed convex sets, A_i ∈ R^{l×n_i} (i = 1, …, m) are given matrices, and b ∈ R^l is a given vector. Furthermore, we assume that each g_i has a Lipschitz-continuous gradient, i.e., there exists L_i > 0 such that ‖∇g_i(x) − ∇g_i(y)‖ ≤ L_i ‖x − y‖ for all x, y ∈ X_i. (2) Throughout the paper, the solution set of (1) is assumed to be nonempty. A fundamental method for solving (1) in the case of m = 2 is the alternating direction method of multipliers (ADMM), which was presented originally in [1,2]. We refer to [3,4] for some review papers on ADMM. There are many problems of form (1) with m ≥ 3 in contemporary applications, such as the robust principal component analysis model [5], the total variation-based image restoration problem [6], the super-resolution image reconstruction problem [7,8], the multistage stochastic programming problem [9], the deblurring of Poissonian images [10], the latent variable Gaussian graphical model selection [11], the quadratic discriminant analysis model [12], and electrical engineering [13,14]. Then, our discussion focuses on (1) in the case of m ≥ 3. A natural idea for solving (1) is to extend the ADMM from the special case m = 2 to the general case m ≥ 3. This straightforward extension, denoted by (3), updates the blocks x_1, …, x_m sequentially in a Gauss–Seidel fashion within each iteration and then updates the Lagrange multiplier. The convergence of (3) is proved in some special cases (see [15–17]). Unfortunately, without further conditions, the direct extension of ADMM (3) for the general case m ≥ 3 may fail to converge (see [18]). In [19,20], the authors present two convergent semiproximal ADMMs for two types of 3-block problems. Recently, He et al. [21] showed that if a new iterate is generated by correcting the output of (3) with a substitution procedure, then the sequence of iterates converges to a solution of (1). Since then, several variants of the ADMM have been proposed for solving (1) (see [21–26]). In (3), all the x_i-related subproblems are of the form (4), i.e., the minimization over X_i of f_i(x_i) + g_i(x_i) plus a quadratic term (1/2)‖A_i x_i − a_i‖²_H, with a certain known a_i ∈ R^l and a symmetric positive definite matrix H. When A_i is not the identity matrix, problem (4) becomes complicated. A popular technique is to linearize the quadratic term of (4) (see [27,28]); that is, one can solve a linearized problem (5) instead of (4), with a certain known c_i ∈ R^l. In general, one can solve a proximal regularization (6) of (4) instead, where x_i^k is the current iterate and G_i is a proximal matrix. If G_i = τ_i I_i − A_i^T H A_i ≻ 0, then (6) becomes the form of (5). 
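To illustrate the linearization idea behind (5) and (6), the following sketch performs the resulting proximal (soft-thresholding) update for a single block, assuming for the example that the nonsmooth part f_i is an ℓ1 norm and omitting the smooth term g_i for brevity. The problem data, the choice of f_i and the step rule τ ≥ ‖AᵀHA‖ are assumptions made purely for illustration and are not the paper's test problems.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (elementwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def linearized_step(x_k, A, a, H, lam, tau):
    """One linearized (prox-gradient) step for
       min_x  lam*||x||_1 + 0.5*(A x - a)^T H (A x - a),
    i.e. the quadratic term is linearized at x_k and a proximal term
    (tau/2)*||x - x_k||^2 is added, in the spirit of (5)/(6) in the text.
    """
    grad = A.T @ (H @ (A @ x_k - a))         # gradient of the quadratic at x_k
    return soft_threshold(x_k - grad / tau, lam / tau)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.normal(size=(20, 10))
    a = rng.normal(size=20)
    H = np.eye(20)
    lam, x = 0.1, np.zeros(10)
    # tau must dominate the curvature A^T H A for the step to be a majorization
    tau = 1.05 * np.linalg.norm(A.T @ H @ A, 2)
    for _ in range(200):
        x = linearized_step(x, A, a, H, lam, tau)
    objective = 0.5 * (A @ x - a) @ H @ (A @ x - a) + lam * np.abs(x).sum()
    print("objective after 200 linearized steps:", round(float(objective), 4))
```

The point of the linearization is visible here: once the coupling quadratic is linearized at x^k and a proximal term is added, each block update needs only one gradient evaluation plus a (often closed-form) proximal step.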
Since g i is smooth, the following problem is easier than (6): Now, we can give the gradient-based ADMM (G-ADMM) iterative scheme as follows: In this paper, imal ADMM with a substitution based on (8). In Section 2, we provide some preliminaries for further analysis. en, we present the gradient-based alternating direction method of multipliers with a substitution (G-ADMM-S) for solving (1) and its convergence is shown in Section 3. In Section 4, we estimate the worstcase iteration complexity for the proposed algorithm in nonergodic sense. In Section 5, some preliminary numerical results are reported to support the efficiency of the proposed algorithm. Finally, some conclusions are given in Section 6. Preliminaries In this section, we provide some preliminaries. Let 〈x, y〉 � x T y and ‖x‖ � ����� � 〈x, x〉 √ . G ≻ 0( ≽ 0) denotes that G is a positive definite (semidefinite) matrix. For any positive definite matrix G, we denote ‖·‖ G as the G-norm. If G is the product of a positive parameter β and the identity matrix I, i.e., G � βI, we use a simpler notation: e domain of f denoted by domf: � x ∈ R n | f(x) < +∞ . We say that f is convex if 2 Mathematical Problems in Engineering For convex function f, the subdifferential of f is the setvalued operator defined by 2.1. Variational Characterizations of (1). i � 1, 2, . . . , m, and W: � X 1 × X 2 × · · · × X m × R l . Since all Θ i (x i ) are convex functions, by invoking the first-order necessary and sufficient condition for convex programming, one can easily find out that problem (1) is characterized by the following variational inequality: we obtain for all (x 1 , x 2 , . . . , x m , λ) ∈ W. e Lagrange function of (1) is given by Let at is, for any λ ∈ R l and Finding a saddle point of L( en, (14) can be rewritten as the following variational inequality (VI): Let W * be the solution set of VI(W, G, Θ). Since we have assumed that the solution set of (1) is nonempty, W * is also nonempty. It follows from the definition of G(w) that . λ max (·) denotes the maximum eigenvalue of one matrix, and λ min (·) denotes the minimum eigenvalue of one matrix. e following notions will be used in the later analysis: It is easy to see that Q � Algorithm and Convergence Analysis In this section, we first describe G-ADMM-S and then prove its convergence via the analytic framework of the contractive-type method [29]. roughout this section, we assume that λ min (G i ) > L i (i � 1, 2, . . . , m). We propose the iterative scheme of G-ADMM-S for solving (1) in Algorithm G-ADMM-S: Let c ∈ (0, 2)D k and b k be defined in (18) and (19), respectively. Start with w 0 . With the given iterate w k , the new iterate w k+1 is given as follows: Step 1 (G-ADMM procedure). Execute scheme (8) to generate w k . Step 2 (substitution procedure). Generate the new iterate w k+1 via where Next, we establish the global convergence of Algorithm G-ADMM-S following the analytic framework of contractive-type methods. We outline the proof sketch as follows: (1) Prove that − D k is a descent direction of the function (1/2)‖w − w * ‖ 2 at the point w � w k whenever w k ≠ w k , where w k is generated by G-ADMM scheme (8) and w * ∈ W * (2) Prove that the sequence generated by Algorithm G-ADMM-S is contractive with respect to W * (3) Establish the convergence Accordingly, we divide the convergence analysis into three sections to address the claims listed above. Verification of the Descent Direction. 
In this section, we show that − D k is a descent direction of the function (1/2)‖w − w * ‖ 2 at the point w � w k whenever w k ≠ w k and w * ∈ W * . For this purpose, we first prove an important inequality for the output of G-ADMM procedure (8), which will be used often in our further discussion. where Proof. By the optimality condition of the x i -related subproblem in (8), for i � 1, 2, . . . , m, we have x k i ∈ X i and where δ(X i ) is the indicator function of the set X i . us, w k ∈ W and there exists η ∈ zδ( . From the subgradient inequality, one has From the definition of zδ( at is, for all for all x i ∈ X i . Summing the above inequality over i � 1, 2, . . . , m, we obtain where Mathematical Problems in Engineering en, by adding the following term to both sides of (30), we get Since Combining the above two formulas, we have where Mathematical Problems in Engineering Using the notations of G(w k ) (see (15)) and D k (see (18)), assertion (23) is proved. □ Based on assertion (23), we can get the following result. Proof. It follows from (23) that Using (17) and the optimality of w * , we have us, (40) e next theorem implies that − D k is a descent direction of the function (1/2)‖w − w * ‖ 2 at the point w � w k whenever w k ≠ w k . □ Theorem 2. For all w * ∈ W * , Proof. It follows from (37) that at is, (w k − w * ) T D k ≥ b k . Now, we treat the first term of the right-hand side of (43): where the first inequality follows from the Lipschitz continuous of ∇g i . en, let us deal with the second term of the right-hand side of (43): Mathematical Problems in Engineering where Now, we specify the choices of parameters to implement these algorithms. We set H � βI with β � 0.01, the relaxation parameter c � 1.8, r 1 � n1 + β‖A T 1 A 1 ‖, r 2 � ‖M‖ F + β‖A T 2 A 2 ‖, r 3 � n3 + β‖A T 3 A 3 ‖, and G i � r i I n i ×n i − μA T i A i (i � 1, 2, 3). We consider two cases of the parameter μ: Case 1: μ � β; Case 2: μ � 0.15. We test 7 groups of problems with random data. Numerical results are reported in Table 2. For each scenario, we test 5 times and report the average performance. Specifically, we report the number of iterations ("Iter."), the computing time in seconds ("Time"), and the absolute error of function value ("f-error"). e numerical results show that Algorithm G-ADMM-S is effective. Conclusion In this paper, for the linearly constrained separable convex programming, whose objective function is the sum of m individual blocks with nonoverlapping variables and each block is convex, we present a gradient-based ADMM with a substitution in the case m ≥ 3. We have analysed its convergence and iteration complexity. e preliminary numerical results have shown the efficiency of the proposed algorithm. Data Availability No data were used to support this study. Conflicts of Interest e authors declare that they have no conflicts of interest.
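As a complement to the verbal description of scheme (3), the following toy sketch implements the direct Gauss–Seidel extension of ADMM for m = 3 on a small, strongly convex quadratic instance with exact subproblem solves. It is only a didactic illustration of the update pattern: as noted above, this direct extension may fail to converge in general, which is precisely what motivates correction steps such as the substitution procedure. The problem data, penalty parameter and iteration count are assumptions made for the example.

```python
import numpy as np

def direct_3block_admm(A_list, b, c_list, beta=1.0, iters=300):
    """Direct Gauss-Seidel extension of ADMM (scheme (3)-style) for
       min  sum_i 0.5*||x_i - c_i||^2   s.t.  sum_i A_i x_i = b.
    Each x_i-subproblem is an unconstrained quadratic, solved exactly.
    """
    m = len(A_list)
    x = [np.zeros(A.shape[1]) for A in A_list]
    lam = np.zeros(b.shape[0])
    for _ in range(iters):
        for i, (A, c) in enumerate(zip(A_list, c_list)):
            # coupling residual with block i removed (uses the freshest x_j)
            r = sum(A_list[j] @ x[j] for j in range(m) if j != i) - b
            lhs = np.eye(A.shape[1]) + beta * A.T @ A
            rhs = c - A.T @ lam - beta * A.T @ r
            x[i] = np.linalg.solve(lhs, rhs)
        # multiplier update with the new primal iterates
        lam = lam + beta * (sum(A_list[j] @ x[j] for j in range(m)) - b)
    feas = np.linalg.norm(sum(A_list[j] @ x[j] for j in range(m)) - b)
    return x, lam, feas

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    A_list = [rng.normal(size=(5, 4)) for _ in range(3)]
    c_list = [rng.normal(size=4) for _ in range(3)]
    b = rng.normal(size=5)
    x, lam, feas = direct_3block_admm(A_list, b, c_list)
    print("constraint violation after 300 sweeps:", f"{feas:.2e}")
```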
2,639
2020-04-27T00:00:00.000
[ "Mathematics", "Computer Science" ]
A UAV and Blockchain-Based System for Industry 4.0 Inventory and Traceability Applications †: Industry 4.0 has paved the way for a world where smart factories will automate and upgrade many processes through the use of some of the latest emerging technologies. One such technology is Unmanned Aerial Vehicles (UAVs), which have evolved a great deal in the last years in terms of technology (e.g., control units, sensors, UAV frames) and have significantly reduced their cost. UAVs can help industry in automatable and tedious tasks, like the ones performed on a regular basis for determining the inventory and for preserving the traceability of certain items. Moreover, in such tasks, it is essential to determine whether the collected information is valid or true, especially when it comes from untrusted third parties. In such a case, blockchain, another Industry 4.0 technology that has become very popular in other fields like finance, has the potential to provide a higher level of transparency, security, trust and efficiency in the supply chain and enable the use of smart contracts. Thus, in this paper, the design and preliminary results of a UAV-based system aimed at automating the inventory and keeping the traceability of industrial items attached to Radio-Frequency IDentification (RFID) tags are presented. Such a system can use a blockchain to receive the inventory data collected by UAVs, validate them, ensure their trustworthiness and make them available to the interested parties. Introduction The concept of Industry 4.0 fosters the evolution of traditional factories towards smart factories through the use of some of the latest technologies, like 3D printing, augmented reality [1,2], cyber-physical systems [3], fog computing [4] or the Industrial Internet of Things (IIoT) [5,6]. Robotics and Unmanned Aerial Vehicles (UAVs) are also considered key technologies for the future smart factories, since they allow for carrying out repetitive and dangerous tasks with almost no human intervention or supervision. In the last years, UAVs have proved to be really useful in fields like remote sensing (e.g., mining), real-time monitoring, disaster management, border and crowd surveillance, military applications, delivery of goods, precision agriculture, infrastructure inspection or media and entertainment, among others [7,8]. In many of such fields, UAVs perform tasks that constitute one of the foundations of Industry 4.0: to collect dynamically as much data as possible from multiple locations. In addition, UAVs not only collect data, but are also able to store, process and exchange such information with suppliers or with devices deployed in factories. Industry 4.0 technologies have to be integrated horizontally so that manufacturers and suppliers can cooperate. In order for a company to determine dynamically its need for supplies, it is necessary to keep track of its stock. For such a purpose, many companies carry out a periodic inventory and decide whether more supplies have to be purchased. Unfortunately, in many companies, such an inventory is performed manually, which is a really costly, time-consuming and tedious task. Software exists to automate stock control, but when it is controlled by humans, the process is prone to accounting errors and it is not carried out in real time. Therefore, the ideal inventory should be performed automatically, in real time, and in an efficient, flexible and safe way. 
UAVs have been applied to inventory tasks in the past. The latest commercial systems [9–11] deploy a scanner on the UAV platform and perform a predefined flight in order to read barcodes. In the literature, there are more ambitious solutions like the one presented in [12], which describes an autonomous UAV that makes use of Radio-Frequency IDentification (RFID) and self-positioning/mapping techniques based on a 3D Light Detection and Ranging (LIDAR) device. Another essential technology for many industries is blockchain, which allows for storing the collected data (or a proof of such data) so that they can be exchanged in a secure way among entities that do not trust each other. Although blockchain can be considered to be still under development in many aspects [13], some of its applications for fields where trust is a necessity (e.g., finance) have already been deployed. In addition, blockchain technologies enable the creation of smart contracts, which can be defined as self-sufficient decentralized codes that are executed autonomously when certain conditions of a business process are met. Thus, the code of a smart contract translates into executable terms the control over physical or digital objects. For instance, a smart contract may be used as a sort of communication mechanism with a supplier when certain materials run low and more incoming work that would require them is expected. Besides recent literature on blockchain-based autonomous business activity for UAVs [14], to our knowledge, this article is the first that presents a communications architecture that includes both a blockchain and smart contracts together with a UAV development for RFID-based inventory and traceability applications. Specifically, the proposed system can use a blockchain to receive the inventory data collected by UAVs, validate them, ensure their trustworthiness and make them available to the interested parties. Moreover, the system is able to use smart contracts to automate certain processes without human intervention. Communications Architecture Figure 2 depicts the proposed communications architecture. In such an architecture, a UAV carries a Single-Board Computer (SBC) and an RFID reader. The RFID reader is used for collecting data from RFID tags that are attached to items or tools, or are carried by industrial operators. The SBC obtains such data from the RFID reader, processes them and sends them through a wireless communications interface to a ground station. The SBC can send the collected information to two possible destinations: to a Cyber-Physical System (CPS) or to a blockchain. In the case of sending the data to a blockchain, the SBC makes use of a software module that acts as a blockchain client. Therefore, the SBC is able to store in a secure way the collected data (or their hashes) into the remote blockchain, which also allows the proposed system to participate in smart contracts. Such a blockchain may be: • Public. No approval from any entity is required to join the blockchain. Anyone can publish and validate transactions. Public blockchains can be useful in certain industrial scenarios where a high level of transparency is necessary or where massive device interaction is required. • Private. The participation in the blockchain is regulated by the owner. Therefore, such an owner decides on issues like the mining rewards or who can access the network. 
• Consortium or federated. In this type of blockchain, a group of owners operates the blockchain. They restrict user access to the network and the actions performed by the participants. In fact, the consensus algorithm is usually run by a pre-selected group of nodes, which increases transaction privacy and accelerates transaction validation. This can be the case of groups of industrial companies (e.g., suppliers) that work in the same field and that have to exchange and validate transactions: each entity may have its own validation node, and when a minimum number of nodes approves a transaction, it is added to the blockchain. UAV Implementation UAVs vary widely in size, materials, components and configuration. In the design of the proposed UAV, the main objective was to develop a cost-effective, simple and modular initial prototype that can be easily adapted to different applications, scenarios and/or performance criteria. Figure 2 depicts the main components of the designed UAV. It is composed of a PixHawk 2.4.8 flight controller flashed with the well-known open-source firmware Ardupilot [15], mounted on a hexacopter frame of 550 mm in diameter that is mostly made of carbon fiber, except for the arms, which are made of plastic reinforced by carbon fiber rods in the interior. The thrust to move the UAV is generated by six 920 KV brushless motors controlled by six 30 A Electronic Speed Controllers (ESCs) powered by a four-cell Li-Po battery with a capacity of 5 Ah, which also provides power to all the on-board electronics through a voltage conversion module. Besides the built-in sensors of the flight controller board, a UBLOX M8N GPS module was included to provide autonomous flight outdoors. In order to perform the inventory, an RFID reader system has been used that consists of a commercial RFID reader (NPR Active Track-2, RF Code, Austin, United States) that has been modified to reduce its weight by replacing its steel case with a lighter one made of foam, which protects the reader and reduces vibrations. The reader is connected through Ethernet to the SBC, which processes all the readings and communicates wirelessly with the ground station. Table 1 shows a summary of the main components of the designed UAV. Experiments In order to test the proposed system, it was deployed in a big industrial warehouse (approximately 120 m long and 40 m wide) where 13 different tags were attached to items scattered throughout the warehouse (actually, for security reasons, the tags were deployed in a 50 m × 40 m isolated subarea). Figure 3 illustrates the experimental setup, while Figure 4 shows one of the moments during the experiments. As can be observed, in these preliminary tests, the drone was operated in manual mode in order to avoid possible security problems, and it followed a circular movement around the test area where the RFID-tagged items were placed. In the future, such an operation will be automatic through prefixed waypoints. Figure 5 shows the percentage of read tags through time. It can be observed that all the tags were read in less than two minutes. In addition, the significant reading range of the reader can be observed, since, in the first 11 seconds (as the drone rises from the ground), it is able to read roughly 30% of the tags. These results are really promising, since the time required by a human operator to collect the same information is at least five times greater than when using the proposed system, since the operator has to walk through the area, locate the items and identify them manually. 
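The read-percentage curve summarised in Figure 5 can be reproduced from a raw log of timestamped tag detections. The sketch below assumes a simple list of (seconds since take-off, tag ID) pairs; the log format and tag identifiers are illustrative, not the actual experiment data.

```python
def cumulative_read_percentage(readings, total_tags):
    """Return a sorted list of (time_s, percent_of_unique_tags_read).

    readings   : iterable of (time_s, tag_id) detection events.
    total_tags : number of tags deployed in the test area (13 in the text).
    """
    seen = set()
    curve = []
    for time_s, tag_id in sorted(readings, key=lambda r: r[0]):
        if tag_id not in seen:
            seen.add(tag_id)
            curve.append((time_s, 100.0 * len(seen) / total_tags))
    return curve

# Illustrative usage: three tags deployed, detected at different times
if __name__ == "__main__":
    log = [(3.2, "TAG-07"), (5.0, "TAG-07"), (11.4, "TAG-01"), (95.8, "TAG-12")]
    for t, pct in cumulative_read_percentage(log, total_tags=3):
        print(f"t = {t:6.1f} s  ->  {pct:5.1f} % of tags read")
```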
Conclusions In this paper, the design and preliminary results of a UAV and blockchain-based system for Industry 4.0 inventory and traceability applications were presented. Such an RFID-based system is able to collect inventory data five times faster than a human operator. The real-time collected data are processed in an SBC that can send the information to a CPS or to a public, private or consortium blockchain. Further work will focus on additional experiments and the implementation of a specific blockchain with IoT-based smart contracts. Figure 2. UAV used by the inventory and traceability system. Figure 5. Percentage of read tags during a specific inventory flight. Table 1. Main features of the UAV components.
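As described in the communications architecture, the SBC may publish either the collected data or their hashes to the blockchain. A minimal sketch of how an inventory record could be serialised and hashed before submission is shown below; it uses only the Python standard library, and the record fields and the notion of a separate "submit" step are assumptions made for illustration rather than part of the implemented system.

```python
import hashlib
import json
import time

def hash_inventory_record(tag_id, reader_id, location, timestamp=None):
    """Serialise an RFID detection deterministically and return its SHA-256 hash.

    Only the digest would need to be written to the blockchain; the full record
    can be kept off-chain and later verified against the stored digest.
    """
    record = {
        "tag_id": tag_id,
        "reader_id": reader_id,
        "location": location,
        "timestamp": timestamp if timestamp is not None else int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True, separators=(",", ":")).encode()
    return record, hashlib.sha256(payload).hexdigest()

if __name__ == "__main__":
    record, digest = hash_inventory_record("TAG-07", "UAV-READER-1", "warehouse-A")
    print(json.dumps(record, indent=2))
    print("digest to publish on-chain:", digest)
```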
2,430.4
2018-11-14T00:00:00.000
[ "Computer Science", "Engineering", "Business" ]
Improving Glycerol Photoreforming Hydrogen Production Over Ag2O-TiO2 Catalysts by Enhanced Colloidal Dispersion Stability Solar-driven photocatalytic reforming of biomass-derived resources for hydrogen production offers a sustainable route toward the generation of clean and renewable fuels. However, the dispersion stability of the catalyst particles in the aqueous phase hinders the efficiency of hydrogen production. In this work, a novel method of mixing Ag2O-TiO2 photocatalysts with different morphologies was implemented to promote colloidal dispersion stability, thereby improving hydrogen production performance. A series of Ag2O-TiO2 nanoparticles with different morphologies were synthesized, and their dispersion stabilities in the aqueous phase were investigated individually. Two types of Ag2O-TiO2 particles with different morphologies were mixed in certain proportions and suspended in glycerol aqueous solution without adding any dispersant, enhancing dispersion stability during the reaction. From the results, photocatalytic hydrogen production was found to be strongly correlated with colloidal dispersion stability. The mixed suspension of Ag2O-TiO2 nanospheres and nanoplates achieved excellent colloidal dispersion stability without employing any additives or external energy input, and the photoreforming hydrogen production obtained from this binary component system was around 1.1–2.3 times higher than that of the single-component system. From the calculated hydrogen production rate constants between continuous stirring and the binary system, there was only a <6% difference, suggesting an efficient mass transfer of the binary system for photoreforming hydrogen production. The proposed method could provide some inspiration for a more energy-efficient heterogeneous catalytic hydrogen production process. GRAPHICAL ABSTRACT | Hydrogen production by enhanced colloidal dispersion stability. INTRODUCTION The photoreforming hydrogen production route has been attracting great attention due to its integration of both solar energy and renewable sources utilization (Liu et al., 2014; Yu et al., 2015; Sadanandam et al., 2017). With the presence of renewable sacrificial organic compounds [e.g., glycerol (Shen Y. et al., 2019), lactic acid (Fu et al., 2019), or wood (Kawai and Sakata, 1980)], the reaction efficiency of H2 generation could be significantly improved, as those compounds combine with photo-generated h+ more readily than water does. Combining the redox reactions between water and organic compounds into a one-step process is defined as photoreforming, which is a valid approach to produce H2 as it is more thermodynamically feasible than pure water splitting (Fu et al., 2008). It is worth noting that a large number of biomass-derived substrates, such as bio-alcohols, could be used for this photoreforming hydrogen production process. Among those biomass-derived substrates, glycerol (C3H8O3), a by-product of biodiesel production, attracts special interest for hydrogen production for its low cost and excess production (Daskalaki et al., 2011; Gombac and Falqui, 2016). In our recent studies, glycerol has been found to have great potential for both efficient thermo-chemical and photo-chemical hydrogen production (Wang et al., 2017a; Ni et al., 2017). Titanium dioxide, one of the most promising photocatalysts, has been widely studied for photoreforming hydrogen production (Petala et al., 2015). 
However, there are some obstacles for further practical applications of bare TiO 2 : the severe electron-hole recombination of bare TiO 2 catalyst caused by a mismatch between photo-excited charge carriers life span and redox reaction slow kinetics, and this could lead to low energy conversion efficiency (Patrocinio et al., 2015;Litke et al., 2017); the spontaneous aggregation of TiO 2 -based particles when they are being suspended in aqueous phase due to their exposed highsurface energy facets for particular crystals Zhang et al., 2015). Thus, it is desirable to maintain certain dispersion stability during reaction and suppress electron-hole recombination to better achieve hydrogen production. Efforts in previous investigations have been made to enhance the TiO 2 -basis photocatalytic activity (Yang et al., 2013;Pan et al., 2018;Shen J. et al., 2019;Wang W. et al., 2019). In our previous studies, it was found that photoreforming H 2 production could be improved by coupling other metal oxide semiconductors to bare TiO 2 with a sol-gel method (Wang et al., 2017a). In particular, an efficient catalytic hydrogen production was achieved over Ag 2 O-TiO 2 catalyst. This was mainly because Ag 2 O composite could form hetero-structures with TiO 2 , which could efficiently provide the rapid separation sites for the photo-generated electrons and holes. Photocatalytic efficiency in an aqueous phase environment is found to be influenced by the catalyst aggregation to some extent (Li et al., 2010). In the works of Lakshminarasimhan et al. (2008), they concluded that the higher photocatalytic hydrogen yield was effected by the particle agglomeration of TiO 2 . Besides, physical dispersion such as ultrasonic dispersion and mechanical dispersion and chemical dispersion such as dispersant or surfactant addition and nanoparticle surface modification were pointed out to be effective for improving the stability of TiO 2 particles in water (Kim and Nishimura, 2012;Othman et al., 2012). However, those methods requiring extra external energy input (e.g., ultrasonic dispersion, mechanical stirring, or electromagnetic stirring, etc.) obviously break the energy balance and increase the cost of large-scale application of photocatalytic hydrogen production. In other previous studies, it was found that the particle agglomeration could be minimized by controlling the pH of the suspension , applying silane coupling agent modification of TiO 2 (Wang C. et al., 2019) or fluorinating TiO 2 particles by fluorine gas etc. . reported that the photocatalytic activity would be improved due to the surface fluorination of titanium dioxide by enhancing the dispersion stability of the TiO 2 in the organic reagents (Kim and Nishimura, 2012). Theoretically, photocatalytic activity is greatly affected by the dispersion stability which is directly influenced by the electrostatic interactions between the solid surface and generated ions. However, the approaches of TiO 2 surface modification may sacrifice the surface-active sites, resulting in a lower surface catalytic reaction efficiency. Those methods of regulating the composition of the liquid substrates (such as adjusting pH value) may also increase the complexity of the aqueous-phase reaction system and interfere with the mass transfer of the reactions. In addition, the high cost of surfactants and the disposal of generated residues would also be derivative issues. 
In our previous studies, it was discovered that TiO 2 -H 2 O nanofluids could be stabilized through the addition of ultrathin ZrP nanoplatelets (Liu et al., 2015b). The means of mixing TiO 2 particles with different shapes may be beneficial to the dispersion stability (Shao et al., 2015). We have achieved a preliminary enhancing hydrogen production by mixing two types of bare TiO 2 with different shapes (Shao et al., 2015). The mixed suspension of TiO 2 nanosphere and nanosheet still showed great colloidal dispersion stability and photocatalytic hydrogen production promoting at a specific mixing ratio. Herein, this study attempts to investigate the dispersion stability of Ag 2 O-TiO 2 -based photocatalysts with different morphologies and its effect on photocatalytic activity for photoreforming hydrogen production. Various Ag 2 O-TiO 2 nanoparticles with different morphologies were synthesized, and their microstructures were detected by X-ray diffraction (XRD), Brunauer-Emmett-Teller measurements (BET), and high-resolution transmission electron microscopy (HRTEM) analysis etc. The dispersion stabilities of the aqueous suspensions were characterized using zeta potential measurements and a Turbiscan Stability method. Based on the obtained results of previous studies, binary Ag 2 O-TiO 2 systems were introduced by dispersing a certain ratio of two types of Ag 2 O-TiO 2 nanoparticles to enhance the dispersion stabilities. Hydrogen production from photoreforming of glycerol aqueous solution was carried out to examine the relationship between dispersion stability and photocatalytic activity. Synthesis of Various Shapes of Ag 2 O-TiO 2 Nanoparticles The reason why Ag 2 O was utilized in the experiment is that Ag 2 O-TiO 2 has the strongest photocatalysis ability among ZnO 2 -TiO 2 , Bi 2 O 3 -TiO 2 , and Ag 2 O-TiO 2 due to its narrow band gap and absorption of more spectra energy (Wang et al., 2017a) and the high self-stable shown in the Ag 2 O to eliminate the external influence to the system stability (Yu et al., 2016). All chemicals were purchased from Sigma-Aldrich Trading Co. Ltd., and the reagents with analytical grade were used as received without further purification. The deionized water was prepared by Millipore Milli-Q ultrapure water purification systems with a resistivity larger than 18.2 M . Synthesis of various shapes of Ag 2 O-TiO 2 nanoparticles could be summarized as follows: synthesis of TiO 2 with various shapes and compounding Ag 2 O with the prepared TiO 2 . According to the previously study, TiO 2 could be prepared to provide the desired morphologies by several methods. TiO 2 nanosphere with the average diameter of 32 nm was directly used as received. The asprepared TiO 2 nanosphere was also served as a precursor in the process of TiO 2 nanotube synthesized. The typical hydrothermal process in a certain concentration sodium hydroxide aqueous solution was applied to synthesize TiO 2 nanotube (Kumar et al., 2016). At first, 2.5 g TiO 2 sample was dispersed in 200 ml of NaOH solution (10 M). After stirring in the circumstance temperature for 1 h, the obtained slurry was transferred and sealed in a Teflon-lined autoclave for the hydrothermal treatment under 130 • C in an atmospheric pressure for 20 h. The certain volume of 0.1 mol/L HCl and ethanol solution was applied to wash the precipitate alternately after the supernatant cooling down to the circumstance temperature. For the last step, the obtained solid was dried at 70 • C under the atmosphere for 12 h. 
The hydrothermal process was also employed to prepare TiO 2 nanoplate. In the beginning, 10 ml titanium tetra-isopropanolate was dissolved in 1.2 ml hydrofluoric acid with continuous stirring for 30 min. After that, a hydrothermal treatment was carried out for 24 h under 180 • C and atmospheric pressure. The products were washed alternatively by water and ethanol in centrifugation until the final pH of the suspension reached 7. The sample was then dried for 12 h at 70 • C. It should be noted that all TiO 2 samples were calcined at 350 • C under the atmosphere for 5 h in the last step of preparation. After obtaining the TiO 2 precursors with different morphologies, the corresponding Ag 2 O-TiO 2 particles were prepared using a precipitation method (Zhou et al., 2010). TiO 2 precursors (0.5 g) of each kind were dispersed in 100 ml of distilled water, followed by dissolving 0.725 g AgNO 3 to each suspension while stirring (weight ratio of Ag 2 O:TiO 2 = 1:1). Afterward, the excess amount 0.2 M NaOH solution was added to the mixture with continuously stirring to gain the precipitate. Finally, various Ag 2 O-TiO 2 samples were obtained after washing and drying. Due to the control of the compositions, the morphologies of TiO 2 were considered to be unchanged after compounding Ag 2 O. The obtained Ag 2 O-TiO 2 nanosphere, Ag 2 O-TiO 2 nanoplate, and Ag 2 O-TiO 2 nanotubes were denoted as AS, AP, and AT, respectively. Characterization of Catalysts The BET (Brunauer-Emmett-Teller) method was employed to detect the specific areas of the catalysts that were detected by N 2 adsorption and desorption isotherms at 77 K with Micrometric Acusorb 2100E apparatus. In a typical procedure, the sample was disposed in vacuum to degassed prior to the measurement at 100 • C for 1 h and then at 120 • C for 2 h in turn. The crystal phase and structure of the samples were investigated using powder XRD (Shimadzu XRD-6000) for diffraction angle 2 h from 20 • to 80 • where a Cu target Kα-ray (operating at 40 kV and 30 mA, with k = 0.1541 nm). In the applied continuous mode, a nominal step interval of 0.0025 • 2θ with a step time of 100 s was set. According to the diffraction peaks and the mean crystallite size was calculated by the Scherrer equation. Detailed morphologies and structures of the catalysts were observed under the HRTEM using JEM-2100. UV-vis absorption spectra of the samples were obtained by a UV-3600 plus (Shimadzu, Japan) apparatus. The particle sizes were analyzed at 25 • C by dynamic light scattering (DLS) at a scattering angle of 173 with a Zeta sizer Nano ZS particle size analyzer (Beckman Coulter, Inc., USA). Colloidal Dispersion Stability Measurements The dispersion stabilities of the nanoparticle suspensions were analyzed by the Turbiscan Lab R Expert type stability analyzer manufactured by Formulation (France) (Buron et al., 2004;Wiśniewska, 2010;Fang et al., 2012;Kang et al., 2012). A near-infrared light source λ = 880 nm based on multiple light scattering, transmission coefficients, and backscattered pulses was monitored by two simultaneous optical detectors. It should be noted that a fingerprint spectrum characterizing the dispersion performance of sample could be confirmed when the measurement frequency, scanning time, and scanning interval of the analyzer were set. The dispersion stability was evaluated by Turbiscan Stability Index (TSI) with the help of Turbiscan Easy Soft R . 
Based on the measured data, Turbiscan Stability Index (TSI) could be calculated using the following equation: Where h is the height of the sample cell, and scan i (h) denotes the light transmission or backscattering obtained by the i th scan at height h. Larger TSI value indicates less stable the dispersed system. Zeta potential profiles of the suspension system were measured using a zeta-potential measurement device (Delsa Nano C/SS). The specific operation process was used to prepare the suspension: 2-mg sample was dispersed into 20-ml solvents and ultrasonicated for 1 h. Photocatalytic Activity Measurements The photoreforming H 2 production experiments were carried out in a duplex Pyrex flask at nearly ambient temperature and −0.1 MPa pressure, where openings of the flask were sealed with a silicone rubber seals and glass lids. A 300W Xe arc lamp (50 W, 320-780 nm, Beijing Philae Technology Co., Ltd., China) was used as a light source and vertically placed at 10 cm away from the top of the photocatalytic reactor. The focused light intensity and area on the flask for xenon lamp were ca. 120 mW/cm 2 and 0.2 cm 2 , respectively. In each photocatalytic experiment, 0.1 g total amount of catalyst (or mixed binary catalysts with different weight ratios) was suspended in 100-ml glycerol aqueous solution (containing 7 vol% of glycerol). Based on the previous study, different weight ratios were selected as: 20% of AS with 80% of AP (denoted as 1AS-4AP), 40% of AS with 60% of AP (denoted as 2AS-3AP), 60% of AS with 40% of AP (denoted as 3AS-2AP), 80% of AS with 20% of AP (denoted as 4AS-1AP). Before each test, the suspensions were stirred for 30 min and maintained in ultrasonic agitation for another 30 min to maintain the initial dispersion stability. Every effort was made to ensure that there was no external interference making sense to the colloidal stability during each experiment. The produced gaseous products were detected by gas chromatographer with a TCD detector (GC-2014c AT, Shimadzu, Japan) and 5Å molecular sieve column using N 2 as a carrier gas. The following equation group would be applied to calculate the apparent quantum efficiency (AQE) and light-to-hydrogen energy conversion efficiency (LTH) according to the work by (Yu et al., 2011): Where P is radiation flux, E is average irradiance, A R is lightreceiving area of reactor; N i p is number of incident photons, t is reaction time, λ is equivalent wavelength, h is Planck constant, c is constant speed of light; a is the apparent quantum efficiency (AQE) and R H2 is the obtained hydrogen production rate, N A is the Avogadro constant; ηis defined as the light-to-hydrogen (LTH) energy conversion efficiency, and H 0 c is the enthalpy of combustion of hydrogen. Density Function Theory Calculation Figures 1A-D show the conventional cell of anatase TiO 2 and different surface cells introduced for surface energy calculation in this experience. As known, anatase TiO 2 has a tetragonal structure [space group: I4 1 /amd, local symmetry: D 4h Long, 2013] that contains two titanium atoms and four oxygen atoms in its unit cell. A range of surface slabs could be created by optimizing bulk unit cell of anatase at its Miller indices by surface builder module in materials studio. In this work, we employed a flat slab with a thickness of 2 atomic layers, which was vertical to the surface and could extend indefinitely in the other two directions to simulate the surface of anatase TiO 2 and call periodic boundary conditions. 
Besides, each repeated replica with a certain vacuum width of 12 Å constituted each surface cell in this work. For example, the proposed supercell model of anatase TiO 2 (001) consisted of 16 titanium atoms, 32 oxygen atoms. The layer model of Ag 2 O coupled with anatase TiO 2 (001) consisted of 16 titanium atoms, 37 oxygen atoms, and 10 silver atoms shown in Figures 1E,F. The atomic concentration (the number of atoms that can fit into a given volume) of silver was about 15.87% (atomic fraction), which was referenced from the sample used in the experimental section. All calculations were performed with the CASTEP using a total energy plane-wave pseudo-potential method) module in Material Studio 7.0 on the basis of density function theory (DFT) (Payne et al., 1992). The expanding wave functions of the valence electrons using a plane wave (PW) basis set within a specified energy cut-off of 300 eV. In additionally, we described the exchange correlation energy with the generalized functional approximation of the Perdew-Burke-Ernzerhof gradient(GGA-PBE) (Perdew et al., 1996) and the pseudo-potential representation was in the reciprocal space (Troullier and Martins, 1991). In the calculation, the k-point mesh generated by Monkhorst-Pack scheme was set as 2×2×2 over Brillouim zone with a k-point spacing of 0.025Å −1 . The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method was set to relax the structure and the thresholds for the converged structure were set as following: energy change per atom was <2.0 × 10 −5 eV; residual force was <0.05 eV/Å; the displacement of atoms during the geometry optimization was <0.002 Å; and the residual bulk stress was <0.1 Gpa. The thermodynamic stability of a given surface is dependent on its surface energy and a positive low value indicates a stable surface. The surface energy (Esurf) in a slab model could be calculated by (Meng et al., 2016): where E slab and E bulk represents the total energies of the surface slab and the bulk unit cell, respectively. N slab and N bulk are the numbers of atoms contained in the slab and the bulk unit cells, respectively, while A is denoted the unit area of the surface and "2" means that the flat slab has two faces along the z-axis. The surface energy was calculated by the CASTEP module in Materials Studio (MS) on the basis of DFT. Photocatalyst Characterization The pore structures and BET surface areas of the asprepared samples were detected by the N 2 adsorptiondesorption measurement. Figure 2 showed the isotherms and the corresponding pore size distribution curves of the samples. According to the International Union of Pure and Applied Chemistry (IUPAC) classification, type IV isotherm is the most approximate to the nitrogen adsorption-desorption isotherms of all samples, indicating the presence of mesoporous structure (2-50 nm). The shapes of hysteresis loops were of type H 3 at the relative pressure value of range of 0.8-1.0, suggesting the presence of slit-like pores due to the stacking of TiO 2based particles. The Barrett, Joyner, and Halenda (BJH) method was used to obtain the pore size distribution curve from the desorption branch of the nitrogen isotherm. After calculating through the BJH method, the pore diameters for AS, AP, and AT were about 31.62, 16.58, 5.9 nm, and the BET surface areas of AS, AP, and AT were 25.31, 48.79, and 73.61 m 2 g −1 ; the related details were listed in Table 1. 
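The slab surface energy defined in the DFT section above can be evaluated directly once the slab and bulk total energies are known. The sketch below encodes E_surf = [E_slab − (N_slab/N_bulk)·E_bulk] / (2A), which is the relation described in the text; the numerical values in the usage example are placeholders, not the CASTEP results of this work.

```python
def surface_energy(e_slab, n_slab, e_bulk, n_bulk, area):
    """Surface energy of a two-faced slab model.

    e_slab, e_bulk : total energies of the slab and of the bulk unit cell (eV)
    n_slab, n_bulk : numbers of atoms in the slab and in the bulk unit cell
    area           : surface unit-cell area (Angstrom^2); the factor 2 accounts
                     for the two faces of the slab along the z-axis.
    """
    return (e_slab - (n_slab / n_bulk) * e_bulk) / (2.0 * area)

if __name__ == "__main__":
    # Placeholder numbers purely to show the call signature
    e_surf = surface_energy(e_slab=-39497.2, n_slab=48,
                            e_bulk=-4937.5, n_bulk=6, area=14.6)
    print(f"surface energy = {e_surf:.3f} eV / Angstrom^2")
```

A positive, low value indicates a thermodynamically stable surface, as stated in the text.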
The crystal structure and crystallinity of the synthesized photocatalysts were investigated using powder XRD analysis, and the results were demonstrated in The appearance of Ag 2 O as a secondary phase in all samples indicated that Ag 2 O was well compounded with TiO 2 particles of three different morphologies by the described synthesis method. As a matter of fact, such structures for all the samples may be beneficial for electron transfer which could be a benefit to the photocatalytic performance (Tan et al., 2003). As seen in the HRTEM image (Figure 4a), Ag 2 O grains were regarded as much tinier compared to the bright part and could be identified as dark spots on the surface of the nearly transparent TiO 2 nanospheres. The average particles size of Ag 2 O particles was calculated as 9.9 nm from respective HRTEM image (analyzed by Nano Measurer 1.2.0 software R ). In Figures 4b,d, Ag 2 O particles were also tiny and well-dispersed in the way of anchoring tightly onto the surface of the TiO 2 nanoplates. The Ag 2 O nanoparticles on TiO 2 nanoplates are very stable and will not break even after ultrasonic treatment, which is meaningless. It could be observed that the loaded Ag 2 O particles had a fairly wide range of sizes that varied from 5.64 to 17.93 nm. The HRTEM image of Figure 4c revealed that the structure of the prepared TiO 2 nanotubes was of cylindrical shape and hollow inside. The outer and inner diameters of the tube were about 7 and 4 nm, respectively. Similarly, the black spot on the surface of the AT in the HRTEM image implied to the presence of the Ag 2 O nanoparticles. In the previous observation of Ag 2 O-TiO 2 photocatalyst, some of the Ag 2 O particles could be reduced to metallic Ag particles (Wang et al., 2017b). The average Ag 2 O size (20 measurement objects were randomly selected) over AT was 15.41 nm. Furthermore, almost no free Ag 2 O was found in the background of the HRTEM images, which could confirm a high loading rate of the Ag 2 O particles. The histogram in Figure 4e sample. According to the related work (Ren and Yang, 2017), the influence of the ununiform distribution of the Ag 2 O nanoparticles could be ignored regarding to the H 2 yield. In fact, the photoelectrochemical properties of the catalysts synthesized with the same idea had been examined in our previous work which showed a competitive performance (Wang C. et al., 2019). The apparent sizes of catalyst particles were measured in aqueous suspensions with certain concentrations using dynamic light scattering (DLS) technique. Just as the histogram in Figure 5 has shown, it could be observed that there was a wide range of particle diameters and particle aggregations. From the results of DLS measurements, two peaks were detected for all samples, about 5% of the detected particles for all those three samples were in the diameters between 90 and 150 nm. These parts might be formed by individual particles. The other parts of the detected particle sizes for AS, AP, and AT were in the range varying from 300 to 700 nm, 170 to 400 nm, and 250 to 550 nm, respectively, which might reflect their sizes of aggregations. 
Figure 6 showed the UV-visible absorption spectra for the binary mixing systems with the Kubelka-Munk diagram for apparent band gap energies (E g ) to understand the optical properties and dispersion stability of the binary system, calculated by the Tauc equation in the following (Grover et al., 2013): (αhν) n = hν-E g , where ν is frequency, h is Plank's constant, and n = 0.5 for indirect semiconductor, α is absorption coefficient, and E g is the band gap energy. In fact, the absorptions above 400 nm in catalyst samples were ascribed to the presence of Ag 2 O as a functional visible-light sensitization compound which possessed both a tough and wide absorption band in the visiblelight region (Zhou et al., 2010). The wavelength thresholds of the single-component system AS, AP, and AT were calculated to be 450, 450, and 520 nm, corresponding to the bandgaps of 2.75, 2.75, and 2.40 eV, respectively. The calculated E g for AS, AP, and AT in this study were at the same level of the reported values of Ag 2 O-TiO 2 (varied from 2.18 to 2.88 eV) (Zhou et al., 2010;Kumar et al., 2016;Ren and Yang, 2017). Here, it should be noted that 20% of AS and 80% of AP were denoted as 1AS-4AP. And other denote similar to the denoting rules. As known, the optical properties of TiO 2 nanoparticles were sensitive to their morphologies, therefore the peak intensity differences of UVvis absorption spectra for the single-component systems were mainly due to their morphological diversities. It could be deduced from the peak intensities of UV-vis light absorption (shown in Figure 6) that the light absorption abilities could be summarized as AS > AP > AT. The present result may be probably caused by the microscopic spatial structures of the materials and the different promoting effects of Ag 2 O for TiO 2 with different morphologies. As shown, the light absorption capacity of AS-AT binary nanoparticle system was stronger than that of the AS single-component system. There was no obvious difference between AP and AT single-component systems in the light absorption capacities. Compared to AT or AP single-component systems, AT-AP binary system had not been improved in light absorption capacity. Generally speaking, the light absorption capacities of the binary systems located between the highest (AS) and lowest (AT) single-component system. From the results, the system of 60% AS mixed with 40% AP (3AS-2AP) exhibited the highest absorption capacity compared to other AS-AP binary systems, AS-AT systems, and AT-AP systems. Since the obtained specific surface areas and light absorption properties of Ag 2 O-TiO 2 materials had no direct and obvious effect on the UV-vis absorption experimental results of the binary systems, it could be considered that the differences of light absorption properties might be caused by their different dispersion stabilities. Dispersion Stability Analysis The dispersion stability could be analyzed by Turbiscan Stability Index (TSI), and the obtained TSI values for different systems were plotted in Figure 8. It should be noted that the TSI value shows a negative correlation to the dispersion stability for the suspension, and the increase of TSI value indicates a fast sedimentation process and a large thickness of the sediment. According to the results shown in Figure 7, AS single-component system performed the best dispersion stability among all samples, and the addition of AS with certain concentration improved the dispersion stability of AP-and AT-based systems, respectively. 
In the case of AP-AT binary system, the TSI values of the selected AT-AP binary systems were smaller than that of AT and AP single-component system, indicating the enhancing dispersion stabilities. Such increase in dispersion stability was possibly related to a comprehensive effect of electrostatic repulsion and steric hindrance according to the DLVO theory (Liu et al., 2015a). Generally speaking, this effect was caused by the reduction of particle collision frequency and agglomeration tendency. The results of average transmission flux were also depicted in Figure 7. There was a fact that particles being homogeneously dispersed in water would block most of the laser to the detector, resulting in low transmission flux, while the formed sedimentations could not block most of the upper laser, so that the total transmission flux would be high. As the time increased, the transmission flux increased from 0 to 42% for the singlecomponent systems, meaning agglomeration and sedimentation of AT and AP particles occurred due to the van der Waals force and the gravitational force. In Figure 7A, the dispersion stability of AP single-component system was significantly improved by mixing AS (AS varied from 0 to 40% of the mixing particles, while the mass of the mixing particles was 0.1 wt.% of total mass of the suspension). Nevertheless, the excessive AS (60%) declined dispersion stability of the suspension. It could be inferred that the depletion interaction between AS and AP was emphasized when reaching a critical ratio, and such depletion interaction between particles with different shapes might influence the dispersion stability (Mason, 2002;Zhang et al., 2013). In the case of AS-AT systems (Figure 7B), AT showed worse dispersion stability than AS with the same total concentration. This result suggested that AT in the suspension was easier to agglomerate resulting in the formation of large AT particles. Unfortunately, there was no obvious improvement in dispersion stability even mixing AS to AT. In Figure 7C, the dispersion stability of AT-AP binary system had been improved compared to AT or AP single-component system, indicating the generation of a strong electrostatic repulsion between the AT and AP solid surface. Zeta potential (ζ ) is widely recognized as an indicator of the stability of colloidal dispersions, revealing the potential difference between the dispersion medium and the fluidic connection of the stationary layer to the dispersed particles. Figure 8 displayed the absolute values of ζ for the single-component systems and the binary component systems. According to the previous study, the obtained values were large enough to maintain a relatively high stability (Patel and Agrawal, 2011). According to the most widely accepted DLVO theory, colloid stability depends on the sum of van der Waals attractions and electrostatic repulsive forces (Missana and Adell, 2000). The ζ value could provide information of the electrostatic repulsive force. On the other hand, the van der Waals force relies on the Hamaker constant, and this constant is determined by particle spatial configurations and other properties without considering the influence of an intervening medium between the two particles of interaction. When the Hamaker constant is small, the reflecting van der Waals force is weak, the low electrostatic repulsion reflected by small ζ may be appropriate to ensure colloid stability (Kim et al., 2014). Therefore, it is common to come across stable colloids with low ζ values. 
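The TSI values used throughout the dispersion stability analysis can be computed from the raw Turbiscan height profiles. Since the equation itself is not reproduced in the text, the sketch below assumes the commonly used form in which the index accumulates, scan by scan, the absolute changes of the transmission or backscattering signal over all measured heights; it is meant only to convey how a larger TSI reflects a faster-evolving (less stable) suspension and is not necessarily the exact expression implemented in Turbiscan Easy Soft.

```python
import numpy as np

def turbiscan_stability_index(scans):
    """Approximate TSI from a sequence of Turbiscan height profiles.

    scans : 2-D array of shape (n_scans, n_heights); scans[i, h] is the
            transmission or backscattering of scan i at height position h.
    Assumed form: scan-to-scan absolute differences are summed over all
    heights (normalised by the number of height positions) and accumulated
    over time, so an unchanging profile gives TSI = 0.
    """
    scans = np.asarray(scans, dtype=float)
    diffs = np.abs(np.diff(scans, axis=0))        # |scan_i(h) - scan_{i-1}(h)|
    return float(diffs.sum(axis=1).cumsum()[-1] / scans.shape[1])

if __name__ == "__main__":
    stable = np.tile(np.linspace(10, 12, 50), (20, 1))               # no evolution
    settling = stable + np.outer(np.arange(20), np.linspace(0, 0.4, 50))
    print("TSI (stable suspension)  :", turbiscan_stability_index(stable))
    print("TSI (settling suspension):", turbiscan_stability_index(settling))
```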
Photoreforming Hydrogen Production
The photoreforming H2 evolution over the single and binary systems was carried out in a glycerol-water system irradiated by a 300 W xenon lamp. Figure 9 shows the photocatalytic H2 evolution over time for the different catalyst systems. As observed, the hydrogen production amounts of the single-catalyst systems can be ranked as AP < AT < AS. Although AP, with its large specific surface area and high pore volume, is considered a two-dimensional (2D) material with excellent photo-generated carrier transfer properties (Amano et al., 2009), the AP system did not exhibit a competitive photocatalytic hydrogen production performance. According to the dispersion stability analysis, both the zeta potential and the TSI value showed the poor dispersion stability of AP. In other words, the dispersion stability had a strong influence on the hydrogen production performance. Notably, AT, with its 1D nanostructure, exhibited higher photocatalytic activity among the single-component systems owing to better delocalization of the excited photo-generated electron-hole pairs and a well-developed space charge region that effectively reduces the recombination of photo-generated charge species (Toledo Antonio et al., 2010; Zhao et al., 2014). Still, these advantages may be largely neutralized by the poor dispersion stability of AT, which is confirmed by its TSI value and transmission flux. This result further indicates the importance of dispersion stability for photocatalytic hydrogen production. Compared with the AP and AT single-component systems, AS, with the lowest surface area, exhibited the best photocatalytic hydrogen production performance. Experiments were carried out to further confirm the role of binary systems with high dispersion stability in photocatalytic hydrogen production. For the AS-AP binary component system (Figure 9A), the photocatalytic activity of AP was significantly improved by mixing in AS. Among the AS-AP binary systems, 3AS-2AP showed the lowest photocatalytic hydrogen production, and its dispersion stability was also the worst of the binary systems. Since the catalytic performance and the dispersion stability differ between AS and AP, a binary system is expected to show a catalytic performance within the range spanned by the two single-component systems. Owing to the Ag2O doping of TiO2, a structure might be formed that displays the antenna mechanism promoting catalytic activity, and the binary component system may benefit more from this mechanism than the single-component systems (Wang et al., 2006). Among the binary systems, the system of 20% AS and 80% AP displayed the largest photocatalytic H2 production amount of 1,133.21 µmol·g−1. In the suspension of the AS-AT binary component system, the depletion interaction between AS and AT was weak, which contributed little to the enhancement of the dispersion stability. As illustrated in Figure 7B, the dispersion stability of the AS-AT binary component systems was not effectively improved compared with the AS and AT single-component suspensions; accordingly, the photocatalytic hydrogen production performance was not enhanced (Figure 9B).
In Figure 9C, the photocatalytic performance of the AT-AP binary component system was not significantly improved compared with that of the single-component systems, regardless of the effect of time on the reaction kinetics; this result is highly consistent with our previous result for bare TiO2 catalysts (Cai et al., 2018). The dispersion stability results of the AT-AP binary systems were roughly consistent with the trend of the photocatalytic activities. Excluding the AT single-component system, the 1AT-4AP binary system displayed a photocatalytic H2 production amount of 690.16 µmol·g−1, about 1.5 times that of the AP single-component system. However, AT, with its complex spatial structure and electron transport characteristics, may exhibit special microscopic interparticle interaction forces; this result is similar to that of our previous study on bare TiO2 nanotubes, meaning that doping with Ag2O did not significantly change the spatial interaction of TiO2 nanotube particles (Cai et al., 2018). A light source covering the range of 320-780 nm was used in this experiment. Under UV irradiation, both TiO2 and Ag2O can be excited to generate photo-generated electron-hole pairs, while visible light is absorbed only by Ag2O, according to (R1) and (R2). Ag2O is reduced in situ to Ag by the electrons according to (R3). The photo-generated holes on both TiO2 and Ag2O then produce the reactive oxygen species ·OH. At the same time, according to (3) and (5), the O2 released as Ag2O is reduced reacts with electrons to form more ·OH, which can improve the TiO2 photocatalytic activity (Ran et al., 2019; Chen et al., 2020). Because of the band gaps of Ag2O and TiO2, Ag2O can be excited to produce h+ and e− under visible light, and the electrons in its conduction band are transferred to the conduction band of TiO2 to produce H2. Thus, in this biphasic photocatalyst, Ag2O acts as a visible-light sensitizer that absorbs more energy from the light source, while Ag acts as an electron mediator that transfers the photo-generated electrons and improves the H2 yield (Sadanandam et al., 2017).
Kinetic Analysis
In a typical heterogeneous photocatalytic hydrogen production by photoreforming, the organic substrates are considered to be strongly adsorbed on the catalyst surface, which promotes the direct reaction between positive holes and the adsorbed organics rather than those in solution (Clarizia et al., 2017). The reaction rate can be described by Langmuir-Hinshelwood (L-H) kinetics, which is dominated by different rate-determining steps at different concentrations of the adsorbed species (Rivero et al., 2019). The initial concentration of the glycerol solution in this study is therefore an effective means of probing the kinetic behavior of hydrogen production (shown in Figure 10). For each experiment, 1-h irradiation without stirring was carried out with a total photocatalyst mass of 0.1 g (80% AP and 20% AS, the combination shown above to have the better self-dispersion stability among the binary systems). Meanwhile, another set of experiments was conducted under the same conditions except for continuous stirring.
The L-H kinetic model can be described as follows. Assuming equilibrium between adsorption and desorption, the surface coverage is θ = K·C/(1 + K·C) with K = k_a/k_d, and the hydrogen generation rate is r_H2 = k_H2·θ = k_H2·K·C/(1 + K·C), where r_H2 is the H2 generation rate, K = k_a/k_d is the adsorption equilibrium constant, k_H2 is the rate constant of hydrogen production, θ is the surface coverage, and C is the initial concentration of the glycerol solution. As shown, the kinetic calculations fitted the experimental data well in both cases, indicating the wide applicability of L-H kinetics to photoreforming reactions. The increase of the reaction rate slowed markedly when the initial concentration of the glycerol solution reached 1 mmol/ml in both cases, reflecting the critical concentration at which the rate-limiting step switches. The fitted values of k_H2 and k_a/k_d for the stirred case were 217.9 µmol/(g·h) and 1.39 ml·µmol−1, respectively; the corresponding values were 204.08 µmol/(g·h) and 0.90 ml·µmol−1 for the colloidal system relying on its self-dispersion stability. The different reaction rate constants and adsorption equilibrium constants of the two cases reflect their different mass transfer characteristics when the same catalyst combination is used under the same light intensity and temperature. Although the binary system with better self-dispersion stability was still less efficient than the system whose mass transfer was promoted by continuous stirring, it maintained a high pseudo-first-order kinetic constant. It is worth noting that the difference in hydrogen production rate constants between the stirred and self-dispersed cases was <6%, showing that the mass transfer efficiency of the binary system for hydrogen production was high within the selected concentration range.
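A minimal R sketch of how the L-H expression above can be fitted to rate-versus-concentration data with nls(); the data vectors are placeholders for illustration, not the measured values, and fitting the actual measurements is what yields constants such as the 217.9 µmol/(g·h) and 1.39 ml·µmol−1 reported above.

# Langmuir-Hinshelwood fit: r = k_H2 * K * C / (1 + K * C), with K = k_a/k_d.
# The data below are placeholders, used only to show the fitting step.
C    <- c(0.1, 0.25, 0.5, 1, 2, 4)          # glycerol concentration (mmol/ml)
rate <- c(25, 55, 90, 130, 165, 185)        # H2 rate (umol g^-1 h^-1)

fit <- nls(rate ~ k * K * C / (1 + K * C),
           start = list(k = 200, K = 1))
summary(fit)$coefficients                    # estimates of k (= k_H2) and K (= k_a/k_d)

# Predicted curve, useful for visualizing the rate-limiting-step transition
Cgrid <- seq(0, 4, length.out = 200)
pred  <- predict(fit, newdata = list(C = Cgrid))
plot(C, rate, pch = 16, xlab = "C (mmol/ml)", ylab = "r (umol g^-1 h^-1)")
lines(Cgrid, pred)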
Density Functional Theory Study
The crystal lattices of Ag2O and bulk anatase TiO2 were optimized before the Ag2O and TiO2 surfaces were calculated; the optimized lattice constant of Ag2O is 4.728 Å. The lattice parameters of bulk anatase TiO2 (a = 3.776 Å, b = 3.776 Å, c = 9.486 Å) calculated by the DFT method agree with the experimental data (Arlt et al., 2000). The equilibrium morphology of a crystal is determined by its surface energies and the related growth rates of the various surfaces (Cooper and de Leeuw, 2003): a surface with a high surface energy has a high growth rate, and such fast-growing surfaces are not present in the resulting crystal morphology, whereas surfaces with low surface energies, and hence slow growth rates, dominate the equilibrium shape (Gao et al., 2013). The surface energy also quantifies the thermodynamic penalty for cleaving a surface from the bulk material. The calculated surface energies of three types of anatase TiO2 surfaces are shown in Table 2. The (101) surface, with the lowest surface energy, is the main cleavage plane and the dominant exposed facet in the equilibrium morphology of the anatase TiO2 crystal, which matches well with the XRD results (Figure 3). The total energy of a unit cell with formula Ti2O4 of the anatase crystal is given in the footnote of Table 2. Figures 11A,B display the calculated band structure and density of states (DOS), including the total density of states (TDOS) and the partial density of states (PDOS), of pure anatase TiO2. The calculated band gap of pure anatase TiO2 is 2.098 eV, which is underestimated compared with the experimental Eg = 3.2 eV because the DFT framework does not take the discontinuity of the exchange-correlation potential into account (Stampfl and Van de Walle, 1999; Zhao et al., 2015). The valence band of the pure anatase TiO2 phase is mainly composed of two parts, one from −20 to −15 eV and one from −5 to 0 eV. The former consists mainly of O 2s states, which lie far below the top of the valence band; other electronic states are not significant there, so this part has little impact on the physical properties of the material. The latter part consists of both O 2p and Ti 3d states. As the Ti 3d states split into two groups (the t2g and eg states) in an octahedral ligand field with Oh symmetry, the conduction band can be divided into a lower and an upper part (Fang et al., 2014). The PDOS diagram also shows that the conduction band consists mainly of Ti 3d states. In general, the Ti 3d states play the dominant role in the conduction band of pure TiO2, while the O 2p states dominate the valence band. This implies that the main cause of optical absorption is the transition of electrons from O 2p to Ti 3d states, in agreement with previous theoretical studies (Cao et al., 2014; Wang et al., 2017a). Compared with the Eg of 2.97 eV calculated for pristine TiO2 (001), the Eg narrows to 0.176 eV when Ag2O is coupled with TiO2 (001). Our previous studies reported that incorporation of Ag2O into TiO2 extends the spectral response to the visible-light region and greatly enhances the photocatalytic activity for hydrogen production from glycerol:water mixtures (Melrose and Stoneham, 1977; Wang et al., 2017b); our calculated results agree well with these experimental results. To further trace the origin of the electronic structure of Ag2O-coupled TiO2, the partial density of states (PDOS) was calculated and plotted in Figures 11D-F. For comparison, the DOS and PDOS of pure TiO2 and Ag2O are also displayed in Figures 11D,E, respectively. According to the calculated results, the top of the valence band of pure Ag2O consists mainly of Ag 3d states, while the bottom of the conduction band is dominated by O 2p states. Whether coupled to the perfect or the deficient TiO2 (001) surface, the characteristic DOS of Ag2O is retained, indicating that Ag2O is well preserved. Additionally, the calculations indicate that the Fermi level is up-shifted by 1.1 eV relative to the position of the conduction band of TiO2. The down-shift of the conduction band bottom reflects the contribution of the Ag 3d and O 2p states of Ag2O compared with pure TiO2 (001). For Ag2O-coupled TiO2, the splitting of the Ag 3d and O 2p orbitals into occupied and unoccupied states creates an impurity band in the forbidden gap, which appears as a weak but visible peak in the vicinity of the Fermi level (Figure 11E). These effects may result in a narrowing of the band gap (Fang et al., 2014).
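A common way to reconcile a DFT gap (2.098 eV here) with the experimental value (3.2 eV) during post-processing is a rigid "scissor" shift of the unoccupied states. The sketch below only illustrates that bookkeeping step, with made-up eigenvalues; it is not part of the calculation reported above.

# Rigid scissor correction: shift eigenvalues above the reference level upward
# so that the calculated gap matches the experimental one. Values are made up.
Eg_dft <- 2.098                                   # calculated gap (eV), as reported above
Eg_exp <- 3.20                                    # experimental gap (eV)
delta  <- Eg_exp - Eg_dft                         # rigid shift to apply

E_ref       <- 0                                  # valence band maximum as reference (eV)
eigenvalues <- c(-5.1, -3.2, -0.4, 0.0, 2.098, 3.5, 4.7)   # hypothetical band edges
corrected   <- ifelse(eigenvalues > E_ref, eigenvalues + delta, eigenvalues)
corrected                                          # conduction-band states shifted by delta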
To further analyze the optical absorption of pure and Ag2O-coupled TiO2, we calculated the complex dielectric function ε(ω) = ε1(ω) − iε2(ω) from the obtained electronic structures. Generally speaking, the imaginary part, ε2(ω), of the dielectric function can be evaluated from the momentum matrix elements between the occupied and unoccupied wave functions, while the real part, ε1(ω), can be evaluated from ε2(ω) through the Kramers-Kronig relationship (Sun and Wang, 2005). The absorption spectra were then calculated as I(ω) = √2·ω·[(ε1(ω)² + ε2(ω)²)^(1/2) − ε1(ω)]^(1/2) (Zhang et al., 2017), where I is the optical absorption coefficient and ω is the angular frequency. Based on the calculated electronic structures, the optical absorption spectra of pure TiO2 and Ag2O-coupled TiO2 were obtained and are shown in Figure 11C. It is clearly observed that pure TiO2 has almost no response in the visible range and absorbs strongly only under UV light. In contrast, for Ag2O-coupled TiO2, the narrowed band gap allows effective absorption of visible light owing to the formation of a localized mid-gap level introduced by Ag2O coupling. This result is in line with that of the DOS analysis.
CONCLUSION
A series of Ag2O-TiO2 nanoparticles with different morphologies were prepared, and their dispersion stabilities in the aqueous phase were investigated individually. Among these Ag2O-TiO2 composite catalysts, the Ag2O-TiO2 nanospheres displayed the best colloidal dispersion stability in suspension. Using the as-prepared Ag2O-TiO2 catalysts, photoreforming H2 production was carried out from glycerol aqueous solution, and the colloidal dispersion stability was found to be one of the dominant factors for heterogeneous catalysis in the aqueous phase. Novel Ag2O-TiO2 binary component systems with proper mixing ratios were successfully introduced to enhance the dispersion stability and thereby improve the hydrogen production performance. Among the binary component systems, 20% Ag2O-TiO2 nanospheres mixed with 80% Ag2O-TiO2 nanoplates displayed the best photocatalytic activity, with a maximum H2 production amount of about 1,133.21 µmol·g−1. Interestingly, the difference in hydrogen production rate constants between continuous stirring and the binary system was <6%, indicating efficient mass transfer of the binary system toward photoreforming hydrogen production. To further explore the mechanism, photoelectrochemical characterization is suggested for future work. The proposed method of mixing Ag2O-TiO2 catalyst particles with different shapes could provide some inspiration for a more energy-efficient heterogeneous catalytic hydrogen production process.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation, to any qualified researcher.
AUTHOR CONTRIBUTIONS
ZY and CW contributed the experimental methods and the design of the DFT calculations. WZ and SM contributed the synthesis and characterization of the samples. YC, JZ, RS, and QS organized the literature research on this topic and wrote part of the manuscript. All authors contributed to manuscript revision and read and approved the submitted version.
Optimal Control of Epidemic Routing in Delay Tolerant Networks with Selfish Behaviors
Most routing algorithms in delay tolerant networks (DTN) need nodes to serve as relays that carry and forward the message for the source. Due to selfishness, nodes have no incentive to stay in the network after getting the message (e.g., free riders). To make them cooperative at a specific time, the source has to pay a certain reward to them, and this reward may vary with time. On the other hand, the source can obtain a certain benefit if the destination gets the message in a timely manner. This paper considers, for the first time, the optimal incentive policy that achieves the best trade-off between the benefit and the expenditure of the source. To cope with this problem, it first proposes a theoretical framework that can be used to evaluate the trade-off under different incentive policies. Then, based on the framework, it explores the optimal control problem through Pontryagin's maximum principle and proves that the optimal policy conforms to the threshold form in certain cases. Simulations based on both synthetic and real motion traces show the accuracy of the framework. Through extensive numerical results, this paper demonstrates that the optimal policy obtained above is the best.
In the advertising example, the source obtains a benefit when nodes (consumers) get the message (advertisement) in a timely manner. In addition, such benefit may vary with time; for example, the sooner the nodes get the message, the greater the benefit will be. Therefore, the source has an incentive to push the message to other nodes in a timely way. To achieve this goal, it has to pay a certain reward to the relay nodes to make them cooperative, and such reward may vary with time too. For example, the longer a node stays in the network, the more energy it may use, so nodes may ask for more reward. In fact, nodes (e.g., phones, PDAs) are often devices manipulated by humans [9,10], and the buffer space or forwarding ability of a node can be seen as goods. Therefore, the event that the source requests help from other nodes can be seen as the source buying certain goods from humans. The currency used by the source may be virtual currency [11], a discount on a service [12], and so forth. Therefore, the message propagation process can be seen as a commodity trading process, and the humans involved want to maximize their reward in this process. They may therefore adjust the price of their goods according to the market state. For example, if the remaining lifetime of the message is short, they may think that the source is eager to transmit the message as soon as possible and that their goods (e.g., the forwarding service) are important to the source; in this case, they may increase the price. On the other hand, if the remaining lifetime of the message is long, they may think that the source is not eager to transmit the message quickly and is not willing to pay high fees; in this case, they may help the source for only a small reward. Therefore, the price of the goods (e.g., the forwarding service) may vary with time. In this environment, whether to make nodes cooperative at a specific time is an important problem for the source. For example, suppose that node j is willing to help the source by charging m nuggets (the price of its goods) at time t1, but it only requires n nuggets at time t2.
If m > n and t1 < t2, the source pays fewer nuggets when it requests help from node j at time t2, but this may decrease the message propagation speed, which may not be good for the source. On the other hand, if the source uses fewer nuggets to make a node cooperative, more nuggets remain, so the source may have enough nuggets to make more nodes cooperative, which is good for the source. Therefore, the optimal policy of the source is related to time. In this case, maximizing the total income of the source is not a simple problem, and solving it is our main contribution. The main contributions of this paper can be summarized as follows. (i) We consider, for the first time, the optimal incentive policy that achieves the best trade-off when the reward varies with time. (ii) We propose a unifying framework based on a continuous-time Markov process, which can be used to evaluate the trade-off between the benefit and the expenditure of the source under different incentive policies. (iii) Based on the framework, we formulate an optimization problem. Through Pontryagin's maximum principle, we explore the optimal control problem and prove that the optimal policy conforms to the threshold form in some cases. By comparing simulation results with theoretical results, we show that our theoretical framework is very accurate. In addition, we compare the performance of the optimal policy with other policies through extensive numerical results and find that the optimal policy obtained by our model is the best.
Related Works
Our work is similar to the optimal control problem for the epidemic routing (ER) algorithm, such as the works in [13,14]. These works mainly study how to maximize the average delivery ratio when the energy is limited, and the energy consumption for forwarding the message once is not related to time. Therefore, these methods cannot be used to solve the optimization problem in our paper, in which the reward that the relay nodes require is related to time. On the other hand, the work in [15] studies a problem similar to that in this paper, but it tries to obtain the optimal forwarding policy when the total fees are limited, whereas we study the trade-off between income and expenditure, so the two are different. There are many other selfish behaviors, such as individual selfishness and social selfishness [16,17]. At present, some works study the impact of these selfish behaviors. For example, Li et al. study the impact of social selfishness on the epidemic routing protocol [17], and they then explore the impact of both individual and social selfishness on multicasting in DTN [18]. However, these selfish behaviors depend on the social relations between nodes; for example, social selfishness denotes the selfish behavior between friends, so the distribution of friends may have a certain impact. Existing studies have shown that the number of friends of different nodes may differ; in particular, the distribution of friends may conform to a power-law distribution [19]. Therefore, if we consider those selfish behaviors, we have to classify the nodes according to their number of friends, and this becomes a control problem with multiple parameters. This interesting problem is an extension of our work, and we will study it in depth in the future.
Network Model
Suppose that there are one source S, N relay nodes, and a destination node D. At time 0, only the source has the message, and it wants the destination to obtain the message before the maximal lifetime T.
To achieve this goal, the source needs help from others. However, it has to pay a certain reward every time it makes a relay node cooperative, so it may not do this all the time. Note that only the relay nodes that have the message can forward it to others, so the source only makes these nodes cooperative; in other words, the source is not willing to pay a reward to nodes that do not have the message. In this paper, we assume that the source makes a relay node (e.g., node j) that has the message cooperative with probability p(t) at time t, and that node then receives the required reward, denoted by c(t). As shown in the previous section, the function c(t) may vary with time. On the other hand, the source obtains a certain benefit, denoted by b(t), if the destination gets the message at time t. In addition, we assume that all of the relay nodes are willing to receive the message. In fact, if they get the message, they may obtain a certain reward from the source, so they have an incentive to receive it, and this assumption is rational. Nodes in the network can communicate with each other only when they come into each other's transmission range, which constitutes a communication contact, so the mobility rule of the nodes is critical. In this paper, we assume that the occurrence of contacts between two nodes follows a Poisson distribution. This assumption has been used in wireless communications for many years. Some works show that this assumption is only an approximation to the message propagation process. For example, the work in [20] reveals that nodes encounter each other according to a power-law distribution; however, it also finds that for long traces the tail of the distribution is exponential. Furthermore, a more recent work [21] studies a vehicle dataset in a large-scale urban environment and finds that the intermeeting time can be modeled by a three-segment distribution; although the first and second parts of the contact intervals do not obey the exponential distribution, it also recognizes that the tail obeys the exponential distribution. In addition, the work in [22] shows that the individual intermeeting time can be shaped to be exponential by choosing an appropriate domain size with respect to the given time scale. Moreover, there are also works that describe the intermeeting time of humans or vehicles by an exponential distribution and validate their models experimentally on real motion traces [23,24]. For this reason, the exponential model is still widely used in many existing works, such as [25][26][27]. In this paper, we also use this model and assume that the intermeeting time between two nodes follows an exponential distribution with parameter λ. Simulations based on both synthetic and real motion traces show that our theoretical framework based on this assumption is very accurate. Besides the intermeeting time, many other factors can have an impact on routing performance, such as the contact duration, bandwidth, and message size. If the bandwidth is large enough, the message may be transmitted successfully in one contact; however, if the bandwidth is too small, it may be hard to transmit the message in one contact even if the contact duration is long. Some works find that the distribution of the contact duration may conform to a Pareto distribution [28,29]. However, the Pareto distribution is hard to use for analyzing the routing performance theoretically.
Therefore, most previous works that explore routing performance theoretically ignore the impact of the contact duration and assume that a contact is long enough to transmit the message, such as [13][14][15][16][17]. In this paper, we use the same assumption, which is rational when the message is small or the bandwidth is very large. The commonly used variables of this paper are listed in Table 1: N denotes the number of relay nodes, S the source node, D the destination, λ the exponential parameter of the intermeeting time (the biggest value), T the maximal lifetime of the message, p(t) the probability that the source makes a relay node cooperative at time t, and c(t) the required reward at time t.
Let i(t) denote the number of relay nodes that have the message at time t, so the set of relay nodes that do not have the message at time t has cardinality N − i(t). Let X_j(t, t+Δt) denote the event that relay node j gets the message in the time interval [t, t+Δt]: if X_j(t, t+Δt) = 1, node j successfully obtains the message; if X_j(t, t+Δt) = 0, this event does not happen. Note that a relay node can get the message only from the source or from a cooperative relay node. In addition, two nodes encounter each other according to an exponential distribution with parameter λ, so node j encounters a specific node k in the interval [t, t+Δt] with probability 1 − e^(−λΔt). If node k is the source, node j gets the message immediately; if node k is a relay node, j can get the message from k only when k is cooperative. A relay node is cooperative at time t with probability p(t), so the total probability that node j gets the message in the interval [t, t+Δt] follows from these two cases. Combining (1) and (2), we get E(i(t+Δt)) = E(i(t)) + (N − E(i(t)))·E(X(t, t+Δt)). Letting Δt tend to zero, we obtain the differential equation governing E(i(t)). One main metric of a routing algorithm in DTN is the delivery ratio, which denotes the probability that the destination obtains the message within a given time. Let F(t) denote the delivery ratio when the given time is t. Before deriving its value, we first introduce G(t) = 1 − F(t), the probability that D has not obtained the message before time t. Moreover, let G(t, t+Δt) denote the probability that D does not get the message in the interval [t, t+Δt]; therefore G(t+Δt) = G(t)·G(t, t+Δt). Similar to the relay nodes, D may get the message from the source or from the cooperative relay nodes, which yields an expression for G(t, t+Δt) and, in the limit, a differential equation for G(t). Let U(t) denote the total income of the source up to time t, which equals the benefit minus the expenditure. The time interval Δt is very small, so we can assume that the behavior of the source remains unchanged within it; that is, the source makes a relay node that has the message cooperative with the same probability, denoted p(t+Δt), throughout the interval, and the number of relay nodes that have the message can be denoted by i(t+Δt). Because the source has to pay a certain reward each time it makes a relay node cooperative, and the reward is c(t) at time t, the total reward the source pays in the interval is i(t+Δt)·p(t+Δt)·c(t)·Δt. In addition, if the destination gets the message, the source obtains a certain benefit. Let Y(t, t+Δt) denote whether D gets the message in the interval [t, t+Δt]; this gives equation (9). Because nodes that already have the message do not receive the same message again, if the event Y(t, t+Δt) happens, the destination did not have the message before; in other words, we have equation (10). Combining (9) and (10), we obtain U̇(t) = b(t)·Ḟ(t) − c(t)·p(t)·i(t).
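Before turning to the optimal control problem, the state equations above can be integrated numerically to evaluate any given incentive policy. The following R sketch is an illustration only: it assumes the fluid-limit form of the model reconstructed from the derivation above (the display equations themselves are not reproduced in this text), the example functions c(t) and b(t) used later in the validation section, an assumed contact rate λ, and a simple threshold policy.

# Numerical illustration of the reconstructed fluid model, assuming
#   di/dt = lambda * (1 + p(t) * i) * (N - i)          expected infected relays
#   dG/dt = -lambda * (1 + p(t) * i) * G                destination still uninformed
#   dU/dt = b(t) * dF/dt - c(t) * p(t) * i,  F = 1 - G  income of the source
library(deSolve)

N      <- 500
lambda <- 2e-5          # assumed intermeeting rate (1/s)
Tmax   <- 10000         # message lifetime (s)
h      <- 4000          # threshold of the policy (s), assumed

c_t <- function(t) (1 - exp(-t / 10000)) / 1000   # reward per cooperative relay
b_t <- function(t) 1000 * exp(-t / 10000)         # benefit if delivered at time t
p_t <- function(t) as.numeric(t < h)              # threshold incentive policy

rhs <- function(t, y, parms) {
  i <- y[1]; G <- y[2]
  rate <- lambda * (1 + p_t(t) * i)
  di <- rate * (N - i)
  dG <- -rate * G
  dU <- b_t(t) * (-dG) - c_t(t) * p_t(t) * i
  list(c(di, dG, dU))
}

out <- ode(y = c(i = 0, G = 1, U = 0), times = seq(0, Tmax, by = 10),
           func = rhs, parms = NULL)
tail(out, 1)   # i(T), G(T) and the total income U(T) under this threshold h

Sweeping h over [0, Tmax] with this routine gives a simple way to compare threshold policies numerically, which is the kind of evaluation carried out in the performance analysis section below.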
Based on (11), and since T is the maximal lifetime of the message, our objective is to maximize the value of U(T), which is a functional of p(t); that is, our objective is to solve the optimization problem stated in (13).
Optimal Control. Obviously, the above question is an optimal control problem, and p(t) is the control variable. We use Pontryagin's maximum principle ([30, p. 109, Theorem 3.14]) to solve it. According to the principle, we should first obtain the Hamiltonian function. Let ((i, G), p) be an optimal solution; in particular, at time t, i denotes the value of i(t) and G the value of G(t), and similarly p denotes the value of p(t). Following [30], the Hamiltonian is formed from the integrand of the objective function together with the costate functions multiplying the corresponding state equations, which gives the Hamiltonian H in (14). Note that, at time t, the costate variables are shorthand for their values at that time. Based on (14), we obtain (15). The transversality conditions are given in (16) [30]. Then, according to Pontryagin's maximum principle ([30, p. 109, Theorem 3.14]), there exist continuous or piecewise continuously differentiable state and costate functions that satisfy p(t) ∈ arg max over 0 ≤ p* ≤ 1 of H evaluated at the current state, costate, and candidate control p*. This relation between the optimal control and the Hamiltonian allows us to express p as a function of the state and costate variables, resulting in a system of differential equations involving only the state and costate functions and not the control function. In fact, this relation means that maximizing U(T) is equivalent to maximizing the corresponding Hamiltonian H: at a given time t, the state and costate variables can be treated as constants, and p(t) maximizes H at that time. Therefore, according to (15), we obtain the optimal policy given in (19). Below, we prove that when the functions c(t) and b(t) satisfy certain conditions, the optimal policy has a simple structure. The conditions are: c(t) is increasing with time t, while b(t) is a non-increasing function; c(t) and b(t) are continuous and differentiable; and both are nonnegative. In fact, the maximal lifetime T of the message is fixed, so if t is larger, the remaining lifetime (T − t) is shorter. In this case, the relay nodes may think that the source is eager to transmit the message quickly, so they may ask for more reward; that is, if t is larger, c(t) may be larger. Therefore, the condition that c(t) is increasing is rational in some environments. On the other hand, it is better if the destination gets the message earlier, so the assumption that b(t) is non-increasing is rational in certain applications too. If the above conditions are satisfied, the optimal policy conforms to the threshold form and has at most one jump; this is stated as Theorem 1.
Proof. First, note that the functions c(t) and b(t) are nonnegative. In addition, in the proof we simply write i(t), G(t), and U(t) for the corresponding expected values. When i = 0, none of the relay nodes has the message, so the value of p cannot have any impact; therefore, we only consider the case i > 0. Based on (15), we define the switching function φ(t). Differentiating it and combining the result with (16) shows that if φ(t) = 0, then φ decreases at time t. Next, assume that φ(t) < 0; based on (19), we then have p(t) = 0. Combining (22) and (23), we again obtain (25); further, we obtain (27) and conclude that φ(t) also decreases at time t in this case.
In summary, if φ(t) ≤ 0, then φ decreases at time t. Therefore, if φ(t0) ≤ 0 at some time t0, we have φ(t) < 0 for all t > t0. Further, according to (19), the optimal policy satisfies p(t) = 1 for t < h and p(t) = 0 for t > h, with 0 ≤ h ≤ T. That is, once p(t) ≠ 1, it will be 0 afterwards and remain unchanged, so the optimal policy conforms to the threshold form and has at most one jump. This proves that Theorem 1 is correct.
Model Validation. In this section, we check the accuracy of our framework by comparing the theoretical results obtained by our model with simulation results. We run several simulations using the Opportunistic Network Environment (ONE) simulator [31] based on three different scenarios. In the first one, we use the well-known random waypoint (RWP) mobility model [32], which is commonly used in many mobile wireless networks. There are 500 nodes in total, all moving according to the RWP model within a 10000 m × 10000 m terrain with a speed chosen from a uniform distribution between 4 m/s and 10 m/s; the communication range is 5 m. The source and destination nodes are randomly selected among these nodes. In the second scenario, we use a real motion trace collected by GPS from about 2100 operational taxis over about one month in Shanghai [33]. The location of each taxi is recorded every 40 seconds within an area of 102 km². We randomly pick 500 nodes from this trace, and the source and destination nodes are again randomly selected among them. The third scenario is based on the dataset collected at the Infocom 2005 conference [34], which includes 41 attendees connected with each other by Bluetooth; among these attendees, we randomly select two nodes as the source and destination, respectively. The functions c(t) and b(t) may take any form; for simplicity, we define c(t) = (1 − e^(−t/10000))/1000 and b(t) = 1000·e^(−t/10000). In fact, the value of p(t) may also be any value between 0 and 1 at time t. Because our main goal is to check the accuracy of our theoretical framework, we only consider two special cases: case 1: p(t) = 1 for t ≥ 0; case 2: p(t) = 0 for t ≥ 0. The first case means that the source makes nodes cooperative all the time, so the message is propagated according to the epidemic routing (ER) algorithm; in the second case, the source does not ask for help from others at all, so the message is propagated according to the direct transmission (DT) algorithm. At the start of each simulation, one message is generated with maximal lifetime T, and each simulation is repeated 20 times. In addition, we let the maximal message lifetime increase from 0 to 50000 s. Based on these settings, we obtain Figures 1, 2, and 3, respectively. From the results, we can see that the average deviation between the theoretical and simulation results is very small: about 4.22% for the RWP mobility model and 5.01% for the Shanghai city motion trace. For the Infocom 2005 dataset, the average deviation is about 7.12%; although this is larger than for the RWP and Shanghai traces, it can still be considered very accurate. This demonstrates the accuracy of our theoretical framework. For this reason, we can use the numerical results obtained from our theoretical framework to evaluate the performance of different policies. In addition, the results above also show that the performance differs when the source adopts different policies.
In particular, the results in Figures 1 and 2 show that it is not good for the source to request help all the time. For example, when the value of T is larger than 4000 s in Figure 2, the total income of the source may be negative if it requests help all the time. This shows that the policy of the source can have an important impact on its total income, which means that our optimal control policy is necessary. Later, we show through extensive numerical results that the optimal policy obtained by (19) is the best.
Performance Analysis with Numerical Results. In this section, we use the best fit for the Shanghai city motion trace from the above simulation to describe the exponential distribution of the intermeeting time between nodes. First, we evaluate the performance of the optimal policy obtained by (19). For comparison, we consider three other cases: case 1: p(t) = 1 for t ≥ 0; case 2: p(t) = 0 for t ≥ 0; case 3: random. The random policy means that the value of p(t) is randomly selected from the interval [0, 1] at time t. The other settings are the same as those in the simulation, and we obtain Figure 4. The result in Figure 4 shows that the optimal policy is the best one: under the optimal policy, the source always gets the maximal total reward. This means that our optimal control policy is correct. Next, we compare the performance of the different policies when the number of relay nodes varies. In this case, we assume that the maximal message lifetime equals 10000 s and let the number of relay nodes increase from 50 to 1000, with the other settings unchanged. The numerical result is shown in Figure 5, and it demonstrates that the optimal policy obtained by (19) is again the best. In addition, the total reward under the optimal policy increases with the number of nodes. In fact, when there are more nodes, the source can request help from more nodes at an early time. Because the reward that the relay nodes request increases with time (e.g., c(t) = (1 − e^(−t/10000))/1000 is an increasing function), this behavior decreases the cost of the source, and the source stops requesting help earlier. As shown in Theorem 1, the source stops making relay nodes cooperative at a certain time (e.g., h); in particular, the optimal policy satisfies p(t) = 1 for t < h and p(t) = 0 for t > h, with 0 ≤ h ≤ T. When the number of nodes is larger, the value of h is smaller, so the source stops requesting help earlier and pays less reward. The result in Figure 6 shows that h indeed decreases with the number of nodes. When the number of relay nodes is smaller, the source has to ask for help for a longer time; for example, when there are 50 relay nodes, the source requires help nearly all the time. On the other hand, the result in Figure 6 also shows that the optimal policy really conforms to the threshold form. This can be seen even more clearly in Figure 7 for 500 and 1000 relay nodes, respectively: the source asks for help from others with probability 1 before the threshold h, and then it stops doing so. In the above simulation and numerical results, we defined c(t) = (1 − e^(−t/10000))/1000, which is an increasing function, and the optimal policy conforms to the threshold form in this case. However, c(t) may take any form. In the rest of this section, we examine whether the threshold policy is still better when c(t) has a different form. In particular, we define c(t) = 50 for t ≤ 5000 s and c(t) = 0 for t > 5000 s; it is easy to see that this c(t) is not an increasing function.
The other settings are the same as those in the simulation. Based on these settings, we obtain Figure 8. Note that the optimal policy is obtained by (19), while the threshold policy conforms to Theorem 1. In fact, each threshold policy corresponds to a specific value of h, so there are many threshold policies, which can be denoted by threshold(h). The threshold policy shown in Figure 8 is the one that maximizes the total income of the source among all threshold policies. The result in Figure 8 shows that this threshold policy is worse than the optimal policy obtained by (19), which means that the optimal policy does not conform to the threshold form in this case. Therefore, the form of the function c(t) can have a certain impact on the optimal policy.
Conclusions
To increase efficiency, most routing algorithms in DTN need nodes to work in a cooperative way; in particular, nodes should stay in the network to forward the message further after getting it. However, due to selfishness, nodes have no incentive to stay in the network after getting the message. To make these nodes cooperative, the source has to pay them a certain reward (e.g., c(t)), and this reward may vary with time. On the other hand, if the destination gets the message in time, the source can also obtain a certain reward (e.g., b(t)); for example, the sooner the destination obtains the message, the more reward the source may get. In this paper, we propose a unifying framework to evaluate the total income that the source gets under different policies. Based on the framework, we study the optimal control problem through Pontryagin's maximum principle. In addition, we prove that the optimal policy conforms to the threshold form when c(t) and b(t) satisfy certain conditions. Simulations based on both synthetic and real motion traces show the accuracy of our theoretical framework, and numerical results show that the optimal policy obtained by (19) is the best. Note that once we know the functions c(t) and b(t), we can obtain the theoretical model that evaluates the routing performance under different policies, and we can then derive the optimal policy from (19); in this case, nodes just need to conform to the optimal policy. However, c(t) and b(t) are system-specific, so we may not know their form in advance. In this case, a learning process is needed to obtain the functions c(t) and b(t) rapidly. In other words, in certain applications we have to explore such a learning process, and this will be our future work.
PANEV: an R package for a pathway-based network visualization
Background During the last decade, with the aim of solving the challenge of post-genomic and transcriptomic data mining, a plethora of tools have been developed to create, edit and analyze metabolic pathways. In particular, when a complex phenomenon is considered, the creation of a network of multiple interconnected pathways of interest can be useful to investigate the underlying biology and ultimately identify functional candidate genes affecting the trait under investigation. Results PANEV (PAthway NEtwork Visualizer) is an R package for gene/pathway-based network visualization. Based on information available in KEGG, it visualizes genes within a network of multiple levels (from 1 to n) of interconnected upstream and downstream pathways. The network graph visualization helps to interpret the functional profiles of a cluster of genes. Conclusions The suite has no species constraints and is ready to analyze genomic or transcriptomic outcomes. Users need to supply the list of candidate genes, specify the target pathway(s) and the number of interconnected downstream and upstream pathways (levels) required for the investigation. The package is available at https://github.com/vpalombo/PANEV.
Background
Thanks to advancements in high-throughput techniques and the simultaneous reduction in the associated costs, large-scale 'omics' studies are now common. These studies enable the generation of a huge amount of biological data [1] and pose to researchers the challenge of data mining rather than data production. The key result of a genomic (e.g. genome-wide association study) or transcriptomic analysis (e.g. gene expression profiling) is a long list of statistically significant genes that, supposedly, contribute to the studied phenomenon. The subsequent step, after the exclusion of false positive signals, is to extract meaning from them, in order to provide insights into the underlying complex biology of the phenotype under study [2]. One common strategy to reduce the complexity of this challenge is to group the genes into smaller sets of related ones, for example those sharing the same biological processes (i.e. pathways). This pathway-based approach [3] has become popular in recent years [4] and is, de facto, the standard for the post-omics analysis of high-throughput experiments [5]. Pathway analysis and visualization tools are now successfully and routinely applied to gene expression and genetic data analyses, and they represent a key support for understanding biological systems [6][7][8][9][10][11]. In this regard, pathway-based approaches are particularly useful when complex phenomena with a quantitative inheritance are under study [12]. Compared with an individual gene-based approach, the strategy of creating a network of multiple related pathways and genes of interest is more suitable to explore the biology of complex traits and identify functional candidate genes [13,14]. The increase in the availability of repositories based on hierarchical and/or functional classification of terms has helped in this exploration [15]. Many web resources are now available, providing access to many thousands of pathways (see http://pathguide.org/). Among them, a prominent and constantly updated reference repository is the Kyoto Encyclopedia of Genes and Genomes (KEGG) [16]. KEGG is a bioinformatics resource that maps genes to specific pathways and summarizes them into one connected and manually curated metabolic network.
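The KEGG annotations described above can also be inspected directly from R. The short sketch below uses the Bioconductor package KEGGREST (which PANEV itself relies on, as described later); the pathway identifier hsa04940 ('Type I diabetes mellitus') is the one used in the validation example further down, and an internet connection is required.

# Query KEGG directly from R with KEGGREST (illustrative only).
# if (!requireNamespace("BiocManager", quietly = TRUE)) install.packages("BiocManager")
# BiocManager::install("KEGGREST")
library(KEGGREST)

# All human pathways (identifier -> description)
pathways <- keggList("pathway", "hsa")
head(pathways)

# Genes annotated to the 'Type I diabetes mellitus' pathway
genes_t1dm <- keggLink("hsa", "hsa04940")
head(genes_t1dm)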
Here, we introduce the PANEV (PAthway NEtwork Visualizer) R package, which provides an easy way to visualize genes within a network of pathways of interest. The novelty of the PANEV visualization lies in the creation of a customized network of multiple interconnected pathways, considering n levels (as required by the user) of upstream and downstream pathways. The network is created using KEGG information [16]. As far as we know, no other KEGG visualization tool [6][7][8] provides such a feature, which may help to identify functional candidate genes among the list of provided ones. PANEV also has features that are rarely simultaneously available in other pathway visualization tools [7,17,18]. In particular, (i) it handles data from all the species included in the KEGG databases, (ii) it provides fully accessible graphics through an interactive visualization module that allows the user to easily navigate the generated network, and (iii) it is easy to integrate with other pathway analysis or gene set enrichment analysis tools.
Implementation
The package is specifically designed for post-genomic and post-transcriptomic data visualization. The rationale of the graphical visualization performed by PANEV is to identify candidate genes taking into account a network of 'functionally' related pathways. The 'functional' network is created considering a set of main pathways of interest (first-level pathways, 1 L), chosen by the user because they are known to be involved in the phenomenon under study, and multiple levels of interconnected pathways, added by PANEV on the basis of information retrieved from the KEGG database [16,19]. Each level considers the pathways connected with the previous ones. These pathways represent de facto the upstream and downstream pathways, without reconstructing the direction of the relations in the PANEV graphical output. Once the 'functional' network is created, PANEV visualizes the genes among the list of those provided by the user. The network visualization is generated in html output format using the visNetwork R package (https://cran.r-project.org/web/packages/visNetwork), which guarantees fully interactive graphs.
Package installation and functionality
The package PANEV v.1.0 is available at https://github.com/vpalombo/PANEV. It can be easily downloaded and installed in any R session (R ≥ 3.5.0) using the install_github("vpalombo/PANEV") function from the devtools package (https://cran.r-project.org/package=devtools). The tool requires other libraries, which are automatically loaded along with the package. Once installed, PANEV can be loaded into the R environment with the library('PANEV') command. The PANEV package functions can be divided into two steps: data preparation and data analysis (Fig. 1). The first step helps to prepare a properly formatted list of genes and 1 L pathways, as well as to obtain all the mandatory information required to run PANEV analyses. The second step performs data analysis and visualization. Since PANEV interrogates the KEGG databases [16], an internet connection is required. Access to the KEGG repositories has specific copyright conditions (https://www.kegg.jp/kegg/legal.html). PANEV uses the KEGGREST package (https://bioconductor.org/packages/release/bioc/html/KEGGREST.html) functions to download individual pathway graphs and data files through API or HTTP access, which is freely available for academic and non-commercial uses. Trial datasets are available in the package and can be stored in the working directory using the panev.example() command.
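A minimal sketch of the installation and setup steps just described, using only the commands quoted above:

# Install PANEV from GitHub and load it.
# install.packages("devtools")             # if devtools is not yet installed
devtools::install_github("vpalombo/PANEV")
library('PANEV')

# Copy the bundled trial datasets into the current working directory.
panev.example()

# The validation datasets used in the T1DM example further below can be
# retrieved with: panev.example(type = "validation")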
Data preparation
To enhance the user experience, data preparation functions are available. In particular, PANEV provides two specific functions, panev.dataPreparation() and panev.exprdataPreparation(), to obtain a proper input data format from a simple gene list or an expression gene list, respectively. Their correct performance depends on the availability of biomaRt [20] data access for the specific species of interest. The list of all the species available for biomaRt annotation can be retrieved with the panev.biomartSpecies() command. Along with the correct KEGG organism code, obtainable with the panev.speciesCode() function, a list of main pathways of interest (1 L) is mandatory to properly run PANEV. The list of all KEGG investigable pathways can be retrieved with the panev.pathList() function. In the case of an analysis of an expression gene list, the 1 L pathway(s) must be provided together with pathway expression estimated score(s). The pathway estimated score can be obtained using common gene set enrichment or over-representation analysis approaches [21] (e.g. the flux value [22], as in the trial data).
Data analyses and visualization
The panev.network() function performs the PANEV visualization on a simple gene list (e.g. from a genomic analysis). The function requires (i) a properly formatted gene list, (ii) a vector of 1 L pathways, (iii) the KEGG organism code and (iv) the number of levels to investigate (from 1 to n), which represents how many levels of interconnected (upstream/downstream) pathways will be explored. If this argument is set to 1, only the 1 L pathway(s) will be used to create the network. The panev.network() function first creates a framework of interconnected pathways, starting from the 1 L pathways, and subsequently highlights the genes from the input gene list inside the generated 'functional' network. The function creates an interactive graph summarizing the gene/pathway network results and enabling the selection and magnification of a specific node (Fig. 2). Moreover, it generates one text file containing the tabular results of the highlighted genes for each level analyzed. For gene expression datasets, PANEV takes into account any possible connection between a custom list of pathways of interest and a list of differentially expressed genes (DEGs). The dedicated function is panev.exprnetwork(), which requires (i) a properly formatted DEG list with fold change (FC) values and p-values, (ii) a properly formatted list of pathways with expression estimated scores, (iii) the KEGG organism code and (iv) a p-value cut-off for filtering subsets of genes in the DEG list. The function generates the interactive diagram visualization of the gene/pathway network (Fig. 3). Gene/pathway nodes are colored according to their gene FC and pathway expression estimated scores, following the classification reported in Table 1. PANEV also provides the ancillary functions panev.stats.enrichment() and panev.network.enrichment() to perform a gene enrichment analysis based on a hypergeometric test (one-sided Fisher exact test), as described by Simoes and Emmert-Streib [23]. In particular, while the former function searches against the default KEGG database, the latter computes the pathway enrichment of the genes highlighted by PANEV using the pathways generated in the network as a background. The results are text files containing the enrichment analysis outcomes and tables with gene/pathway occurrences.
For each pathway, a p-value is calculated to estimate its probability of overrepresentation [23].
Results and discussion
To evaluate and validate the usefulness of PANEV, we used a publicly available dataset on human type 1 diabetes mellitus (T1DM) [24]. In the reference study, the authors carried out a gene-based genome-wide association study (GWAS) and identified 452 significant genes. Among these, 171 genes were newly associated with T1DM, and 53 out of 171 were supported by replication or differential expression studies. In particular, four non-HLA (human leukocyte antigen) genes (RASIP1, STRN4, BCAR1, and MYL2) and three HLA genes (FYN, HLA-J and PPP1R11) represent the main result discussed by the authors, since they were validated by both the replication and the differential expression studies. To verify the possible contribution of the PANEV tool to the identification of functional candidate genes, we performed a PANEV analysis considering the list of 171 newly identified genes. The validation datasets are available in the package and can be stored in the working directory using the panev.example(type = "validation") command.
Fig. 1 The general architecture of the workflow of the PANEV package and schematic illustration of the main functions. The yellow rectangles represent the PANEV functions. The green circles represent the input data lists, in particular gene or pathway lists. The red diamonds represent the output from the PANEV 'data preparation' functions. The blue rectangles represent the final PANEV outcomes.
After data preparation, 5 out of 171 genes having no corresponding Entrez ID were excluded from further analyses. Considering the complexity of the investigated trait, PANEV was performed up to the third level of interaction [25]. The 'Type I diabetes mellitus' (map04940), 'Insulin resistance' (map04931) and 'AGE-RAGE signaling pathway in diabetic complications' (map04933) pathways were chosen as 1 L pathways, since they are clearly associated with T1DM in the literature [26,27]. A summary of the PANEV results is reported in Additional files 1 and 2. Fifteen out of 166 genes were highlighted at different levels as functional candidates by PANEV (Additional file 1). In particular, PANEV identified 4 out of the 7 genes mainly discussed in the reference study: PTPN11 at 1 L, FYN at 2 L, and BCAR1 and MYL2 at 3 L. The three genes not detected by PANEV (RASIP1, STRN4 and HLA-J) are in the KEGG databases but are not yet assigned to any pathway. It is interesting to note that PANEV also identified other well-known genes (ITPR3, BAK1 and IL10 at 2 L; HMGB1 and MICA at 3 L) already associated with T1DM [28][29][30][31] but not discussed by Qiu and colleagues [24], since they were confirmed only by the differential expression or replication studies. Furthermore, PANEV highlighted other genes reported in the literature as being associated with susceptibility to T1DM but not discussed in the reference study [24], since they were confirmed by neither the differential expression nor the replication studies. In particular, CDK2 [32], SMAD7 [33], STAT4 [34], BCL2A1 [35] and RXRB [36] appeared at 2 L, whereas MADCAM1 [37] appeared at 3 L. It is worth noting that, except for CDK2, all the genes mentioned above refer to research conducted before the reference study [24].
Simultaneously, it must be observed that 138 genes were excluded by PANEV during the analysis, because they were (i) assigned to pathways not included in the three investigated levels (~8%), (ii) not present in the KEGG database (~39%), or (iii) not yet assigned to any pathway (~48%). The first point is suggestive of PANEV's capability to discriminate false positives among the list of provided genes. The last two points clearly represent the main limitations of PANEV due to KEGG's incomplete information. A comparison between PANEV results and the reference study [24] is reported in Additional file 3. In line with the reference study [24], we also performed the enrichment analysis of KEGG pathways considering the 452 genes identified by the authors. The results obtained by the PANEV enrichment function showed an over-representation of immune diseases and immune system pathways (Additional file 4), in line with the outcomes of Qiu et al. [24]. Fig. 2 An example of the gene/pathway network visualization of PANEV results: the green circles represent the candidate genes connected with the pathways in the network; the violet diamonds represent the first-level (1 L) pathways; the yellow diamonds represent the second-level (2 L) pathways; the orange diamonds represent the pathways belonging to the network but without connection with any candidate gene; the diagram is saved in '.html' format. PANEV was already applied by Palombo and colleagues to genes significantly associated with milk fatty acid profiles in Italian Simmental and Holstein breeds [38]. A total of 47 and 165 significant positional candidate genes were detected in Italian Simmental and Holstein breeds, respectively. Among these genes, PANEV highlighted three lipogenic genes well described in the literature: SCD, DGAT and FASN. Furthermore, fifteen new functional candidate genes directly or indirectly involved in 'Lipid metabolism' pathways were identified. In summary, PANEV offers advantages in terms of time saving and speeding up data mining. In particular, candidate genes with strong literature support can be rapidly identified without any validation study. These candidate genes could be quickly subjected to further study phases (such as in vivo validation). Moreover, gene and pathway connections can be easily identified using the diagram visualization, and this information might be interesting to discuss in manuscript drafting. The putative candidate genes not highlighted by PANEV could still be retrieved using conventional methods, such as deeper literature research or in silico validation, which remain more time consuming and costly. Conclusion PANEV is a package entirely built in R and represents a novel and useful visualization tool to reduce the complexity of high-throughput data mining and to identify candidate genes. PANEV creates customized gene/pathway network graphs considering a list of candidate genes and multiple levels of interconnected (upstream and downstream) pathways of interest. This helps the interpretation of genomic and transcriptomic analysis outcomes, in particular when complex biological phenomena are investigated. The contribution of the PANEV tool could be significant not only for well-annotated species (e.g. Homo sapiens, Mus musculus) but also for all the organisms available in the KEGG database. Although KEGG is a popular and constantly updated database, missing or incomplete information could represent the main disadvantage of PANEV, as for other KEGG-based tools. 
The effectiveness of PANEV analysis in terms of result coherence was confirmed by the validation study. In particular, PANEV produces time-saving advantages, pointing the user to genes that are biologically involved with the investigated trait. Additional file 1. Summary of the tabular result obtained by PANEV using the data from the Qiu et al. (2014) study, considering three levels of interaction and 'Type I diabetes mellitus', 'Insulin resistance', and 'AGE-RAGE signaling pathway in diabetic complications' as 1 L pathways. Additional file 2. Screenshot of the network-based visualization result obtained by PANEV using the data from the Qiu et al. (2014) study and considering three levels for the investigation. The violet diamonds represent the first-level (1 L) pathways (in this case: 'Type I diabetes mellitus', 'Insulin resistance', and 'AGE-RAGE signaling pathway in diabetic complications') connected with candidate genes. The yellow and the blue diamonds represent the second-level (2 L) and third-level (3 L) pathways connected with candidate genes, respectively. The orange diamonds represent the pathways belonging to the network without connection with any candidate gene.
3,813.6
2020-02-06T00:00:00.000
[ "Computer Science", "Biology" ]
Comparative Studies on Solubility and Dissolution Enhancement of Different Itraconazole Salts and Their Complexes Itraconazole is a potent triazole antifungal drug which has low solubility at physiological pH conditions. Itraconazole is weakly basic (pKa =3.7) and highly hydrophobic drug. It is categorized as a BCS class II drug. The main objective of the present investigation was to improve the solubility of itraconazole, by preparation of salt forms itraconazole hydrochloride, mesylate and besylate by using addition reaction with hydrochloric acid, methane sulphonic acid and benzene sulphonic acid. Further inclusion complexes of itraconazole were prepared with Captisol (sulfobutyl ether7 β-cyclodextrin) by using physical mixing, kneading and co-evaporation techniques. The preparations were characterized by using X-ray diffraction, Fourier Transformed Infrared spectroscopy and Nuclear Magnetic Resonance spectroscopy. The solubility of prepared salt was found multifold than the solubility of itraconazole. The dissolution studies exhibited higher percentage drug dissolution from itraconazole complexes than that of the pure drug which can be attributed to the increase in drug solubility provoked by the complexation technique. Introduction Poorly water soluble drugs are posing a problem of satisfactory dissolution within the gastro intestinal tract and there by their oral bioavailability. The recent past has witnessed the modern techniques of drug discovery which lead to an increasing number of drug candidates with unfavorable solubility characteristics [1] . Formulation of such compounds for oral delivery has been the most frequent and greatest challenge to scientists in the pharmaceutical industry. Major problem associated with poorly soluble drugs is lack of dissolution there by results in poor and/or variable bioavailability [2]. Kaplan [3] has suggested that the solubility of a drug more than 10mg/mL at a pH < 7 is expected to have no dissolution as well as bioavailability related problems but, this could be a problem for drugs whose solubility is below 1mg/mL. Dissolution rate less than 0.1mg/cm 2 /min were likely to give dissolution rate limited absorption. Solubility of a drug is an intrinsic property and it can only be altered by chemical modification of the molecule by salt formation [4][5][6] or prodrug formation [7]. Dissolution is an extrinsic property which can be modified by various chemical, physical or crystallographic techniques like complexation, particle size reduction, surface or solid state properties. Different techniques have been reported in the literature for improvement of solubility and drug dissolution rates. These techniques are reduction of the particle size by micronisation or nanonisation to increase the surface area, use of surfactants, Cyclodextrin complexation, pro-drug formation, conversion of crystalline to amorphous forms [8]. Pharmaceutical salts [9] are important in the process of drug development for converting an acidic or basic drug into a salt by a simple neutralization reaction. Using different chemical species to neutralize the parent drug can produce a diverse series of compounds and this process is traditionally being used for modification of the physicochemical, processing, biopharmaceutical or therapeutic properties of drug substances. Each of the individual salts of a particular drug substance can be considered as a unique chemical entity with their own distinctive physicochemical and biopharmaceutical properties [10][11][12] . 
It has been estimated that approximately half of all of the active pharmaceutical substances (API) that have been developed were ultimately progressed as pharmaceutically acceptable salts and that salt formation is an integral part of the development process [13,14]. Sulfonic acid salts particularly alkyl sulfonates such as mesylates and besylates generally results in the formation of high melting point API salts with good solubility and stability [15]. 86 Comparative Studies on Solubility and Dissolution Enhancement of Different Itraconazole Salts and Their Complexes Cyclodextrins (CDs) are useful functional excipients that have enjoyed widespread attention and use. A number of cyclodextrin-based products have reached the market based on their ability to change undesirable physicochemical properties of drugs [16,17]. The formation of inclusion complexes provides numerous advantages in pharmaceutical formulation development. β-CD were reported to increase bioavailability of poorly soluble drugs by increasing the drug solubility [18]. The family of CDs comprises of a series of cyclic oligosaccharides compounds. The three commonly used cyclodextrins are α-cyclodextrins comprised of six glucopyranose units, β-cyclodextrins comprised of seven units and γ-cyclodextrins comprised of eight such units [19]. Sulfobutyl ether β-Cyclodextrin (SBE 7 -β-CD) [Captisol®] [20][21][22] is a chemically modified β-cyclodextrins that is a cyclic hydrophilic oligosaccharide which is negatively charged in aqueous media. The solubility in water for Captisol (70 g/100 ml at 25 C) is significantly higher than the parent β-cyclodextrin (1.85 g/100 ml at 25 C). It does not exhibit the nephrotoxicity and cytotoxicity which is generally associated with other β-CDs [23][24][25]. Some of the investigations also reported that the drug inclusion complex with SBE7-β-CD provided a protective effect against drug-induced cytotoxicity [25]. Based on these advantages, Captisol has been selected to study the effect of improving the physiochemical properties of poorly water-soluble drug itraconazole. Itraconazole (ITR) is a broad-spectrum triazole antifungal agent with poor aqueous solubility [26]. ITR is weakly basic with pKa of the piperazine ring is 3.7 and highly hydrophobic drug [27]. Because of poor aqueous solubility itraconazole on oral administration results in poor bioavailability and inter individual variations in the plasma drug concentrations. ITR has the characteristic of pH dependent solubility having highest solubility at acidic side (4μg/ml) compared to basic pH (1μg/ml). However, because of highly liphophilic nature (log P= 6.2) it can easily penetrate into intestinal membrane. This indicates the poor aqueous solubility is the main reason for lower plasma concentrations. Various techniques [8] have been reported for enhancing the solubility and bioavailability of itraconazole, but the salt formation [13,14] and inclusion complexes [18] showed some promising results. Keeping these in the view the present work was planned with an objective to synthesize Itraconazole hydrochloride, mesylate besylate salt forms from Itraconazole. Further these salt forms were studied for improvement of dissolution by preparing inclusion complexes with Sulfobutyl 7 Ether β-Cyclodextrin (Captisol ® ) using physical mixing, kneading and co-evaporation techniques. 
These preparations were characterized by X-ray diffraction, Fourier Transformed Infrared spectroscopy, Nuclear Magnetic Resonance spectroscopy and also evaluated for solubility, drug content and dissolution studies. Materials and Methods Itraconazole was a gift sample obtained from Pharmatech, Hyderabad, and Sulfobutyl 7 Ether β-Cyclodextrin (Captisol ® ) (average molecular weight 2,163 and degree of substitution 6.5) was obtained from Cydex laboratories. Hydrochloric acid (A.R. grade) Benzene sulfonic acid (A.R. grade) and Methane sulfonic acid (A.R. grade) were purchased from Merck. All other chemicals used in this study were of analytical grade. Preparation of Itraconazole Salts Itraconazole hydrochloride (ITRH), Itraconazole mesylate (ITRM) and Itraconazole besylate (ITRB) salts were synthesized from itraconazole (ITR) by acid addition reaction using hydrochloric acid, methane sulfonic acid and benzene sulfonic acid ( Figure.1, 2 & 3). Itraconazole salts were synthesized from a modified method by using acid addition reaction method [28][29][30]. In case of ITRH preparation, accurately weighed about 1 gm of ITR (1.4 mmol) and was dissolved in about 10 ml of dichloromethane in a rotary evaporator flask. To this solution about 400 mg of concentrated hydrochloric acid (11.42 mmol) was added and dissolved. The above suspension was heated at 50 °C for 1hr under reflux using rotary evaporator. After one hour 700 mpa vacuum was applied while reaction. The reaction was continued for one hour to form a precipitate of salt. The mixture was allowed to stand overnight at room temperature. The precipitated product was collected, dried at 60°C for 1 hour and shifted through #100 mesh sieve. ITRM and ITRB salts were prepared by following the similar procedure as mentioned above for ITRH salt by taking 1 gm of ITR (1.4 mmol) suspended in about 10 ml of dichloromethane and to this solutions about 400 mg of methane sulfonic acid (4.16 mmol) and 600 mg of benzene sulfonic acid (3.9 mmol) were added and dissolved. The final products were stored in an air tight container and then placed in desiccators. Solubility Studies Solubility studies for pure ITR, ITRH, ITRM and ITRB were carried in purified water and simulated gastric fluid (pH 1.2 -0.1 N Hydrochloric Acid). In each case excess amount of sample was added to 10 ml of solvent and agitated at 37°C in a rotary test tube shaker for 24 hrs. After equilibration, the samples were filtered using 0.45 µm Millipore filters, suitable diluted and analyzed for the itraconazole content by measuring the absorbance at 258 nm using Shimadzu UV-Visible spectrophotometer [31]. Phase Solubility Studies Phase solubility study [32][33][34][35][36] was carried out to investigate the effect of Captisol on the solubility of ITR, ITRH, ITRM and ITRB using the method reported by Higuchi Salts and Their Complexes solutions excess amounts of ITR, ITRH, ITRM and ITRB were added separately and shaken using orbit shaker at 25°C for 72 hr. After equilibrium, the solutions were filtered using 0.45µ filters and diluted suitably to determine the itraconazole content at 258 nm using UV-Visible spectrophotometer. The graphs were plotted between solubility of ITR (concentration in mM) from pure ITR, ITRH, ITRM and ITRB against the concentration of Captisol (in mM). The stability constant for the complex was determined from the graph using the following equation. The slope was obtained from the graph and S 0 was the equilibrium solubility of ITR, ITRH, ITRM and ITRB in 0.1 N HCl. 
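The stability-constant equation referred to above is not reproduced in the text; assuming the standard Higuchi-Connors treatment for an apparent 1:1 interaction, Ks = slope / (S0 × (1 − slope)), the calculation can be sketched as below. This is only an illustrative Python sketch under that assumption; the variable names and example numbers are hypothetical, with the real slope and S0 taken from the phase solubility diagrams described above.

def stability_constant(slope, s0):
    # Apparent stability constant from a phase solubility diagram
    # (Higuchi-Connors, assuming a 1:1 treatment): Ks = slope / (S0 * (1 - slope)).
    # The slope is dimensionless; Ks is returned in reciprocal units of s0.
    if not 0 < slope < 1:
        raise ValueError("this treatment assumes a slope between 0 and 1")
    return slope / (s0 * (1.0 - slope))

# Hypothetical example: a slope of 0.7801 and an S0 of 0.011 mM gives Ks in mM^-1
print(stability_constant(slope=0.7801, s0=0.011))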
Preparation of Inclusion Complexes The inclusion complexes of ITR, ITRH, ITRM and ITRB with Captisol (1:2 and 1:3 ratios) were prepared by using physical mixing, kneading and co-evaporation technique [37]. Physical mixture was prepared by simple mixing in a mortar with pestle for 10 min. The powders of ITR, ITRH, ITRM, ITRB and Captisol of required molar ratios are simply mixed in mortar with pestle and then sieved through 100 #. Kneaded (KN) product was obtained by triturating equimolar quantities of ITR, ITRH, ITRM, ITRB and Captisol of required molar ratios in a mortar with a small volume of solvent blend of water: methanol: dichloromethane at a volume ratio of 2:5:3. During this kneading process few drops of solvent were introduced to maintain a suitable consistency. The resulting mass was dried in an oven at 55 °C until they get dry and the solid was finally grounded and then sifted through #100 sieve. In co evaporation technique, aqueous solution of Captisol was added to the solution of ITR, ITRH, ITRM and ITRB in a solvent blend of methanol: dichloromethane at a volume ratio of 2:3.The resultant mixture was stirred for 1 hr and evaporated at a temperature of 55 °C until dry. The dried mass was pulverized and sifted through #100 sieve. NMR Spectroscopy The 1 H-NMR spectra of pure ITR, ITRH, ITRM and ITRB were taken in DMSO on a Bruker Ultra shield 400 MHz nuclear magnetic resonance (NMR).Chemical shift values are interpreted for confirmation. Drug Content Estimation Accurately weighed 50 mg of the sample and transferred into a 50 ml volumetric flask. Then 25 ml of 50% methanol:0.1N HCl mixture was added and shaken for 15 minutes to completely dissolve the drug. The volume is made up to 50 ml with 50% methanol:0.1N HCl mixture. The resulted solution was filtered through 0.45 μm filter and suitable diluted and analyzed for the itraconazole content by measuring the absorbance at 258 nm using Shimadzu UV-Visible spectrophotometer. The drug content of all the inclusion complexes was estimated by following the same method. In vitro Dissolution Studies In vitro dissolution studies [38] were carried out in 900 ml of simulated gastric fluid of pH 1.2 using USP Type-II (Paddle) dissolution test apparatus (M/s. Electro Lab India). Sample equivalent to 100 mg of ITR, a speed of 75 rpm and a temperature of 37±0.5 °C were used in each test. A 5 ml aliquot was withdrawn at different time intervals, filtered and replaced with 5 ml of fresh dissolution medium. The filtered samples were suitably diluted whenever necessary and assayed for ITR by measuring absorbance at 258 nm. The dissolution studies were carried for the pure ITR and the prepared ITR salts inclusion complexes. Commercial ITR capsules Sporonax ® was also evaluated for dissolution for comparison. All the dissolution experiments were conducted in triplicate and the mean values are reported. Nuclear Magnetic Resonance Spectroscopy (NMR) NMR spectrum of Itraconazole showed ( Figure 4a Solubility Studies The solubility of ITR was found to be 1.388µg/mL in purified water and 7.59µg/mL in 0.1N HCl. The solubility of ITR salts ITRH, ITRM and ITRB in purified water was found to be 23.86μg/ml, 165.86µg/mL and 191.64µg/mL respectively. The solubility of ITR salts ITRH, ITRM and ITRB in simulated gastric fluid was found to be 93.60μg/ml, 402.6µg/mL and 508.7µg/mL respectively. These results clearly indicated that prepared salts have considerable influence on improvement of ITR solubility. 
Phase Solubility Studies The effect of Captisol on the aqueous solubility of ITR, ITRH, ITRM and ITRB was evaluated using the phase solubility method. The results (Table 1) showed an increase in the solubility of ITR, ITRH, ITRM and ITRB with increasing Captisol concentration, which indicates the effect of complexation. According to Higuchi and Connors, the phase solubility study indicated (Figure 5) that the curves can be classified as AP type (the solubilizer was proportionally more effective at higher concentrations). The positive curvature indicated the existence of soluble complexes of an order greater than one. Therefore, theoretical molar ratios (1:2 and 1:3) were chosen to prepare the solid complexes through different methods. The slope values were lower than one, i.e., 0.7801, 0.035, 0.0386 and 0.0106 for ITR, ITRH, ITRM and ITRB, respectively. Infrared Spectroscopy (IR) The infrared spectra of the pure drug (Figure 6) indicated the presence of characteristic peaks of the carboxylate group (O-C-O) in the range of 1550-1660 cm-1, a C-N stretch at 1073 cm-1, a chlorine group at 700-850 cm-1 and a benzene moiety in the 3000-3100 cm-1 region. The salt forms itraconazole mesylate and itraconazole besylate show a characteristic S=O peak in the range of 1345-1365 cm-1. FTIR studies revealed that itraconazole hydrochloride showed two typical bands at 3369 and 3283 cm-1 due to N-H primary stretching vibration, a band at 3170 cm-1 due to N-H secondary stretching, and characteristic bands at 1623 and 1560 cm-1 assigned to C=N stretching. FTIR results suggested that there is no significant chemical interaction between the drug and the Captisol-complexed products, which confirms the stability of the drug in the powdered form. X-ray Powder Diffraction (XRD) The XRD patterns of the ITR and ITR complex samples are shown in Figure 7. The pure drug pattern showed intense and sharp peaks at 2θ values of 16°, 20° and 28°, indicating its crystalline nature. The XRD patterns of the salts and the complexed products were found to have no such peaks, indicating their amorphous nature and inclusion complex formation with Captisol. Drug Content Estimation The percentage drug content of the different itraconazole complexes is shown in Table 2. The drug content was found to be in the range of 75.66±0.34% w/w to 99.45±0.18% w/w. The low standard deviation values indicated the uniformity of drug content of the prepared complexes. In Vitro Dissolution Study of Complexes The dissolution profiles of itraconazole from the pure drug and the different complexes prepared by physical mixing, kneading and co-evaporation techniques are shown in Figures 8a, 8b and 8c, respectively. The pure drug showed dissolution of 16.89% in 90 minutes, reflecting its poor solubility and thereby poor dissolution. The dissolution of the ITR-Captisol complexes at 1:2 and 1:3 weight ratios was 18.86% and 21.31% w/w for the simple physical mixtures, 32.84% and 49.87% w/w for the kneaded complexes, and 26.61% and 42.34% w/w for the co-evaporates, respectively. The data indicated that complexation of the drug alone with Captisol could not increase the dissolution to the required level. Further, the dissolution of the itraconazole salt complexes with Captisol at weight ratios of 1:2 and 1:3 was higher than that of the corresponding drug complexes (Figure 8d). The kinetics of ITR release from the complexes were studied by fitting the dissolution data to zero-order and first-order models (as shown in Table 4). 
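As an illustration of how the zero-order and first-order fits in Table 4 can be compared, the short Python sketch below regresses cumulative release against time (zero order) and the log of the remaining fraction against time (first order) and reports the correlation of each fit; the time points and release values are invented for illustration and are not data from this study.

import numpy as np

# Hypothetical dissolution profile: time (min) vs. cumulative % drug released
t = np.array([5, 10, 20, 30, 45, 60, 90], dtype=float)
released = np.array([8, 15, 26, 35, 44, 52, 63], dtype=float)

# Zero order assumes Q = k0 * t
r_zero = np.corrcoef(t, released)[0, 1]

# First order assumes ln(100 - Q) = ln(100) - k1 * t
r_first = np.corrcoef(t, np.log(100.0 - released))[0, 1]

# The model with the higher r^2 is taken as the better descriptor of release
print(f"zero-order r^2 = {r_zero**2:.3f}, first-order r^2 = {r_first**2:.3f}")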
The results indicated that the drug release follows first-order kinetics. The mechanism of drug release was found to be diffusion. The correlation coefficients for the Peppas equation indicated that dissolution follows Fick's law of diffusion. The study clearly indicated the usefulness of itraconazole hydrochloride, mesylate and besylate salt complexes with sulfobutyl ether-7 β-cyclodextrin in improving the solubility and dissolution rate of itraconazole. Among all the complexes, the ITRH and ITRB complexes with Captisol at a weight ratio of 1:2 prepared by the kneading method showed the highest dissolution rate. Conclusions The present study showed that itraconazole, a poorly soluble drug, exhibits very poor in vitro dissolution. Simple complexes of the drug with Captisol alone could not raise the dissolution rate sufficiently. The salt forms of ITR, namely itraconazole hydrochloride, besylate and mesylate, significantly improved the solubility and dissolution rate of itraconazole. Furthermore, complexation of these salts with Captisol further improved the dissolution rate of ITR. Among all the complexes, the ITRH and ITRB complexes with Captisol at a weight ratio of 1:2 prepared by the kneading method showed the highest dissolution rate, comparable with commercial Sporanox® capsules.
4,001.2
2014-01-01T00:00:00.000
[ "Chemistry" ]
The prostate cancer drug enzalutamide shortens anogenital distance in male rat offspring by blocking the androgen receptor Background: Enzalutamide is a non-steroidal anti-androgen drug used to treat prostate cancer. It is a potent androgen receptor (AR) antagonist, with an in vitro lowest observed effect concentration (LOEC) of 0.05 μM. In this study, we wanted to assess its utility as a model compound for future mechanistic studies aimed at delineating the mechanisms of action of anti-androgenic effects in the developing fetus. Methods: Enzalutamide in vitro activity was tested using an androgen receptor reporter assay (AR-EcoScreen™) and a steroidogenesis assay (H295R assay). For in vivo characterization, pregnant Sprague-Dawley rats were exposed to 10 mg/kg bw/day enzalutamide from gestational day 7 to 21. At gestational day 21, enzalutamide concentrations were measured in both amniotic fluid and fetal plasma, alongside anogenital distance (AGD). Fetal testes were collected for testosterone measurements and gene expression profiling. Results: Enzalutamide was a strong AR antagonist in vitro and we also observed disrupted androgen synthesis in the H295R steroidogenesis assay with a LOEC of 3.1 μM. In utero exposure resulted in about 20% shorter anogenital distance (AGD) in male fetuses, as well as signs of dysregulated expression of the steroidogenic genes Star, Cyp11a1 and Cyp17a1 in the fetal testes at gestational day 21. Intra-testicular testosterone levels were unaffected. Conclusions: Based on these observations, together with the in vitro LOECs and the fetal plasma levels of enzalutamide, we propose that the effect on male AGD was caused by AR antagonism rather than suppressed androgen synthesis. Due to the characteristic mechanism of action of enzalutamide, we suggest using it as a new model compound in research on anti-androgenic environmental chemicals. Anti-androgens such as bicalutamide target AR transcriptional activity by interfering with recruitment of coactivators to the transcriptional complex (16,17), whereas enzalutamide targets three key stages of AR signaling: blocking androgen binding, inhibiting translocation of activated AR and inhibiting binding of activated AR to the DNA (14). Thus, enzalutamide is a potent, specific inhibitor of androgen signaling. We recently completed a study using the 5α-reductase inhibitor finasteride as a model compound in an effort to tease out some of the underlying molecular mechanisms driving effects on male AGD (18). Based on in vitro data, we found that enzalutamide, apart from blocking the AR, also inhibited androgen synthesis in vitro. To follow up on this, we tested enzalutamide in an in utero exposure study, based on its known anti-androgenic effect, in order to investigate whether the compound affected AGD and, if so, which mechanism underlay the effect. We found that the AGD of the late gestation male fetuses was significantly shorter and that gene expression levels of steroidogenic enzymes in the fetal testis at GD21 were affected, albeit without significantly affecting intra-testicular testosterone levels. Thus, we conclude that enzalutamide causes its AGD effect by antagonizing the AR. AR-EcoScreen™ assay The antagonistic effects of enzalutamide on AR were investigated using the AR-EcoScreen™ reporter assay. For hormone quantification, one LC column was used with an injection volume of 100 µl, measuring in ESI- mode, with methanol and 1 mM ammonia in water as the mobile phases (gradient flow rate was 0.4 ml/min). 
For the other hormones an Ascentis Express C 8 column (2.1 × 100 mm, 2.7 µm) was used with an injection volume of 100 µl, measuring in ESI-/ESI + mode 9 with acetonitrile and 0.1% formic acid in water as the mobile phases (gradient flow rate was 0.25 ml/min). Ten hormones: testosterone, androstenedione, dehydroepiandrosterone (DHEA), corticosterone, cortisol, pregnenolone, progesterone, 17α-OH-progesterone, estradiol and estrone were quantified. The limit of quantification (LOQ) was 0.1 ng/ml for corticosterone, 1.0 ng/ml for DHEA and pregnenolone, 0.02 ng/ml for testosterone and androstenedione, 0.05 ng/ml for 17α-OH-progesterone, and 0.01 ng/ml for all the other hormones. For quantification, external calibration standards were run before and after the samples at levels of 0.1, 0.2, 0.5, 1.0, 2.0, 5.0 and 20 ng/ml, with 5.0 ng/ml internal standards: (testosterone-d2, methyltestosterone-d3, progesterone-c2, and estradiol-d3). The mass spectrometer was an EVOQ Elite Triple Quadropole Instrument from Bruker (Bremen, Germany) and the UPLC system was an Ultimate 3000 system with a DGP- Denmark). They were placed in an animal room with controlled environmental conditions: 12 hr light-dark cycles with light starting at 9 pm, temperature 22 ± 1 °C, humidity 55 ± 5%, 10 air changes per hr. All animals were fed a standard diet with Altromin 1314 (soy-and alfalfa-free, Altromin GmbH, Lage, Germany). Acidified tap water (to prevent microbial growth) in PSU bottles (84-ACBTO702SU Tecniplast) were provided ad libitum. The PSU bottles and cages as well as the aspenwood shelters (instead of plastic) were used to eliminate any risk of migration of bisphenol A that could potentially confound the study results. From GD7-21, dams were weighed daily and dosed by oral gavage by qualified animal technicians with a stainless steel probe 1.2 × 80 mm (Scanbur, Karlslunde, Denmark) with either vehicle control (corn oil) or enzalutamide (10 mg/kg bw/day) at a constant volume of 2 ml/kg bw per day. All animals were decapitated (guillotined) under CO2/O2-anesthesia at GD21. Caesarean sections GD 21 Dams were decapitated (guillotined) under CO 2 /O 2 -anesthesia at GD21 and fetuses were collected by caesarean section. The dams were exposed 1h ± 15 min before decapitation in the same order as Caesarean sections were performed to adjust for the chemical analysis of maternal blood, fetal blood and amniotic fluid. Uteri were taken out and weighed, and the number of live fetuses, resorptions, and implantations were registered. Body weights of the fetuses were recorded prior to decapitation (by a scissor). Maternal trunk blood was collected and transferred to heparin-coated vials. Trunk blood from all fetuses was collected and transferred to heparin-coated vials and pooled for each gender within each litter. Blood samples were kept on ice and centrifuged at 4000 rpm, 4 ºC for 10 min. Plasma was transferred to new tubes and stored at -80 ºC. Amniotic fluid was collected from all 11 fetuses, pooled within each litter, snap frozen in liquid nitrogen and subsequently stored at -80 ºC. AGD was measured as the distance between the genital papilla and the anus by the same, blinded technician using a stereomicroscope with a micrometer eyepiece. The AGD index (AGDi) was calculated by dividing AGD by the cube root of the body weight. 
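The AGD index described above is a single formula, AGDi = AGD / (body weight)^(1/3); a minimal Python sketch with invented numbers is given below and is not the study's analysis code.

def agd_index(agd_mm, body_weight_g):
    # AGD index: anogenital distance divided by the cube root of body weight
    return agd_mm / body_weight_g ** (1.0 / 3.0)

# Hypothetical GD21 male fetus: AGD of 3.2 mm and body weight of 5.5 g
print(f"AGDi = {agd_index(3.2, 5.5):.2f}")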
Fetal testes were isolated by dissection under a stereomicroscope and LC was performed on a Dionex Ultimate 3000 RS (Thermo Scientific, CA) with a Poroshell SB C-18 (100 × 2.1 mm, 2.7 µm particle size) column held at 30 °C (Agilent technologies, Walbron, Germany). The solvent system consisted of A: 2.5 mM ammonium hydroxide + 0.1% formic acid in water and B: acetonitrile. Solvent programming were: 2% B from 0 to 1 min followed by a linear gradient to 95% B to 14 min, isocratic 95% B from 14 to 16 min followed by reversal to initial conditions to 16.1 min and re-equilibration of the column to 20 min. The flow rate was 0.3 ml/min from 0 to 1 min followed by a linear gradient to 0.4 ml/min to 14 min, which was held to 16 min followed by reversal to initial conditions. Synthesis of cDNA and RT-qPCR analysis Protocols were essentially as previously described (23). Briefly, total RNA was extracted from GD21 testis (n = 12/group) using RNeasy mini kit and on-column Rn01751069_mH, Rps18 Rn01428913_gH, and Sdha Rn00590475_m1. In addition, primers and probes Cyp17a1, Cyp11a1 and Star were designed in our lab (24). The following cycling conditions were used: an initial step of 95 °C for 20 sec followed by 45 two-step thermal cycles of 95 °C for 1 sec and 60 °C for 20 sec. The relative transcript abundance was calculated using the 2 − ΔCT method using Rps18 and Sdha as normalizing genes. Statistics Data from the AR-Eco Screen and H295R assay were analyzed by one-way ANOVA followed by Dunnett's post hoc test in GraphPad Prism 5 (GraphPad Software, San Diego California, USA). Results are presented as mean ± SEM for the three independent experiments. One measurement of DHEA in the H295R assay was lost so the DHEA data presented are from two independent experiments. In vivo data on maternal parameters, fetal body weight, AGD and AGDi, were analyzed by one-way ANOVA followed by Dunnett's post hoc test, using SAS® (SAS Enterprise Guide 6.1, SAS Institute, Inc., Cary, NC, USA). AGD was analyzed using fetal weight as a covariate and fetal body weights were analyzed using the number 15 of offspring per litter as covariate. For all analyses, the litter was the statistical unit. Statistical analyses were adjusted using litter as an independent, random and nested factor. For data presentation, group mean ± SEM was calculated from 6 litters/group based on litter means. Analysis of enzalutamide concentrations in plasma and amniotic fluid as well as RT-qPCR data and intra-testicular testosterone levels was performed with student's ttest in GraphPad Prism 8 (GraphPad Software, San Diego California, USA). In cases of non-normal distribution or non-equal variance between groups, data was logtransformed prior to analysis, while the graphs still represent the untransformed data. For data presentation, mean ± SEM was calculated from 6 litters/group (maternal plasma, amniotic fluid and fetal plasma), 1 testicle from 6 fetuses /group (fetal intra-testicular testosterone) and 1 testicle from 12 fetuses/group (RT-qPCR). Enzalutamide acts as an AR antagonist in vitro The AR agonistic and antagonistic potential of enzalutamide were investigated using the AR-EcoScreen™ assay (19). Enzalutamide did not show any agonistic activity ( Fig. 1A) but showed AR antagonistic activity at all concentrations (p < 0.001) between 0.05-12.5 µM (Fig. 1B). The lowest observed effect concentration (LOEC) was 0.05 µM while the IC 50 value was 0.1 µM. 
We confirmed that the reduction in luciferase activity was not due to cytotoxicity following enzalutamide exposure (Fig. 1C). Enzalutamide affects steroidogenesis in vitro The H295R Steroidogenesis Assay (20) was used to test whether enzalutamide affects the synthesis of ten sex steroid hormones (Fig. 2). All steroid hormones, except for the two estrogens estrone and estradiol, were affected by enzalutamide. Pregnenolone levels were slightly increased with a LOEC of 3.1 µM. However, no increase was seen at concentrations of 25 µM and above. The progestagens, progesterone and 17α-OH-progesterone were decreased with LOECs of 1.8 and 6.3 µM, respectively. The levels of the two androgens, androstenedione and testosterone were decreased, both with LOECs of 3.1 µM. By contrast, the adrenal androgen and precursor of the other androgens, dehydroepiandrosterone (DHEA), was increased with a LOEC of 0.8 µM. The corticosteroids, corticosterone and cortisol were generally decreased with LOECs of 6.3 and 25 µM, respectively, although cortisol was increased at 0.8 µM. Enzalutamide was present in different biological compartments and male AGD was shorter in exposed animals at GD21 Pregnant Sprague Dawley rats were exposed to 10 mg/kg bw/day enzalutamide from GD7 to GD21 and maternal as well as fetal parameters were investigated at GD21. The distribution of enzalutamide in the different biological compartments (i.e. maternal plasma, amniotic fluid and fetal plasma) was determined at GD21 by HPLC-MS/MS. In maternal plasma the concentration was 1002 ± 145 nM, while it was 285 ± 46 nM in amniotic fluid, and 282 ± 53 nM and 122 ± 21 nM in the plasma of the female and male fetuses, respectively (Fig. 3). We measured maternal body weight, weight gain (GD 7-21), and uterus weight to determine if enzalutamide exposure resulted in maternal toxicity, but observed no treatment-related significant differences between the groups or any signs of maternal toxicity (Table 1). In addition, the number of fetuses as well as fetal weights of both males and females were similar between the two groups (Table 1). Attesting to the anti-androgenic potential of enzalutamide, we found that both AGD and AGDi was 19% shorter (p < 0.001) in exposed males compared to control males (Fig. 4, Table 1). There were no significant differences in either AGD or AGDi between exposed females and control females (Table 1). Enzalutamide affects gene expression of steroidogenic enzymes in male fetal testis, without affecting intra-testicular testosterone levels Based on our observations that enzalutamide affects steroidogenesis in vitro, we assessed expression levels of key genes encoding steroidogenic enzymes in the male fetal testis at GD21. There was an upregulation of Star (p < 0.001), Cyp11a1 (p < 0.05), Cyp17a1 (p < 0.05) and a downregulation of Nr5a1 (p < 0.05) following in utero exposure to enzalutamide (Fig. 5). The expression levels of the two hydroxysteroid dehydrogenase genes, Hsd3b1 and Hsd17b1, were unchanged, as were the germ cell marker gene Ddx4 and Sertoli cell marker Sox9 (Fig. 5). Next, we measured the intra-testicular testosterone levels, but found no significant difference between control males and exposed males (Fig. 6). Finally, histopathological assessment of H&E stained fetal testis sections showed no adverse alterations to testis histology (Fig. 7). DISCUSSION Enzalutamide is a second generation prostate cancer drug used to treat men with mCRPC (13). 
In this study, we explored the use of enzalutamide as a potential model compound or for investigations of anti-androgenic actions of environmental chemicals. Our particular interest in this regard is the use of target-specific compounds to delineate the molecular mechanisms driving the development of anogenital tissues. The purpose is to obtain better knowledge on the utility of AGD as a general biomarker for fetal anti-androgenic effects. Enzalutamide is designed to specifically target the AR (14). In our AR reporter gene assay the IC 50 for AR antagonism was calculated to be 0.05 µM, which is close to the IC 50 of 0.03-0.05 µM previously reported (14,25). Our results thereby confirmed that enzalutamide acts as an AR antagonist, without having any AR agonistic activity. Because of these specific antagonistic properties, we performed an in utero exposure study in rats, with focus on effects on the male AGD. We measured the concentration of enzalutamide in maternal plasma, amniotic fluid and fetal plasma at GD 21 in Sprague Dawley rats and found it present in all three compartments. These data clearly show that enzalutamide can transfer across the placenta, albeit the maternal concentration was about four times higher than in the fetal compartments. Notably, we also measured almost twice the concentration of enzalutamide in male plasma compared to female. It remains unclear why this difference was observed, but sex differences in pharmacokinetics is a welldocumented phenomenon, both in humans and animals (26). Furthermore, the fetal plasma concentration of 0.1-0.3 µM is twice or more than the LOEC of 0.05 µM for AR antagonism observed in vitro. Taken together, these data show that an exposure level of 10 mg/kg bw/day during gestation is appropriate to reach biologically active enzalutamide levels in the fetus and reduce fetal androgen signaling. As a reduced AGD caused by fetal androgen insufficiency is a well-known marker in both humans and rodents, we believe that the observed effects are translatable to humans. Fetal androgen insufficiency, caused by low androgen levels or blockage of AR signaling, can prevent the male perineum to develop properly and result in a short AGD (8). One previous report shows that enzalutamide induces shortening of male AGD in mice, although they do not state how much shorter AGD is compared to controls (27). We found the average AGD to be 19% shorter in enzalutamide exposed males than in control males. This effect is less pronounced than previously observed with the prostate cancer drug and AR antagonist flutamide, which shortened male AGD by 19% at a dose of only 2 mg/kg (28), and between 35-43% in the 6-8 mg/kg dose range (28,29). Conversely, enzalutamide has a greater effect on male AGD than some of the pesticides known to interfere with AR, such as vinclozolin and procymidone, where doses of 50-100 mg/kg are required to induce a similar shortening of AGD (28,(30)(31)(32). Because of the greater potency and the known mode of action of selective pharmaceuticals like flutamide, finasteride or enzalutamide, they are often better suited as model compounds than less specific environmental chemicals when performing studies aimed at characterizing mechanisms of effects. Apart from blocking AR action, enzalutamide was also found to disrupt hormone synthesis in the H295R in vitro steroidogenesis assay. Progestagen, androgen and corticosteroid synthesis was downregulated, whereas estrogen synthesis was unaffected. 
Dehydroepiandrosterone (DHEA), on the other hand, was markedly elevated. It remains unclear exactly how enzalutamide causes these effects in the steroidogenic pathway, but judging from the hormone profile, we speculate that 3β-hydroxysteroid dehydrogenase is inhibited. With the decrease in androstenedione and testosterone, a decrease in estrogens could have been expected. It is, however, possible that sulfotransferase activity, which is responsible for metabolism of estrogens (33), was reduced. Reduced sulfotransferase activity could offset the decrease in estrogen synthesis and result in unchanged estrogen levels. Another possible explanation could be induction of CYP19, as enzalutamide has previously been shown to induce other CYPs, specifically CYP2C9, CYP2C19 and CYP3A4 (34). These in vitro assays, together with physiologically based kinetic modeling, have the potential to predict AGD effects in the future, thereby minimizing animal testing. Since steroidogenesis was affected in vitro, we also investigated the fetal testis for signs of steroidogenic disruption. We saw increased expression of the key steroidogenic genes Star, Cyp11a1 and Cyp17a1. We initially speculated that this could be a compensatory response to reduced testosterone levels. However, we did not detect any significant reduction in intra-testicular testosterone levels. Thus, at the functional level, the effects of enzalutamide on the fetal testis may be of minor importance compared to its effect on AR activation in peripheral tissues. In support of this, the in vitro LOEC on testosterone synthesis was 3.1 µM, a concentration that is much higher than the actual measured fetal levels of 0.1 µM. Nevertheless, our results point to some degree of disturbance to fetal testis function, but more studies are needed to elucidate the exact mechanism of action of enzalutamide on the steroidogenesis pathway. CONCLUSION In summary, enzalutamide has strong AR antagonistic effects both in vitro and in vivo. While we observed weak inhibition of steroid synthesis in the rat, this is likely of minor importance in the mode of action, as intra-testicular testosterone levels were not affected. Since enzalutamide is used to treat prostate cancer patients, fetuses will most likely not be exposed to the compound under normal circumstances, albeit environmental exposure cannot be ruled out. The effects of enzalutamide on the developing male fetus are therefore not of immediate concern, but they confirm that enzalutamide is a valuable model compound for future studies on effects of AR antagonists. Enzalutamide does not affect intra-testicular testosterone at GD 21 (intra-testicular testosterone levels). Enzalutamide does not affect fetal testis histology (cross-sections of formalin-fixed GD21 testes). Supplementary Files: a list of supplementary files accompanies the primary manuscript.
4,342.8
2019-11-27T00:00:00.000
[ "Medicine", "Biology" ]
XplaiNLI: Explainable Natural Language Inference through Visual Analytics Advances in Natural Language Inference (NLI) have helped us understand what state-of-the-art models really learn and what their generalization power is. Recent research has revealed some heuristics and biases of these models. However, to date, there is no systematic effort to capitalize on those insights through a system that uses them to explain NLI decisions. To this end, we propose XplaiNLI, an eXplainable, interactive, visualization interface that computes NLI with different methods and provides explanations for the decisions made by the different approaches. Introduction We present XplaiNLI, an interactive, web-based visualization interface that computes Natural Language Inference (NLI) with three different approaches and provides sketches of explanations for the decision made by each approach. An overview of XplaiNLI is found in Figure 1. The user on the frontend (right) inputs a premise (P) and a hypothesis (H). The pair is passed to the backend (left) where it goes through a symbolic and a deep learning (DL) component, which each compute an inference label. Each component also determines the rules and features that lead to the decision: for the symbolic one, we use Natural Logic (Valencia, 1991) inference rules to explain the inference label, while for the DL approach, we use insights gained from relevant work (Naik et al., 2018; Gururangan et al., 2018; Dasgupta et al., 2018; McCoy et al., 2019) to account for the decision. The complete output enters the hybrid component, which combines the strengths of the symbolic NLI engine and the DL model and determines which approach's label should be trusted based on semantic characteristics of the sentences. All output is forwarded to the frontend, where an intuitive visualization encodes the inference labels of the three approaches as well as the corresponding explanations. The user can interact further with the interface by adding her own heuristics and by providing feedback on the inference label, which is used for improving the separate components. Related Work Work on interpretability for NLI is still at an early stage. One strand of research explains the models by "stress-testing" them and revealing the phenomena that the models cannot handle or by detecting bias in the training data (Gururangan et al., 2018; Dasgupta et al., 2018; McCoy et al., 2019, inter alia). Another strand of research has approached the task by directly learning natural language explanations along with the inference decision (Camburu et al., 2018) or by creating distributional representations of syntactic and semantic inference rules (Zanzotto and Ferrone, 2017) and training machine-learning models on them. Although all these approaches shed light on the processes behind the reasoning task, the insights gained have not yet been used to their full potential; XplaiNLI seeks to fill this gap. XplaiNLI Backend Model The backend outputs the inference relation for a given pair, as well as the features that lead to that decision, based on each of the following three approaches. Figure 1: The high-level architecture of XplaiNLI: on the left, the three NLI approaches providing an inference label and explainable features, and on the right, the interactive, explainable, visual frontend. The exact backend implementation and the performance of each of the approaches is detailed in Kalouli et al. (2020); this paper focuses on explainability. 
The Deep Learning Component For the DL component we use BERT-base (Devlin et al., 2018), one of the state-of-the-art models for NLI, which we fine-tune for our task. For fine-tuning, we use the SemEval 2014 version of SICK (Marelli et al., 2014). We utilize a corrected version of the corpus 2 to mitigate some of the shortcomings of the original corpus, e.g., event and entity coreference issues. We do not fine-tune on other commonly-used benchmarks, such as MNLI (Williams et al., 2017), as these corpora suffer from similar problems. For fine-tuning, we use the HuggingFace implementation 3 and we fine-tune the parameters suggested by the authors: batch size, learning rate and number of epochs. Our best performing model uses a batch size of 32, learning rate of 2e-5 and 3 epochs. The trained model classifies an input pair into E(ntailment), C(ontradiction) or N(eutral). To provide potential explanations for the model's decision, we implement the findings of Naik et al. . Their work has revealed specific heuristics and artifacts that arguably appear in the training sets of these models and can thus explain to some extent the way the models label a pair. Particularly, we implement four kinds of heuristics/explanations. First, the presence of negation. As observed by Naik et al. (2018), Dasgupta et al. (2018) and McCoy et al. (2019), negation words such as no, not, don't, nobody, etc. make the model predict C, consistent with the heuristic found in the SNLI training set. Second, we follow Dasgupta et al. (2018), Naik et al. (2018) and McCoy et al. (2019) and compute the lexical overlap of the two sentences. It is argued that whenever H is completely contained in P, the models tend to predict E, no matter the word order or other constraints. The third heuristic of sentence length is similar (Naik et al., 2018;Gururangan et al., 2018): Hs that are much longer than their Ps tend to be neutral, while Hs that are shorter than their Ps tend to be entailed. Last, we add relation-specific word heuristics. According to the findings of Gururangan et al. (2018), specific words being present in H or/and P are characteristic for a specific inference relation. So, generic words like animal, instrument, outdoors are mostly found in the Hs of entailments, while modifiers and superlatives like sad, tall, best, first are mostly found in neutral pairs. The Symbolic Component The symbolic component implements a version of Natural Logic (NL) (Valencia, 1991). NL attempts to explain inferences through monotonicity, i.e., by whether the concepts expressed in a sentence can become "more general" or "more specific" salva veritate. For example, in the sentence a woman is walking, woman can be replaced by the more general person while preserving truth. The symbolic component is based on an improved version of the Graphical Knowledge Representation (GKR) by Kalouli and Crouch (2018) -GKR allows for the kind of inference mechanism we require. In the first stage of the process, P and H are parsed to their GKR representations, each producing six default GKR graphs: a dependency graph, a conceptual graph, a contextual graph, a lexical graph, a properties graph and a coreference graph. In the next stage, the lexical graphs, which contain, for each content word, the WordNet (Fellbaum, 1998) senses, synonyms, antonyms, hypernyms, hyponyms and the SUMO (Niles and Pease, 2001) concepts, superconcepts and subconcepts are used to determine matches between H and P and their specificity. 
For example, person in H can be matched to woman in P and be assigned the specificity superclass: person is a hypernym of woman. One of the four specificity markers (equal, subclass, superclass, disjoint) can be assigned. In the next stage, the determined specificities are updated based on the predicate-argument structure of each sentence, captured in the concept graph. For instance, woman is a subclass of person but it is not a subclass of tall person (not all women are tall). For the two terms of a match, the system considers if both, none or only one of them have dependents (modifiers/arguments) in their respective concept graph. Based on that, different update rules apply. For example, if person in H has additional dependents such as tall but woman in P does not, then the match becomes more specific: since H (person) was already more general than P (woman) (specificity superclass), then making this match more specific leads to the specificity becoming undetermined (none). After updating all H-P matches, the exact inference relation is determined based on the GKR context graphs, the instantiabilities they contain and the specificities of the matches. For example, if the H-term is instantiated and more or equally specific than the uninstantiated P-term (a womanno woman), there is a contradiction. If the H-term is instantiated and more general (a personno woman) than the P-term, we cannot determine the relation. Similarly for entailments: if the match is equally or more specific and both terms are instantiated, there is an entailment (a woman -a woman. See Kalouli et al. (2020) for more details on the symbolic engine. These rules, i.e. the exact combinations of specificity relations and contexts, can be used straightforwardly to explain the decision made by the symbolic component. The Hybrid Component The hybrid approach is based on the fact that distributional features are suitable for dealing with conceptual aspects of the meanings of words, phrases, and sentences, but struggle with Boolean and contextual phenomena like modals, quantifiers, negation, implicatives, propositional attitudes, conditionals, etc. (Dasgupta et al., 2018;Naik et al., 2018;McCoy et al., 2019, to name only a few). These are phenomena to which more symbolic/structural approaches are well suited. Thus, we expect that "easy" cases which do not involve such phenomena will be best handled by the DL approach, while hard linguistic phenomena like the ones mentioned will be best handled by the symbolic approach. Thus, the hybrid component determines whether to use the symbolic or the DL label as its own inference label, based on specific semantic characteristics of the pair. During training, the hybrid classifier learns for each pair which of the components delivers the right label (again based on the SICK-train corpus): the symbolic one (S), the DL one (DL) or both of them (B). 4 With this, the classifier indirectly learns whether the pair is "easy" or hard: if S is right, the pair is probably hard; if DL is right, the pair is probably easier; if both are right, we cannot make any claims about the nature of the pair. The learning is based on the implemented rules of the symbolic component (cf. Section 3.2), which are converted to features, e.g., the pair P: The woman is walking. H: The person is not walking would be assigned the features veridical, antiveridical, superclass because the match person-woman has the superclass specificity and the highest match walk-walk is instantiated in P and uninstantiated in H. 
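The decision rules above can be read as a small lookup from the specificity of a match and the instantiability of its two terms to an inference label. The toy Python function below covers only the fragment of that logic spelled out in the examples; it is an illustrative simplification, not the GKR-based engine described in Kalouli et al. (2020), and the function and argument names are hypothetical.

def infer_label(specificity, p_instantiated, h_instantiated):
    # specificity of the H-term relative to the P-term:
    # "equal", "subclass", "superclass", "disjoint" or "none"
    more_or_equally_specific = specificity in ("equal", "subclass")

    if h_instantiated and not p_instantiated:
        # e.g. "a woman" (H) vs. "no woman" (P): contradiction;
        # a more general H-term (e.g. "a person") leaves the relation undetermined
        return "contradiction" if more_or_equally_specific else "neutral"
    if h_instantiated and p_instantiated and more_or_equally_specific:
        # e.g. "a woman" (H) vs. "a woman" (P): entailment
        return "entailment"
    return "neutral"  # cases not covered by this toy fragment

print(infer_label("equal", p_instantiated=False, h_instantiated=True))       # contradiction
print(infer_label("equal", p_instantiated=True, h_instantiated=True))        # entailment
print(infer_label("superclass", p_instantiated=False, h_instantiated=True))  # neutral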
These features (rules) capture the effects of hard linguistic phenomena like modals, negation, quantifiers, implicatives, factives, etc. To target explainability and as decision trees have been shown to be one of the most interpretable models (Guidotti et al., 2018), we train a Random Forest classifier (Gini impurity) with 30 estimators: 5 each pair is classified as one of S, DL or B, and then mapped to the respective label: if classified as S or DL, the symbolic or the DL inference label are used, respectively; if classified as B, then either one of S or DL can be chosen but we use the DL label for higher robustness. The features used for prediction are also used for explainability purposes. Explainable Visual Interface The user interface (Figure 1, right) features three main components, all emphasizing the role of the human-in-the-loop. Two text fields (for P and H) allow users to insert the inference pair to be computed. Visualizing Explanations With the submission of the input pair, the system on the backend computes one inference label for each approach as well as explanations for each label. The results are visualized with an intuitive visualization schema (Figure 1, right): each sentence of the pair is presented along with all features that could lead to a certain inference label. On the left side, the user can find the features (rules) of the symbolic approach and on the right, the features of the DL model. The features that are relevant for this pair are colored and contain , if the feature's value is true, or no , if the value is false. The color of the features encodes the inference relation that each approach predicted: green is for E, red for C and grey for N. Some DL features might have lower opacity: this means that they should -according to the literature -lead to a different label than the one actually predicted by the model. In this way, the user can verify previous literature findings or discover new patterns. The colored features are then linked with the predicted inference label, also encoded by color. No link between the DL features and the label means that the prediction is not based on any of these features. In the middle of the visualization, the user can find the label of the hybrid approach, marked with bold text. Again, links visualize the behavior of the approach: if there is a link between the symbolic decision and the hybrid one, the hybrid approach chose the symbolic label; if the link is between the DL label and the hybrid one, the hybrid approach chose the DL label. If both links exist, then the labels of symbolic and DL were the same and so the hybrid approach just chose one of them. In terms of visualization, all features used for the hybrid decision are marked with a grey H in increasing opacity: the darker the color, the more weight this feature had for the decision. User-defined Heuristics Along with the input pair, users can also input words -also words not found in P or H -that are expected to act as heuristics for a certain inference relation. The option of input words is available for both P and H and for all three inference relations. For instance, the user can insert the word asleep in the Contradiction field of H to check the artifact that hypotheses containing the word asleep are bound to be labeled as C by a DL model. 
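The word-level heuristics used to explain the DL component (Section 3.1) and the user-defined heuristic words just described boil down to simple surface checks on P and H. The Python sketch below shows one plausible way to compute such features; the word list, the length thresholds and the function name are illustrative assumptions, not the XplaiNLI implementation.

NEGATION_WORDS = {"no", "not", "n't", "never", "nobody", "nothing"}  # illustrative list

def surface_heuristics(premise, hypothesis, user_words=None):
    # Very simple whitespace tokenization, for illustration only
    p_tokens = set(premise.lower().split())
    h_tokens = set(hypothesis.lower().split())
    user_words = {w.lower() for w in (user_words or [])}
    length_ratio = len(hypothesis.split()) / max(len(premise.split()), 1)

    return {
        "negation_in_pair": bool(NEGATION_WORDS & (p_tokens | h_tokens)),  # often pushes models to C
        "full_lexical_overlap": h_tokens <= p_tokens,                      # often pushes models to E
        "hypothesis_much_longer": length_ratio > 1.5,                      # often co-occurs with N (threshold is a guess)
        "hypothesis_shorter": length_ratio < 1.0,                          # often co-occurs with E
        "user_heuristic_hit": bool(user_words & h_tokens),
    }

print(surface_heuristics("A woman is sleeping on the couch",
                         "A woman is not asleep",
                         user_words=["asleep"]))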
Due to the system's architecture (see Section 3), only the DL model's behavior can be probed and explained with such additional heuristics; the symbolic approach is based on predefined inference rules and the hybrid approach uses semantic features to make its decision, independently of surface heuristics. The current version of the system only supports the search for specific words as heuristics; future versions will extend to further user-defined heuristics, e.g. Part-Of-Speech tags. Learning from User Feedback The labels of the hybrid decision are at the same time clickable buttons with which users can provide their own annotation of the pair. With this annotation, an (offline) learning process is initiated: the pair and the user's annotation are added to the training pool of the DL model so that the model can be re-trained on an increasingly large dataset. Whenever enough data has been collected, the model is re-trained; this re-training also triggers the re-training of the hybrid model, leading to improved results. Conclusion This paper presented an interactive visualization interface for explainable NLI. The interface uses three different approaches to compute inference and visualizes the features that lead to each decision. In contrast to black-box machine-learning models, this approach enables users to gain intuitions about the decision-making process (Spinner et al., 2020), as well as to distill linguistic knowledge about the analyzed phenomena. The options for user-defined heuristics and user-driven learning can help refine the models and components used and optimize them to the users' intuition and domain understanding. To increase explainability and comparability, future work will allow the user to a) choose between different DL models for training, b) choose between hybrid models trained on different datasets, c) define their own rules for the hybrid classifier, and d) display the decision tree of the hybrid classifier for better exploration.
Radiation‐induced tissue damage and response Abstract Normal tissue responses to ionizing radiation have been a major subject for study since the discovery of X‐rays at the end of the 19th century. Shortly thereafter, time–dose relationships were established for some normal tissue endpoints that led to investigations into how the size of dose per fraction and the quality of radiation affected outcome. The assessment of the radiosensitivity of bone marrow stem cells using colony‐forming assays by Till and McCulloch prompted the establishment of in situ clonogenic assays for other tissues that added to the radiobiology toolbox. These clonogenic and functional endpoints enabled mathematical modeling to be performed that elucidated how tissue structure, and in particular turnover time, impacted clinically relevant fractionated radiation schedules. More recently, lineage tracing technology, advanced imaging and single cell sequencing have shed further light on the behavior of cells within stem, and other, cellular compartments, both in homeostasis and after radiation damage. The discovery of heterogeneity within the stem cell compartment and plasticity in response to injury have added new dimensions to the consideration of radiation‐induced tissue damage. Clinically, radiobiology of the 20th century garnered wisdom relevant to photon treatments delivered to a fairly wide field at around 2 Gy per fraction, 5 days per week, for 5–7 weeks. Recently, the scope of radiobiology has been extended by advances in technology, imaging and computing, as well as by the use of charged particles. These allow radiation to be delivered more precisely to tumors while minimizing the amount of normal tissue receiving high doses. One result has been an increase in the use of schedules with higher doses per fraction given in a shorter time frame (hypofractionation). We are unable to cover these new technologies in detail in this review, just as we must omit low‐dose stochastic effects, and many aspects of dose, dose rate and radiation quality. We argue that structural diversity and plasticity within tissue compartments provides a general context for discussion of most radiation responses, while acknowledging many omissions. © 2020 The Authors. The Journal of Pathology published by John Wiley & Sons Ltd on behalf of Pathological Society of Great Britain and Ireland. Introduction Within weeks of Röntgen's discovery of X-rays in 1895 [1], many workers developed dermatitis from using low power X-ray tubes to try to reproduce his findings. A few years later, Becquerel showed that natural sources of radioactivity can also cause inflammatory skin burns. Unlike thermal burns, radiation burns develop after a characteristic latent period, an observation that prompted Pierre and Marie Curie to self-experiment on the dose relationships for latency and persistence of radium-induced lesions. The latency of inflammation in the human skin was in fact sufficiently predictable to calibrate radiation tubes for clinical dosimetry using the minimal erythematous dose as the unit. In 1905, Heineke [2] noted that the chronology of latencies for different tissues was relatively constant across species, even though Miescher [3] noted several 'waves' of erythema in human skin that Pohle [4] later attributed to radiation-induced changes in capillary density.
Early histopathological observations on the vascular effects of ionizing radiation (IR) documented swelling and degeneration of endothelium and capillary occlusion [5], hyperemia and exudation of serum and red cells [6], capillary leakage [7] and inhibition of vascular capillary budding [8]. A lengthy debate began as to the importance of vascular radiation damage for loss of tissue function that continues to this day [9]. Although vascular responses are clearly relevant, in general the radiation response of different adult tissues is best explained by their diversity of structure and endogenous stem/progenitor cell content, which are the major thrust of this review. Heineke [10] was the first to point to tissue differences in time-dose responses, contrasting the very rapid appearance of radiation-induced lymphopenia with the 2-week latency of severe dermatitis, and with the relative lack of changes in liver and kidney over the same time period. He also noted that lymphocytes died within a few hours and were cleared by phagocytes that appeared in tissues in large numbers [11]. Differences in radiosensitivity between cell populations within a tissue were highlighted by Regaud and Blanc's [12] detailed histological descriptions of spermatogenesis. Bergonie and Tribondeau [13] condensed findings at that time into a 'law', saying essentially that 'the effects of irradiation on cells are more intense the greater their reproductive activity' [14]. This holds some truth, but there are many exceptions, not the least being that some non-dividing cells, such as small lymphocytes, are very radiosensitive, dying by apoptosis. Regaud and Nogier [15] went on to show that three radiation doses given 15 days apart could sterilize rams without damaging the scrotum, indicating that differences in proliferative potential between tissues could be exploited by dose fractionation. This has remained a central thesis of cancer radiation therapy (RT) until recent times, with 1.8-2 Gy per fraction given daily, 5 days a week for 5-7 weeks becoming 'conventional' treatment. The optimization of RT for cancer clearly needed to be understood to determine how best to manipulate the time, dose and size of dose per fraction to exploit differences between individual normal tissues and tumors. This was aided by the advent of in vitro clonogenic assays [16] and in vivo colony-forming assays developed by Till and McCulloch [17]. Considered to be the 'fathers of stem cell science', Till and McCulloch showed that bone marrow cell transfer in mice could prevent lethality after whole body irradiation (WBI) and that the radiosensitivity of the stem cells could be assessed in vivo by their ability to form colony-forming units in the spleen (CFU-S) after transfer. Withers extended the stem cell approach by developing in situ clonogenic assays for radiation responses in skin [18], jejunum [19], colon [20], testes [21] and kidney [22], whereas Jirtle et al [23] used an in vivo transfer system to quantify radiation responses of hepatocytes. Functional assays were developed for other tissues, including lung (pneumonitis and fibrosis), spinal cord (paralysis), wound healing (breaking strength), mucosa (inflammation) and hair follicles (epilation). Withers and coworkers [24] developed an isoeffect formula that became the most popular way to compare these clonogenic and functional endpoints in different tissues based on simple linear (α) and quadratic (β) components. 
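For orientation, the linear and quadratic components referred to here are usually written in the standard linear-quadratic (LQ) form shown below; this is a textbook formulation and worked example, not necessarily the exact isoeffect expression of Withers and coworkers.

```latex
% Standard linear-quadratic (LQ) survival model for a single dose d:
S(d) = \exp\left(-\alpha d - \beta d^{2}\right)
% Biologically effective dose (BED) for n fractions of size d:
\mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right)
% Worked example: 30 fractions of 2 Gy (total 60 Gy).
% Late responding tissue, alpha/beta = 3 Gy:   BED = 60(1 + 2/3)  = 100 Gy_3
% Acute tissue or tumor, alpha/beta = 10 Gy:   BED = 60(1 + 2/10) =  72 Gy_10
```

On this reading, tissues with a low alpha/beta ratio (late responding) gain relatively more from small doses per fraction, which is the pattern described in the next paragraph.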
By plotting isoeffect curves for fractions of different sizes, they showed that endpoints for late responding normal tissues changed more dramatically with size of dose per fraction than those for acute tissues (up to 6 weeks after RT); a difference that could be described by α/β ratios [25]. Withers [26] also summarized the biology behind dose fractionation effects by the 4Rs. (1) Repair of sublethal damage that spares late responding tissues with slow turnover, e.g. CNS. (2) Redistribution into the radiosensitive G2/M phase of the cell cycle that spares tissues with slow turnover. (3) Repopulation/regeneration that spares normal tissues with rapid turnover that can proliferate during a fractionated course, e.g. mucosa. (4) Reoxygenation that decreases the hypoxic radioresistant fraction within tumors. These principles have guided clinical radiation oncologists for decades and prompted successful clinical trials for fractionation schemes that deviated from the classical 2 Gy per fraction [27]. This review will focus on how structural diversity between adult tissues impacts their radiation responses and how plasticity within tissue compartments might impact their regeneration. We are not able to consider, in full, important issues like dose, dose rate and the quality of radiation, or non-adult tissues. Low-dose stochastic effects will be sacrificed in favor of coverage of deterministic effects. Much of the relevant radiobiology is derived from animal models using photon irradiation delivered to fairly large fields, but we will briefly comment on how new technologies entering the clinic for cancer RT might change existing paradigms. Tissue diversity and response Tissues vary greatly in both their tolerance to radiation and in their latency, which is determined largely by the tissue turnover. For example, in C3H mice, 14 Gy X-rays deplete epithelium in about 3 days in the jejunum, 5 days in the colon, 10 days in the stomach, 12-24 days in the skin, 30 days in seminiferous tubules of the testis and 300 days in kidney tubules [22]. Stem cells have also been reported to turn over at very different rates in different tissues under steady-state conditions [28]. Under homeostatic conditions, by definition, the rate of cell production equals the rate of cell loss, i.e. the cell loss factor (ϕ) = 1.0. Asymmetric division to give cell loss may be a property of individual 'stem' cells or of a population that stochastically produces on average 50% 'stem' and 50% differentiating cells [28,29]; properties that may be autonomous or mediated through a supportive 'niche', a concept first defined in 1978 [30]. Regeneration occurs after cell loss and requires an increase in cell production and a decrease in ϕ to <1.0. The advent of lineage tracing technology, advanced imaging and single cell sequencing has improved the accuracy with which the behavior of stem, and other, cellular compartments in tissues can be studied. These are reviewed elegantly elsewhere [28]. The consensus is that there is a 'continuum of stemness' with considerable steady-state heterogeneity in numbers and organization, as well as 'plasticity', which includes reprogramming of more differentiated cells towards stemness and that could be particularly relevant in radiation-induced regeneration [31,32]. Given this heterogeneity, we will use the term 'stem' cell loosely, rather than attempt to define it with respect to specific markers, which may be misleading [28].
The basic cellular elements present in most normal tissues, and the effects of IR, are summarized in Figure 1. In a sense, these recent findings of stem cell heterogeneity strengthen the relevance of in situ clonogenic assays as measures of radiation response, as the term 'clonogen' can be taken to represent a functional regenerative unit that implies stem cell involvement but without prejudice. In addition to the variables noted above, differences between tissues with respect to radiosensitivity and volume effects may also be explained by their organization into functional subunits (FSU), where a FSU is the volume that can be regenerated from one surviving clonogen [33]. For example, epilation may occur after a lower radiation dose than desquamation simply because hair follicles have a smaller number of clonogens/FSU [34]. A FSU may correspond to a structural element, for example a tubule, or may not, for example, it may depend on how far a clone can migrate. If FSUs are arranged in series, like links in a chain, as in the spinal cord or nerves, tissues will probably demonstrate a strong, high dose-volume effect over a short distance. If arranged in parallel, as in skin and liver, the tissue will be better able to tolerate high doses to small volumes but may be susceptible to low doses to large volumes as function will be determined largely by the volume not irradiated. Tissue tolerance to IR is therefore a function of both the number and the radiosensitivity of clonogens in a FSU, and the number and organization of FSUs in a tissue. This analysis requires that normal tissues have stem cells or clonogenic regenerating units, which will now be discussed with respect to tissue turnover. Acute responding tissues Acute responding tissues, like intestine, bone marrow, skin and testes, turn over rapidly and have well-defined stem cell populations, at least a portion of which actively cycle under steady-state conditions. However, responses of these tissues to IR vary considerably, as does their ability to regenerate. Regeneration can be measured by split-dose experiments, i.e. the size of a second dose needed to negate the effect of prolongation in overall treatment time. In such experiments, the jejunum recovers rapidly and well, with the replicating pool being regenerated prior to differentiation being resumed; ϕ becomes close to zero. For skin, ϕ becomes closer to 0.5, whereas regeneration in the testes is very slow and ϕ is little altered. Sperm counts remain low for a long time after exposure. These differences are relevant to the fractionation schedule used clinically and to the retreatment of lesions with RT, but also mean that each tissue must be considered on its merits. The jejunum turns over very rapidly. The crypt houses a niche where basal columnar epithelial stem cells (BSC) reside. At least some BSCs cycle daily and give rise to rapidly proliferating progenitor/transit amplifying cells that differentiate into mature intestinal epithelial cells that journey up the villus and are shed into the lumen [35]. BSCs that express the R-spondin receptor and Wnt target molecule LGR5 are interspersed between Paneth cells, which, together with crypt structure, mesenchymal fibroblasts, endothelium, pericytes and other cell types, nurture and protect the stem cell population by providing Notch, WNT and BMP family members, growth factors such as EGF and cytokines such as IL-11 and IL-22 [36].
After 19 Gy total abdominal irradiation, which is lethal for 50% of defined-flora C3H mice by 5-10 days, jejunal clonogenic cell survival is decreased to about 8 × 10⁻⁶ [37]. (Figure 1, lower panel: IR triggers damage responses that set in motion recurring cascades of oxidative/reductive forces that aim to restore function but instead cause further cell death, senescence and autophagy, and immune involvement in the form of a myeloid shift and chronic inflammation.) The colon is generally more radioresistant and responds later than the small intestine [20,38]. Radiosensitivity varies with the strain of mouse, the length along the intestine, the proximity of Peyer's patches [39] and the microbial flora, as irradiation can promote bacterial translocation across the intestinal barrier [40]. Furthermore, intestinal acute radiation syndrome (iARS) after WBI requires a lower dose than after total abdominal irradiation, a difference that can be neutralized by transfer of immunohematopoietic cells, suggesting 'accelerated' hematopoietic ARS as a potential cause of death [37,41]. The LGR5+ BSCs have been reported variably as radiosensitive [42] and radioresistant [43], but they are generally depleted from the crypt by 3.5 days after 14 Gy and clonogenic epithelial foci begin to appear from a radioresistant subset. In 1977, Potten [44] suggested that these reside at position +4 just above the uppermost Paneth cell; a population we now know to be LGR5− and Bmi1+ [42]. Mindful of the pitfalls involved in the use of markers [45], the data suggest that a Bmi1+ radioresistant reserve stem cell pool, or at least a reprogrammable plastic population, restores LGR5+ cells in the crypt niche as well as the radiosensitive transit amplifying compartment following irradiation exposure [42,46]. It should be noted that reprogramming may be limited by p53-mediated DNA damage responses (DDR) [47] and it is dose and time dependent, senescence sensitive [32] and influenced by non-targeted effects [48]. Bone marrow, under steady-state conditions, contains primitive hematopoietic stem cells (HSC) that are functionally and molecularly heterogeneous but within which is a rare population with unique capability for self-renewal and multilineage differentiation [49]. HSCs are held in niches under the influence of various adjacent stromal cells that present bound or secreted molecules, including the classic hematopoietic CSFs, CXCL12, pleiotropin and many other signals [50,51]. Several niches have been proposed, but differential expression on lineage-negative HSC of signaling lymphocytic activation molecule (SLAM) family receptors, such as CD150, CD48 and CD244, has most clearly defined a vascular niche with HSCs in proximity to arterioles and sinusoids [52]. Numerous influences on HSC behavior by factors derived from endothelium, pericytes and fibroblasts have been described [51]. Indeed, there is growing evidence for a hemangioblast stem cell that can give rise to both HSCs and endothelial progenitor cells [53]. At least those HSC capable of long-term lineage reconstitution in radiation-conditioned hosts (LT-HSC) are largely non-cycling and stay in the bone marrow [54], but produce low levels of progenitor cells that are released into the circulation under normal conditions. Egress is achieved through a complex interplay of cytokines, chemokines and adhesion molecules, particularly involving the SDF-1/CXCR4 axis, purinergic signaling and phosphosphingolipids [50].
As HSCs differentiate they follow a hierarchical 'age' structure with progressive restriction in lineage and self-renewal (Figure 1), which is reflected in the changing composition of exogenous CFU-S on days 5, 8 and 11 after marrow transfer into irradiated hosts [55]. There is also evidence of a self-sustaining myeloerythroid progenitor population [56] that acts as an emergency reserve of immature and maturing leukocytes for rapid mobilization in response to challenge, such as after WBI. Radiation responses in the hematopoietic system are inevitably complex. Circulating lymphocytes are very radiosensitive, dying largely by apoptosis. Levels drop rapidly, even after local irradiation as cells circulate through the radiation field. Circulating myeloid cells by contrast can transiently increase in number within hours after local IR or even lethal WBI, forming part of an emergency mobilization response [57]. Precursors appear in the circulation that express both neutrophil and macrophage markers [58]. These migrate into many tissues, irradiated or not, under the influence of CSF1 [59] and perhaps CCL2, IL-6 and IFN-α/β pathways [58,60]. Later overshoots in the production of myeloid cells in the circulation have been described in many species [57]. Quiescent LT-HSC are relatively radioresistant, but appear to be very sensitive to oxidative stress, which can be generated by even low (0.02 Gy) doses of IR [61], generated directly or through proinflammatory pathways (Figure 1). Radiation-induced senescence is a likely outcome of radiation-induced oxidative stress, as is autophagy [62,63]. Proinflammatory responses are controlled by reductive pathways, especially those under Keap1/ Nrf2 control, that critically regulate LT-HSC levels [61,64]. Nrf2 also regulates PU.1, the master myeloid cell regulator [65], probably through NAD(P)H:quinone oxidoreductase, whose loss leads to myeloid hyperplasia [66]. However, a common result is a long-term defect in HSC reconstituting ability and continuing cycles of oxidative and reductive stress [61,67] and inflammation, with a myeloid shift; a pattern that is repeatedly reinforced [68,69]. These chronic inflammatory events are a hallmark of late radiation effects, including life shortening [69]. G-CSF [70] and probably other inflammatory stimuli can exacerbate the HSC defect, which raises questions as to the use of CSFs as mitigators of radiation damage. These radiation-induced changes appear to be remarkably similar to aging with its LT-HSC defects [71], and upregulation of genes specifying the myeloid cells at the expense of those of the lymphoid system [72]. Perhaps a myeloid shift is the price we pay for having a self-sustaining, partly autonomous myeloerythroid progenitor population that responds rapidly to injury to initiate tissue repair [56]. This population is sufficient to protect against hematopoietic acute radiation syndrome (hARS) in mice and is responsible for the day 8 CFU-S population [73]. Rapid recovery of this progenitor population may allow sufficient time for the regeneration of other hematopoietic compartments with slower turnover. Certainly, promoting this population results in mitigation of hARS and other WBI syndromes [58]. Not surprisingly, this is dose-dependent and after higher WBI doses accelerated hARS can occur, which is presumably due to critical loss of a more differentiated progenitor cell population. 
In the skin and mucosa, squamous epithelia proliferate solely in the basal layer and cell loss is determined largely at the population level [28,74]. Progeny committed to differentiate migrate upwards to form a layer of keratinocytes that are then shed. After IR exposure, cells lost from the proliferating compartment are regenerated by a shift towards symmetrical division and differentiation decreases, although shedding continues unabated. The extent of clonogenic cell depletion determines the outcome. Survival of >10⁻⁶ clonogens per cm² is needed to prevent radiation-induced moist desquamation in mouse skin [18]. Proliferating clones can often be seen in skin and mucosa of patients during conventional RT. The mucosa begins to repopulate 10-12 days after the initiation of treatment and this increases the tolerance of the tissue by at least 1 Gy/day, equivalent to a doubling of clonogens every 2 days [18]. Other epidermal stem cell pools exist in the skin in follicles, glands and other sites that are competent to differentiate along all epidermal lineages. Although, normally, their involvement is restricted to their own lineage, their additional plasticity may manifest in injury situations [28]. In testes, Regaud [12] showed that radiosensitivity decreased with differentiation, radiating from spermatogonia on the periphery to spermatids in the center of the seminiferous tubules [16]. Spermatogenesis is very radiosensitive, with transient infertility around doses above 0.1 Gy that is permanent at 5-8 Gy [75]. Spermatogenic stem cells cycle continuously, but slowly, in the basal layer of the tubules, appearing to divide symmetrically before differentiating into mature haploid spermatozoa, a process that takes about 70 days. Single cell tracing in mice of GFRalpha1+ stem cells in vivo suggests an additional dynamic dimension where syncytial spermatogonia can contribute to stem cell function in homeostasis. After irradiation, spermatogenesis is only partially restored and regeneration is poor [21,76]. Recovery is very slow, taking 1-2 years after 2-3 Gy, with a risk of azoospermia after higher doses [77]. Acute radiation syndromes (ARS) Dose-time lethality curves of iARS and hARS are fairly predictable. iARS occurs before hARS but requires a higher dose. Mortality occurs within a narrow dose window. Increasing the dose decreases latency slightly before a plateau is reached [41,57]. Mortality is generally due to loss of proliferative stem/progenitor cells that fail to provide functional cells, but other causes have been identified. For example, immunosuppression can allow bacterial translocation across the gut and sepsis with lethality that is generally earlier than normal and occurs after lower doses. Immunosuppression may resolve in a couple of months, but full immune reconstitution may take many months or years, or may never occur. Extensive skin damage can also contribute to morbidity and mortality. For example, about 20% of Chernobyl patients who developed ARS had skin lesions involving over 50% of their body surface; some had respiratory tract lesions, probably from isotope inhalation. A small number had beta burns as the primary cause of death [78]. In addition to iARS and hARS, a cerebrovascular/CNS syndrome (CVS/CNS-ARS) can occur within a day or two after exposure to very high radiation doses (e.g. >20 Gy).
This is associated with edema, hemorrhage and neutrophil infiltrates and, although some oligodendrocytes die by rapid radiation-induced apoptosis [79], vascular damage is the most likely culprit, which may be through direct cell kill or radiation-induced TNF-α or VEGF [80]. A detailed description of radiation-induced inflammation [81] is beyond the scope of this review, but is relevant to radiation-induced tissue damage (Figure 1). In brief, although the initial wave of ROS generated by IR through radiolysis of water is over within 10⁻³ s, ROS levels remain high, being generated by biological processes; the main sources being damaged mitochondria and activated NADPH oxidases (NOX/DUOX). Classic ATM-p53-bax DDR cause ROS release from mitochondria. Immune DDR involves damage-associated molecular pattern (DAMP) molecules released from damaged cells, like HMGB1 [60], and cytoplasmic RNA and DNA [82]. Toll-like receptors, RIG-I (RNA) and cGAS/cGAMP/STING (DNA) sensors connect DDR to proinflammatory responses, largely through NF-κB and TBK1/IRF3 pathways, to activate positive and negative feedback loops for senescence, autophagy and cell death that perpetuate redox imbalances [60,83,84]. Oxidative and reductive forces drive polar opposite phenotypes in the immune system and dictate the nature of the cytokines expressed and their role in normal tissue endpoints [85], including fibrosis [86]. One consequence is the establishment of self-sustaining periodic redox alterations and persisting cycles of tissue damage and inflammation. Radiation-induced micronuclei are major sources of cytoplasmic DNA and activate the cGAS/cGAMP/STING pathway. As these are produced during mitosis, they link the production of proinflammatory cytokines, especially type 1 IFN, to cell turnover. Furthermore, late effects in general are characterized by the presence of increasing chronic inflammatory responses that cause considerable morbidity, frailty and life shortening. Late responding tissues The fairly distinct, if plastic, stem, progenitor, functional cell compartments with rapid turnover and acute responses to IR seen in hierarchical tissues [87] can be contrasted with late responding tissues with slow turnover, where it has been less easy to identify the contribution of stem cells to homeostasis and radiation-induced regeneration. This distinction is important, as dose fractionation in the clinic spares slowly responding tissues more than those that show an early response. With the exception of CNS-ARS, the CNS is the classic late responding tissue. In adult brain, neural stem/progenitor cells have been identified in niches in the subventricular zone and in the dentate gyrus of the hippocampus. These are active sites of neurogenesis. Lineage tracing has shown that most stem/progenitor cells cycle and turn over slowly but can be activated to proliferate before migrating away to produce neurons or glia [88]. After adult brain irradiation with the equivalent of single doses of ~15-25 Gy, symptoms can appear in one or more phases: in days to weeks (acute phase), 1-6 months (subacute phase) or around 6 months or more (late phase). Acute and subacute symptoms are normally reversible, but late damage is progressive and more serious. Late effects have a predilection for white matter [89] but the histopathologic picture varies, including coagulation necrosis, vascular fibrinoid necrosis, edema and severe demyelination.
Neurogenesis and proliferation in the hippocampus are inhibited by even low doses of IR [90]. Progenitor cells seem more sensitive than quiescent stem cells and long-term repopulation and recovery are slow [91-93]. Even though transit amplifying cells have been reported to regenerate low-dose irradiated (4 Gy) niches, suggesting some reprogramming can occur [88], the role of stem cells in recovery of the CNS is still controversial. The possible effects of irradiating the stem cell niches in patients with glioma have generated considerable discussion, with the possibility of generating neurocognitive defects being placed against the chances of the niche being the source of glioma stem cells, suggesting that increased radiation dose to the subventricular zone may be associated with longer progression-free survival. From the beginning, opinions have been divided between glial and vascular origins of brain late effects. In general, higher doses tend to precipitate hemorrhagic necrosis that appears slightly earlier and at higher doses than severe neuronal loss following demyelination [94]. Changes in cognition, including spatial and object recognition memory, fear conditioning and pattern separation behaviors have been ascribed to radiation-induced defects in neurogenesis [91,95] but, although important, these can be detected earlier and after lower doses than late demyelination, suggesting that they can be caused by neuroinflammation and oxidative stress. Proinflammatory cytokine expression occurs in mouse brain within minutes of cranial irradiation and thereafter pursues a rollercoaster path with further increases during the subacute and late periods that are associated with diffuse and severe demyelination, respectively, attempts at remyelination, and gliosis with immune cell infiltration and microglial activation [96,97]. TNFR2 is required for proliferation of neural progenitor cells [98] and its loss increases seizures in the brain [99], including after irradiation where subacute lethality is precipitated [80]. Therefore, although there is evidence that radiation-induced damage to the stem cell hippocampal niche can result in cognitive damage, its contribution to high-dose late effects remains indirect. The effects of irradiation on the kidney are most often measured functionally using filtration assays and indicate little recovery; previous irradiation can seriously compromise retreatment [100]. Indeed, it remains controversial whether stem/progenitor cells actually exist in the mammalian adult kidney [101], but Withers and colleagues [22] showed that extensive tubular damage was the dominant lesion after irradiation, preceding glomerular sclerosis. Removing irradiated kidneys 60-68 weeks after exposure, Withers found regenerating epithelialized tubules, the number declining logarithmically with dose. Like the kidney, the liver has low turnover but can be stimulated to regenerate rapidly after surgery, leading to the general assumption that all mature hepatocytes are able to maintain homeostasis. However, lineage tracing in mice using Wnt-responsive Axin2 identified a population of proliferating and self-renewing diploid cells adjacent to the central vein in the liver lobule that could give rise to mature polyploid hepatocytes [102] and LGR5+ adult liver stem cells can be grown as organoids [103].
The origin of the liver clones that grow on transplantation to recipient mice, and whose radiation characteristics have been examined and found to be those of a late responding tissue, is not known [23,104]. Future clinical relevance An important premise in RT over the last century is that normal tissues (and tumors) vary in their response to dose fractionation, with late responding tissues with slow turnover being spared by fractionation more than early responding tissues. Although this is generally true for highly fractionated schedules, in recent years a variety of developments have been introduced into the radiation oncology clinic that aim to limit the amount of normal tissue within the radiation field. Although this is certainly desirable, the extent to which total tumor dose can be increased is debatable, as supralethal doses of radiation do not necessarily improve outcome [105]. The impact of this technology on the responses of acute and late responding tissues and the 4Rs of dose fractionation is worthy of discussion. CT/MRI imaging and faster computers have allowed intensity-modulated RT with multileaf collimators shaping the beam to conform more closely to tumor shape while minimizing the amount of normal tissue exposed to high-dose RT. Dose-volume histograms are produced that estimate the dose to different tissues. As a result, the use of hypofractionated regimens with RT given in one to five fractions has gained in popularity as it is more convenient for both patients and clinicians. Isoeffective doses for the change in size of dose per fraction can be estimated using linear (α) and quadratic (β) exponents, although there is considerable uncertainty when extrapolating to high single doses. Hypofractionation makes sense, particularly for tumors that have a slow turnover in sites where there is little advantage from prolonged fractionation, e.g. prostate, and there is little to be gained from the enhanced recovery that the use of low doses per fraction brings to late responding normal tissues. However, as stated earlier, it should be remembered that proliferation of acute responding normal tissues increases their radiation tolerance. Indeed, many regenerate faster than most tumors. As a result, shorter, more intense treatments can increase acute effects by not allowing enough time for their regeneration, even if the dose is calculated to be isoeffective for late effects and tumor. Also, volume effects, location and the organization of normal tissue FSUs become more critical when intensity-modulated RT is used and there are many unanswered questions in this regard. Finally, the impact of hypofractionation on dose-related inflammation and immune activation in vivo is still uncertain, even though it is known that some patients can generate tumor immunity that can be boosted by RT, something that has resulted in a large number of clinical trials combining RT with immunotherapy. In a similar vein, charged particles, especially protons and carbon ions, have been introduced in some centers. Charged particles have a lower entry dose and form Bragg peaks where most of the energy is deposited. It is possible to spread out the Bragg peak to completely cover the tumor with rapid fall-off. The paths of charged particles are very different from photons and their relative biological effectiveness (RBE) is higher, dose for dose. The RBE for protons is only slightly more than for photons but carbon ions have far higher values.
Again, dose corrections can be made for RBE, although this varies with the tissue and along the path, for example for protons it increases at the distal end of the Bragg peak. These uncertainties in the magnitude and location of high RBE radiations are a concern for late responding more than acute responding tissues. FLASH RT, in which IR is delivered at an ultrahigh dose rate (>100 Gy/s) compared with conventional RT (0.1 Gy/s), is also being tested. FLASH RT appears to give neurocognitive benefits by decreasing oxidative stress and neuroinflammation [106]. A full explanation has still to be established, but it may relate to alterations in the chemical interactions between ions and radicals in space and time, with different species being generated. Conclusions Different tissues have different responses to IR. Tissues with rapid turnover and continuously cycling stem/progenitor populations respond acutely after exposure and regenerate rapidly. Such tissues show less effect of dose fractionation as long as regeneration is not compromised. Tissues with slow turnover respond late to IR and may have less dependence on stem/progenitor cells for regeneration and may rely more on proliferation and reprogramming of more mature cells. In these tissues, chronic inflammation appears to play a greater role, directly or indirectly, in causing tissue failure. These differences are important in considering clinical RT and cancer treatment and may need further consideration with the rapid expansion into the clinic of novel technologies whose radiobiological effects are less well known. They are also relevant to mitigation of the effects of radiological exposure in accidents or terrorist action.
The knowledge map of gender equality in cross-cultural communication: A bibliometric approach It is urgent to solve gender issues in global cross-cultural communication, and countries worldwide should be responsible for achieving gender equality (SDG5). Hence, this study aims to portray the knowledge map of the gender issue in intercultural communication and to explore the research status and future potential. The study used CiteSpace to conduct a bibliometric analysis of 2728 English articles on cross-cultural communication and gender equality topics from the Web of Science (WoS). After cluster analysis and time series analysis, the study highlights the continued attention to, and increasing number of, publications and elaborates on the critical authors, institutions, and countries of research on this issue. The results identify Putnick as the dominant author contributing to the topic, and the University of Oxford ranked first in the institutional cooperation network. European countries and the United States have made major contributions and influenced Asian and African countries, such as Burkina Faso, North Macedonia, and Kosovo; gender issues in Asia and Africa are receiving much attention. The keyword clusters formed by the authors' cooperation include gender equality, life satisfaction, network analysis, and alcohol use. In addition, childbirth technology, patient safety competence, life satisfaction, capital safety, and sex difference are the keyword clustering results of institutional cooperation. At the level of national cooperation, internet addiction, risky sexual behavior, the COVID-19 pandemic and suicidal ideation have become the main keywords. The results of keyword cluster analysis show that gender role attitude, psychometric properties, dating violence, professional fulfillment, and entrepreneurial intention have become the main topics of current research. The research frontier analysis reflects the importance of gender, women and health, and research on self-efficacy, diversity, image, life satisfaction and choice has become the trend of cross-cultural communication and gender studies. Furthermore, abundant achievements have emerged in the subjects of Psychology, Education, Sociology, and Business Economics, while Geography, Language and Literature, Medicine, and Health have also been highly influential in recent years. Therefore, the conclusion suggests that studies of gender issues can be further deepened through cooperation across more authors, regions, subjects and other sectors. Introduction to the results of bibliometric analysis of gender issues and intercultural communication. village and the urban system and, thus, in the sustainable development of rural areas [54]. Therefore, it has become a trend for intercultural communication to promote sustainable development [25]. Although more fields and researchers are joining this area, it is necessary to understand how intercultural communication affects sustainable development, so as to provide new ideas and contribute to sustainable development in the era of globalization. Gender equality and sustainable development Gender equality is essential in pursuing the SDGs, which emphasize the equal rights of women, children, and sexual minorities in social development and promote harmony between people and society [55]. Gender issues remain acute in many countries in the 21st century [21].
However, evidence points to cross-national convergence as well as persistent (or even growing) heterogeneity in women's status when different aspects of gender inequality are considered [56]. To make sense of this contradiction, Cole et al. (2018) examine how culture moderates the relationship between economic development and gender inequality [56]. Gender equality itself is part of the concept of sustainable development. Shannon, Jansen [8] provided evidence for why gender equality in science, medicine, and global health matters for health and health-related outcomes. Alarcón et al. (2019) explored the interconnections between the Sustainable Development Goals (SDGs) and tourism from a gender perspective [57]. Scarborough et al. (2019) show that gender attitudes have more than one underlying dimension and that these dimensions have changed at different rates over time [58]. In light of the sustainability goals introduced through the UN's 2030 Agenda for Sustainable Development, Levin et al. (2019) present a model to systematically address gender mainstreaming in transport planning [59]. Lau et al. (2021) provide an overview of four common gender assumptions and offer four suggestions for a more scholarly pursuit of gender equality in climate change policy and practice [60]. These achievements fully demonstrate the importance of gender equality for sustainable development. Moreover, evidence on the relationship between gender equality and sustainable development in the Middle East and North Africa shows that adolescents have a substantial demographic dividend advantage and are likely to make more contributions to economic growth and development [61]. The goal of sustainable development is characterized by gender intersection. Girls are an essential part of the demography on the way to achieving the SDGs, especially the mainstreaming of gender equality into all SDGs [62]. From an anthropological perspective, gender equality also has particular legal value [63]. In addition, a study on gender equality from Nigeria confirmed that gender equality is conducive to achieving the country's expected sustainable development [64]. In summary, many studies in the past five years have focused on cross-cultural communication and gender equality, and many results have addressed the relationship between the two and sustainable development. However, the discussion of gender issues in intercultural communication is still not deep enough, and the fields involved are not extensive enough. Some trend analyses of cross-cultural communication research already exist [2], but almost no work has paid attention to gender research in intercultural communication. In particular, these findings rarely focus on what gender equality in intercultural communication implies for sustainable development. Drawing on a large body of academic work, this paper analyzes gender research in cultural communication, examines research trends, authors, institutions, national cooperation, discipline distribution, and keyword contributions, and provides corresponding suggestions for the development of this field and for promoting sustainable development. Research method Web of Science (WoS), as one of the primary databases for bibliographic research, covers cutting-edge research in many disciplines and is a high-quality tool commonly used in bibliometric analysis [65]. The study used the advanced search function in WoS, restricted to research published after 2018, and ran the following three search queries.
The first query (1) requires the literature topic to contain both cross-cultural and gender, restricted to English articles, and returned 1236 results. The second query (2) requires the topic to include cross culture and gender, again restricted to English articles, and returned 1258 results. The third query (3) requires the topic to include cross-culture, cross-cultural, or cross cultural, together with gender, within English articles; it returned 2728 results in total, and after deleting the articles already retrieved by the previous searches, 1324 results remained. As one of the main tools for visual analysis of knowledge graphs, CiteSpace has made an essential contribution to analyzing research status and trends in various fields [66]. The study imported these results (3818) into CiteSpace, screened them as shown in Table 3, kept only article-type records, and finally used 2728 results for the knowledge map analysis. That analysis includes discussions of timing, authors, institutions, and keywords to reveal the current status and future of gender research in intercultural communication. Based on these articles, the software performed cluster analysis and time series analysis of different categories over the "title + topic + abstract" fields, identified clusters under the author, institution, country, and subject categories, and traced the development of and relationships among related topics through time series.
(1) (((TS=(cross-cultural)) AND TS=(gender)) AND DT=(Article)) AND LA=(English)
(2) (((TS=(cross culture)) AND TS=(gender)) AND DT=(Article)) AND LA=(English)
(3) (((((TS=(cross-cultural)) OR TS=(cross culture)) OR TS=(cross cultural)) AND TS=(gender)) AND LA=(English)) AND DT=(Article)
(TS = topic; DT = literature type; LA = language)
The research first used the three retrieval formulas to obtain the data from WoS and then used CiteSpace to perform de-duplication; the resulting 2728 articles serve as the primary reference data for the next step of the analysis. In the following sections, the research analyzes the year of publication, author cooperation, institutional cooperation, national cooperation, discipline cooperation, and keyword co-occurrence of research on gender equality in cross-cultural communication using bibliometric methods. Almost every part determines the clustering relationships in the knowledge map through cluster analysis to identify the current status of, and future research clues for, cross-cultural communication and gender equality. Years of publications CiteSpace conducted an annual analysis of the 2728 results; as Table 4 shows, the number of publications has increased year by year since 2018, from 438 in 2018 to 675 in 2021. Research on gender in cross-cultural contexts has thus developed steadily in the academic community in recent years, and many results have promoted the development of this field. Although only half of 2022 has passed, the number of publications has already reached 368, and by the end of the year the number of related publications may exceed 700. Therefore, the study of gender issues in cross-cultural contexts is in line with current research hotspots and trends. The outcome reflects the importance of cross-cultural gender issues for human social development, and observing gender issues from a new cultural communication perspective is more conducive to solving gender-related social problems in the world and promoting the realization of SDG5 (gender equality).
Knowledge maps of author cooperation The analysis of the authors' cooperation knowledge map aims at identifying authors who have made outstanding contributions to research in the field and at interpreting the cooperation between authors through relationship nodes [67]. Cluster computation on author cooperation produced five clusters, and the study summarized the authors associated with each cluster of the collaborative knowledge atlas. Putnick made related efforts around the theme of the alignment method. WHO and Taber have contributed significantly to gender equality. Brooks emerged under the life satisfaction cluster, and the network analysis work of Chang et al. was relatively active in the middle of the map. Alignment method and gender equality became the most central topics of discussion. As the largest author-cooperation theme cluster, the alignment method cluster has an average year of 2015 and a size of 36. Gender equality, with an average year of 2018 and a size of 21, is the second cluster. Similarly, life satisfaction, a topic of discussion around 2019, has a size of 19. Network analysis has an average year of 2018, and alcohol use an average year of 2016. Table 5 explicitly introduces the keywords formed by the different clusters and their sizes. Therefore, the topics and hotspots of academic concern differ across periods. The scholars who explored this topic in 2018-2019 were the most numerous and concentrated. Authors from European and American countries seem to be more interested in the issue of intercultural communication and gender equality [68], and they have pushed this topic into research fields worldwide. Intercultural communication and gender equality are indeed concerns that involve the well-being of all humanity [69]. Hence, Table 6 lists the main authors contributing to the topic of cross-culture and gender. Knowledge maps of institutional cooperation The atlas of institutional cooperation can reflect the leading institutions worldwide that study gender equality in cross-cultural communication. Through the visual analysis of institutional cooperation, the themes and development of cooperation between institutions can emerge [67]. The study used the "title + keyword + abstract" fields under the CiteSpace institutional cooperation map for cluster analysis. A total of 10 clusters were generated: childbirth technology, patient safety competence, life satisfaction, marital satisfaction, sex difference, alcohol use severity, emerging adult, network analysis, psychogenic nonepileptic seizure, and middle eastern countries. The University of Washington and the University of Southern Denmark are still working on children and patients. Zhejiang University's attention to marital satisfaction also continues around 2022, and Loyola University has likewise contributed to the study of gender roles. In terms of annual distribution, since 2018 the University of Washington, the University of British Columbia, the University of Auckland, the University of Hong Kong, and the University of Amsterdam have been working on these types of research. The University of Chicago, Harvard University, and the University of Edinburgh have also gradually produced research results since 2019. Even more notably, relevant research at institutions in Asian and African countries, such as Korea University, Sudan University, Wuhan University, and Nigeria University, has promoted cross-cultural gender research since around 2020.
From an overall point of view, the contribution of universities in Europe and the United States to this topic is relatively high. In recent years, universities in Asia and Africa have also been influenced by cross-cultural and gender studies at European and American universities, making essential contributions to research on adults, the Middle East, and pathology. Knowledge maps of country cooperation The analysis of the national cooperation map aims to make it easier to identify the major countries that have contributed to the thematic research. The national cooperation knowledge map mainly reveals the development achieved by different countries at particular times and the research alliances formed with other nations, and it can also reflect the main themes pursued by different countries [67]. Six clusters were formed using the clustering method of "keyword + title + abstract" based on national cooperation. Pakistan and Italy have corresponding results under the clustering of addictive networks around 2018, which will also influence this topic in 2022. Spain, Guinea-Bissau, and Qatar have made efforts since 2018 to study risky sexual behavior in Aruba, and Senegal has continued to be influenced by these countries in discussions of this topic over the last year or two. Furthermore, India, Australia, Sri Lanka, Tanzania, and Vietnam contributed more to the Dunning-Kruger effect cluster around 2018, and Brunei, Rwanda, and other countries continued to produce results. The research under the Dunning-Kruger effect cluster shows that work from developing countries is more prominent, especially in the Asian and African regions. European countries such as Finland and Portugal led the study of the InToDermQoL questionnaire in the early stages; in the past two years, Uzbekistan, Nigeria, Angola, and other countries have also appeared in the InToDermQoL questionnaire cluster, with Montenegro emerging as a representative. The COVID-19 pandemic is a hot topic that has been studied continuously since 2020, and China and Togo are the main representatives in this cluster. As for suicidal ideation, studies focused on it almost entirely until 2020, with few national studies observing suicidal ideation in the 2021 and 2022 clustering. At the same time, the study of gender in cross-cultural contexts was, in the early stage, mostly the work of developed countries in Europe and the United States, which made the greater contributions; from the perspective of time, their impact on later research is also beyond doubt. However, after 2019, developing countries, mainly in Asia and Africa, paid more attention to the problem of sex ratio in their own countries. Fig. 2 shows that Burkina Faso, North Macedonia, and Kosovo are particularly prominent. From 2018 to 2020, Morocco and Suriname, with a strength of 0.43, became the regions of most significant concern, which means that the world is currently paying particular attention to cross-cultural gender issues in these countries. It is not difficult to see that these regions are mainly African, which reflects both the plight of the African region in this regard and the contribution the world has made to addressing it. The study used CiteSpace to analyze the centrality of the national cooperation atlas, selecting the countries whose centrality ranks in the top ten.
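The centrality measure used here (defined in the next paragraph) is a betweenness-style measure computed by CiteSpace; as a rough, hypothetical illustration only, the following sketch assumes the networkx library and a made-up country co-occurrence edge list rather than the actual CiteSpace data.

```python
# Hedged illustration of a betweenness-style centrality ranking over a
# hypothetical country co-authorship network (not the study's real data).
import networkx as nx

# Made-up edges: country A co-occurred with country B in at least one article.
edges = [("USA", "UK"), ("USA", "China"), ("UK", "China"),
         ("China", "Pakistan"), ("USA", "Spain"), ("Spain", "Qatar")]

G = nx.Graph(edges)
centrality = nx.betweenness_centrality(G, normalized=True)

# Rank countries by centrality, analogous to the top-ten list reported in Table 7.
for country, score in sorted(centrality.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{country}: {score:.2f}")
```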
Centrality counts the number of shortest paths passing through a node in a network and measures the connecting role that the node plays in the overall network [70]. Moreover, the counts reflect the number of countries contributing to the topic. Nodes with high centrality tend to become critical nodes in the network; these results are presented in Table 7. Knowledge maps of keywords Keyword co-occurrence analysis is the foremost step of visual analysis and one of the most powerful data processing functions of CiteSpace [67]. The keyword map analysis aims to reveal the main topics and hotspots that may be involved in gender equality in cross-cultural communication. To some extent, the keyword map allows researchers to interpret the current research situation and obtain clues for future research. The study used the clustering method of "keyword + title + abstract" to form 9 clusters at the keyword analysis level. These clusters are: expanded gender role attitude, psychometric properties, dating violence, professional fulfillment, entrepreneurial intention, social skill, physical activity, interpersonal deviance, and work-family practices. There is a vast number of nodes in 2018, indicating that the discussion of this topic in 2018 was particularly prominent; that year focused on topics such as self-efficacy, adults, children, age, women, experience, cultural behavior, gender differences, and discrimination. After that, the research pushed the topic into new areas, and keywords such as life satisfaction, medals, illness, personal characteristics, and wisdom gradually appeared. Keywords such as variance structure analysis, structural equation models, capital, and workforce suggest that research on this topic is entering the management field in 2022, primarily using quantitative research methods to solve related problems. The keyword strongest-citation-burst analysis emphasizes the development track of relevant topics and can point to future research, because keyword bursts indicate the most popular research directions in the field [67]. Fig. 3 highlights the 25 keywords with the strongest citation bursts. The study found that in 2018-2019, validation factor analysis had the strongest burst, meaning that much research on gender in cross-cultural contexts takes a quantitative perspective and that academia is trying to address the correlated-factor problem. Secondly, keywords such as material use, collectivism, individualism, and commitment indicate that the study of this topic involves multiple fields, especially with respect to culture. Highly cited keywords also include job satisfaction, business, employee, and occupation: gender studies in cross-cultural contexts have contributed to economic management, especially human resource management. In the past two years (2020-2022), the topic has been examined more from a psychological perspective, as suggested by keywords such as life satisfaction, image, self-efficacy, death, and diversity; hence, academia should continue to focus on psychological research in the context of gender issues in intercultural communication. Fig. 3. Top 25 keywords with the strongest citation bursts. Table 8 shows that the keyword with the largest number of nodes is gender, which occurred 422 times. The study only selected keywords with a node count greater than 100: women, health, gender differences, behavior, attitudes, prevalence, culture, models, and validity.
Among gender issues in cross-cultural contexts, women's issues in particular stand out in terms of gender discrimination, health, and psychological aspects such as behavior and attitudes. Among these keywords, perception has the strongest centrality (0.03), indicating that perception has the most significant impact and reflecting that the study of gender issues in cross-cultural concepts is a hot topic. On the other hand, the selected keywords all date from 2018, which means that the 2018 studies greatly inspired subsequent work and are also crucial. Knowledge maps of research subjects The strongest-citation-burst analysis of subject categories emphasizes the main research disciplines involved in cross-cultural communication and gender equality, which can provide more opportunities for the integration of disciplines in future research and also make up for the lack of research in some disciplines [49]. Fig. 4 analyzes the most highly cited disciplines, finding that geography, rehabilitation, philosophy, psychology, biology, chemistry, and primary health care were frequently cited until 2020. After 2020, linguistics, health policy and services, and medical informatics became the leading disciplines of cross-cultural and gender studies. Table 9 lists the disciplines with more than 100 nodes. The counts in Table 9 represent the number of publications contributed by each discipline. It is clear that psychology (622) and SSCI (588) have absolute advantages and have become the primary disciplinary sources of cross-cultural and gender studies. In addition, the fields of environment, health, business, and medical care have also contributed a lot to this topic. Moreover, the field of education (147) is among the main contributing subjects. From the perspective of centrality, the SSCI node (0.91) is the most prominent and clear. Psychology also has a certain degree of connection with other disciplines (0.25), whereas Psychology, Multidisciplinary (Social Science Citation Index, SSCI) is relatively independent and difficult to relate to other disciplines. Discussion and conclusion Cross-cultural communication has almost become a central theme in the current process of building a community with a shared future for humanity [71]. The challenges faced in management [72], behavior [73], psychology [74], and social media [75], the opportunities of cross-cultural communication [31], and its importance to the development of the times have long been important topics in scientific research. As gender issues are an essential aspect of SDG5, gender issues will inevitably be discussed in intercultural communication [76]. Gender roles in cross-cultural communication [28] (abuse of women and children, female roles, feminism), gender language [77] (women's voice, female expression, children's language), and gender literature have become the main aspects of gender issues worldwide. Nonetheless, few studies have summarized and reviewed these achievements. This article belongs to the primary stage of bibliometric analysis. The study therefore focuses on gender equality in cross-cultural communication, revealing the critical areas involved in this topic between 2018 and 2022, as well as the relevant research status, providing more ideas for cross-cultural communication and gender equality research and guiding a specific direction.
This paper presents the results of a bibliometric analysis of gender equality in the context of intercultural communication, an area in which gender issues are crucial to sustainable development. Bibliometric analysis of cross-cultural communication has made some breakthroughs, but the existing work is relatively narrow, approached mainly from the perspective of communication science, and hardly combines the gender perspective. The present results therefore have new significance for sustainable development, and the innovation in research ideas and methods breaks through previous research's exclusive focus on gender issues. Both qualitative and quantitative research form the basis for the conclusions of this paper and can inspire future research. The work helps to sort out the development trends of cross-cultural and gender research, the hot topics, the current status of author and institutional cooperation, and the directions of future research. The achievements of gender studies in cross-cultural contexts have increased year by year, indicating that the topic has gradually attracted more and more attention worldwide (Table 8 lists the main keywords on the topic of cross-culture and gender). From the perspective of the national cooperation map, the cross-cultural gender problem is still more severe in the Asian and African regions. In the early stage, developed countries and world-class universities made important achievements on this topic, and over the past two years they have continued to draw the attention of developing regions such as Asia, the Middle East, Africa, and Latin America to gender issues. It will take a long time to achieve gender equality, and gender equality is only one aspect of gender issues. From the perspective of national cooperation, the United States, the United Kingdom, and China have contributed the most to cross-cultural and gender research, focusing on gender issues in their own countries or other regions; these countries have highlighted their attention to SDG5. Moreover, among these countries, the University of Washington, Harvard University, Zhejiang University, the University of Amsterdam, and the University of Southern Denmark have become the main force of cross-cultural and gender studies and have led higher education institutions in Asia and Africa to devote themselves to this line of research. The thematic atlas illustrates that gender issues in current cross-cultural contexts are represented in many categories, such as gender roles, marriage, violence, alcohol addiction, sexual assault, child abuse, work and family life, life satisfaction, employment, entrepreneurship, social skills, psychological problems, and much more. The foci of different periods and different scholars are not identical. In the past two years, research has paid more attention to gender topics from the perspectives of psychology, business and economics, education, linguistics, and medicine, examining issues such as women's employment and life satisfaction, work self-efficacy, death, abuse, sexual health, and gender language and literature. These results encourage researchers to enter the field from many directions and to further focus on gender issues in intercultural communication in the context of the COVID-19 pandemic. Specifically, researchers can discuss cross-cultural gender issues from the fields of education, business, and politics rather than simply discussing the relevant content of their own cultures.
Furthermore, the study encourages more scholars to conduct comparative research on gender issues in different cultures so that more fields can pay attention to gender issues across cultures. Such results can help address the negative social developments caused by gender discrimination and gender roles. On the other hand, in discussing gender issues in intercultural communication, while addressing the topics of women and children, we should also pay more attention to the issue of sexual minorities across cultures. More and more countries have actively addressed the problems of sexual minorities through laws, policies, and insurance, and these individual results can instruct and guide different societies to pay attention to these groups. Nevertheless, the analysis of the study literature gives no indication that sexual minorities in cross-cultural contexts are receiving sufficient attention. Hence, the study points to the need to introduce relevant gender policies in cross-cultural communication, including gender issues in education and gender protection and respect in the workplace, together with continuous policy support. Comparing gender policies in different countries is one of the critical methods to promote gender studies in intercultural communication. Developed countries should pay more attention to women's issues in Africa and the Middle East, especially gender discrimination in cross-cultural communication. At the same time, cooperating institutions in different countries should strengthen research alliances on cross-cultural communication and gender equality and further realize SDG5 through education, enterprises, and employment. The bibliometric results lead researchers to recognize the significance of studying intercultural communication and gender equality for SDG5. However, there are also some limitations. Only data from WoS were used as the dataset; in the future, relevant content may be mined from more databases, including reliable documents in Scopus, SSCI, and ProQuest. In addition, the research results of the past five years mainly reflect the current research status, so the exploration of this topic from the perspective of diachronic development, especially its origin and development process, remains limited. If we examine intercultural communication and gender equality from a macro perspective, future research must focus on the whole development process of the topic. Finally, in data generation the query topic is particularly critical; the research should probably do more synonym or similar-phrase retrieval to expand the search scope, which is also a problem that should be paid attention to in future research. In the future, other researchers could contribute more diachronic, comparative, and empirical research to address cross-cultural and gender issues in different fields, continue serving SDG5, and pursue more opportunities to respect and love people and lives after the COVID-19 pandemic. This article explains the current hot topics of cross-cultural and gender studies from the perspective of literature atlas analysis, together with the scholars, institutions, and countries working on the topic.
The study describes the positive year-by-year development of this topic and encourages more subject areas to participate in the discussion: not only psychology, management, pedagogy, language, and literature, but also other disciplines closely related to cross-cultural research should pay attention to this aspect. Gender issues manifest themselves differently in different times and regions, so there is still a long way to go to solve this problem.
6,427.4
2023-05-01T00:00:00.000
[ "Economics" ]
Beyond radial profiles: Using log-normal distributions to model the multiphase circumgalactic medium Recent observations and simulations reveal that the circumgalactic medium (CGM) surrounding galaxies is multiphase, with the gas temperatures spanning a wide range at most radii, $\sim 10^4\ {\rm K}$ to the virial temperature ($\sim 10^6$ K for the Milky Way). Traditional CGM models using simple density profiles are inadequate at reproducing observations that indicate a broad temperature range. Alternatively, a model based on probability distribution functions (PDFs) with parameters motivated by simulations can better match multi-wavelength observations. In this work, we use log-normal distributions, commonly seen in simulations of the multiphase interstellar and circumgalactic media, to model the multiphase CGM. We generalize the isothermal background model by Faerman et al. 2017 to include more general CGM profiles. We extend the existing probabilistic models from 1D PDFs in temperature to 2D PDFs in density-temperature phase space and constrain their parameters using a Milky Way-like {\tt Illustris TNG50-1} halo. We generate various synthetic observables such as column densities of different ions, UV/X-ray spectra, and dispersion and emission measures. X-ray and radio (Fast Radio Burst) observations mainly constrain the hot gas properties. However, interpreting cold/warm phase diagnostics is not straightforward since these phases are patchy, with inherent variability in intercepting these clouds along arbitrary lines of sight. We provide a tabulated comparison of model predictions with observations and plan to expand this into a comprehensive compilation of models and data. Our modeling provides a simple analytic framework that is useful for describing important aspects of the multiphase CGM. INTRODUCTION Several independent observational probes over the last decade have uncovered the circumgalactic medium (CGM hereafter), the diffuse atmospheres around galaxies like our Milky Way (for a review, see Tumlinson et al. 2017 and Faucher-Giguère & Oh 2023). Being diffuse, the CGM is hard to detect but is the major baryonic component of galactic halos, making up to a few times more mass than the combined mass of the stars and the interstellar medium (ISM; Werk et al. 2014; Das et al. 2020). Traditionally, the CGM is modeled with a parametric density profile/distribution of the volume-filling hot phase (Maller & Bullock 2004; Henley & Shelton 2010; Sharma et al. 2012b; Gupta et al. 2012; Miller & Bregman 2013; Mathews & Prochaska 2017; Yao et al. 2017; Stern et al. 2019; Yamasaki & Totani 2020; Faerman et al. 2020). This approach is incomplete because observations (e.g., Werk et al. 2014; Tumlinson et al. 2013) show that most sightlines intercept not only the hot phase but also the ions tracing the cold/warm phase ($10^4 - 10^{5.5}$ K). Traditional models that only include the hot phase therefore fail to explain the ubiquity of the cooler phases in observations. Presently, there are only piecemeal physical models to account for the cold/warm phases. For instance, to explain the observed OVI (O$^{+5}$) column densities, Faerman et al.
2017 propose an ad hoc introduction of a $10^{5.5}$ K phase. Likewise, Faerman & Werk 2023 presume a fixed volume fraction for the cold phase in ionization/thermal equilibrium. Since the cold phase has low thermal pressure, it is necessary to have large non-thermal support to maintain the total pressure balance with the hot phase. However, the conclusions drawn from such models depend heavily on the model assumptions. Therefore, generating multi-wavelength observables from a diverse range of physical models is imperative. To address the limitation of simple profiles in describing a multiphase CGM, we generalize the model of Faerman et al. (2017) based on a log-normal volume distribution across the complete range of CGM temperatures. This model is physically well-motivated, since log-normal distributions of densities and temperatures are routinely inferred from observations and simulations of different phases of the ISM (Körtgen et al. 2017; Chen et al. 2018) and the CGM (Das et al. 2021b; Vijayan & Li 2022; Mohapatra et al. 2022). Further, a log-normal temperature distribution captures the "core" (up to a quadratic-order Taylor series expansion in $\ln T$) of a generic peaked temperature distribution. Because of the central limit theorem, log-normal distributions are understood as a natural outcome of generic multiplicative random walk processes. Additionally, log-normal distributions are a useful choice for modeling the multiphase CGM, as the weighted integrals of log-normal PDFs (probability distribution functions) involved in calculating different synthetic observables are analytically tractable. We generalize the Faerman et al. (2017) (FSM17 hereafter) model of the CGM to allow for different phases in any generic thermodynamic state. In the FSM17 model, the baseline isothermal log-normal PDF of the hot phase is modified to include an additional warm log-normal component at $10^{5.5}$ K. To demonstrate the flexibility of our generalization, we replace the isothermal baseline model of FSM17 with an isentropic profile (following Faerman et al. 2020) but retain the same prescription for the modified warm component. Our generalization can incorporate any arbitrary thermodynamic relation between phases. Specifically, we consider the hot and warm phases to be either isochoric or isobaric with respect to each other. We then compare the observables from all these models (OVI/VII/VIII/NV column densities, X-ray emission measure [EM], emission spectra, and dispersion measure [DM]) with the observed data. Figure 1. Flowchart of the procedure proposed in FSM17 to produce a warm phase from an isothermal unmodified profile (see section 2.1): generate unmodified gas profiles, create modified PDFs from the unmodified PDFs, and estimate local & global densities from the modified PDFs. We generalize the FSM17 procedure to model any generic thermodynamic profile of the CGM (see section 2.2) with the flexibility to have an arbitrary pressure-density relation across the phases (see section 2.3). Synthetic observables can be generated to match against observations after profiles are computed (see section 2.4).
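Because the analytic tractability of these weighted integrals is a key advantage of the log-normal description, the short sketch below numerically checks two identities that follow from the construction used here (a Gaussian volume PDF in $x = \ln T/T_{\rm med,V}$ with density $\propto e^{-x}$ inside an internally isobaric phase): the mass-weighted mean temperature is $T_{\rm med,V}\,e^{-\sigma^2/2}$ and the median of the mass PDF is $T_{\rm med,V}\,e^{-\sigma^2}$. The parameter values are illustrative only.

```python
# Minimal sketch (illustrative parameters): log-normal volume PDF in
# x = ln(T / T_medV) with width sigma; within an internally isobaric phase
# rho is proportional to exp(-x), so mass weighting shifts the log-normal analytically.
import numpy as np
from scipy import integrate

T_medV = 1.5e6   # K, median temperature of the volume PDF (illustrative)
sigma = 0.5      # log-normal width (illustrative)

def P_V(x):  # volume PDF: Gaussian in x with zero mean and width sigma
    return np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

x = np.linspace(-10 * sigma, 10 * sigma, 20001)

# Mass weighting: dM ~ rho dV ~ exp(-x) P_V(x) dx (isobaric within the phase)
w_M = np.exp(-x) * P_V(x)
w_M /= integrate.simpson(w_M, x=x)          # normalized mass PDF

T_mass_avg = integrate.simpson(T_medV * np.exp(x) * w_M, x=x)
cdf_M = integrate.cumulative_trapezoid(w_M, x, initial=0.0)
x_med_M = np.interp(0.5, cdf_M, x)          # median of the mass PDF in x

print("mass-weighted <T>:", T_mass_avg, "analytic:", T_medV * np.exp(-sigma**2 / 2))
print("mass-PDF median T:", T_medV * np.exp(x_med_M), "analytic:", T_medV * np.exp(-sigma**2))
```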
We next construct a one-zone, three-phase model comprising hot ($\sim 10^6$ K), warm ($\sim 10^5$ K), and cold ($\sim 10^4$ K) gas. Each phase is modeled as a two-dimensional log-normal PDF in the density-temperature ($n$-$T$) phase space. We adjust the volume fraction and the median density and temperature, along with their corresponding spreads, for each phase to match the $n$-$T$ distribution measured for a Milky Way-like halo from the Illustris TNG50-1 cosmological simulation (Nelson et al. 2020). From this fitted three-phase model, we generate column densities of OVI and MgII (the warm and cold gas tracers, respectively), estimate the EM, DM, and X-ray surface brightness, and compare with the observations. Additionally, we formulate a simple, approximate prescription to move beyond the one-zone approximation in our three-phase model of the CGM. Our paper is organized as follows. Section 2 explains the two-phase FSM17 model and introduces our generalized framework and the integrals needed to generate observables from the PDFs. Section 3 introduces the three-phase model with a 2D (in density-temperature space) log-normal distribution for each phase, calibrated with a simulated Milky Way-like halo from the Illustris TNG50-1 cosmological simulation. Section 4 discusses the implications of our work for the multiphase CGM, in particular, the influence of warm/cold clouds with a small volume filling fraction. Section 5 concludes with a summary of our work. GENERALIZED FAERMAN MODELS In this section, we elaborate on the model introduced by FSM17. We list the symbols and notation used to reformulate the FSM17 model in Tab. 1, crucial for generalizing to a broad class of probabilistic CGM models. Fig. 1 outlines the steps discussed in section 2.1 for generating these probabilistic models. As a specific example, in section 2.2 we replace the isothermal hot gas profile in FSM17 with an isentropic profile but retain the same prescription for the warm gas. In section 2.3, we emphasize the implication of choosing isochoric or isobaric radiative cooling to generate the warm phase. The thermodynamic state of the hot and the warm gas used in these models can significantly alter synthetic observables, as discussed in section 2.4. FSM17 model for the CGM Here we list the different steps involved in using the FSM17 model to study the CGM (see Fig. 1): (i) Create unmodified hydrostatic profiles for the volume-filling gas in the CGM. We denote unmodified profiles with superscript $(u)$, such as $n^{(u)}$, $T^{(u)}$, $P^{(u)}$. Tab. 1 summarizes the notation: $n^{(u)}, T^{(u)}, P^{(u)}$ are the unmodified profiles of the hot CGM; $dM_T^{(i)}, dV_T^{(i)}$ are the mass and volume of a phase $i$ in the temperature range $[T, T+dT]$; $dM_T = \Sigma_i\, dM_T^{(i)}$ and $dV_T = \Sigma_i\, dV_T^{(i)}$ are the total mass and volume in $[T, T+dT]$; $M^{(i)}, V^{(i)}$ are the total mass and volume of a phase across all temperatures; $M = \Sigma_i M^{(i)}$ and $V = \Sigma_i V^{(i)}$ are the total mass and volume including all phases; $f_V^{(i)}$ is the volume fraction of a phase; $\langle \rho^{(i)} \rangle = M^{(i)}/V^{(i)}$ is the local average density of a particular phase; and $\bar{\rho}^{(i)} = M^{(i)}/V$ is the global average density of a particular phase. Such unmodified profiles can be prescribed by any generic CGM model, like the precipitation model (Sharma et al. 2012b; Voit 2019) or the isentropic model (Faerman et al.
2020). FSM17 used isothermal gas in hydrostatic equilibrium as the unmodified profile. The total pressure $P_{\rm tot}$ considered by FSM17 has thermal, non-thermal (e.g., due to magnetic fields and cosmic rays), and turbulent components such that $P_{\rm tot} = P_{\rm th} + P_{\rm nth} + P_{\rm turb}$, where $P_{\rm nth} = (\alpha - 1) P_{\rm th}$, $P_{\rm turb}/P_{\rm th} = (\sigma_{\rm turb}/c_{\rm th})^2$, and $P_{\rm th} = P_{\rm tot}/(\alpha + \sigma_{\rm turb}^2/c_{\rm th}^2)$. In this isothermal model, the total pressure is given by the simple hydrostatic solution $P^{(u)}_{\rm tot}(r) = P^{(u)}_{0,{\rm tot}} \exp\left[-\left(\Phi(r) - \Phi_0\right)/c_{\rm eff}^2\right]$ (Eq. 1), where $c_{\rm eff}$ is the effective isothermal sound speed, taking into account the contributions from the non-thermal and turbulent pressures, and the subscript 0 refers to a reference radius with the total pressure $P^{(u)}_{0,{\rm tot}}$ and the potential $\Phi_0$. Therefore, the effective sound speed in Eq. (1) is $c_{\rm eff}^2 = \alpha c_{\rm th}^2 + \sigma_{\rm turb}^2$ (here we use the same notation as FSM17). Tab. 2 lists our parameter values, which are taken from the fiducial models in FSM17 and FSM20, chosen mainly to match the column density of OVI. The non-thermal pressure support due to cosmic rays (e.g., Salem et al. 2015; Ji et al. 2020; Butsky et al. 2022) or turbulence (easier to detect in clusters, e.g., Li et al. 2020; ZuHone et al. 2018; Mohapatra et al. 2021, than in the CGM, e.g., Buie et al. 2020a,b; Chen et al. 2023) in the CGM can be significant. The thermal pressure and density are related by the ideal gas equation of state, $P^{(u)}_{\rm th} = n^{(u)} k_B T^{(u)}$, where $T^{(u)}$ is the mass-weighted average temperature at each radius, which also determines the thermal broadening $c_{\rm th}^2 = k_B T^{(u)}/(\mu m_p)$ of the unmodified gas. The temperature is assumed to be constant in the unmodified profile used in FSM17. We later also use an isentropic unmodified profile (see section 2.2) from Faerman et al. 2020 (FSM20 hereafter) to illustrate the procedure for a general unmodified profile. (ii) Unmodified log-normal temperature-PDFs: After obtaining the unmodified profiles, FSM17 assumes a log-normal volume distribution of temperature in each radial shell of the CGM. The motivation behind this is that non-thermal and turbulent processes in the CGM cause fluctuations around the unmodified gas properties and result in a locally peaked temperature/density distribution. Therefore, we describe the gas temperature distribution in each shell by a log-normal PDF, and multiphase gas exists co-spatially at all radii. A perfectly uniform multiphase shell is an approximation, and the cooler phases are expected to be spatially inhomogeneous (e.g., see Figure 2). We quantify the amount of such co-spatial multiphase gas at any radius by the volume/mass fraction of the gas in the temperature range $[T, T+dT]$. The gas distribution is assumed to be log-normal (in volume and consequently also in mass) at all radii from the center of the CGM, and the volume PDF (see the light red line in Figure 3) can be expressed as $P^{(u)}_V(x)\,dx = N(x, \sigma)\,dx$ (Eq. 2), where $x \equiv \ln(T/T^{(u)}_{\rm med,V})$ and $N(x, \sigma)$ is the normal distribution with zero mean and standard deviation $\sigma$ (Eq. 3). Note that the distribution in $\ln T$ space, $P^{(u)}_V(x) = N^{(u)}_V(x, \sigma)$, is Gaussian. In the FSM17 prescription, the unmodified gas at every radius is assumed to be isobaric, so that the product of density and temperature within a phase is constant. This is an approximation of the thermodynamic state of the gas. In general, density and temperature are independent, and 2D PDFs (say in density and temperature) are necessary to describe the gas distribution (cf. section 3). For a shell of volume $dV$ and mass $dM$ at a radius $r$, the volume fraction of gas in the temperature range $[T, T+dT]$ is $dV_T/dV = P^{(u)}_V(x)\,dx$, and the corresponding mass fraction (Eq. 4) is $P^{(u)}_M(x)\,dx \equiv dM_T/dM = [\rho^{(u)}(x)/\langle \rho^{(u)} \rangle_V]\, P^{(u)}_V(x)\,dx = [\langle T^{(u)} \rangle_M/T]\, P^{(u)}_V(x)\,dx$, where $\langle \rangle_V$ denotes a volume-weighted average and $\langle \rangle_M$ denotes a mass-weighted average. The middle expression in Eq.
4 relating the mass and volume PDF is generic for any thermodynamic process of the internal perturbations within a phase.However, the rightmost expression assumes that the perturbations within the unmodified gas are isobaric.This isobaric assumption implies () () () () = ⟨ () () () ()⟩ = ⟨ () ⟩()⟨ () ⟩ () and hence gives the rightmost expression in Eq. 4. The leftmost and the rightmost expressions in Eq. 4 can be inte-grated over all temperatures to obtain1 where we use = med,V .Further, on using the square completion given by Eq. 5 in the footnote, the RHS of the preceding expression can be simplified to relating the volume PDF and the mass-weighted average temperature as Consistent with the isobaric assumption introduced earlier, we obtain the mass PDF by combining Eqs. 4 and 6 (and using Eq.5), where med, , and med, .This is similar to Eq. 2 and is applicable only under the isobaric assumption. (iii) Modifying the unmodified PDFs: FSM17 modifies the unmodified distribution of gas at every radius to incorporate the effects of physical processes such as radiative cooling.FSM17 model considers two temperature phases, namely hot and warm, around which the temperature distribution is log-normal.Further, the assumption is that the coolest/densest unmodified gas at any radius, with the ratio of the cooling time to the free-fall time cool / ff smaller than a threshold value (chosen to be 4 here), cools isochorically to the warm phase.The cooling time is cool = ( − 1) −1 / 2 Λ[] and the free-fall time is where is gas thermal pressure, is the hydrogen number density, Λ[] is the cooling function, is the shell radius and [] is the gravitational acceleration.This model for the dropout of warm gas from the hot atmosphere is motivated by simulations of thermal instability in gravitationally stratified atmospheres (e.g., Choudhury et al. 2019).However, the warm gas is assumed not to cool further.The choice of the cool / ff threshold determines the cut-off temperature below which the hot gas is assumed to be thermally unstable, and the cooling to the warm phase is assumed to be isochoric.Isochoric cooling preserves the area under the curve of the unmodified PDF undergoing modification (see Fig. 2 for a cartoon and cyan dashed line in Fig. 3). The volume fraction of the warm phase at any distance from the halo center is where med, corresponds to the temperature below which the unmodified gas cools to produce the warm phase, and we have introduced the integral of a normal distribution, Similarly, the mass fraction of the warm gas is where med, corresponds to the cut-off temperature .Note that > implies () > () (compare Eqs. 8 & 9), which is expected since the warm gas is denser.Now, we need to decide the thermodynamic condition of the re-distributed gas.The warm phase is assumed to attain a new lognormal distribution about a specified median temperature () med, (see the vertical blue dotted line in Fig. 3).Since the warm phase has a short cooling time, it is assumed to be maintained in a steady state, at () med, by heating due to feedback and/or turbulent mixing.The warm gas volume distribution at any radius is given by where ≡ ln / med, is the median temperature of the warm gas, and N (, ) is the Gaussian distribution with standard deviation (see Eq. 3).Similarly, the mass-PDF for the warm gas is given by P med, .Note that these expressions are analogous to unmodified PDFs (Eqs. 2 to 7) since the underlying PDF is the same, namely log-normal. 
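As a minimal numerical sketch of this dropout prescription, the snippet below evaluates the warm-phase volume and mass fractions for a given cut-off temperature using the Gaussian cumulative distribution; the cut-off, median temperature, and width are illustrative placeholders rather than the FSM17 fiducial values (in the full model the cut-off follows from the $t_{\rm cool}/t_{\rm ff}$ threshold at each radius).

```python
# Minimal sketch (illustrative values): volume and mass fractions of
# unmodified gas below a cut-off temperature T_c, for a log-normal volume
# PDF of width sigma centered on T_medV (mass PDF center shifted by -sigma^2
# under the internally isobaric assumption).
import numpy as np
from scipy.stats import norm

T_medV = 1.5e6   # K, median of the unmodified volume PDF (illustrative)
sigma = 0.5      # log-normal width (illustrative)
T_c = 3.0e5      # K, cut-off temperature where t_cool/t_ff = 4 (illustrative)

x_c = np.log(T_c / T_medV)

f_V_warm = norm.cdf(x_c / sigma)                  # volume fraction that drops out
f_M_warm = norm.cdf((x_c + sigma**2) / sigma)     # mass fraction (isobaric shift)

print(f"warm-phase volume fraction: {f_V_warm:.3f}")
print(f"warm-phase mass fraction:   {f_M_warm:.3f}  (> volume fraction, as expected)")
```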
The modified hot phase is assumed to maintain the unmodified PDF above the cutoff temperature and is given by where corresponds to the (radius dependent) cut-off temperature and H() is the Heaviside function (unity for > 0 and zero for < 0). (iv) Local and global gas density: It is useful to distinguish between the local and global average mass densities.We define the local gas density for a phase as ⟨ () ⟩ ≡ () / () (without subscript in ⟨⟩; see Tab. 1 for notation) as the gas mass in phase divided by the volume occupied by this phase.The global average gas density in phase is ⟨ () ⟩ ≡ () /, defined as the gas mass in th phase divided by the total volume = Σ () (Σ is sum over all phases).Thus, the local warm gas density is the physical density of the warm gas clumps. The global average density of a phase corresponds to the same mass being spread uniformly over the whole volume .To calculate observables like column densities, we assume that the cooler phases are uniformly spread throughout the volume in the form of a mist (Fig. 2(b); the same figure also shows other possibilities where the warm clouds do not uniformly fill the whole spherical shell).The reality is more complex than the mist limit which, by definition, gives an area-covering fraction of unity and is not consistent with observations.The warm/cool gas is likely to be spread in the form of clouds that occupy a small volume and cover a projected area med,V = 1.5 × 10 6 K (vertical red dotted line; see Eq. 6).The vertical black dashed line marks the temperature where the cooling time to the free-fall time ratio cool / ff = 4 in this shell.Gas cooler than is assumed to be thermally unstable and populates a new (internally-) isobaric distribution to form a warm phase about a median temperature () med,V = 3 × 10 5 K (vertical blue dotted line).The cyan and blue dashed curves show the redistributed log-normal PDFs of the warm gas for isochoric and isobaric modification, respectively (see sections 2.1 & 2.3).Corresponding modified hot gas distributions are shown in dark red and orange curves.Being probability distributions, the sum of PDFs for warm and hot phases in both modifications (red+cyan: isochoric; orange+blue: isobaric) is normalized to unity. fraction ≲ 1 (e.g., panels a, c, d in Fig. 2).The CGM is expected to exhibit a patchy distribution of clouds with varying sizes and properties (also supported by cosmological simulations, e.g., Nelson et al. 2020).Consequently, different quasar sightlines probe a large variety of these cold and warm clouds, owing to the inherent stochasticity in their spatial distribution (see section 4.2 and also Hummels et al. 2023 for a comprehensive discussion). For volume and mass fractions respectively, the expressions for the local and global average gas densities in any phase (where = ℎ, ) are given as and where Since each phase is assumed internally isobaric, we can define the density in the temperature range [, + ] as () = ⟨ () ⟩⟨ () ⟩ /.The ratio of Eqs. 12 and 13 gives, i.e., the global average density of a phase equals the product of the volume fraction and the local average density of that phase.The solid lines in the left panel of Fig. 
4 show the average global and local density profiles for the hot and warm phases for the isothermal CGM ( = 1 polytrope) from which warm gas condenses iso- The average global density profile for the warm phase in the right panel shows a dip towards the center because cool / ff is larger there and only a small amount of unmodified gas lies below our chosen cool / ff = 4 threshold.In both the panels, the global density profiles for isochoric (solid red + blue) and isobaric (dashed red + blue) modifications coincide because the mass in each phase is the same in these cases.For / 200 ≳ 0.25 the temperature of the unmodified isentropic profile (right panel) approaches the chosen warm phase temperature () med,V = 10 5.5 K (see inset in the right panel).In this case, we assume that only a fraction ≤ 0.4 of the unmodified gas can condense into the warm phase.Such a high dropout fraction compared to the isothermal model explains the large variations in different densities at large radii in the isentropic model.chorically.As expected, the local density in all cases is larger than the global one.The global warm gas density at large radii is smaller because cool / ff is larger and a smaller amount of gas drops out to the warm phase according to our prescription.The global and local densities of the hot phase are similar because most of the shell volume is occupied by the hot phase.The right panel is for an isentropic CGM ( = 5/3 polytrope) discussed later in section 2.2. (v) Calculating observables: Having obtained the hot and warm gas temperature PDFs and their corresponding profiles, we can now calculate several observables such as OVI, OVII, OVIII, NV column densities, dispersion measure (DM), X-ray spectrum, and emission measure (EM).As calculating observables is independent of generating the model profiles, details on computing observables are postponed until section 2.4. As a demonstration of our general description of the probabilistic CGM model, we now discuss specific modifications to the original FSM17 model. Isentropic unmodified profile FSM17 used an isothermal (≈ 1.5 × 10 6 K; comparable to the halo virial temperature) unmodified profile for the hot volume-filling CGM of Milky Way.Observations indicate the presence of OVI and NV ions in the Milky Way CGM (Werk et al. 2013;Tumlinson et al. 2011b).Since these ions exist at a lower temperature (∼ 10 5.5 K), a CGM at the virial temperature is too hot to host sufficient OVI ions.To get around this, FSM17 proposed a thermal instability ansatz (discussed in section 2.1) to introduce an additional warm phase at 10 5.5 K.This results in a CGM that can host OVI and NV columns consistent with the observations (see top panels of Fig. 5).On the other hand, FSM20 proposed an isentropic model without a spread in temperature at any radius.In contrast to the modified isothermal model, where every galactocentric distance hosts both the hot and warm phases, FSM20 has a unique temperature at every radius.Such an isentropic atmosphere in hydrostatic equilibrium (with reasonable boundary conditions) results in a transition from hot to warm temperatures at large radii approaching the virial radius (see inset in the right panel of Fig. 4).This naturally produces a CGM that can host enough OVI and NV ions at large radii, in compliance with observations, without the necessity of introducing a separate warm phase (see Fig. 10 of FSM20). 
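The qualitative behaviour of such an isentropic atmosphere, a temperature that falls toward large radii and approaches warm-phase values near the virial radius, can be reproduced with a few lines of numerical integration. The sketch below integrates hydrostatic equilibrium for a $\gamma = 5/3$ polytrope in an NFW potential; the halo parameters and boundary conditions are illustrative and are not the FSM20 fiducial choices.

```python
# Minimal sketch (illustrative, not the FSM20 boundary conditions): integrate
# hydrostatic equilibrium for an isentropic (gamma = 5/3) atmosphere in an
# NFW potential and show that the temperature drops toward large radii.
import numpy as np
from scipy.integrate import solve_ivp

G, kpc, Msun = 6.674e-8, 3.086e21, 1.989e33   # cgs
mu_mp = 0.6 * 1.673e-24                       # mean particle mass (g)
kB = 1.381e-16                                # erg/K
gamma = 5.0 / 3.0

# Illustrative NFW halo (Milky Way-like mass and concentration)
M200, r200, c = 1e12 * Msun, 220 * kpc, 10.0
rs = r200 / c

def g_nfw(r):
    m = np.log(1 + r / rs) - (r / rs) / (1 + r / rs)
    m200 = np.log(1 + c) - c / (1 + c)
    return G * M200 * (m / m200) / r**2

# Entropy constant K set by illustrative conditions at the inner radius r0
r0, n0, T0 = 0.1 * r200, 1e-3, 2e6            # cm, cm^-3, K
rho0 = n0 * mu_mp
K = (n0 * kB * T0) / rho0**gamma

def dPdr(r, P):
    rho = (P / K) ** (1.0 / gamma)
    return -rho * g_nfw(r)

r_eval = np.linspace(r0, r200, 200)
sol = solve_ivp(dPdr, (r0, r200), [n0 * kB * T0], t_eval=r_eval, rtol=1e-8)

rho = (sol.y[0] / K) ** (1.0 / gamma)
T = sol.y[0] * mu_mp / (rho * kB)
print("T at 0.1 r200: %.2e K,  T at r200: %.2e K" % (T[0], T[-1]))
```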
As pointed out earlier, any reasonable unmodified profile can be modified based on some physical prescription.The isothermal model in FSM17 (presented in section 2.1) or the isentropic model in FSM20 are examples of such unmodified hydrostatic atmospheres.In this section, we modify the unmodified isentropic profile of FSM20 below the cool / ff = 4 threshold and introduce a warm phase.This highlights the general applicability of the procedure described in section 2.1.Another possibility (which we do not explore further) is to choose an unmodified hydrostatic profile at the precipitation threshold (say with cool / ff = 20 everywhere; Sharma et al. 2012b;Voit 2019) and modify gas below cool / ff = 4 to account for the condensation of the denser gas (as motivated by Choudhury et al. 2019). The right panel of Fig. 4 shows the unmodified isentropic profile (purple dot-dashed line) and the global and local number density profiles for the modified hot and warm phases.The global warm gas density profile at the center shows a dip because only a small fraction of gas condenses, as cool / ff at small radii is large.If we conserve the total shell volume and the mass of the warm gas condensing out, we expect the global densities for both isochoric and isobaric modification to be the same.However, note that the separation between the isochoric and isobaric modified density profiles for both the hot and warm phases at / 200 ≳ 0.3 happens because we restrict the mass fraction of the warm gas to () ≤ 0.4.The motivation behind this choice is that hydrostatic equilibrium assumes that most mass is in the 'hot' volume-filling phase.2Once the dropout gas mass fraction exceeds this, we limit the dropout mass to this value. Isobaric instead of isochoric cooling to warm phase FSM17 creates the warm phase by cooling the unmodified PDF below a cutoff temperature and assumes that the cooling happens isochorically to the warm phase (which is internally isobaric).While cooling may happen isochorically at intermediate temperatures with cooling times shorter than the sound-crossing time across cooling clouds (e.g., see Fig. 8 in Mohapatra et al. 2022), the warm/cold clouds are expected to achieve pressure equilibrium after a few sound crossing times.In any case, it is a useful generalization to relax the isochoric assumption.The other extreme assumption is to assume that cooling from unmodified temperatures to the warm phase occurs isobarically.In this case, the warm gas will occupy a smaller volume and the hot phase pressure can drop because of adiabatic cooling (see Fig. 2(c) which shows the warm phase volume in panel (a) compressed to a smaller volume).This can cause compression and reduction in the volume of each shell in the absence of additional heating sources.Motivated by the importance of AGN/supernova feedback heating, the required additional heating of the hot phase is assumed to be present to preserve the shell volume. The mass fraction in the warm phase is the same as for isochoric cooling to the warm phase, given by Eq. 9 (since the same mass cools to the warm phase under both isochoric and isobaric assumptions).But this warm phase occupies a smaller volume under isobaric assumption compared to isochoric.The average warm gas density is, ⟨ () ⟩ = ⟨ () ⟩⟨ () ⟩ /⟨ () ⟩ since the warm phase and the unmodified gas have the same pressure.Thus, the warm phase volume fraction for this case is given by Recall that ⟨ () ⟩ /⟨ () (see Eq. 6).Fig. 
4 shows that the local number density is higher for isobaric modification as compared to isochoric.However, since the mass of gas in all phases for both models is identical, the global densities are also, therefore, identical (shell volume is constant). The volume (and also the area if not all sightlines are covered by clouds) filling factor for the isobaric warm phase is expected to be smaller than the isochoric case.While the column densities for isobaric and isochoric modifications are similar (see Fig. 5), the EM and luminosity are expected to be higher for isobaric modification because of a higher density in this case. Generating observables from probabilistic CGM models Most of the observables are line-of-sight (LOS) integrals of various physical quantities.For example, column density is ∫ where is the local number density of a particular ion, and is an infinitesimal path length along the LOS.Similarly, EM is proportional to ∫ 2 .In our probabilistic model, every shell of the CGM is multiphase, and each phase contributes to these LOS integrals.We first discuss the column density estimate from our probabilistic model.Carrying this forward to other LOS integrals is straightforward. Column density in absorption The contribution by phase in the temperature range [, + ] to the LOS integral is ∫ () () , where () is the path length through phase .Assuming clouds to be uniformly spread throughout the shell, i.e., the mist limit, implies () = ⊥ () and () = () / (see Tab. 1 for notation), where ⊥ is an infinitesimal area perpendicular to the LOS and () is the differential volume of phase .The column density is (), where the last expression follows from the definition of volume PDF.Note that () is the local density of phase and, in general, () depends on (, ), i.e., the thermodynamic state assumed within the phase.The expression for column density can be simplified further using the global average density of a particular phase , i.e., ⟨ () ().Thus, the column density in the mist limit from all the phases can be written as Σ ∫ ⟨ () ⟩ .The global number density is appropriate here because only a fraction of the available volume is occupied by the gas in a given phase.Similar expressions can be obtained for other LOS integrals. Specifically, the expression for the column density of OVI at an impact parameter is where ⟨ OVI ⟩ is the global density of OVI ion in a shell at radius , as motivated in the previous paragraphs.The modified PDF and the global densities of the different gas phases can be used to estimate the global number density profiles of different ions (this assumes that all phases are uniformly mixed at each radius, also known as the mist approximation; see Fig. 2b).The global average number density of OVI (to be plugged in Eq. 16) as a function of radius is given by (assuming photo+collisional ionization equilibrium; PIE) the following expression, Figure 5.The column density of different ions as a function of the impact parameter (normalized by the virial radius) from our isothermal ( = 1 polytrope) and isentropic ( = 5/3 polytrope) models (see Fig. 
4 for density profiles).In all the panels, the solid lines refer to isochoric modification, while the isobaric modification is shown using the dotted lines.Using the dashed lines, we also show the column density estimates generated from the unmodified (isothermal and isentropic) profiles for reference.Orange lines are for the isothermal model and cyan for isentropic.Top panels: The column densities of ions tracing the warm phase, OVI on the left and NV on the right.The data points are inferred from absorption spectra of quasar sightlines through external galaxies (OVI: COS-Halos [Tumlinson et al. 2011a;Werk et al. 2016] and eCGM surveys [Johnson et al. 2015] in solid black markers, CGM 2 survey [Tchernyshyov et al. 2022] in open black markers, and CUBS VII survey [Qu et al. 2024] where () OVI ( () , ), for each component, where () depends on () and , O is the number ratio of oxygen to hydrogen atoms in the sun (Asplund et al. 2009), () is the CGM metallicity (metal mass to total gas mass ratio) at that radius, and OVI is the OVI ion fraction.We can use the equation of state () = ⟨ () ⟩⟨ () ⟩ to calculate the temperature integral above (since we assume the phases to be internally isobaric).We adopt the metallicity profile introduced by FSM20 (see their Eqs.8 & 9).Our model parameters are listed in Tab. 2. We stick to FSM17 parameter values since we wish to test how models tuned for a particular observable fare against a broader range of multi-wavelength observables.Further, the observables are estimated considering the mist limit, which gives a covering fraction of unity (see discussion at point (iv) in section 2.1 for details).It is anticipated that the measured column densities will exceed our mist limit estimates because of the discrete nature of the clouds (discussed further in section 4.2) and the ease of detecting higher columns. The top panels of Fig. 5 show the OVI and NV column density profiles from our isothermal ( = 1) model as a function of the impact parameter.These ions trace the warm ∼ 10 5.5 K phase (e.g., see Fig. 6 in Tumlinson et al. 2017).The observational data polytropes with isochoric modification).The left and middle columns display our modeled Milky Way emission measure, with and without a coronal disk respectively, as observed from the position of the solar system.The right column shows the corresponding dispersion measure.These maps are generated from different CGM models discussed in section 2 (gas profiles shown in Fig. 4).Eq. 19 models the coronal disk component.The red crosshair in the EM maps marks the eFEDS sightline (, ≈ 230 • , 30 • ) observed by Ponti et al. 2023b and the estimated EM is 2.937 × 10 −2 pc cm −6 .The red crosshair in the DM maps in the right column marks the sightline (, = 142.19 • , 41.22 • ) of a nearby FRB, in the M81 galaxy (Bhardwaj et al. 2021).The DM estimated for the Milky Way halo along this sightline is 30 pc cm −3 .The region near the Galactic center is hatched in the maps to indicate that predictions would be unreliable there.This area is contaminated by eROSITA bubbles (Predehl et al. 2020) and other features, as well as the central cusp in the number density profile (Fig. 4). are taken from Werk et al. 2013Werk et al. , 2016;;Tchernyshyov et al. 2022& Qu et al. 2024.The virial radii of the galaxies from the COS-Halos survey used in the normalization of the impact parameter are taken from Tumlinson et al. 
2013.All the observed column densities were calculated from corresponding equivalent widths of ionic transitions in the absorption spectra using either Apparent Optical Depth analysis (Savage & Sembach 1991) or Voigt profile fitting (e.g.Carswell & Webb 2014; also see references from the corresponding surveys).Throughout this work, for the ionization models, we use CLOUDY 2017 (Ferland et al. 2017) and consider the CGM to be in photo+collisional ionization equilibrium (PIE) in the presence of Haardt-Madau extragalactic UV radiation (Haardt & Madau 2012) at a redshift of 0.2 (matching COS-Halos galaxies).For some ion levels, the differences between the collisional (CIE) and photo+collisional ionization equilibrium (PIE) can be significant (e.g., see Fig. 6 Similarly, the bottom panels show the OVII and OVIII column density profiles.These ions trace the hot (≳ 10 5.5 K) gas, and presently virtually no constraints exist on their column densities in external galaxies, except for the one recent observation of OVII column by Mathur et al. 2023.Moreover, absorption/emission properties of ions like OVII may be significantly altered by resonant scattering (Nelson et al. 2023), not taken into account in our modeling.Observations also indicate that the absorption profiles of these ions can be highly saturated.For example, the ratio of equivalent widths for K to K transition for OVII along multiple sightlines probing the Milky Way CGM deviates from a constant value expected (∼ 0.15) in the optically thin regime (Gupta et al. 2012).The poor spectral resolution in X-rays does not allow precise Voigt profile fitting.For OVII, we use the column density range indicated in Tab. 2 of Gupta et al. (2012).From the OVIII equivalent width (EW) given in the same table, we obtain OVIII using the linear relation between the equivalent width (EW) and column density in the optically thin regime.In the absence of adequate constraints from external galaxies for these high ionization states, we show the scaled OVII and OVIII columns of the Milky Way CGM (Gupta et al. 2012;Fang et al. 2015;Miller & Bregman 2015;Miller & Bregman 2013), which are indicated in gray bands. To compare with our probabilistic models, in Fig. 5 we also show the column density profiles for the unmodified profiles using dashed lines (red: isothermal; blue: isentropic).The unmodified isentropic profile in FSM20 is cooler in the outskirts and can, therefore, produce higher OVI column density compared to the unmodified isothermal profile in FSM17.We modify the isentropic profile following the thermal instability ansatz in FSM17 with the threshold cool / ff = 4, and the warm phase has a median temperature of 10 5.5 K.As expected, this modification does not significantly alter the column density profiles of OVI in the isentropic model.Since the unmodified isentropic profiles are cooler at large radii, the OVII and OVIII column densities are smaller farther out than for the isothermal model (see bottom rows of Fig. 5). Emission & Dispersion measures Just like the individual ions, we can calculate the electron number density to determine the dispersion and emission measures produced by our CGM model.For every sightline (, ) in the Molleweide maps in Fig. 
6, we sample 1000 points along the line of sight (uniformly spaced from the location of the sun till CGM ).For each of these points, we calculate the (, , ) coordinates from the Galactic center and calculate the desired observable quantity interpolated from our models.These values are then numerically integrated to obtain the observables at each (, ).These observables can alternatively be (numerically) integrated directly in terms of the Galactocentric distance , employing a change of variables as discussed in Appendix B (Eq. B6). In the mist approximation, the contribution by a given phase to the emission measure integral where () can be related to by the assumed thermodynamic equation of state within the phase (isobaric in our case).Note that the ratio of emission measure and the square of column density contributed by a uniform volume is the clumping factor3 Since the ionization of hydrogen and helium is the dominant contributor of free electrons, the dispersion measure (DM = ∫ ) is mostly insensitive to the ionization state of the metals.This makes DM-based inferences robust and less sensitive to model parameters.The DM generated from our models can be compared with the DMs of the Fast Radio Bursts (FRBs) in nearby galaxies (Bhardwaj et al. 2021;Ravi et al. 2023;Cook et al. 2023).The emission measure EM = ∫ ) is another observable that we generate from our CGM models.The EM constraints are available from the observations of the continuum soft X-ray emission from the Milky Way CGM.Estimating EM requires the observed X-ray spectrum to be broken down into contributions from several components like the local hot bubble, cosmic X-ray background, solar wind charge exchange, MW halo, and MW Galactic disk.Many different surveys, till now, have estimated the EM from the Milky Way, e.g., ROSAT (Hirth et al. 1992), Suzaku (Gupta et al. 2014), XMM-Newton (Henley & Shelton 2010, 2013; Das et al. 2019b;Bhattacharyya et al. 2023), HaloSat (Kaaret et al. 2019(Kaaret et al. , 2020;;Bluem et al. 2022), and eROSITA eFEDS (Ponti et al. 2023a,b).We compare our models with these observations. The EM value with just the spherical CGM (middle panels of Fig. 6) is of a few factors smaller than the value observed in the eFEDS field.Thus, we include an X-ray emitting disk adapted from (Yamasaki & Totani 2020) having a density where ,0 is the hydrogen number density at the center of the coronal disk and 0 = 8.5 kpc and 0 = 3.0 kpc are the scale radius and height of the disk.We set ,0 = 4.8 × 10 −3 cm −3 and assume the disk to be isothermal at a temperature of 1.5 × 10 6 K. Our parameter values are different from Yamasaki & Totani 2020, adjusted to match the recently observed X-ray surface brightness of the CGM in the eFEDS field (Ponti et al. 2023b).Simulations and theoretical considerations suggest that this coronal disk is expected and is maintained by heating from supernovae-driven outflows (Weiner et al. 2009;Rubin et al. 2010), which form the rising part of the Galactic fountain (Bregman 1980;Crain et al. 2010;Fraternali 2017;Kim & Ostriker 2018;Grand et al. 2019).A coronal disk not only increases the X-ray surface brightness towards the eFEDS field but also increases its anisotropy towards and opposite to the Galactic center, which can be compared with observations (e.g., Fig. 6 in Bluem et al. 2022). 4We assume that the contribution from the disk (modeled by Eq. 19) can simply be superimposed for all the observables.The top row of Fig. 
6 shows the EM and DM maps in Molleweide projection for our isochorically modified isothermal ( = 1 polytrope) CGM model, while the bottom row shows the EM and DM maps for isochorically modified isentropic ( = 5/3 polytrope) CGM model (see section 2.2).The middle column shows the EM maps without the disk to highlight the CGM contribution for the = 1 and the = 5/3 polytropes.Because of a higher density at larger radii in the isothermal model, the isothermal model has higher values of EM and DM (see Fig. 4).The red crosshairs on these maps mark the observed sightlines.Along , ≈ 230 • , 30 • , Ponti et al. 2023b (the eFEDS survey) report the Milky Way EM to be 2.9 − 3.1 × 10 −2 pc cm −6 (their Tab.Recently, the eROSITA X-ray telescope has looked at an unobscured field to constrain the Milky Way CGM properties.The two X-ray bands in the eROSITA eFEDS survey outlined by Ponti et al. 2023b are 0.3 − 0.6 keV and 0.6 − 2.0 keV.Fig. 7 shows the emission spectrum from our isothermal ( = 1 polytrope) and isentropic ( = 5/3 polytrope; see section 2.2) models, both with isochoric modification.The surface brightness is calculated towards the direction of the eFEDS field (, ) ≡ (230 • , 30 • ).The two models give very similar surface brightness spectra because they are dominated by the coronal disk component, which is the same in both cases.The observed X-ray surface brightness in 0.6-2 keV is twice our model predictions because of ISM/CGM clouds along the sightline and an even hotter/super-virial coronal disk component, as suggested by recent observations (Ponti et al. 2023b;Bluem et al. 2022).If needed, such a component can be added to our models, as described in section 3. Fig. 8 maps out the ratio of EM contributed by the spherical CGM component to the disk for both the isothermal and isentropic models.As expected, the spherical component dominates at high latitudes ( ≳ 30 • ).All sky maps from eROSITA at high latitudes and away from the Galactic center can help us distinguish between different CGM+disk models. MORE THAN TWO PHASES Observations and numerical simulations show that in addition to the volume-filling hot phase (∼ 10 6 K) and the intermediate warm phase (∼ 10 5 K), there is a cold (∼ 10 4 K) phase in the CGM.The hot and warm phases presumably cool to produce the cold gas.Cold gas from the dense ISM can also be introduced into the CGM by supernovae/AGN-driven winds.Ram pressure stripping of satellite galaxies can deposit the satellite's cold gas into the CGM of the host galaxy (Rohr et al. 2023).Recent observations of many galaxies also seem to support this picture of three-phase gas distribution (Sameer et al. 2024). Simulations show that the hot and cold phases have relatively narrow distributions in log compared to a broader distribution of warm/intermediate phase (e.g., see the left panel of Fig. 6 in Nelson et al. 2020; middle panel of Fig. 5 in Kanjilal et al. 2021;Fig. 4 in Mohapatra et al. 2022).Similarly, there is a large spread in density for all phases.This motivates us to introduce 2D log-normal distributions (in - space) with appropriate spreads in and for each phase. 
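A minimal sketch of the kind of construction introduced next, a volume-fraction-weighted mixture of rotated 2D Gaussians in the $\log n_{\rm H}$-$\log T$ plane, is given below; the medians, widths, rotation angles, and volume fractions are illustrative placeholders rather than the fitted TNG50-1 values listed in Tab. 3.

```python
# Minimal sketch (illustrative parameters): a three-phase volume PDF built as
# a mixture of rotated 2D Gaussians in (log10 n_H, log10 T), one per phase.
import numpy as np
from scipy.stats import multivariate_normal
from scipy.integrate import trapezoid

def rotated_cov(s1, s2, theta):
    """Covariance with principal-axis widths (s1, s2) rotated by angle theta."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return R @ np.diag([s1**2, s2**2]) @ R.T

# (median log10 n_H, median log10 T, s1, s2, theta, volume fraction) per phase
phases = {
    "hot":  (-4.3, 6.1, 0.35, 0.20, -0.5, 0.94),
    "warm": (-3.5, 5.0, 0.45, 0.40, -0.6, 0.05),
    "cold": (-2.5, 4.1, 0.50, 0.10,  0.0, 0.01),
}

components = {name: multivariate_normal([mn, mT], rotated_cov(s1, s2, th))
              for name, (mn, mT, s1, s2, th, fv) in phases.items()}

def pdf_2d(log_n, log_T):
    """Total volume PDF: volume-fraction-weighted sum over the three phases."""
    pts = np.stack([log_n, log_T], axis=-1)
    return sum(phases[name][5] * components[name].pdf(pts) for name in phases)

# Marginalizing over density gives the 1D temperature PDF (numerical check)
log_n = np.linspace(-7, 0, 400)
log_T = np.linspace(3, 8, 400)
N, T = np.meshgrid(log_n, log_T)
p2d = pdf_2d(N, T)
p_T = trapezoid(p2d, log_n, axis=1)
print("normalization of the 1D temperature PDF:", trapezoid(p_T, log_T))  # ~1
```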
We introduce a uniform (one-zone) model for the CGM, where spatial variation is not considered for simplicity.Our formalism can be generalized to include variations with radius.We approximate the volume-PDF of the CGM to be the sum of a number of (three for specificity) 2D log-normal distributions centered at chosen median temperatures and densities.Namely (the summation indices are not explicitly mentioned later), med,V are respectively the phase independent reference density, temperature for the volume-PDF.Note that the 2D volume-PDF P 2 is log-normal in (,) but a 2D Gaussian in the (,) space.In the numerical implementation of our model, we used hydrogen number density as an independent variable instead of .The 2D rotated-Gaussians N 2 (x, x , ) are the 2D PDFs for each phase in the − plane, and their explicit expression is where = (cos 2 / 2 1, + sin 2 / 2 2, )/2, = sin 2 (1/ 2 1, − 1/ 2 2, )/2, = (sin 2 / 2 1, + cos 2 / 2 2, )/2 in terms of the standard deviations ( 1, , 2, ) along the principal axes of the individual Gaussians and the angle of rotation ( ) relative to the axis.Note that 4 − 2 = 1/( 2 1, 2 2, ) is a useful simplification utilized later.Hence, the 2D volume-PDF can be fitted using the following parameters for each phase: median density () med,V , tem- .Note that the combined volume fraction from all phases combined is unity by definition. We can obtain the marginalized PDFs by integrating the 2D PDFs along one of the axes.For example, 1D volume-PDF of temperature in log space is, which is again a log-normal PDF centered at with a standard deviation √ 2 1, 2, .For the other dimension, i.e., in density, similar log-normal PDF exists centered around with standard deviation √ 2 1, 2, .We can use our three-phase formalism to fit the simulation/observational data and constrain the free parameters for different phases.In the central plot of Fig. 9 in color, we show the volumeweighted 2D histogram of hydrogen number density and temperature of the CGM gas (non-star forming) of one of the halos from the Illustris TNG50-1 cosmological simulation (halo ID-110, snap-84).The selected halo has a virial mass of 1.1 × 10 12 ℎ −1 ⊙ (comparable to the Milky Way halo; Dehnen et al. 2006) and a star-formation rate of 3.63 ⊙ yr −1 (similar to the Milky Way SFR of 2.0 ± 0.7 ⊙ yr −1 ; Elia et al. 2022).The median SFR of COS-Halos galaxies is ≈ 1.06 ⊙ yr −1 ; see Fig. 6 in Werk et al. 2012).The stellar mass of our chosen halo is 6.49 × 10 10 ℎ −1 ⊙ (slightly higher than the Milky Way stellar mass of 6.08 ± 1.14 × 10 10 ⊙ ; Licquia & Newman 2015) We have approximately fitted the 2D histogram for the Illustris halo with the 2D volume-PDFs of our three-phase model (Eq.20) by eye.The PDFs for all the expressions in this section use hydrogen number density as it is directly available from the simulation data and is independent of the ionization state.The above choice is convenient since plasma models, which are needed for producing observables, use . Tab. 
3 lists the best-fit parameters and the output parameters of our fitting.We note that the parameters presented here are obtained manually by trial and error and are not quantitative statistical fits.We are working on an automated MCMC (Bayesian) fitting procedure to obtain a robust estimation of the model parameters and their corresponding uncertainties (which can then be propagated to the synthetic observables) by constraining our models with the simulation data.The white rotated ellipses in the central plot are 1 and 2 contours of the best fit 2D volume PDFs P 2 () (, ) for each phase (hot, warm, and cold) from our three-phase model. 5he black dotted contours, indicating the 2-D PDF over all phases, show that the three log-normal PDFs capture the core of the three phases.Beyond the core, there is a deviation between the analytic model and the histograms from the simulation.In the future, one may explore going beyond log-normal distributions to include tails.The hot phase is the volume-filling phase, whereas the warm and cold phases occupy a smaller volume.The temperature width of the warm phase is broader compared to the hot and cold phases.The cold phase is almost isothermal at ∼ 10 4.1 K but with a large spread in density.Such broad spread for densities in the cold phase is also inferred from the line emission in the Slug nebula (Cantalupo et al. 2019) and is expected to be a robust feature of all multiphase CGMs.The top and the right panels show the 1D volume PDFs marginalized over temperature and density, respectively.The thick yellow and thin purple solid lines show the simulation data and the corresponding best-fit 1D PDFs, respectively.The dotted colored lines show the contribution from individual phases (hot, warm, and cold) using the best-fit parameters.The marginalized PDFs match the simulation data well. Similar to the 1D mass-PDF (Eq.4) discussed in previous sections, we can obtain the 2D mass-PDF in the (, ) space as which is again log-normal and ⟨⟩ is the total average density that can be obtained by the normalization condition as (integrating above over and ), ⟨⟩ The mass fraction () of each phase is then given by Unlike volume fraction which is a model parameter, mass fraction (𝑖) is a derived quantity and depends on other free parameters. Marginalization of the mass PDF gives6 which is again a log-normal PDF. The 2D histogram in the central panel of Fig. 9 shows that the hot phase is volume-filling and has a lower density in contrast to the cold phase, which is dense but occupies a minuscule fraction of the total volume.The luminosity of the gas at temperature is ∝ 2 Λ[].Just like mass, it is important to know the luminosity contributed at different temperatures, especially when CGM emission mapping is expected to be common in the near future (e.g., Tuttle et al. 
We obtain the 2D luminosity PDF,

P^{(2)}_L(\log_{10} n_H, \log_{10} T) = \frac{n_H^2 \, \Lambda(T)}{\langle \mathcal{L} \rangle} \, P^{(2)}_V(\log_{10} n_H, \log_{10} T),

where ⟨ℒ⟩ is the volume-averaged luminosity that can be obtained from the normalization condition ∫ P^{(2)}_L d\log_{10} n_H d\log_{10} T = 1. For the results in this section, we assume the CGM metallicity to be temperature (phase) independent and fixed to a constant value of 0.3 Z_sun. Further, we assume that the cooling function only depends on temperature (the general case can be treated numerically; see Appendix C for details). This assumption is strictly valid in collisional ionization equilibrium (CIE). However, we adopt a temperature-dependent cooling function for plasma in photo+collisional ionization equilibrium (PIE) in the presence of the Haardt-Madau extragalactic UV radiation (Haardt & Madau 2012) at a redshift of 0.2. Our cooling function was generated using CLOUDY for a plasma having hydrogen number density fixed to 2.0 × 10^-5 cm^-3 (the average hydrogen density obtained by numerically integrating over our analytic volume-PDF; Eq. 20). This simplifying assumption allows us to obtain an analytic form for the 1D luminosity PDF marginalized over density (using Eq. 25 with the density exponent set to 2 and following the same procedure as for the mass-PDF). The marginalized luminosity PDF has the same structure as the marginalized mass-PDF, with each phase weighted by its luminosity fraction f^{(i)}_L and the metallicity entering through Z/Z_sun, the gas metallicity with respect to solar. Note that the luminosity PDF departs from log-normal because of the cooling function. Fig. 10 shows the 1D PDFs in temperature; the solid lines show the three-phase model PDFs and the dashed lines the corresponding PDFs from the simulation.

Synthetic observables from three phase model

We now move on to generate synthetic observables for our three-phase model. We extend the method used to obtain observables with 1D PDFs in earlier sections to 2D PDFs. For example, the expression for the column density of a particular ion (say OVI) remains unaltered from Eqs. 16 and 17, but the average ion density is now expressed as the 2D integral of the local ion density over the volume-PDF,

\bar{n}_{\rm ion} = \int n_{\rm ion}(n_H, T) \, P^{(2)}_V(\log_{10} n_H, \log_{10} T) \, d\log_{10} n_H \, d\log_{10} T.

Similarly, we can obtain the expressions for EM, DM, and other observables, which now include 2D integrals. Fig. 11 shows the column density of MgII (cold ∼ 10^4 K phase tracer) and OVI (warm ∼ 10^5.5 K phase tracer) ions using solid, dotted, and dashed lines for different variants of our three-phase model. We use our fast plasma modeling code AstroPlasma to evaluate the ion fractions at each (n_H, T) appearing in the integrals of the form presented in Eq. 29 and obtain the (volume-weighted) average ion density. The plasma is assumed to be in photo+collisional ionization equilibrium (PIE) at a redshift of 0.2 in the presence of the Haardt-Madau extragalactic UV background (Haardt & Madau 2012). We use the average ion density for this one-zone model to evaluate the column density profiles shown by the solid lines. Moving beyond the one-zone model, this average density is then used as the normalization factor n_0 appearing in Eq. B3 to obtain the column density for power-law profiles of number density in the CGM. The column densities of OVI and MgII for these power-law density profiles are shown using the dotted and dashed lines in Fig. 11. Circles (detections), upper triangles (lower limits), and lower triangles (upper limits) indicate the observed column densities from different surveys. To mitigate bias, we plan to include observations from more surveys in the future, like the COS-Weak (Muzahid et al. 2018) and the MAGG (Dutta et al. 2020) surveys, for which the virial radii of the foreground absorbing galaxies are not readily available and need to be independently determined from galaxy surveys, for example, by using halo abundance matching (Churchill et al. 2013).
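To make the structure of these 2D integrals explicit, the following sketch computes a mist-limit column density profile for a one-zone CGM. The ion-fraction function here is a toy Gaussian placeholder, not AstroPlasma (which instead interpolates pre-run CLOUDY tables), and the CGM radius, elemental abundance, and impact-parameter grid are assumed illustrative values; volume_pdf, phases, and the grid come from the first sketch:

```python
import numpy as np

def toy_ion_fraction(log_T, log_T_peak=5.5, width=0.3):
    """Placeholder ion fraction peaked in temperature (OVI-like shape).
    The paper instead evaluates ion fractions with AstroPlasma/CLOUDY."""
    return np.exp(-0.5 * ((log_T - log_T_peak) / width) ** 2)

def mean_ion_density(LN, LT, P2_V, dlogn, dlogT, abundance=4.9e-4, Z_ratio=0.3):
    """<n_ion> = integral of n_H * abundance * (Z/Z_sun) * f_ion over the 2D PDF.
    abundance ~ assumed solar oxygen abundance; Z_ratio = 0.3 as in the text."""
    n_H = 10.0**LN
    n_ion = n_H * abundance * Z_ratio * toy_ion_fraction(LT)
    return (n_ion * P2_V).sum() * dlogn * dlogT

# Mist-limit column through a one-zone CGM of radius r_cgm at impact parameter b:
# N_ion(b) = <n_ion> * 2 * sqrt(r_cgm^2 - b^2)
kpc = 3.086e21                                   # cm
n_ion_avg = mean_ion_density(LN, LT, volume_pdf(LN, LT, phases), dlogn, dlogT)
r_cgm = 280 * kpc                                # placeholder CGM radius
b = np.linspace(10, 270, 27) * kpc
N_ion = n_ion_avg * 2.0 * np.sqrt(r_cgm**2 - b**2)
print("log10 N_ion at b = 0.3 r_cgm: %.2f" % np.log10(np.interp(0.3 * r_cgm, b, N_ion)))
```

Swapping the toy ion fraction for tabulated CLOUDY values and the one-zone average for a power-law profile reproduces the structure of the solid versus dotted/dashed curves in Fig. 11.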
While our MgII column density profiles pass through the data points, the OVI columns are somewhat underestimated.This is also reflected in a lower X-ray surface brightness estimate (see Tab. 4).This is due to a lower hot/warm phase (median) density and temperature in our chosen TNG50-1 halo than the Milky Way estimates.This, however, is not too surprising since cosmological simulations show a large scatter in CGM properties at a given mass (Ramesh et al. 2023) due to feedback, mergers, and different evolution histories of different halos.Detailed analysis of such variations and their implication on our models and synthetic observables is left for future.Also note that the large variation in MgII columns as compared to OVI in observed data is a signature of the patchiness of cold clouds, making them frequent only along a few sightlines as compared to a more area-filling warm phase (see discussion in section 4.2).These CGM models, therefore, provide a baseline estimate and need to be complemented with a model of discrete clouds to achieve better match with different multi-wavelength observations. DISCUSSION Two approaches to model the CGM are used commonly: (i) 1D profiles (hydrostatic or purely phenomenological) of the dominant hot phase and (ii) numerical simulations of the multiphase CGM that incorporate gas at all temperatures ≳ 10 4 K.While approach (i) may be satisfactory for the hot phase and can be confronted with X-ray observations, it does not incorporate warm and cooler phases routinely observed in quasar absorption line surveys of the CGM.Approach (ii), while simulating gas with a range of temperatures, also suffers from drawbacks such as computational expense, insufficient resolution to resolve the structure of cooler phases, dependence on subgrid physics, and a limited statistical sample. In this context, we present a flexible analytic framework for modeling multiphase gas with log-normal distributions that are not only analytically tractable but also motivated by numerical simulations.Our approach can incorporate inputs constrained by numerical simulations and can quickly produce synthetic observables that can be tested against multi-wavelength observations.We can study not just trends with halo mass and environment, but this formalism also provides a baseline prediction for tracers of cold/warm gas, which can be made more realistic by incorporating clouds with non-trivial area covering fraction discussed later in this section 4.2. Interpreting X-ray observations Interpretation of X-ray emission spectra involves breaking up the observed spectrum into emission from different sources such as the CGM, the local hot bubble (Liu et al. 2016), cosmic X-ray background (Gilli et al. 2007), and solar wind charge exchange (Ponti et al. 2023b). In addition to these sources, many recent observations of Xray emission from the Milky Way CGM (Das et al. 2019b;Bluem et al. 2022;Ponti et al. 2023b) consider two isothermal components (APEC models; Smith et al. 
2001).The dominant contribution among these two components comes from the gas at the virial temperature of the Galaxy (∼ 0.2 keV) while the sub-dominant (about an order of magnitude lower in emission) contribution is from an additional phase at a higher temperature ∼ 0.7 keV.In some of these works, the physical origin of this ∼ 0.7 keV gas is the Galactic coronal disk maintained by supernovae-driven outflows (e.g., Bregman 1980).However, the emission from the physical disk contributes not only to the ∼ 0.7 keV APEC component in the X-ray emission but also to the ∼ 0.2 keV component at virial temperature (Kaaret et al. 2019).Some observations attribute the physical origin of the X-ray emission from the entire ∼ 0.2 keV component (∼ 0.2 keV isotherm in the APEC model) to the CGM (Ponti et al. 2023b) but it might have a non-negligible contribution from the disk. Spectral fitting of APEC models along any single sightline cannot distinguish among physically distinct multiple components with the same temperature contributing to the total emission.Considering the EM from only the CGM of the Milky Way, towards and away from the Galactic center, one expects a variation of ≲ 2 for high Galactic latitudes (|| ≳ 30 • ; see middle column of Fig. 6).However, the variation as observed by Bluem et al. 2022 at the same latitudes is ≳ 3, which favors models that include a coronal disk component (see also Ponti et al. 2023b).The presence of such a disk has also been pointed out in previous works (Yamasaki & Totani 2020;Kaaret et al. 2019).Therefore, we highlight the possibility that the ∼ 0.2 keV isothermal APEC model termed as the CGM in some observations might actually be disk+CGM and all the gas at the virial temperature cannot be assigned to just the spherical halo of the Milky Way.X-ray surveys such as the eROSITA all-sky survey (eRASS; Predehl et al. 2021) are expected to map out the diffuse X-ray emission at a range of latitudes and longitudes and help us break the degeneracy between various physical components that produce the X-ray spectra along different directions.Our models assume smooth disk and CGM, but the true gas distribution can have large variations (e.g., Das et al. 2021a), which will affect our predictions quantitatively.Moreover, a careful statistical estimate of parameters of even our smooth model is left for future. Finite-size clouds & observational implications In this paper, all the LOS integrals were computed in the mist limit (Fig. 2b) where a small volume fraction is filled by the cold/warm phases, comprising an infinite number of clouds each of which is infinitesimally small.However, real CGM clouds have a finite size, and the observed area filling fraction is smaller than unity (and different for different ions and column density thresholds of detection) for the tracers of warm/cold phase (Augustin et al. 
2021). Thus, clouds do not cover all the sightlines, and this natural variability needs to be considered when predicting the column densities of ions. A comprehensive analysis of cloud morphology and their distribution is beyond the scope of the present paper. Nevertheless, in this subsection, we discuss the 'mist' limit of cold/warm clouds in the CGM and demonstrate how such a configuration provides the most probable baseline configuration for the column density of ions. Additionally, we discuss some qualitative effects of a more realistic spatial distribution of clouds (departing from the mist limit) on the observables, which can potentially lower as well as increase the observed column densities of different ions, resulting in large scatter. Cloud distribution also affects the observed area covering fractions of the absorbing clouds.

The column densities and covering fractions of the clouds (corresponding to the phases with small volume filling fractions) depend on their arrangement within the CGM volume. For simplicity, let us consider N_cl non-overlapping clouds, each assumed to be a cube of side l with uniform gas density, arranged within a cube of side L (representing the CGM) such that l ≪ L. The volume fraction of clouds is given by f_v = (l/L)^3 N_cl. We can express the cloud length in terms of f_v and N_cl as l = L (f_v/N_cl)^{1/3}.

The number of MgII clouds identified in Illustris TNG50-1 halos is N_cl ≳ 10^4 (Nelson et al. 2020; Dutta et al. 2022). We adopt a fiducial value of N_cl = 10^6 to account for clouds smaller than the resolution limit of typical state-of-the-art cosmological simulations. The fiducial volume fraction of the cold phase is taken as f_v = 10^-3, consistent with our Illustris TNG50-1 halo (see the cold-phase volume fraction in Tab. 3); these give l/L = 0.001, or l = 100 pc for a CGM size of 100 kpc.

The maximum possible column density N_max is obtained for the highly improbable arrangement of independent clouds in which all clouds lie along the same line of sight; the value in this case is N_max = n_i l N_cl (= 10^3 n_i L for fiducial parameters; n_i is the local number density of the ion of interest within the cloud). As the projected plane is covered by just one cloud, the area covering fraction for such a high column density is f_A = l^2/L^2 = f_v^{2/3} N_cl^{-2/3} (only 10^-6 for fiducial parameters!). The number of clouds along this LOS is N_cl,LOS = N_cl, while any other parallel sightline encounters no cloud. The product of the column density and the area covering fraction is N f_A = n_i f_v L, which is fixed for all cloud arrangements because N f_A L^2 = n_i l^3 N_cl is just the total number of ions in the CGM, which is assumed to be a constant. The configuration where all clouds are lined up next to each other is, however, extremely unlikely.

Fig. 12 shows specific examples of different arrangements of clouds. Panel (a) shows the most probable separation of clouds, which corresponds to an equal separation between them; i.e., the mean distance between clouds is (L^3/N_cl)^{1/3} = L N_cl^{-1/3}. This is because the volume of the CGM available for each non-overlapping cloud is L^3/N_cl. In one of the most likely arrangements, shown in (a), clouds are maximally spread out in all directions (along the LOS and also along both directions perpendicular to it). As an example, we choose the mean separation between the clouds to be three cloud sizes (i.e., N_cl^{-1/3} L/l = 3). Since there are N_cl^{1/3} clouds in a direction, and they are maximally separated in the perpendicular plane (d = 2), the number of clouds overlapping along the LOS is N_cl,LOS = N_cl (l/L)^2 = N_cl^{1/3} f_v^{2/3}. The column density is, therefore, N = (n_i l) N_cl,LOS = n̄_i L, which is nothing but the integral of the global number density along the LOS (see Eqs. 14 & 16) and corresponds to the column density in the mist limit.
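The scalings quoted above, together with the d = 0, 1, 2 arrangements of Fig. 12 discussed next, can be checked with a few lines of arithmetic. The symbol d (number of perpendicular directions over which the clouds are spread) and the compact exponent forms below are our restatement of these relations:

```python
import numpy as np

def cloud_scalings(f_v=1e-3, N_cl=1e6, L_kpc=100.0):
    """Scalings for N_cl identical cubical clouds of side l in a CGM of size L."""
    l_over_L = (f_v / N_cl) ** (1.0 / 3.0)                   # cloud size / CGM size
    out = {
        "l [pc]": 1e3 * L_kpc * l_over_L,                    # 100 pc for fiducials
        "f_A (all clouds aligned)": f_v ** (2.0 / 3.0) * N_cl ** (-2.0 / 3.0),
        "N_max / N_mist": N_cl * l_over_L / f_v,             # = (N_cl / f_v)^(2/3)
    }
    # Spread in d = 0, 1, 2 perpendicular directions (Fig. 12 c, b, a):
    # N(d)/N_mist = f_v^((1+d)/3 - 1),  f_A(d) = f_v^((2-d)/3),  product constant.
    for d in (0, 1, 2):
        N_over_mist = f_v ** ((1.0 + d) / 3.0 - 1.0)
        f_A = f_v ** ((2.0 - d) / 3.0)
        out[f"d={d}: N/N_mist, f_A"] = (N_over_mist, f_A)
    return out

for key, value in cloud_scalings().items():
    print(key, ":", value)
```

For the fiducial parameters this reproduces l = 100 pc, a covering fraction of 10^-6 for the all-aligned configuration, and the constancy of N times f_A across the d = 0, 1, 2 arrangements.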
14 & 16), which corresponds to the column density in the mist limit.Thus, the integral of global number density along the LOS corresponds to the most probable arrangement of independent tiny clouds with minimal overlap along the way and can be taken as the baseline value for comparing with observations.Since every sightline (both along and perpendicular to the LOS) has equal column density in this case, the area covering fraction is 1 (assuming cl > −2 so that all sightlines intercept at least one cloud on average).In this case, on moving the LOS in any direction by one cloud size, the encountered cloud configuration is statistically identical. If clouds are spread out only in one direction in the plane perpendicular to the LOS ( = 1; see Fig. 12 A smaller area covering fraction implies that the value of column density along some sightlines that encounter warm/cold clouds can be much larger than our mist estimate.Similarly, some other sightlines would encounter a significantly lower column.This deviation from the mist estimate is expected to be larger for phases with smaller volume fraction , such as MgII tracing 10 4 K clouds (see Fig. 10;Tab. 3).A large fluctuation in MgII columns (in comparison to OVI which traces a warmer phase) and a small covering fraction implied by data in Fig. 11 can be explained by variable overlap of clouds along various sightlines.The column density can, therefore, be either enhanced or diminished by orders of magnitude compared to our mist estimate.Observations provide a statistical sample of several LOS and contain upper limits that lie below the mist prediction which can be interpreted with the baseline predictions of the mist limit; these sightlines simply do not encounter substantial cold clouds! The UV absorption spectra indicate the presence of several components along the LOS (e.g., Stocke et al. 2013;Werk et al. 2014;Zahedy et al. 2019;Haislmaier et al. 2021), indicating multiple clouds along a typical LOS.In the case of lensing of the background quasar, an estimate of the transverse extent of clouds (e.g., Rudie et al. 2019;Augustin et al. 2021) can provide crucial information on cloud properties.Observational constraints such as these, combined with extensions of the toy cloud model presented here, can provide a wealth of information about the properties of the CGM clouds in different phases despite being unresolved. Future directions In this paper, we have only focused on the emission and dispersion measures and column densities, but observations provide a wealth of other diagnostics including kinematic information.Our models can be extended to include velocity distributions, which may be drawn from PDFs with 1-point and 2-point statistics consistent with observations and galaxy formation simulations.In the following, we briefly explain the utility of a library of different CGM models that can be compared with independent observational constraints. Towards a library of models In this paper, we introduced probabilistic models of the CGM, including a new three-phase model.We used these models to predict various observables and compared them with observations.A combination of various observational constraints helps us quantitatively assess various models (see Tab. 4).This motivates an effort towards developing a library of models that can be continuously expanded to include the existing and new CGM models of varying complexity, from simple 1D profiles to 2D axisymmetric rotating models (e.g., Sormani et al. 
In addition to the CGM observables highlighted in this paper, we aim to add more observables such as the scattering measure (e.g., Ocker et al. 2021), which is sensitive to the size of CGM clouds, and the thermal Sunyaev-Zeldovich y-parameter (e.g., Bregman et al. 2022), which is an excellent tracer of the CGM mass. We can also include models of magnetic fields and turbulence in the CGM. These models can constrain the turbulent and magnetic support in the CGM when compared against the Faraday rotation measures observed from the CGM of external galaxies (Hafen et al. 2024; Böckmann et al. 2023).

Such a library of models and observable predictions would enable one to quickly infer the physical properties and chemical composition of the CGM from observations, constrain model parameters, break degeneracies across models, and rank models according to the number of independent observations that they can match. We aim to expand our public code repository MultiphaseGalacticHaloModel (link provided in the Data availability section 7) into a large library of CGM models and observables.

The following rows, reproduced from Tab. 4, list ranges of ion column densities (in cm^-2), with the observed range first and the ranges predicted by the different models after it (the column headers are given in Tab. 4):
OVI: 4.5 × 10^13 − 1.4 × 10^15; 1.1 × 10^13 − 1.7 × 10^15; 4.6 × 10^12 − 8.7 × 10^14; 4.2 × 10^13 − 1.4 × 10^14; 2.8 × 10^13 − 3.7 × 10^14; 1.5 × 10^13 − 1.9 × 10^15
NV: 4.9 × 10^13 − 1.5 × 10^14; 5.4 × 10^11 − 1.5 × 10^14; 1.2 × 10^11 − 6.2 × 10^13; 4.1 × 10^12 − 1.4 × 10^13; 2.8 × 10^12 − 3.7 × 10^13; 1.5 × 10^12 − 1.9 × 10^14
MgII: 2.4 × 10^12 − 1.8 × 10^13; 4.1 × 10^5 − 1.8 × 10^9; 2.4 × 10^3 − 4.3 × 10^8; 5.7 × 10^12 − 1.9 × 10^13; 3.9 × 10^12 − 5.1 × 10^13; 2.1 × 10^12 − 2.6 × 10^14
Note that for ion columns, only detections are listed in the second column, "Observed value". In the absence of detections, only upper or lower limits exist and no specific observed values are available (triangles in Fig. 5). For NV, mostly upper limits exist among the detections.

We discuss the CGM model proposed in Yamasaki & Totani 2020 as an example to demonstrate the insights that can be gained by comparing models to multi-wavelength observations. In their model, the disk emission at the virial temperature of the spherical CGM is approximately an order of magnitude higher than the emission from the CGM gas along any line of sight (e.g., see their Fig. 1). However, the disk contribution in our models is either slightly higher than or comparable to the spherical CGM (see Fig. 8). Considering only Milky Way observations, it is not possible to break the degeneracy between our models and the Yamasaki & Totani 2020 model. The low disk contribution to the EM in the Yamasaki & Totani 2020 model is a result of a low CGM central density of 3.7 × 10^-4 cm^-3, in contrast to our models (Γ = 1, 5/3 unmodified profiles) with a CGM density at 10 kpc of ∼ 10^-3 cm^-3 (see Fig. 4). In the observations of external CGMs, for most sightlines the contribution of the central disk is negligible, and the density of the CGM in the Yamasaki & Totani 2020 model is too low to produce a sufficiently large column of ions like OVI and NV. We can, therefore, justify the choice of parameters favoring a higher-density CGM, as used in our models, by considering UV absorption studies of the CGM of Milky Way-like external galaxies. We also have to bear in mind that the CGM for the same halo mass can show a large scatter in physical properties (Ramesh et al. 2023).
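As a toy illustration of ranking models by the number of independent observables they match, one can simply count overlaps between observed and modelled ranges; the model labels below are placeholders, since the actual column headers belong to Tab. 4:

```python
def overlaps(a, b):
    """True if two closed intervals [a0, a1] and [b0, b1] overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

# Observed and model column-density ranges (cm^-2) copied from the Tab. 4 fragment.
observed = {"OVI": (4.5e13, 1.4e15), "NV": (4.9e13, 1.5e14), "MgII": (2.4e12, 1.8e13)}
model_A  = {"OVI": (1.1e13, 1.7e15), "NV": (5.4e11, 1.5e14), "MgII": (4.1e5, 1.8e9)}
model_B  = {"OVI": (1.5e13, 1.9e15), "NV": (1.5e12, 1.9e14), "MgII": (2.1e12, 2.6e14)}

for name, model in [("model A", model_A), ("model B", model_B)]:
    score = sum(overlaps(observed[ion], model[ion]) for ion in observed)
    print(name, "matches", score, "of", len(observed), "ion ranges")
```

A hot-phase-only model fails the MgII comparison in this kind of tally, which is the sense in which a library of models can be ranked against multi-wavelength constraints.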
Fast plasma modeling repository: AstroPlasma Generating observables like dispersion/emission measures or column densities of ions from fluid fields like density and temperature requires plasma modeling.One of the goals of our code base is to be computationally efficient for quick exploration of the parameter space of various CGM models and to compare with different observations.AstroPlasma is a public code repository written as a standalone package that can provide functions that generate ioniza-tion properties (e.g., various ion fractions, mean molecular weight) for conditions in the CGM. AstroPlasma uses a large database of pre-run CLOUDY (Ferland et al. 2017) models to interpolate the ionization properties for a range of CGM plasma conditions.This database can be expanded or updated as needed.We are able to achieve a low computational cost for evaluating observables by eliminating the need for onthe-fly calculation of expensive chemical networks.AstroPlasma instead looks up its database of CLOUDY models to interpolate the plasma properties across densities, temperatures, and metallicities.Currently, our database has plasma conditions for collisional ionization equilibrium (CIE) and photo-ionization equilibrium (PIE) in the presence of Haardt-Madau UV background (Haardt & Madau 2012) at different cosmological redshifts.We assume optically thin conditions and do not consider any radiative transfer effects.Appendix A illustrates some common usage of AstroPlasma. Comparison with related works Single phase models (e.g., Maller & Bullock 2004;Sharma et al. 2012b;Miller & Bregman 2013;Nakashima et al. 2018;Voit 2019) typically used to model the hot phase are insufficient even qualitatively to reproduce the properties of the observed multiphase CGM.FSM17 made an advance towards modeling the warm phase traced by OVI by adding a phase at 10 5.5 K with a log-normal volume distribution over temperature on top of the isothermal hot phase.Later, Faerman et al. (2020) reproduced OVI column densities by assuming an isentropic, hydrostatic hot gas profile (with significant non-thermal pressure support) for which the outer temperatures are small enough to reproduce OVI column densities.Since this model cannot produce cold gas, Faerman & Werk (2023) recently extended their isentropic model with non-thermal pressure support (due to turbulence, cosmic rays, and magnetic fields) to include a cold component at ∼ 10 4 K in photo-ionization and thermal equilibrium.The cold phase with a small volume fraction ( () ∼ 0.01) and a nonthermal pressure fraction higher than the hot phase could reproduce the ballpark values of the column densities of the ions tracing the cold phase.Because of the lack of ∼ 10 5 K gas in this model, it generally under-predicts the columns of intermediate ions such as CIV and SiIV.Based on PDFs motivated by a large spread in CGM temperatures at most radii in observations and simulations, our approach is fundamentally different and more akin to FSM17. Ramesh et al. ( 2023) have systematically analyzed the CGM properties of 132 Milky Way-like halos in Illustris TNG50-1 cosmological galaxy formation simulations.They find a large spread in CGM properties of their sample (e.g., mass fraction in various CGM phases).The CGM properties depend on the specific star formation rate of the galaxy.Similarly, there is a systematic increase in the CGM X-ray luminosity with an increasing stellar mass of the central galaxy.The temperature-density PDF (the colored 2D histograms shown in Fig. 
9) of their CGMs is affected by ongoing AGN feedback (see their Fig.9).Inputs from their statistical study can be incorporated into our formalism, largely motivated by a large spread of densities and temperatures in CGM simulations (see also Esmerian et al. 2021;Fielding et al. 2020, etc.), and variations in CGM properties over a large range of halo masses and redshifts can be quickly calculated. In between the simple models of hot CGM and complex cosmological simulations lie the high-resolution idealized simulations that focus on the physics of flows and thermodynamics around cold gas moving through the hot CGM (e.g., Armillotta et al. 2016;Gronke & Oh 2018;Kanjilal et al. 2021;Mohapatra et al. 2022Mohapatra et al. , 2023;;Yang & Ji 2023).The boundary layers around such clouds are also multiphase with a characteristic volume PDF covering a broad range of temperatures (e.g., see Fig. 5 in Kanjilal et al. 2021, Fig. 4 in Mohapatra et al. 2023) from ∼ 10 4 K to 10 6 K.The relation between density/temperature fluctuations and turbulent Mach number in turbulent cooling layers is fundamentally different from isotropic homogeneous turbulence.For the latter, the rms fluctuations in ln /⟨⟩ scale as M 2 (M is turbulent Mach number) but for radiative turbulence density fluctuations are much higher due to radiative cooling (e.g., Mohapatra & Sharma 2019).The multiphase CGM can be thought of as a superposition of several such clouds, with their radiative boundary layers at intermediate temperatures and the confining pressure decreasing away from the center.Our formalism based on PDFs is also capable of capturing the temperature distribution of such boundary layers.Thus, a description based on PDFs, which is consistent with both the small-scale structure of the CGM clouds and galaxy formation simulations, seems appropriate for studying the CGM as a whole. SUMMARY Here, we summarize the most important results and implications of our work. (i) We have highlighted the need to move beyond simple profiles to model the CGM.Simulations, both idealized and cosmological, indicate the cospatial presence of cold, warm, and hot gas.We have shown that a probabilistic model of the multiphase CGM can reliably explain results from multi-wavelength observations (see Figs. 5,6,7,11).These models, despite being more complex, still remain largely analytic.The standout improvement over the existing CGM models like those introduced in Faerman et al. 2017Faerman et al. , 2020;;Faerman & Werk 2023 is that our probabilistic model can simultaneously explain the presence of most ions observed in UV absorption spectra of quasar sightlines passing through the CGM of intervening galaxies.Additionally, our probabilistic models match the observations of the Milky Way CGM well.These include dispersion measure from a nearby FRB (Bhardwaj et al. 2021) and X-ray emission measure in the soft X-ray bands (Kaaret et al. 2019;Das et al. 2019a;Bluem et al. 2022;Ponti et al. 2023b). (ii) We clarify an apparent confusion in the identification of physical sources of diffuse soft X-ray emission spectra from the Milky Way.We highlight that the observed X-ray emission from the gas at the virial temperature of the CGM (∼ 0.2 keV) may be dominated by a dense disk rather than the spherically symmetric CGM (see Fig. 8). 
The emission from this dense disk, in addition to the CGM, is degenerate in the isothermal APEC modeling of the X-ray spectrum along a single sightline. This degeneracy can, however, be lifted if the spatial variation of the emission measure across multiple sightlines is considered. The disk can contribute to both the ∼ 0.2 keV phase and the newly introduced phase at ∼ 0.7 keV (see Das et al. 2019a; Bluem et al. 2022; Ponti et al. 2023b). Since only the ∼ 0.7 keV component is referred to as the Galactic coronal disk in Ponti et al. 2023b, we highlight the possibility that the ∼ 0.7 keV gas in the disk is but a minor contributor to the X-ray emission, and a significantly larger emission comes from the ∼ 0.2 keV phase of the disk. It is also physically plausible to have a dense disk with a broad spread of temperatures between 0.2 keV and 0.7 keV, but this gets decomposed into emission from two components due to the specifics of APEC modeling (e.g., Vijayan & Li 2022). Further investigation of this uncertainty is beyond the scope of this work.

(iii) We address the plausible reason for large variations in the observed cold ion column densities across multiple quasar sightlines through different external galaxies (e.g., see Fig. 11). Going from the warm to the cold phase, the gas becomes progressively less volume filling. We introduce scaling relations in terms of the number of clouds (N_cl) and their volume filling fraction (f_v) that can be qualitatively motivated from the distribution of cold/warm clouds within the CGM (see section 4.2 & Fig. 12). We make baseline predictions of the column densities of ions in the mist limit.

(iv) Our work motivates a library of CGM models and synthetic observables matched against observations (e.g., Tab. 4). Given recent advances across diverse CGM observations across multiple wavelengths, directly comparing a wide range of models to different observations is warranted. When confronted with new observations, a comparative framework can reveal model strengths, weaknesses, and biases. Ultimately, this exercise will help us accurately estimate important physical parameters, such as the fraction of baryons in various phases of the CGM.

A library of models and observables will enable benchmarking against observations and elucidate model limitations and assumptions. We, therefore, create a publicly available code repository called MultiphaseGalacticHaloModel and a computationally inexpensive plasma modeling database called AstroPlasma to continually encompass existing and future CGM models and compare them with the latest observations.

ACKNOWLEDGEMENTS

AD acknowledges support from a Prime Minister's Research Fellowship (PMRF) from the Ministry of Education (MoE), Govt. of India. AD acknowledges Gurkirat Singh for his efforts in building and testing AstroPlasma. AD acknowledges Dylan Nelson for his help in processing and analyzing Illustris TNG50 data dumps. AD acknowledges Sayak Dutta, Zhijie Qu (屈稚杰), and Sowgat Muzahid for their help in accessing and analyzing the observational data. AD also acknowledges Yakov Faerman, Gabriele Ponti, Sanskriti Das, Hsiao-Wen Chen (陳曉雯), Andrea Afruni, Priyanka Singh, Kartick Sarkar, Sukanya Mallik, and our anonymous referee for their useful comments and discussions. We thank Gary Ferland and collaborators for the CLOUDY code. This research also benefited from discussions at the Fundamentals of Gaseous Halos program (Halo21), which was supported in part by grant NSF PHY-2309135 to the Kavli Institute for Theoretical Physics (KITP).
DATA AVAILABILITY We have made all the codes and data used in this paper public.The CLOUDY-like plasma modeling tool that we developed is hosted in the GitHub repo AstroPlasma 10 for general use.All the CGM models and the observables used in this work are hosted as a part of an expanding library of CGM models in the GitHub repo MultiphaseGalacticHaloModel. 11Any other relevant data associated with this article are available to the corresponding author upon reasonable request. where 0 is the constant number density of the CGM gas in the one-zone model, 0 and CGM are the inner and outer radii of the CGM respectively, and is the power-law index, i.e., () ∝ − .We stick to the FSM17 prescription for CGM , i.e., CGM = 1.1 200 . Assuming that the ion densities also follow power-law profiles ( possibly different power-law indices for different phases), it is straightforward to calculate the column density () of any ion using the following integral at any given impact parameter , Substituting Eq.B1 into Eq.B2, we get the following expression containing transcendental functions, 1 2 , − 1 2 ; + 1 2 ; where Γ denotes gamma function and 2 F1 is the Regularized hypergeometric function. 12 The density profile introduced in Eq.B1 can also be thought of as a model for the Milky Way CGM.Just like Eq. B2, which applies for an external galaxy, we make equivalent estimates for Milky Way observables from the solar system location.These observables include the emission and dispersion measures generated from our model, which can then be directly compared with Milky Way observations along any line of sight. We denote as the distance between an observer at the position of the solar system and any point along a sightline in Galactic coordinates (, ).This same point has spherical coordinates (, , ) with respect to the Galactic center.Converting between spherical and Galactic coordinates, sin = cos , and using the law of sines for a triangle (see Fig. where 0 is the distance between the sun and the Galactic center (≈ 8 kpc).Eq.B4 combined with where 0 is the distance between the sun and the Galactic center (≈ 8 kpc), can let us express , and in terms of , , and 0 . Let the function [()] ( which implicitly depends on its distance from the Galactic center) be any observable from our spherically symmetric model integrated along a line of sight, e.g., () () for emission measure or () for dispersion measure.where ⊥,min = 0 √ 1 − cos 2 cos 2 is the minimum perpendicular distance of a sightline along (, ) from the Galactic center (see Fig. BA1).Note that the sightlines towards the Galactic center (first and fourth quadrants) have an additional contribution from smaller .Eq.B6 can be numerically integrated to estimate the emission and dispersion measures or surface brightness for different CGM profiles;14 e.g., a power-law (with index ) profile for our one-zone, three-phase model.A comparison of this model with observations is compiled in Tab. 4. APPENDIX C: IONIZATION & COOLING FUNCTION Here we discuss the effects of photo+collisional ionization on different ion fractions, particularly OVI and MgII.We assume the Haardt-Madau extragalactic UV photo-ionizing background (Haardt & Madau 2012) at a redshift of 0.2.As illustrated in Fig. 
CA1, in the absence of any photo-ionizing background radiation, the ionization fractions of both OVI and MgII are independent of gas density.The ionization fraction of OVI and MgII are oppositely affected by the photo-ionizing background.OVI is an intermediate ion (tracing warm gas ∼ 10 5.5 K) and its ionization fraction is significantly enhanced at temperatures ≲ 10 5.5 K by the photo-ionizing background.On the other hand, since photo-ionizing background increases the overall ionization of the plasma, it depletes the amount of low ions like MgII (tracing cold gas ∼ 10 4 K), especially at low densities ≲ 10 −4 cm −3 .Further, the strength of the ionizing background radiation increases with redshift, and its effect is also illustrated in Fig. CA1 (see Strawn et al. 2022 andFig. 5 from Faerman et al. 2020 for more discussion on this).Now we show the CLOUDY (Ferland et al. 2017) generated equilibrium cooling function used in this work.For fast computation, we use the CLOUDY generated equilibrium cooling table for plasma at solar metallicity and use an approximate prescription, described in Eq.C1, to scale the cooling value to different metallicities.The robustness of this approximation (C1) is illustrated in the top panel of where < 2 × 10 6 K log 10 ( ) −log 10 2×10 6 log 10 (8×10 7 )−log 10 (2×10 6 ) ; 2 × 10 6 K ≤ ≤ 8 × 10 7 K 1; > 8 × 10 7 K . Because of photo-ionization, the cooling function can be significantly affected by a change in the density of the gas, as illustrated in the bottom panel of Fig. CA2.For all the cooling curves used in our work, we set = 2.0 × 10 −5 cm −3 , roughly the average density of our TNG50 halo (see Fig. 9).This paper has been typeset from a T E X/L A T E X file prepared by the author. Figure 2 . Figure2.Cartoon illustrations of various kinds of modifications to introduce warm gas in any shell of our CGM model: the warm phase condenses out of the hot phase and is either (a) isochoric or (c) isobaric relative to the hot phase.Being isobaric with the hot phase, the warm phase has a higher density and occupies a smaller volume, as indicated by the darker shade of blue in (c).The warm gas may occupy only a part of the shell volume (as in (a), (c), and (d)) rather than being distributed uniformly (the mist limit; shown in (b)).The inset squares in the top right corner of each sub-figure indicate the densities of warm gas in different cases. ⊙ ] 7.9, 1.4 × 10 10 2.5, 1.6 × 10 10 † 200 is the radius within which mean density of the dark matter halo is 200 times the critical density of the present Universe.† † We use the metallicity profile shown in Fig.3of FSM20.§ § Temperature spread denoted as in FSM17.‡ Total volume fraction in the warm phase within the CGM. Figure 3 . Figure 3.The volume probability distribution function (PDF) of temperature in a typical CGM shell at = 20 kpc for a = 1 polytrope (isothermal FSM17 model).The light red curve shows the unmodified log-normal PDF of the hot phase about a median temperature (ℎ) Figure 4 . 
Figure 4.The different number density profiles for our isothermal ( = 1 polytrope following FSM17; left panel) and isentropic ( = 5/3 polytrope following FSM20; right panel) CGM models (see section 2.1 & 2.2).The purple dot-dashed lines in both panels show the unmodified profiles.Additionally, the inset in the right panel shows the unmodified temperature profile; the median unmodified temperature for the isothermal model (left panel) is 1.5 × 10 6 K.The solid lines show density profiles with isochoric redistribution of the warm phase while isobaric redistribution is shown using dashed lines.The global average density profiles (indicated by ⟨ ⟩ ) are shown in red and blue, whereas the local average density profiles (indicated by ⟨ ⟩) are shown in orange and cyan (see point (iv) in section 2.1 for the definitions of local and global average densities).The average global density profile for the warm phase in the right panel shows a dip towards the center because cool / ff is larger there and only a small amount of unmodified gas lies below our chosen cool / ff = 4 threshold.In both the panels, the global density profiles for isochoric (solid red + blue) and isobaric (dashed red + blue) modifications coincide because the mass in each phase is the same in these cases.For / 200 ≳ 0.25 the temperature of the unmodified isentropic profile (right panel) approaches the chosen warm phase temperature Figure5.The column density of different ions as a function of the impact parameter (normalized by the virial radius) from our isothermal ( = 1 polytrope) and isentropic ( = 5/3 polytrope) models (see Fig.4for density profiles).In all the panels, the solid lines refer to isochoric modification, while the isobaric modification is shown using the dotted lines.Using the dashed lines, we also show the column density estimates generated from the unmodified (isothermal and isentropic) profiles for reference.Orange lines are for the isothermal model and cyan for isentropic.Top panels: The column densities of ions tracing the warm phase, OVI on the left and NV on the right.The data points are inferred from absorption spectra of quasar sightlines through external galaxies (OVI: COS-Halos[Tumlinson et al. 2011a;Werk et al. 2016] and eCGM surveys[Johnson et al. 2015] in solid black markers, CGM 2 survey[Tchernyshyov et al. 2022] in open black markers, and CUBS VII survey[Qu et al. 2024] in solid gray markers; and NV: Werk et al. 2013 in solid black markers).The (inverted)-triangles are (upper)-lower limits.Bottom panels: The column densities of ions tracing the hot phase, OVII on the left and OVIII on the right.The lone observation data point for OVII for an external galaxy is from Mathur et al. 2023 (cf.sections 2.1 & 3.1 in their paper), which also produces OVIII column density of 7.8 ± 2.6 × 10 15 cm −2 (not marked; private communication, Sanskriti Das).Due to limited observations for external galaxies, we only indicate the range estimated from the Milky Way CGM for the OVII and OVIII columns (in gray bands; from Chandra observations by Gupta et al. 2012 and XMM-Newton observations byFang et al. 2015;Das et al. 2019a).Since the sun is only 8 kpc from the Galactic center, the column densities for the Milky Way (×2; shown in gray bands) provide an estimate on the upper limit of the ion columns in Milky Way-like external galaxies. Figure 6 . 
Figure6.The Molleweide maps of observables in Galactic coordinates (, ) from our models (namely, = 1 [upper panels] and = 5/3 [lower panels] polytropes with isochoric modification).The left and middle columns display our modeled Milky Way emission measure, with and without a coronal disk respectively, as observed from the position of the solar system.The right column shows the corresponding dispersion measure.These maps are generated from different CGM models discussed in section 2 (gas profiles shown in Fig.4).Eq. 19 models the coronal disk component.The red crosshair in the EM maps marks the eFEDS sightline (, ≈ 230 • , 30 • ) observed byPonti et al. 2023b and the estimated EM is 2.937 × 10 −2 pc cm −6 .The red crosshair in the DM maps in the right column marks the sightline (, = 142.19 • , 41.22 • ) of a nearby FRB, in the M81 galaxy(Bhardwaj et al. 2021).The DM estimated for the Milky Way halo along this sightline is 30 pc cm −3 .The region near the Galactic center is hatched in the maps to indicate that predictions would be unreliable there.This area is contaminated by eROSITA bubbles(Predehl et al. 2020) and other features, as well as the central cusp in the number density profile (Fig.4). Figure 7 . Figure 7. Synthetic surface brightness along , = 230 • , 30 • (eFEDS region) from our polytropic models of the CGM (see Fig. 4 and section 2 for details).The cyan curve is for a = 1 polytrope while the orange curve is for = 5/3 polytrope, both of which consider isochoric modifications.The vertical olive (0.3 − 0.6 keV) and purple (0.6 − 2.0 keV) bands mark the energy bands used for the eFEDS survey of the Milky Way CGM in soft X-rays (Ponti et al. 2023b).The observed surface brightness towards the eFEDS field of the CGM in 0.3 − 2 keV is 2.05 × 10 −12 erg cm −2 s −1 deg −2 (Tab.4 in Ponti et al. 2023b).The surface brightness in the same energy band (0.3 − 2.0 keV) obtained from = 1 and = 5/3 polytrope models with the addition of a coronal disk (see Eq. 19) is 1.07 ×10 −12 and 9.4 ×10 −13 erg cm −2 s −1 deg −2 , respectively.The arrows indicate the surface brightness levels observed in each band, while the horizontal lines show the model prediction. 2).Along , = 142.19 • , 41.22 • ,Bhardwaj et al. 2021 estimate the DM of the Milky Way (using a nearby FRB in the M81 galaxy) to be 30 pc cm −3 .The isothermal models exhibit better agreement with the observed EM and DM values, as shown quantitatively in Tab. 4. The model EM values, dominated by the disk, are smaller by a factor of 2-3 than observations (perhaps due to additional ISM contribution along this sightline).The DM values, dominated by the spherical CGM, are in very good agreement.Because of a weaker sensitivity to the dense ISM, the DM is a more constraining probe of the CGM than EM. Figure 8 . Figure 8. Molleweide projection in Galactic coordinates (, ) of the ratio of CGM halo to disk emission measure (EM) for = 1 (top panel) and = 5/3 (bottom panel) polytropes, both with isochoric modification.The red crosshair marks the eFEDS sightline , = 230 • , 30 • .Along this sightline, the halo to disk EM is ≈ 0.7 and 0.25 for the = 1 and the = 5/3 polytropes, respectively.The neighborhood of the Galactic center is again hatched (like 6) due to possible contamination by the eROSITA bubble.In the = 1 map, the CGM dominates towards the Galactic center because of the cusp-like density profile in this case (see left panel of Fig.4). Figure 9 . 
Figure 9.The central plot in this figure shows the volume-weighted 2D histogram of hydrogen number density and temperature of the CGM for our chosen halo from the Illustris TNG50-1 cosmological simulation (halo ID-110, snap-84; only 'non-star forming' gas within the virial radius is included).The white rotated ellipses indicate 1 and 2 contours for each phase (cold, warm, and hot) obtained by fitting Eq. 20 and the black dashed contours correspond to the total PDF including all phases.The top and right panels show the marginalized 1D PDFs in density and temperature, respectively.The thick solid line in yellow shows the marginalized PDFs from the simulation data.The thin solid lines in purple are from our best-fit model, with contributions from individual phases shown by dotted lines in colors listed in the legend.The best-fit parameters of our model are listed in Tab. 3. Figure 10 . Figure 10.The 1D volume PDFs in temperature marginalized over density.The dashed lines are PDFs from the Illustris TNG50-1 halo (halo ID-110, snap-84).The solid lines show three-phase model PDFs (Eqs.22, 26 and 28) with best-fit parameters listed in Tab. 3. Note that the three-component Gaussian model captures the qualitative trends correctly.The hot, warm, and cold volume PDFs peak at ∼ (10 5.8 , 10 5.2 , 10 4.1 ) K respectively. Figure 11 . Figure 11.Column densities of OVI and MgII ions as a function of impact parameter predicted by our best-fit three-phase model.Observed columns from different surveys are shown with circles (detections), upper triangles (lower limits), and lower triangles (upper limits).Observations from different surveys that are shown here are from the COS-Halos survey (Tab.3 from Werk et al. 2013) in cyan/orange filled markers, CGM 2 survey (Tab.3 from Tchernyshyov et al. 2022) in open orange markers, MAGiiCAT survey (Tab. 1 from Nielsen et al. 2016) in open cyan markers, and CUBS survey (Tab.A1 & Tab. 1 from Qu et al. 2023 for MgII and Tab.B1 from Qu et al. 2024 for OVI) in filled cyan/orange markers with black borders.The galaxies from the COS-Halos and the MAGiiCAT surveys have mass and size comparable to the Milky Way (c.f.Tab. 2 from Tumlinson et al. 2013 for COS-Halos & Tab. 1 from Nielsen et al. 2016 for MAGiiCAT).The one-zone assumption (solid lines) can be relaxed by assuming a power-law profile with index (Eq.B1; see Appendix B for details), keeping the total gas mass in the CGM constant (shown by the dashed [ = 1] and dotted [ = 2] lines).The column density of the OVI is better modeled with a steeper profile ( = 2). Figure 12 .. 
Figure 12.A portion of the CGM with clouds (a) spread out in two directions perpendicular to the LOS ( = 2), (b) spread out in one direction perpendicular to LOS ( = 1), and (c) no spread perpendicular to LOS ( = 0).Each blue square represents cubical clouds of size .Each 3 × 3 layer shows the arrangement of cubes of side −1/3 cl (mean distance between clouds for the maximally separated, most probable arrangement), each of which contains just one cloud.In all cases shown (a), (b), and (c), every sightline covered by a cloud will have overlapping clouds if the pattern is repeated along the LOS.There are −/3 number of clouds spread maximally in dimensions perpendicular to LOS (see section 4.2).The number of clouds aligned along LOS with non-zero column is thus 1/3 cl / −/3 and the column density is (b)), the column density is = 2/3 and the number of clouds along the LOS is cl,LOS = area covering fraction = 1/3 cl / = 1/3 .In all cases, the product of the area covering fraction and the column density is a constant equal to , as explained in the previous paragraphs.The clouds overlap in all dimensions across the LOS in Fig.12(c) corresponding to = 0 and the column density is 1/3 , such that every sightline with non-zero column has 1/3 cl clouds.Therefore, in general, for any configuration with spread in number of directions ( = 0/1/2), the total number of clouds along the LOS is cl,LOS = gives the column density = cl,LOS × = (1+)/3 . 12I; Figure BA1.The relevant geometry to convert the LOS integral (along , distance along LOS) to an integral involving (distance from Galactic center).For any Galactic coordinates (, ), the angle satisfies cos = cos cos . Figure CA1 . Figure CA1.The ion fraction of OVI (left panels) and MgII (right panels) as a function of hydrogen number density and gas temperature generated using AstroPlasma.The bottom row shows equilibrium values with only collisional ionization (CIE) while the top two rows show equilibrium values with photo+collisional ionization (PIE) in Haardt-Madau extragalactic UV background (Haardt & Madau 2012) at = 0.2 and 0.9.The strength of the UV background increases with redshift. Figure CA2 . Figure CA2.Equilibrium cooling functions generated using CLOUDY 2017 spectral synthesis code(Ferland et al. 2017).Different cases include collisional+photo-ionization in the presence of Haardt-Madau extragalactic UV background(Haardt & Madau 2012) at = 0.2.Top panel: Demonstration of the robustness of our approximate prescription (Eq.C1) that scales the cooling function for different metallicities.The approximate cooling functions (dot-dashed lines) closely follow the actual CLOUDY generated cooling curve (solid lines).Bottom panel shows the equilibrium cooling functions at different hydrogen number densities.Due to the presence of the photo-ionizing background, the cooling functions become weakly dependent on the gas density.For both panels and across this work, we use the cooling function fixed to the average density of our fiducial TNG50 halo, = 2.0 × 10 −5 cm −3 . Table 1 . Symbols & notation used for probabilistic models Table 2 . Parameters of the two-phase CGM models Table 3 . Parameters of the three-phase CGM model *
22,603
2023-09-26T00:00:00.000
[ "Physics" ]
Development and application of an Integrated Business Model framework to describe the digital transformation of manufacturing - a bibliometric analysis

ABSTRACT The digitalisation trend is affecting the manufacturing industry through the adoption of several emerging technologies that can increase the efficiency and output of production processes and operations. A growing body of literature shows that this trend demands a structural rethink of how companies do business. However, there is a lack of holistic contributions describing how aspects of manufacturing digitalisation align with the Business Model Innovation process. This study uses a bibliometric mapping approach to analyse the literature on manufacturing digital transformation through the Integrated Business Model (IBM) lens. The results identify the major research topics discussed in the analysed domain and propose an enriched IBM framework with specific descriptions and connections among the components and their relative strengths. Holistically, the resulting enhanced model may ultimately assist practitioners in understanding the innovation process of the BM triggered by technological shifts in their manufacturing operations, enabling an alignment of the manufacturing strategy with the IBM's components.

Introduction

The manufacturing industry is currently experiencing profound changes. In particular, the digitalisation trend is affecting the manufacturing domain, bringing several emerging digital technologies that deeply affect a manufacturing company's operations and production processes in terms of increased efficiency and flexibility (Björkdahl, 2020; Pereira & Romero, 2017). These emerging digital technologies, such as the Internet of Things (IoT), Cloud Computing, and Big Data and Analytics (Paschou et al., 2017), are considered the main technological enablers of the fourth industrial transformation, labelled Industry 4.0 (I4.0). In detail, I4.0 embraces these emerging digital technologies, leading to the digitalisation of the current industrial domain, i.e. a digitalised and automatised production (Pereira & Romero, 2017) as well as a more integrated value chain (Björkdahl, 2020). Thus, the full implementation of I4.0 strictly depends on the successful adoption of emerging digital technologies (Micheler et al., 2019).

Digitalisation is one of the first technological trends at the base of the fourth industrial revolution (Zangiacomi et al., 2020), driving substantial changes in production systems that are mainly IT-driven (Lasi et al., 2014). The literature thus shows that I4.0 is primarily based on technology-push innovation in the application-based context and the related operational domain (Frank et al., 2019). This work moves away from this trend and aims to support the investigation of the long-term impact of digitalisation in the challenge-driven research context. Such an objective involves strategic considerations and results in radical changes in the mechanisms of creating, delivering, and capturing value (Björkdahl, 2020; Mugge et al., 2020), i.e. the holistic view of the Business Model (BM).
The concept of the BM in technical literature is often associated with the economic activity that supports the transaction: pay-per-use, subscription, lease, and ownership. This view is suitable for predictive research efforts connected with the operational dimension. On the other hand, the digitalisation of manufacturing cannot be analysed through such an interpretation of the BM concept. As previously mentioned, manufacturing digitalisation is a technology trend at the base of I4.0. The technology demands an application, i.e. a way to propose, create, and capture the value embedded in the technology to create innovation (Chesbrough, 2002). The adoption of digital technologies in manufacturing thus represents a challenge that involves a structural rethink of how companies do business (McKinsey, 2015), i.e. finding the proper application for the technology. Strategic literature presents a holistic concept of the BM as a synthesis of this application process. In view of the above, the present work leverages a holistic understanding of the business model as described in strategy-related research (Teece, 2010), rather than a specific set of processes related to value capturing, as seen in application-driven literature. In particular, this paper uses the concept of the BM as a descriptive framework composed of several aspects that reflect how firms create, deliver, and capture value. Among the existing BM frameworks, the Integrated BM (IBM) introduced by Wirtz et al. (2016) is a generic model that provides a comprehensive picture of the essential sub-models, i.e. the components of a BM, divided into strategy, customer and market, and value creation aspects. The IBM thus provides a baseline for discussing the different BM components in the analysed domain, and it is taken as the reference BM framework in this work.

In view of the above, the manufacturing industry's digital transformation can be seen as a dynamic transition process that initiates the transition between two stable BM states - the current state and a future state (Maffei et al., 2019). The study of this dynamic process requires a comprehensive characterisation of the BM in all its aspects in order to describe the future BM state. However, the literature in the focal domain lacks a complete description of the BM. On the one hand, there are abundant contributions focusing on the application of digital technologies in the context of manufacturing. On the other hand, the related description of the underpinning BM in manufacturing is fragmented. The lack of a shared understanding of the BM prevents authors from building upon each other's contributions. This work addresses such a knowledge gap by analysing the vast and fragmented body of literature through the lens of a holistic understanding of the BM concept and extracting the valuable patterns in it, which are presented through a shared framework enabling an increment of knowledge in the domain. The resulting framework thus proposes a vertical characterisation of the BM development domain based on state-of-the-art research, given the recent manufacturing trend towards digitalisation.

Given the identified gap and the aim of this work, the following research questions will be addressed: (a) How are the IBM components embodied in the current literature on digital manufacturing transformation? And how can such contributions be used to vertically enrich the IBM in this area of investigation? (b) What relations can be identified among the IBM components within the analysed domain? And what is their strength?
A semantic analysis of the literature related to the digitalisation of the manufacturing industry was performed using a bibliometric mapping approach to address the research questions.The results of the analysis highlight the major research topics discussed in the area being investigated.These topics are classified according to the IBM framework providing a twofold outcome.On the one hand, the organised topics enrich the existing descriptions of IBM's components with details regarding the domain of manufacturing digital transformation.This, in turn, provides a more vertical coverage of the IBM framework.On the other hand, the classified topics highlight which IBM component the research effort in the analysed domain is focused on.This, in turn, provides indications of the current research pattern contributing to the display of an explanatory model that includes aspects of the digital transformation process of manufacturing.Moreover, the semantic analysis highlights the relations among the components and the relative strengths of such connections. BM and Business Model Innovation (BMI) definitions The definitions of BM and BMI are presented here because they are the basis of the whole literature section.Furthermore, the clarified description of BM and BMI converge in the framework proposed by (Maffei et al., 2019).This framework is the main reference for understanding those concepts, and it is used in this article to discuss the manufacturing industry's digital transformation. According to (Teece, 2010), the BM is an 'architecture of the firm's value creation, delivery, and appropriation mechanisms'.The extant literature has investigated which elements reflect the value creation, delivery, and capture of a BM (e.g.(Foss & Saebi, 2017;Hamel, 2001;Osterwalder et al., 2010;Osterwalder, 2004)); thus, a content-related perspective of the BM emerges (Wirtz et al., 2016).Building on the BM definition, (Foss & Saebi, 2017) define BMI as 'designed, novel, and non-trivial changes to the key elements of a firm's business model and/or the architecture linking these elements'.Given the above, the main difference between the concept of BM and BMI can be described as follows.On the one hand, the BM is a static concept that represents the layout of the designed elements at a specific moment.On the other hand, the BMI is a dynamic concept where the focus shifts depending on how a BM evolves (Demil & Lecocq, 2010), i.e. how the BM elements change to accommodate technology-driven (push) or challenge-driven (pull) innovation (Maffei et al., 2019). 
Digital transformation processes and the BM transformation The current digitalisation trend has radically affected companies' business processes by pushing toward adopting and exploiting digital technologies (Zangiacomi et al., 2020).Digital technologies such as the IoT, Cloud Computing, and Big Data and Analytics (Paschou et al., 2017) applied specifically in manufacturing bring substantial changes to traditional production systems where inter-company connectivity among different stakeholders in the supply chain (Mueller et al., 2018) and process integration are introduced (Khan & Turowski, 2016), thus increasing the company's overall efficiency.The changes brought by I4.0 are mainly IT-driven (Lasi et al., 2014), meaning that such digital technologies are the enablers of a digital industrial transformation that is often defined as the fourth industrial revolution or I4.0.Implementing I4.0 leads to process optimisation resulting in improved operations for the whole organisation.This is considered the main advantage of supporting the decision towards I4.0 implementation (Sony, 2020). In view of the above, the digitalisation trend has initiated I4.0 (Zangiacomi et al., 2020), and it can be considered a technological push for I4.0 (Frank et al., 2019;Maffei et al., 2019).The implementation of digital technologies pushed in manufacturing is thus adding value to the company's internal processes, mainly in terms of efficiency, for instance, reduced costs, greater flexibility, and increased productivity (Frank et al., 2019).However, it has been argued in the literature that the effect of such a technological push on manufacturing companies is not only related to achieving greater efficiency in production processes and operations (Björkdahl, 2020), and the adoption and exploitation of digital technologies also demand a significant transformation of a company's BM as it changes the current way value is created, delivered, and captured (Björkdahl, 2020;Frank et al., 2019;Mugge et al., 2020).Such a technology-push innovation thus implies a radical BMI for manufacturing companies (Müller, 2019).In their literature review, (Agostini & Nosella, 2021) highlights that the innovation of the BM triggered by digitalisation is a broadly examined topic.The emergence of new BMs as an effect of I4.0 is one of the main topics of investigation in the existing literature (Kraus et al., 2018).As a result, business and strategic facets of the manufacturing industry's digital transformation are now capturing the attention of scholars and practitioners by going beyond mere technological advancement (Agostini & Nosella, 2021). However, how the BM of a manufacturing company engaging in digitalisation efforts is transformed remains poorly explored.The adoption of digital technologies impacts the company's BM by transforming the current one into the desired one, including the new conditions arising from the technology shift.Such a BM transformation process can be modelled according to the framework proposed by (Maffei et al., 2019).The transformation can happen in two alternative ways.First, firms can change single components of their BM (Agostini & Nosella, 2021), i.e. they can make modular changes, and the current BM is thus modified to accommodate the newly introduced factor.In this case, a BMI is seen as a minor perturbation of the current BM (Maffei et al., 2019).Second, firms can change the whole BM (Agostini & Nosella, 2021), i.e. 
they can make structural changes, and in this case, the perturbation of the current BM requires a total re-alignment of the BM's elements.The BMI is triggered in this case by a specific desired BM (Maffei et al., 2019). In view of the above, the digital transformation of the manufacturing domain is a dynamic transition process that initiates the transition between the current BM state and the future BM (Maffei et al., 2019).Investigating this dynamic process asks for an indepth characterisation of all the structural aspects of a BM to successfully describe the future BM. BM frameworks Scientific contributions in the BM domain have extensively investigated the structural aspects of a BM that have been referred to in different ways, such as the building blocks (Osterwalder et al., 2010), sub-models (Wirtz et al., 2016), elements (Chesbrough, 2002), and components (Morris et al., 2005).The latter is the preferred notation in this work.Scholars have presented the BM aspects using several descriptive frameworks reported in the following paragraphs. In (Chesbrough, 2002) the authors describe the BM's elements by listing the primary functions a BM should fulfil: to formulate a competitive strategy, which is the principal source of competitive advantage; to identify the firm's position within the value network; to define the value proposition; to select the market segment; to structure the defined value chain, and to determine the cost structure and profit potential.This suggests a more operational definition of the elements.The major BM components presented by (Morris et al., 2005) are described by addressing the following six fundamental questions: the competitive strategy that delineates the competitive position of a firm in the market; the market factors that identify the customer target; the offerings created and delivered; the firm´s internal capabilities; the economic factors that highlight the revenue mechanisms; and the investor factors defining time, scope, and size objectives.In (Demil & Lecocq, 2010) the authors present three core components of the BM framework: resources and competencies, organisational structure (i.e.value chain activities and value network), and value propositions to be delivered to customers.The main aim of (Osterwalder et al., 2010) was to standardise the existing frameworks by presenting the so-called BM Canvas, based on previous work by (Osterwalder, 2004), i.e. the BM ontology.The BM Canvas is a comprehensive framework encompassing four business areas: customers, offers, infrastructure, and financial viability.These areas unfold in the following nine building blocks: customer segments, value propositions, channels, customer relationships, revenue streams, key resources, key activities, key partnerships, and cost structures.Likewise, (Wirtz et al., 2016) contributes to the state-of-the-art by introducing the IBM, which provides a comprehensive view of the sub-models, i.e. the components of a BM, as a result of an extensive literature analysis and identifies external and internal factors.The external factors are customers and market components, which are included in the customer model, market offer model, and revenue model.The internal factors, i.e. the manufacturing model, procurement model, and financial model, are part of the value-creation components.Lastly, the strategic components include the resource model, network model, and strategic model. 
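To make the framework concrete for the analysis that follows, a minimal sketch of how the nine IBM sub-models could be encoded is given below. The grouping into strategy, customer and market, and value creation aspects follows the description above, and the short codes match the abbreviations used later in this paper; the Python layout and the helper function are illustrative assumptions rather than part of (Wirtz et al., 2016).

```python
# Illustrative encoding of the IBM framework (Wirtz et al., 2016): nine sub-models
# grouped into strategy, customer and market, and value creation aspects.
# The short codes follow the abbreviations used later in this paper.

IBM_COMPONENTS = {
    "strategy": {
        "SM": "strategy model",
        "RM": "resource model",
        "NM": "network model",
    },
    "customer_and_market": {
        "CM": "customer model",
        "MOM": "market offer model",
        "RevM": "revenue model",
    },
    "value_creation": {
        "MM": "manufacturing model",
        "PM": "procurement model",
        "FM": "financial model",
    },
}

def component_of(code: str) -> str:
    """Return the full sub-model name for a short code, e.g. 'MM' -> 'manufacturing model'."""
    for aspect in IBM_COMPONENTS.values():
        if code in aspect:
            return aspect[code]
    raise KeyError(f"Unknown IBM component code: {code}")

if __name__ == "__main__":
    print(component_of("MM"))  # -> manufacturing model
```

Such a lookup table is convenient later on, when keywords extracted from the literature are assigned to components and aggregated per aspect.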
Among the frameworks mentioned above, the IBM by (Wirtz et al., 2016) is a generic model that provides a comprehensive picture of the BM's essential components divided into strategy, customer and market, and value creation aspects.The IBM offers a baseline for discussing the different BM's components in the analysed domain.In view of this, the IBM is taken as a reference in this work. To summarise, this study builds upon IBM and presents a vertically enriched characterisation of each of the IBM components and the relations among them by analysing the literature related to the digitalisation of the manufacturing industry.The proposed framework enriches the existing descriptions of IBM's components and contributes to displaying an explanation model that includes aspects of the process of digital transformation of manufacturing. Methodology The gap presented in the introduction is addressed in this work by referencing the IBM framework and through a semantic analysis of the literature related to digital transformation in manufacturing.The semantic analysis was performed using a bibliometric mapping approach that provides tools and methods to graphically visualise a map of the state-of-the-art of a given knowledge area (van Eck, 2011).VOSviewer was the software selected to perform the analysis because of its main feature of handling large maps and its ability to present the results in an easy-to-interpret way.This software implements a new bibliometric mapping technique called VOS (Visualisation Of Similarity), as proposed by (van Eck & Waltman, 2010).Such a technique constructs a two-dimensional map whose attributes are nodes and arcs.The nodes are the objects of interest (e.g.journals, researchers, keywords) that, in this work, are keywords.The arcs are connections (e.g.coauthorships, co-occurrences) between nodes that reflect their similarity.In this paper, the arcs are the co-occurrences.The closer the nodes are to each other, the higher their similarity, and the further apart the nodes, the lower their similarity (van Eck & Waltman, 2007).The methodology proposed for the literature analysis is detailed in the following paragraphs. 
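As a rough illustration of the VOS idea of turning co-occurrence counts into similarities, the sketch below normalises a keyword co-occurrence matrix with the association-strength measure commonly used in this family of mapping techniques. The toy counts and the use of row sums as occurrence totals are assumptions of the example, not values from this study.

```python
import numpy as np

def association_strength(cooc: np.ndarray) -> np.ndarray:
    """Turn a symmetric keyword co-occurrence matrix into a similarity matrix.

    Uses the association-strength normalisation s_ij = c_ij / (w_i * w_j), where
    c_ij is the number of publications in which keywords i and j co-occur and w_i
    is the total occurrence of keyword i (here approximated by the row sum).
    """
    w = cooc.sum(axis=1)
    with np.errstate(divide="ignore", invalid="ignore"):
        sim = cooc / np.outer(w, w)
    sim[~np.isfinite(sim)] = 0.0   # keywords that never co-occur get similarity 0
    np.fill_diagonal(sim, 0.0)
    return sim

# Toy example with three hypothetical keywords
cooc = np.array([[0.0, 4.0, 1.0],
                 [4.0, 0.0, 6.0],
                 [1.0, 6.0, 0.0]])
print(association_strength(cooc).round(3))
```

The higher the resulting similarity between two keywords, the closer VOS-style layouts place the corresponding nodes on the map.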
The first step consists of creating the dataset used as input in VOSviewer. The key search terms used to extract the scientific papers are combined in the following search string: (digitalisation OR digitalization OR digital transformation OR industry 4.0) AND (business model* OR business model innovation) AND (manufacturing OR production). This string was then used to search the Scopus and Web of Science (WoS) databases. These databases were chosen because they cover the most relevant available literature regarding the unit of analysis of this study. A total of 294 publications were obtained from Scopus, and 329 were obtained from WoS. This body of literature was further analysed with a focus on coherence and consistency. After 2016, the literature started to include a stable and growing number of contributions, indicating a consolidation of the topic in the scientific community. For this reason, the initial dataset was reduced by eliminating the 30 contributions published before 2016. Additional filters were applied to include only journal, conference, and review papers. This selection contains all types of documents, from conceptual to empirical and technical contributions, given the holistic nature of our investigation. The resulting articles from Scopus and WoS were merged, and duplicates were removed, resulting in a final dataset of 507 articles (see Appendix 4 for additional statistics).

The final dataset was input into VOSviewer to create a keywords map. The objects displayed in the map were (1) nodes representing the keywords and (2) arcs representing the co-occurrence of two connected keywords. Table 1 presents the terminology associated with nodes and arcs that will be used in the text.

The analysis of the keyword map was structured as follows. First, the focus was on assessing the nodes to characterise the IBM's components. Second, the arcs were investigated to identify the relations among the components and the strengths of such connections. The following paragraphs explain the details of these two steps.

Node analysis
The node analysis consisted of two stages: screening the nodes and associating the keywords to the IBM components. First, screening the 2312 nodes included in the map highlighted the need to reduce the dimension of the analysis and standardise the keywords. Those terms with an occurrence value lower than 6 were removed, and a thesaurus file was used to manage synonyms and the conversion from plural to singular. After these steps, the map included 90 keywords associated with the corresponding IBM components. Second, the association process was performed during a brainstorming session among the authors and was based on semantic association criteria that reflected the description of the components given by (Wirtz et al., 2016). The associated keywords gave an overview of the central topics discussed in the literature, and these major topics provided a detailed description of the IBM components. It was assumed that the more keywords associated, the higher the research activity in that area and, in turn, the higher the importance in the domain. The associated topics provided a vertical description of every single component.
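A minimal sketch of this node-screening step is shown below: keywords are normalised through a thesaurus, counted once per publication, filtered by the minimum occurrence of 6, and then looked up in a manual keyword-to-component table. The thesaurus entries, example keywords, and data layout are hypothetical; only the threshold and the overall procedure come from the text above.

```python
from collections import Counter

MIN_OCCURRENCE = 6  # threshold used in the node screening step

# Hypothetical thesaurus entries (plural -> singular, spelling variants)
THESAURUS = {
    "business models": "business model",
    "digitization": "digitalisation",
    "internet of things": "iot",
}

def screen_keywords(publications: list[list[str]]) -> Counter:
    """Normalise keywords via the thesaurus and keep those occurring at least MIN_OCCURRENCE times."""
    counts = Counter()
    for keywords in publications:
        normalised = {THESAURUS.get(kw.lower(), kw.lower()) for kw in keywords}
        counts.update(normalised)  # each keyword counted once per publication
    return Counter({kw: n for kw, n in counts.items() if n >= MIN_OCCURRENCE})

# Hypothetical manual association of surviving keywords to IBM components,
# standing in for the brainstorming-based assignment described above.
KEYWORD_TO_COMPONENT = {
    "digital transformation": "SM",
    "iot": "RM",
    "smart manufacturing": "MM",
    "supply chain": "NM",
    "sales": "RevM",
}
```

The association step itself remains a manual, semantic judgement in the paper; the table above only records its outcome in machine-readable form.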
Arcs analysis
The whole arc analysis was based on a core assumption: if an arc connects two keywords that are assigned to two different components, the components are linked. Given this assumption, the analysis of the arcs consisted of two steps. The first stage is identifying and representing the connections among components through specific matrixes in which the rows are the keywords of the focal component while the columns are the keywords of connected components. The second stage is characterising the co-occurrence values to establish the strengths of the links. The filled cells in the matrixes represent the arcs and contain the link strength value, which is the number of co-occurrences of the linked keywords. The co-occurrence values were further analysed to characterise the relations among the components highlighted in the matrixes. These values may show the strength of a specific relation: the lower the value, the weaker the relation, and the higher the value, the stronger the relation. An in-depth analysis of these values was carried out and is presented in the following paragraphs.

Table 1. Terminology associated with nodes and arcs.
Nodes - Occurrence: the number of instances of the dataset in which a keyword appears. Cluster: a set of keywords grouped together.
Arcs - Link strength: the number of publications in which two linked keywords co-occur. Links: the number of outgoing arcs from the nodes.

A screening of the co-occurrence values was necessary to highlight the more relevant ones. For this purpose, the co-occurrence values were plotted on a histogram to show their distribution over the top three components with the highest number of associated keywords, i.e. MM, RM, and SM. The chart in Figure 1 shows an uneven distribution, with values 1 and 2 accounting for about 80% of the total.

The superimposed trend line showed an elbow that allowed the definition of a cut-off point corresponding to a value of 3:
• The left side of the cut-off point included lower co-occurrence values than the right side. There is a lack of pattern in the association of the keywords involved. Therefore, such values were irrelevant for the whole analysis as they did not reveal any semantic association.
• The right side of the cut-off point included higher co-occurrence values than the left side. These values showed a pattern in the association of the keywords involved; thus, these links were relevant for the whole analysis because they revealed a semantic association.

For the reasons explained above, the analysis focused on the higher co-occurrence values on the right side of the cut-off point (values of 4-9), including the cut-off point itself (3). As a result, the matrixes presented contain cells filled with values from 3 to 9.
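The arc analysis described above can be sketched as follows: keyword-pair co-occurrence counts are grouped by the pair of IBM components their endpoints belong to, keeping only arcs that reach the cut-off value of 3 and that connect two different components. The input dictionaries are assumed shapes for illustration; only the cut-off and the grouping rule come from the text.

```python
from collections import defaultdict

CUT_OFF = 3  # co-occurrence values below this were discarded in the analysis

def component_pair_matrices(cooccurrences, keyword_to_component):
    """Group keyword co-occurrence counts by the pair of IBM components they connect.

    `cooccurrences` is assumed to be a dict {(kw_a, kw_b): count}. Only arcs whose
    endpoints map to two different components and whose count reaches CUT_OFF are
    kept, mirroring the core assumption and the cut-off described above.
    """
    matrices = defaultdict(dict)
    for (kw_a, kw_b), count in cooccurrences.items():
        comp_a = keyword_to_component.get(kw_a)
        comp_b = keyword_to_component.get(kw_b)
        if not comp_a or not comp_b or comp_a == comp_b or count < CUT_OFF:
            continue
        pair = tuple(sorted((comp_a, comp_b)))
        matrices[pair][tuple(sorted((kw_a, kw_b)))] = count
    return matrices

# Illustrative input (hypothetical counts and mapping)
example = {("iot", "smart manufacturing"): 7, ("iot", "digital transformation"): 4,
           ("sales", "iot"): 1}
mapping = {"iot": "RM", "smart manufacturing": "MM",
           "digital transformation": "SM", "sales": "RevM"}
print(dict(component_pair_matrices(example, mapping)))
```

Each resulting dictionary corresponds to one of the component-pair matrixes whose filled cells are discussed in the results.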
The co-occurrence values may indicate the strength of the relations among components. In particular, the histogram in Figure 2 shows the higher co-occurrence values distributed over the MM, RM, and SM. The trend line shows a relatively sharp drop until 6, and after this point, it settles down. The critical point is thus 6, which distinguishes two nuances: co-occurrence values below this point identify weaker, more novel keyword associations, whereas values at or above it identify stronger, more established associations.

To summarise, the nodes analysis highlighted the central topics discussed within the digitalisation of the manufacturing domain and, in turn, the most discussed IBM components in the field. Moreover, the arcs analysis identified relations among the components, represented by creating matrixes. The co-occurrence values in the matrixes' cells indicate the strength of such relations. Further investigation of the co-occurrences led to an in-depth characterisation of the connections identified with the matrixes, specifying weak and strong relations.

Nodes analysis results
Figure 3 shows the keywords map used as the basis for the nodes analysis. This map was reduced to include 90 keywords with a minimum occurrence value of 6 as a result of the nodes screening phase. The reduced keywords map is displayed in Figure 4. The keywords included in this map are listed in Table 2, and they were used during the association process. Furthermore, Table 2 displays in grey the 56 keywords that were associated with one of the IBM's components, while the other 34 were excluded from the analysis because they could be related to more than one component, e.g. 'industry 4.0', 'circular economy', and 'digital economy'. These keywords expressed broad concepts that may not bring value-added information to this work.

Characterisation of the IBM's components
Tables 3-5 display the list of associated keywords for each IBM component. These terms provide an overview of the central topics discussed in the analysed literature, presenting a characterisation of the IBM's components. The already existing description of the IBM components is therefore enhanced with specific characterisations for all of its components.

• Strategy model (SM) includes keywords such as 'digital transformation', 'digitalisation', 'business models', 'business model innovation', and 'sustainability'. These terms suggest that digital transformation should drive a company's strategic plan. This strategic plan contributes to designing a coherent BM that embraces digital transformation and, in turn, sustains the company's competitive advantage. A BMI might be needed if the BM changes substantially. Additionally, sustainability considerations must be made when formulating the strategic plan.
of delivering a bundle of products and services.The PSS can be realised by employing smart and connected products that use digital technologies to enable awareness and connectivity in the products that, in turn, allow one to create and deliver accompanying services.The products are thus sold as functions to the customers.• Network model (NM) includes the keywords 'ecosystems', 'supply chain', and 'supply chain management'.This component points out a network of suppliers and partners in which the focal company is involved.Such a network aims to create and deliver goods and services to the target market in collaboration with external partners.A supply chain will be designed to coordinate all the core activities required to make the final product available for the customer.The material flow through and out of the focal company is coordinated by supply chain management to maximise the value from all activities.Digital transformation has enhanced supply chain and supply chain management processes, making them faster, more flexible, and more accurate using digital technologies.The need to include digital technologies in the company´s operations has given birth to the concept of 'ecosystem'.In detail, the company can be part of an ecosystem of partners from several industries that contribute to creating elements of the value proposition or contribute with the necessary capabilities to deliver the offering to the market.• Revenue model (RevM) is related to the keyword 'sales'.This component points out the company´s capability to capture the created value with coherent revenue mechanisms, i.e. how a product or service is offered to the customer.'sales' represents the most widely understood revenue stream.In detail, products and/or services are sold to generate an economic income. • Procurement model (PM) is characterised by the keyword 'information management'.Procurement is a data and information-intensive process from recognising goods and services to order management.Thus, data and information are exchanged between the company and the suppliers to procure the necessary items or resources.Digitalisation has digitised such data and information introducing new ways of organising and storing data, e.g. using cloud services, and new ways for processing and extracting information from the data, e.g. using data analytics methods to improve decision-making. • Financial model (FM) includes the keyword 'investments'.This term highlights the need for different kinds of investments to embrace a digital transformation.For instance, the digitalisation of manufacturing operations entails investment in technologies such as the IoT and capabilities such as big data analytics. The number of associated keywords is 19 for RM, 15 for SM, 12 for MM, 4 for MOM, 3 for NM, and 1 for RevM, PM, and FM.Therefore, the RM, SM, and MM are the three IBM components on which the research effort in the investigated domain is focused, given the high number of associated keywords. Arcs analysis results The arcs analysis identified connections among the IBM´s components as well as the strengths of the links.The results are summarised in the matrixes in Figure A1, Figure B1, and Figure C1 (see supplementary material).In these matrixes, the filled cells have a twofold purpose.On the one hand, the total number of filled cells, i.e. 
the arcs, identifies the connections for each pair of components. On the other hand, each filled cell represents the arc that links the corresponding keywords. The filled cells contain a co-occurrence value that marks each arc. This value indicates the strength of the connections, which can be weak or strong based on the analysis of Figure 2. If the total number of weak connections is higher than that of strong connections, the link among the components is weak. If the number of strong links is higher than the weak ones, the link among the components can be defined as strong. The following sub-section lists and explains the identified connections and their relative strengths.

Characterisation of the links and their strengths
MM and RM. The connection between the MM and RM reveals the competencies and capabilities needed to re-conceptualise traditional shop floors as fully digitised, integrated, and collaborative manufacturing systems. This connection is characterised by 3 strong and 7 weak co-occurrence values (Figure 5). The strong values identify established associations in the literature. In detail, the terms '3D printers' and 'additive manufacturing' suggest that skills in handling 3D printers should be developed or acquired to implement the additive manufacturing process. The keywords 'smart factory', 'smart manufacturing', and 'IoT' suggest that capability in handling the IoT should be developed to shift from traditional to smart manufacturing because IoT is the driving technology of the digitalisation paradigm.

The weak values identify novel connections in the literature. In detail, 'digital manufacturing' co-occurs with '3D printers', 'blockchain', and 'IoT'. Digital manufacturing exploits I4.0 technology to enable interconnectivity (e.g. IoT is the primary enabler), automation, and data analysis within production systems. In particular, 3D printers have been used in product design in the prototyping phase to shorten the process. In addition, blockchain technology seeks to enhance security, traceability, integrity, and transparency of the exchanged data, guaranteeing that all systems are resilient. Therefore, competencies in 3D printers, blockchain, and IoT are necessary for implementing digital manufacturing.

Overall, the weak co-occurrence values prevail over the strong ones. This suggests that the link between MM and RM is mainly a weak connection characterised by novel association patterns.

MM and SM.
The connection between the MM and SM reveals that there should be an alignment between a company's strategic plan and its manufacturing strategy and operations.This connection is characterised by 17 weak co-occurrence values (Figure 6).The keywords 'additive manufacturing' and '3D printing' co-occur with 'business models' and 'sustainability'.These associations highlight that additive manufacturing and 3D printing are manufacturing processes that require the design of proper BMs to sustain their competitive advantage.As for the association with 'sustainability', additive manufacturing and 3D printing may potentially impact the sustainability dimensions.Significant benefits can be identified in reducing new product development time, logistics, production, and inventory costs.The keyword 'digital manufacturing' is associated with 'digital transformation'.The digital transformation of manufacturing has brought fundamental changes in the manufacturing industry by opening the way toward a digitalised manufacturing environment.This may be considered an expected association.The terms 'productivity' and 'sustainability' co-occur, highlighting that efficiency in productivity can directly affect sustainability.The link between 'planning' and 'sustainable development' means that production planning should consider sustainable development.The keyword 'production control' co-occurs with 'business models' and 'digital transformation'.Digital transformation pushes digital technologies into manufacturing that would support production control.Integrating these technologies and production control systems may in turn have an impact on the BM.The following are two expected associations: 'smart manufacturing', 'business models', and 'business model innovation'; 'digital transformation', 'business models', and 'business models innovation'.These associations suggest that striving for smart manufacturing implies a digital transformation strategy that, in turn, has implications for the BM, leading to a possible BMI.The term 'smart factory' co-occurs with 'new business models' and 'business models', thus focusing on the possibility of new emerging BMs and, therefore, on the role of BMs in smart factories.The keyword 'smart manufacturing' co-occurs with 'sustainable development' highlighting that smart manufacturing may influence the company's sustainable development. Overall, the weak co-occurrence values are dominant in the MM-SM connection.This suggests that novel keyword associations with novel association patterns characterise the link. MM and NM. The connection between the MM and NM shows that there should be an alignment between a company's network of suppliers and partners and its manufacturing strategy and operations.This connection is characterised by one weak cooccurrence value (Figure 7). The keyword 'digital manufacturing' is associated with 'supply chains', highlighting that companies undergoing a digital transformation must rethink and redefine their supply chains to fulfil the new conditions introduced by the transformation. Overall, the weak co-occurrence value is dominant in the MM-NM connection.This suggests that novel keyword associations with potential association patterns characterise the link. MM and MOM. The connection between the MM and MOM reveals that there should be an alignment between the market offer, i.e. 
the value proposition of a company and the whole life cycle, and its manufacturing strategy and operations.This connection is characterised by one weak co-occurrence value (Figure 8).The identified association is between 'product design' and 'smart manufacturing'.Smart manufacturing leverages digital technologies to digitalise all business processes, and applying these technologies has great potential in product design.Integrating virtual representations and simulations in the design phase allows the exploration of several product scenarios, thus making the design process much faster and more efficient.This is paving the way for a more and more digitalised design process. Overall, the weak co-occurrence values are dominant in the MM-MOM connection.This suggests that novel keyword associations with potential association patterns characterise the link. SM and RM. The connection between the SM and RM reveals that there should be an alignment between the strategic plan of a company and the development or acquisition of core and complementary competencies and capabilities required by the related technologies.A strategic plan based on digital transformation should include developing or acquiring the appropriate core and complementary capabilities necessary for such a plan.This connection is characterised by 37 weak and 8 strong co-occurrence values (Figure 9). As for the strong association, the 'IoT' keyword co-occurs with 'business models', 'digital transformation', 'digitalisation', 'new business models', and 'competition'.These expected associations suggest that IoT represents the key competence to be developed in the digitalisation of the manufacturing industry and represents a source of competitive advantage.Developing or acquiring external competencies in IoT, in turn, could imply changes in the current BM or even allow for designing a new BM. The keyword 'business model' co-occurs with 'industrial IoT' (IIoT).This is an expected association suggesting that the development of IIoT may impact the BM.The keyword 'sustainable development' co-occurs with 'industrial research'.This is another expected association, given the increasing emphasis on sustainable development in the manufacturing industry.Investigating the development of new products, processes, and services, i.e. industrial research, should allow for sustainable development.Thus, industrial research is a competence that companies should develop or acquire because they are required to become more environmentally conscious in their operations. The keyword 'competition' co-occurs with 'industrial management'.Industrial managers should integrate the different engineering processes in a new competitive environment created by digitalisation. 
The weak co-occurrence values identify novel connections in the literature.The keywords 'business modelling' and 'business models' co-occur with capabilities in 'automation', 'big data', 'cyber-physical systems', 'digital twin', 'IIoT', 'industrial management', and 'industrial production'.The BM should be designed to include competencies and capabilities in these technologies according to the strategic plan.The term 'business models innovation' co-occurs with 'IIoT' and 'IoT' suggesting that a BMI could be driven by acquiring and/or developing competencies in these two technologies.The data collected using IoT and IIoT could generate new BMs powered by those data.The keyword 'competition' co-occurs with 'embedded systems' and 'IIoT', and this highlights the importance of developing competence in embedded systems and IIoT to be competitive in digital transformation.The term 'decision making' is associated with 'artificial intelligence', 'embedded systems', 'IIoT', and 'IoT'.IIoT and IoT capture data that can be analysed using artificial intelligence tools.This exploits data-driven learning, which in turn may support and enhance the strategic decision-making process. The following associations are identified as expected associations.The keyword 'digital transformation' co-occurs with '3D printers', 'cyber-physical systems', 'embedded systems', 'IIoT,' and 'IoT'.These fundamental capabilities should be developed to undergo a digital transformation in manufacturing.The keyword 'digitalisation' cooccurs with 'artificial intelligence', 'machine learning' and 'IIoT', and this reinforces the association mentioned above, confirming that IT-driven capabilities are paramount in digital transformation.The term 'new business models' co-occurs with 'artificial intelligence', 'automation', and 'big data'.New value can be captured by designing a new BM that exploits the capabilities of artificial intelligence, automation, and big data. The term 'sustainable development' co-occurs with 'artificial intelligence', 'IIoT', 'industrial production', and 'technological development'.Technological advancement may contribute to advancing sustainable development.In particular, developing capabilities in artificial intelligence, IIoT, and industrial production may contribute to providing solutions that promote sustainable development. Overall, the weak co-occurrence values are dominant in the SM-RM connection.This suggests that novel keyword associations with potential association patterns characterise the link. SM and NM. The connection between the SM and NM suggests that there should be an alignment between a company's strategic plan and its network of suppliers and partners to deliver products and services to the customer successfully.This connection is characterised by 7 weak co-occurrence values. The keywords 'business models', 'business modelling', 'competition', and 'new business models' co-occur with 'supply chain'.This implies that the structure of the supply chain may transform the current BM to meet the new needs arising from digitalisation or to meet the market's new requirements to maintain competitiveness.Adjustments to the structure of the supply chains can create new BMs. 
The keywords 'digital transformation' and 'business models' co-occur with 'ecosystem'. Traditional supply chains are evolving towards interconnected and integrated ecosystems, from raw materials suppliers to the final customer, thanks to digital transformation. This transformation is possible given the technological advancement brought about by digitalisation itself. The BM will describe how each actor of the ecosystem creates, delivers, and captures value within the ecosystem. The keyword 'sustainable development' co-occurs with 'supply chain management'. These two keywords combine sustainability issues with the management of supply chains, aiming to promote sustainable development within the supply chain.

Overall, the weak co-occurrence values are dominant in the SM-NM connection. This suggests that novel keyword associations with potential association patterns characterise the link (Figure 10).

SM and MOM. The connection SM-MOM shows that there should be an alignment between the company's strategic plan and the value proposition as well as its whole life cycle. This connection is characterised by 12 weak and one strong co-occurrence value (Figure 11).

As for the strong association, the keywords 'business models' and 'life cycle' co-occur. The company's strategic plan may affect products' life cycles, which must be combined with a proper BM along the whole life cycle.

The weak values identify novel connections in the literature. The keywords 'business modelling', 'competition', 'digital transformation', and 'sustainable development' co-occur with 'life cycle'. Digitalisation can promote sustainable development because it may offer the opportunity to extend the life cycle of products and thus sustain competitive advantage. The keyword 'business models' co-occurs with 'product design' and 'products and services'. These are expected associations because the BM needs to be designed accordingly, based on the products and/or services delivered to the customer. The keyword 'competition' co-occurs with 'life cycle' and 'product-service system'. This highlights the role of competition in both the life cycle and the PSS. A PSS may change the market competition toward more complex dynamics, e.g. competing in the service market. The keyword 'digital transformation' co-occurs with 'products and services'. The digital transformation may push towards value propositions that combine products and services thanks to the introduced digital technologies.

The keyword 'digital transformation' is also associated with 'life cycle' and 'product design'. Digital technologies can bring several advantages along the product life cycle and can be exploited to extend the product's life by creating new digital services. This is an expected association because digital technologies affect the process of product design, e.g. by reducing product development lead times and increasing product customisation. The keyword 'business models' co-occurs with 'products and services'. This is another expected association because having a value proposition focused on both products and services may lead to an adjustment to the current BM. The keyword 'sustainable development' co-occurs with 'life cycle' and 'product design'. Products may be designed more sustainably, thus decreasing the impact on human and environmental dimensions during the whole product life cycle.

Overall, weak co-occurrence values are dominant in the SM-MOM connection. This suggests that novel keyword associations with potential association patterns characterise the link.

SM and PM.
The connection between the SM and PM shows that there should be an alignment between a strategic plan and how the information in procurement is managed. This connection is characterised by 3 weak co-occurrence values (Figure 12).

The keyword 'business model' co-occurs with 'information management'. Managing information when implementing digitalisation is paramount, from acquiring information to its storage and distribution. The BM needs to be designed to consider eventual partnerships that allow the acquisition of the competencies required to deal with data. The keyword 'competition' co-occurs with 'information management'. The ownership of information is a source of competitive advantage because having control of such information and being able to analyse it may allow the capture of more value. The keyword 'sustainable development' co-occurs with 'information management'. Information management may have great potential in sustainable development because it allows, for instance, decisions to be made based on data that may ultimately support sustainable development. Data may thus be used to enable and finally achieve sustainable development. Overall, the weak co-occurrence values are dominant in the SM-PM connection. This suggests that novel keyword associations with potential association patterns characterise the link.

SM and FM. The connection between the SM and FM shows that there should be an alignment between the strategic plan of a company and the need for different kinds of investments to embrace digital transformation. This connection is characterised by 1 weak co-occurrence value (Figure 13). The keyword 'competition' co-occurs with 'investment'. The digitalisation trend requires high investment, and companies are thus increasing their investments in digital technologies. However, digitalisation has also increased the complexity of market competition mechanisms and the threats from new market entrants. This, in turn, has increased the risk of the investments made in digital technologies, bringing uncertainties in the return on investment.

Overall, weak values are dominant in the SM-FM connection. This suggests that novel keyword associations with potential association patterns characterise the link.

SM and RevM. The connection between the SM and RevM shows that there should be an alignment between the strategic plan of a company and the company's capability to capture the created value with coherent revenue mechanisms. This connection is characterised by 2 weak co-occurrence values (Figure 14).

The keyword 'business models' co-occurs with 'sales'. This suggests that changes in sales may be experienced through new types of BM enabled by digitalisation. Therefore, the sales mechanism is a strategic factor that will shape the digital strategy.

The keyword 'competition' co-occurs with 'sales'. Digitalisation may increase the complexity of competition in the markets, which may affect sales.

Overall, weak co-occurrence values are dominant in the SM-RevM connection. This suggests that novel keyword associations with potential association patterns characterise the link.

RM and MOM. The connection between the RM and MOM shows that there should be an alignment between the development of competencies and capabilities for digitalisation and the value proposition and its whole life cycle. This connection is characterised by 7 weak co-occurrence values (Figure 15).
The keywords '3D printers', 'big data', 'digital twin', 'embedded system', 'IIoT', and 'IoT' co-occur with 'life cycle'.This suggests that such digital technologies are part of the whole life of a product or service, and developing or acquiring competencies and capabilities to master such technologies is paramount.3D printers are used in the early phases of product development to create prototypes.This allows designers to quickly determine the product that best fulfils the customers' requirements, thus helping to reduce lead times.The acquisition of big data from products has opened the way to data-driven product development.Data captured from the usage of the product during its whole life cycle makes it possible to analyse user behaviour and thus make decisions on the design of future products/services.Digital twins can be used in different product life cycle stages, from the engineering phase to the simulation on the shop floor.In addition, the product's digital twin can be used by the customer.IoT technology allows monitoring the product's life cycle and enables the collection of a vast amount of data during the usage of the products.These data are thus analysed to create new value and understand the customer's behaviour.Industrial IoT is more specific for shop floor implementation to monitor and control the production line and increase the visibility of overall production resources.The analysis of the data collected will lead to operations optimisation.The acquisition of such big data from the shop floor steers monitoring and control and prompts actions and decision-making processes on the shop floor towards being data-driven.The keywords 'embedded system' and 'IoT' co-occur with 'product and services', and this link suggests that competencies and capabilities in embedded systems and IoT technology are necessary to integrate products and services and to deliver new product and service offerings. Overall, the weak co-occurrence values are dominant in the RM-MOM connection.This suggests that novel keyword associations with potential association patterns characterise the link. RM and NM. The connection RM-NM shows that the development of competencies and capabilities for digitalisation should be aligned with a company's network of suppliers and partners.One strong and three weak co-occurrence values characterise this connection (Figure 16). The strong value identifies an established association in the literature.The keyword 'IoT' co-occurs with 'supply chains', and this association suggests that developing competencies in such technology may introduce changes in the supply chain. The weak values identify novel connections in the literature.The keywords 'big data' and 'industrial management' co-occur with 'supply chains'.Acquiring capabilities to exploit big data generated along the supply chain will facilitate and improve the monitoring and decision-making process for all supply chain activities.'industrial management' and 'supply chain' is an expected connection.Industrial managers plan how to efficiently use labour, material, machines, and information, and logistics and supply chains are thus aspects to consider in planning a firm's resources.The keyword 'blockchain' co-occurs with 'supply chain management'.This link suggests that developing capabilities for implementing blockchain technology may affect the whole supply chain management. 
Overall, the weak co-occurrence values are dominant in the RM-NM connection. This suggests that novel keyword associations with potential association patterns characterise the link.

RM and PM. The connection between the RM and PM shows that there should be an alignment between developing competencies and capabilities for digitalisation and managing the information in procurement. This connection is characterised by one weak co-occurrence value (Figure 17).

The keywords 'artificial intelligence' and 'information management' co-occur, suggesting that artificial intelligence competencies can be developed or acquired to be used in information management. Artificial intelligence tools may support companies in recognising patterns in vast amounts of data, and algorithms for clustering and contextualisation may fulfil this purpose.

Overall, the weak co-occurrence values are dominant in the RM-PM connection. This suggests that novel keyword associations with potential association patterns characterise the link.

RM and FM. The connection between the RM and FM shows that there should be an alignment between developing competencies and capabilities for digitalisation and the need for different kinds of investments to embrace digital transformation. This connection is characterised by 1 weak co-occurrence value (Figure 18). The keyword 'embedded systems' co-occurs with 'investments'. This link suggests that the manufacturing industry's digital transformation needs investment in embedded systems. In particular, the advent of cyber-physical systems on the shop floor is the basis for I4.0, which requires machines to be equipped with sensors, microprocessors, or even complete embedded systems.

Overall, the weak co-occurrence value is dominant in the RM-FM connection. This suggests that novel keyword associations with potential association patterns characterise the link.

The star diagrams (Figures 19-21) provide a graphical representation of the connections among the sub-models identified in the matrixes. The star diagrams summarise the identified links for each of RM, SM, and MM, which are displayed as the centre points. The thickness of the lines represents the strength of the links: the thicker the line, the stronger the connection, and vice versa.

Discussion
This paper presents an in-depth analysis of the major research topics discussed in the manufacturing digital transformation domain, resulting in an enriched description of the IBM components and their relations. A semantic analysis performed on the relevant literature provides two main contributions that address the identified gaps. First, the nodes analysis provides the topics that enhance the existing description of the IBM's components and their importance in the domain. Second, the arcs analysis identifies connections among the IBM's sub-models and their strengths. The dataset used for the research was composed of 507 articles extracted from Scopus and WoS. The whole investigation was thus based on publications found in different databases, allowing for a broader spectrum of data to catch all high-quality studies emerging in the field. The bibliometric mapping was performed using VOSviewer, chosen for its main feature of handling large bibliometric maps and its ability to present those maps in an easy-to-interpret way.

Results
The component characterisation highlights that the MM, RM, and SM are the most discussed and thus crucial in the domain, i.e.
a high number of keywords was associated with them. This shows that the extant literature on the digital transformation of manufacturing has extensively investigated manufacturing, resource, and strategy aspects. It can be assumed that these components may play a crucial role within the domain because they have attracted most of the research effort in the field. Finally, it may be concluded that technological competencies and capabilities, manufacturing processes and operations, and the strategic path are the dominant and paramount IBM elements that researchers have focused on. The other components, i.e. CM, RevM, PM, and FM, seem poorly investigated because few or no keywords were associated with them. The research activities thus seem neither uniform nor established yet. This indicates that customer, revenue, procurement, and finance aspects are understudied thus far, identifying potential gaps in the research agenda of the analysed domain.

The identified connections are marked with two numbers, i.e. the weak and strong co-occurrence values. These values give a quantitative measure to determine the strength of such relations. As for the weak connections, they are characterised by a high number of weak co-occurrence values. This means that the weak links are described by novel keyword associations with future potential association patterns: research activities are now emerging in the related areas, but mainstream association patterns have not yet been clearly identified. Hence, the research effort is not uniform regarding these areas thus far, highlighting possible future gaps that require further research to be addressed. As for the strong connections, they are characterised by high numbers of strong co-occurrence values. This means that the links are described by more established keyword associations showing recurring association patterns. Researchers have recurrently focused research resources in these areas. Hence, the research activities seem to be somewhat intense and rather established. In this study, it was observed that a high number of weak co-occurrence values was dominant. Therefore, the identified connections can be considered weak links, i.e. worthwhile future research fields.

Identifying and characterising the connections among the IBM's components is paramount because the different aspects of a BM are understood in the literature to be interconnected and interdependent. Identifying the links is thus essential because any change or transformation occurring within a component will impact the others connected to it. Having a complete map of such relations will allow the detection of any consequence of a change in one element on the overall BM structure and, in turn, maintain the alignment and coherence within the whole BM. This study thus presents the first map of links among the IBM's components found in the literature on the manufacturing industry's digital transformation.
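The rule used to label a component-pair link as weak or strong can be expressed compactly, as in the sketch below: count how many of the link's co-occurrence values fall below the critical point of 6 and how many reach it, and label the link according to which group prevails. Whether the value 6 itself counts as strong, and how ties are resolved, are assumptions of this sketch; the example values are illustrative rather than taken from the matrixes.

```python
STRONG_THRESHOLD = 6  # critical point suggested by the distribution in Figure 2

def classify_link(cooccurrence_values: list[int]) -> str:
    """Label a component-pair link as 'weak' or 'strong'.

    A value below STRONG_THRESHOLD is counted as weak and a value at or above it
    as strong (assumption); the link takes the label of the larger group, with
    ties resolved in favour of 'strong' (also an assumption).
    """
    weak = sum(1 for v in cooccurrence_values if v < STRONG_THRESHOLD)
    strong = len(cooccurrence_values) - weak
    return "weak" if weak > strong else "strong"

# Illustrative input: a link described by 7 weak and 3 strong co-occurrence values
print(classify_link([3, 3, 4, 4, 4, 5, 5, 6, 7, 9]))  # -> 'weak'
```

Applied to every component pair in the matrixes, such a rule reproduces the weak/strong labelling summarised in the star diagrams.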
Limitations of the study
The decision to use a bibliometric map was based on the value that such a map could bring to the study. The bibliometric map was exploited to identify the main keywords discussed in the digitalisation of the manufacturing domain. This consequently presents an overview of the topics that the literature discusses in the field. Furthermore, the map highlights the connections among the keywords and their strengths, providing a clear overview of the essential themes addressed in the digitalisation of the manufacturing field and their relationships. However, the map is a simplified representation of the dataset, which may imply information loss. For instance, specific contextual information is hard to detect by analysing only the keywords map. The imposed threshold on the number of occurrences may have hindered a comprehensive analysis of the literature and missed potentially relevant keywords. The data analysed might also have contained noise, e.g. keywords that were out of context and not suitable for the analysis, and generic keywords that did not allow a clear association to one of the IBM's components. This should be considered in the interpretation of the map.

In conclusion, characterising the IBM components and identifying the links among them create an enriched BM framework covering the aspects of the digital manufacturing transformation. On the one hand, such a framework provides horizontal and vertical coverage in the description of the components, and the links add further value to the framework by creating the first map of connections. The resulting framework is proposed as a descriptive tool in the focal domain, supporting the investigation of the long-term impact of digitalisation in a challenge-driven research context. On the other hand, the proposed enriched framework arises from state-of-the-art research. Consequently, the enriched framework lacks empirical evidence.

Conclusion and future work
This paper contributes to the extant literature by presenting an enriched IBM framework that provides a detailed vertical description of its components, the connections among them, and the relative strength of such relations.

As its main theoretical contribution, this study sheds light on the research practice identified in the digital transformation of the manufacturing area. Furthermore, it provides insights into the current research activities that have been carried out in this area, pointing out the possible future focus of such research activities. On the one hand, the results show that MM, RM, and SM are the most discussed and, thus, essential components in the investigated domain. The current research effort is thus focused on these areas. On the other hand, CM, RevM, PM, and FM seem to be poorly investigated areas, highlighting the need for further research attention.
As its main practical contributions, the results of this study present the most important IBM components (i.e. MM, RM, and SM) that must be taken into account when efforts to innovate the BM are made to digitalise the production environment. A general and expected implication is that the manufacturing industry's digital transformation strongly influences these components. Thus, companies undergoing such a transformation must consider how these BM aspects are affected. Furthermore, the semantic analysis makes it possible to identify the connections among IBM components and to create a map of such links. This map may potentially track significant changes caused by a disruption of one IBM component. However, further research and empirical evidence are required to investigate whether the map may also help practitioners trace the effect of a disruption of one BM component on the others connected to it. In this case, the map may help maintain alignment and coherence within the whole IBM according to the changes.

Finally, this study has made a step towards characterising the nature of the connections among the BM components by specifying their strengths. However, the suggested links result from the analysis of the extant literature and are labelled based on the co-occurrence values of keywords. Further investigation of empirical cases will be needed to create a complete map of links that would reflect the industrial reality and characterise the BM components better. In addition, the analysis of a multi-case study may improve the presented results and contribute to validating them.

Figure 1. Histogram of the co-occurrence values divided by component in focus. The trend line highlights a cut-off point in correspondence with value 3.
Figure 2. Zoom-in on the distribution of the co-occurrence values from 3 (the cut-off point).
Figure 3. Keywords map extracted from VOSviewer. The map includes 2312 nodes. The larger the node, the higher the occurrence value. The colour indicates the cluster to which the node belongs. The arcs indicate whether two keywords co-occur.
Figure 4. Reduced keywords map as a result of the screening of the nodes step. The map includes 90 nodes with a minimum occurrence value of 6.
Figure 5. Matrix representing the connections identified between Manufacturing Model and Resource Model. The bottom part summarises the total number of filled cells for each component pair and the number of weak and strong co-occurrence values.
Figure 6. Matrix representing the connections identified between Manufacturing Model and Strategy Model. The bottom part summarises the total number of filled cells for each component pair and the number of weak and strong co-occurrence values.
Figure 7. Matrix representing the connections identified between Manufacturing Model and Network Model. The bottom part summarises the total number of filled cells for each component pair and the number of weak and strong co-occurrence values.
Figure 8. Matrix representing the connections identified between Manufacturing Model and Market Offer Model. The bottom part summarises the total number of filled cells for each component pair and the number of weak and strong co-occurrence values.
Figure 9. Matrix representing the connections identified between Strategy Model and Resource Model. The bottom part summarises the total number of filled cells for each component pair and the number of weak and strong co-occurrence values.
Figure 10. Matrix representing the connections identified between Strategy Model and Network Model. The bottom part summarises the total number of filled cells for each component pair and the number of weak and strong co-occurrence values.
Figure 11. Matrix representing the connections identified between Strategy Model and Market Offer Model. The bottom part summarises the total number of filled cells for each component pair and the number of weak and strong co-occurrence values.
Figure 12. Matrix representing the connections identified between Strategy Model and Procurement Model. The bottom part summarises the total number of filled cells for each component pair and the number of weak and strong co-occurrence values.
Figure 13. Matrix representing the connections identified between Strategy Model and Financial Model. The bottom part summarises the total number of filled cells for each component pair and the number of weak and strong co-occurrence values.
Figure 14. Matrix representing the connections identified between Strategy Model and Revenue Model. The bottom part summarises the total number of filled cells for each component pair and the number of weak and strong co-occurrence values.
Figure 15. Matrix representing the connections identified between Resource Model and Market Offer Model. The bottom part summarises the total number of filled cells for each sub-model pair and the number of weak and strong co-occurrence values.
Figure 16. Matrix representing the connections identified between Resource Model and Network Model. The bottom part summarises the total number of filled cells for each sub-model pair and the number of weak and strong co-occurrence values.
Figure 17. Matrix representing the connections identified between Resource Model and Procurement Model. The bottom part summarises the total number of filled cells for each sub-model pair and the number of weak and strong co-occurrence values.
Figure 18. Matrix representing the connections identified between Resource Model and Financial Model. The bottom part summarises the total number of filled cells for each sub-model pair and the number of weak and strong co-occurrence values.
Figure 19. Star diagram identifying the connections for MM. The lines show the links among the components. The thickness of the lines represents the number of weak and strong co-occurrence values. The lines are orange for the weak values and green for the strong ones. The circles are coloured according to the predominant values, i.e. weak links.
Figure 20. Star diagram identifying the connections for RM. The lines show the links among the components. The thickness of the lines represents the number of weak and strong co-occurrence values. The lines are orange for the weak values and green for the strong ones. The circles are coloured according to the predominant values, i.e. weak links.
Figure 21. Star diagram identifying the connections for SM. The lines show the links among the components. The thickness of the lines represents the number of weak and strong co-occurrence values. The lines are orange for the weak values and green for the strong ones. The circles are coloured according to the predominant values, i.e. weak links.
Table 2. List of the 90 keywords. The grey rows are the keywords associated with the IBM's components. The remaining rows are not considered in this analysis.
Table 3. Keywords associated with the Strategy model, Resource model, and Network model. The number of keywords associated with each component is specified at the bottom of the table. BM framework adopted from Wirtz et al. (2016).
Table 4. Keywords associated with the Customer model, Market offer model, and Revenue model. The number of keywords associated with each component is specified at the bottom of the table. BM framework adopted from Wirtz et al. (2016).
Table 5. Keywords associated with the Manufacturing model, Procurement model, and Financial model. The number of keywords associated with each component is specified at the bottom of the table. BM framework adopted from Wirtz et al. (2016).
13,815.2
2023-01-17T00:00:00.000
[ "Business", "Engineering", "Computer Science" ]
A new mitochondrial gene order in the banded cusk-eel Raneya brasiliensis (Actinopterygii, Ophidiiformes) Abstract The complete mitochondrial genome of the banded cusk-eel, Raneya brasiliensis (Kaup, 1856), was obtained using next-generation sequencing approaches. The genome sequence was 16,881 bp and exhibited a novel gene order for a vertebrate. Specifically, the WANCY and the nd6 – D-loop regions were re-ordered, supporting the hypothesis that these two regions are hotspots for gene rearrangements in Actinopterygii. Phylogenetic reconstructions confirmed that R. brasiliensis is nested within Ophidiiformes. Mitochondrial genomes are required from additional ophidiins to determine whether the gene rearrangements that we observed are specific to the genus Raneya or to the subfamily Ophidiinae. Mitochondrial (mt) gene orders are extremely conserved in vertebrates. In fish (Actinopterygii), only 35 departures from the canonical mt gene order have been described, whereas over 2000 species have been sequenced (Satoh et al. 2016). In contrast, the vertebrate sister clade, the tunicates, demonstrates extreme gene order variability, in which each of the sequenced genera presents a different gene order (Gissi et al. 2010; Rubinstein et al. 2013). Consequently, finding a new gene order in Actinopterygii is a rare event. The banded cusk-eel (Raneya brasiliensis [Kaup, 1856]) is a demersal fish present along the eastern coast of South America, from southern Brazil to northern Argentina. We report here a new mt gene order for this species. The R. brasiliensis specimen we studied was collected in Argentina (43.374000 S 64.901944 W), as bycatch from a shrimp beam trawler. The sample has been deposited in the Invertebrate collection of Museo de La Plata, FCNyM-UNLP, Argentina, Acc. Number MLP-CRG 420. Our original aim was to characterize a myxozoan parasite of this species. DNA was extracted from myxozoan-infected tissue using a DNeasy Blood & Tissue Kit (Qiagen, Germantown, MD). A dual-indexed Illumina library was created using a Wafergen Biosystems Apollo 324 NGS Library Prep System (TakaraBio, Mountain View, CA), then paired-end sequencing (150 bp) was performed on an Illumina HiSeq 3000 (Illumina, San Diego, CA) by the Center for Genome Research and Biocomputing of Oregon State University (USA). DNA reads were assembled using IDBA-UD as implemented in IDBA-1.1.1 (Peng et al. 2010) and the fish mt sequence was identified using BLAST searches. Reads were mapped with Geneious Pro version 9.0.5 using 'High Sensitivity' and mapping only paired reads which 'map nearby'. Among the 189,000,000 reads obtained, the mean coverage of the fish mitogenome was computed with Geneious Pro and estimated to be 40,000 (SD 6000; Min = 22,740; Max = 60,423). Annotation was performed with MitoAnnotator (http://mitofish.aori.utokyo.ac.jp/annotation/input.html, last accessed 2017 Nov) (Iwasaki et al. 2013). The complete mt sequence of R. brasiliensis was submitted to the DNA databank of Japan (accession number LC341245). The fish identification to species level was confirmed by constructing a phylogenetic tree based on cox1 sequences, as recommended by Botero-Castro et al. (2016). All cox1 sequences of Ophidiinae available on 7 December 2017 were downloaded from The National Center for Biotechnology Information (NCBI). Other Ophidiiforms with complete mt sequences were used as outgroups. Cox1 sequences were aligned with MAFFT 7.308 (Katoh and Standley 2013) under the L-INS-i algorithm.
A phylogenetic tree was reconstructed with RAxML 7.4.2 (Stamatakis 2006) using codon partitions under the GTRGAMMA model. Bootstrap percentages (BP) were computed using the rapid bootstrap option. The phylogenetic position of R. brasiliensis among Ophidiiformes was investigated using all mt protein-coding genes encoded on the H-strand. The nd6 gene and overlapping gene regions were discarded. Each protein-coding gene was aligned separately with MAFFT, as described above. A maximum likelihood (ML) tree was reconstructed with RAxML 7.4.2 as described above, with different model parameters for each codon partition of each protein-coding gene. In addition, a Bayesian reconstruction was performed using MrBayes 3.2.2 (Ronquist et al. 2012) for 12,500,000 generations under default MCMC settings. The partitions and substitution models were the same as those for the ML analysis.

The R. brasiliensis mt genome was 16,881 bp, slightly longer than other Ophidiiformes (16,090-16,564 bp). Surprisingly, we identified that the mt gene order was rearranged compared with the standard Actinopterygii gene order (Figure 1). Specifically, we observed different orders in two regions: the WANCY tRNA gene cluster and the nd6 – D-loop region. All rearranged genes had retained their original strand direction, as observed in other Ophidiiformes. In R. brasiliensis, the trnN was transposed to the end of the 'WANCY' region, presenting a gene order of WACYN (Figure 1(A)). The exact position of the origin of light-strand replication (OL), which is usually located between trnN and trnC in Actinopterygii, could not be determined. Concerning the rearrangement of the nd6 – D-loop region, in the standard mitochondrial gene order the cytb gene is usually flanked by the trnE and trnT on its 5′- and 3′-ends, respectively (Figure 1(B)). In R. brasiliensis, the cytb gene was flanked by non-coding regions and the nd6 + trnE gene region was transposed downstream of the cytb gene. The trnE is now flanked by the trnP at its 3′-end. This indicated that both the nd6 + trnE and trnT gene regions have been transposed. The trnT is now found downstream of the D-loop (or control region), and is flanked at its 3′-end by a pseudo-trnP, which suggests that the transposition of the trnT involved the duplication of the trnT + trnP region (Figure 1(A)).

Figure 1. Linearized representation of the Raneya brasiliensis mt gene order (A) compared with the typical Actinopterygii mt gene order (B) and with the Carapus bermudensis mt gene order (C), together with the Ophidiiformes phylogeny (D). tRNA genes are designated by single-letter amino acid codes. Genes that have undergone rearrangement in R. brasiliensis (A) and C. bermudensis (C) are connected with lines to their corresponding location in the typical Actinopterygii gene order (B). Genes encoded on the L-strand are underlined. The phylogenetic position of R. brasiliensis and C. bermudensis among Ophidiiformes was reconstructed based on mt protein-coding genes (D). All species possess the typical Actinopterygii mt gene order except R. brasiliensis and C. bermudensis, which are indicated in bold. Bootstrap supports above 50% and Bayesian posterior probabilities are indicated near the corresponding nodes, separated with a slash. The mt sequence of the specimen obtained in this work is indicated in bold and with an asterisk.

Figure 2. Maximum likelihood tree of Ophidiinae cox1 sequences. The cox1 sequence of the specimen obtained in this work is indicated in bold and with an asterisk. Bootstrap supports above 50% are indicated near the corresponding node.
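The alignment and tree-inference steps described in the methods above can be scripted; the sketch below uses MAFFT's L-INS-i mode and RAxML's rapid-bootstrap analysis, but the file names, seeds and the codon partition file are placeholders rather than the authors' actual commands.

import subprocess

# Rough sketch of the alignment and ML tree steps; input/output names and the
# partition file are placeholders, not the authors' exact command lines.
with open("cox1_aligned.fasta", "w") as aligned:
    subprocess.run(["mafft", "--localpair", "--maxiterate", "1000", "cox1.fasta"],
                   stdout=aligned, check=True)          # MAFFT L-INS-i

# RAxML rapid-bootstrap analysis plus ML search under GTR+GAMMA,
# with one partition per codon position listed in codon_partitions.txt.
subprocess.run(["raxmlHPC", "-f", "a", "-m", "GTRGAMMA",
                "-s", "cox1_aligned.fasta", "-q", "codon_partitions.txt",
                "-n", "cox1_tree", "-p", "12345", "-x", "12345", "-N", "100"],
               check=True)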
In the phylogenetic reconstruction based on cox1 sequences (Figure 2), our specimen grouped with Raneya brasiliensis, and the specimens differed by 1-2 nucleotides only, supporting the correct identification of our sample. We then investigated the position of R. brasiliensis within the Ophidiiformes using mt protein-coding sequences (Figure 1(D)). We found that Raneya (Ophidiinae) was a sister clade of the Neobythitinae Bassozetus zenkevitchi and Lamprogrammus niger with high support (BP = 73; posterior probability PP = 0.98). In agreement with Miya et al. (2003), our analyses did not recover the monophyly of Ophidiidae, as Carapus bermudensis (Carapidae, Carapinae) is the sister clade of Sirembo imberbis (Neobythitinae) (Figure 1(D)). Mitochondrial gene order is highly conserved among vertebrates, thus finding a novel rearrangement is a rare event. In this study, we identified multiple unique rearrangements in R. brasiliensis, a representative of the Ophidiiformes. Interestingly, mt rearrangements have been described in another member of the Ophidiiformes, C. bermudensis (Miya et al. 2003; Satoh et al. 2016). However, the rearrangements in Carapus and Raneya differ. In Carapus, they involve the trnP and trnM, which are located downstream of the D-loop (Figure 1(C)). Our phylogenetic analyses (Figure 1(D)) showed that Carapus and Raneya are not closely related, and are both more closely related to species that have a standard Actinopterygii gene order. These findings support the hypotheses that mt rearrangements occurred independently in Carapus and Raneya, and that the Ophidiiformes constitute a hotspot for gene rearrangement. Gene rearrangements in Actinopterygii occur more frequently in the WANCY region and the region from the nd5 to the D-loop (Satoh et al. 2016). Our results support this view, as the R. brasiliensis rearrangements occurred in these specific regions. Tandem duplication followed by random loss is the favoured model to explain mt rearrangements in vertebrates (Satoh et al. 2016). Our finding of a duplicated pseudo-trnP gene in R. brasiliensis supports this view. However, the tandem duplication-random loss model would require at least three separate events of duplication with multiple gene losses in the lineage leading to Raneya. Additional sequencing of members of the Ophidiinae should shed light on the origins of the novel Raneya gene order. Additional data should also reveal whether the gene order we observed in Raneya is shared by other members of the Ophidiinae or whether it is specific to Raneya. Disclosure statement No potential conflict of interest was reported by the authors.
2,038.4
2018-11-21T00:00:00.000
[ "Biology" ]
Magnetic resonance guided elective neck irradiation targeting individual lymph nodes: A new concept Highlights • Individual elective lymph nodes can be identified using multiple Dixon T2-weighted turbo spin echo with fat suppression.• Magnetic Resonance guided individual lymph node irradiation results in lower dose to the organs at risk.• Especially the submandibular glands, carotid arteries and thyroid can be spared.• The magnetic field on the magnetic resonance imaging - linear accelerator did not lead to increased skin dose depositions. Introduction For the treatment of regional occult metastases in patients with laryngeal cancer, elective neck irradiation (ENI) to the regional lymph node (LN) levels is prescribed with a radiation dose of 46-55 Gy. The LN levels are based on anatomical borders, as determined on computed tomography (CT) using delineation guidelines [1], and encompass the regions where individual lymph nodes (i-LNs) could be located. Due to the relatively large treatment volumes, ENI is associated with significant morbidity. Long-term complications include xerostomia [2], dysphagia [3], hypothyroidism [4] and carotid stenosis [5]. Over the past decades diagnostic imaging has improved substantially, lowering the detection threshold of small regional tumor deposits. Still, the dose prescription and target selection for ENI has largely remained unchanged [6]. Therefore, in recent years, several studies have been initiated exploring the de-intensification of ENI to reduce the toxicity of radiation therapy (RT) in patients with head and neck cancer (HNC). Some of these studies succeeded in decreasing the total RT dose in ENI to 35-40 Gy, without increasing the regional recurrence (RR) rate [7][8][9]. A different approach to reduce RT toxicity for HNC patients could be achieved by reducing the electively treated volumes. In the ideal situation only i-LNs are irradiated instead of large regional LN levels. However, the identification of i-LNs is problematic with conventional CT-based RT planning. In recent years new imaging modalities, including magnetic resonance imaging (MRI), have been introduced and successfully integrated into the RT planning process [10]. With the advent of new MRI techniques it is possible to better visualize soft-tissue structures including (small) i-LNs. This enables a new approach for ENI in which we propose to identify clinically non-suspect i-LNs with MRI and treat them accordingly, which we refer to as individual lymph node treatment in elective neck irradiation (i-ENI). With irradiation of i-LNs, the RT dose to the conventional target volumes can be reduced which, in turn, could result in a lower dose to the organs-at-risk (OARs) and reduced RT toxicity for patients with laryngeal cancer. i-ENI includes targeting multiple small i-LNs simultaneously. We anticipate that accurate online (i.e. while the patient is on the treatment table) identification and position verification of these small soft tissue structures is difficult and mandates MRI in order to minimize potential set-up errors. Fortunately, performing online MRI position verification is currently available with hybrid MRI-RT modalities, such as combined magnetic resonance imaging -linear accelerators (MRLs). In this study, two new MR-based i-ENI strategies were compared to conventional ENI in patients with laryngeal cancer. The aim was to explore the potential reduction of RT dose to the OARs. 
Study designs and patient selection In this in silico study, the pre-treatment imaging of ten patients with squamous cell carcinoma of the larynx (cT2-4aN0M0), treated at the University Medical Centre (UMC) Utrecht, The Netherlands, between 2016 and 2019, was randomly selected from an anonymized database. The primary tumor was located at the supraglottic level in four patients, while six patients had a tumor located at the glottic level. CT and MR imaging During image acquisition for RT planning purposes, patients were immobilized in RT treatment position in the same custom-made 5-point thermoplastic mask. A treatment-planning CT was acquired with a slice thickness of 3 mm and a minimal in-plane resolution of 1 × 1 mm². MRI scanning was performed on a 3 T MRI scanner, using two flexible receive coils and a posterior receive coil inside the scanner table. The water-only image of the multiple Dixon T2-weighted turbo spin echo (T2 mDixon TSE) scan [11] was used for identification of the i-LNs (slice thickness: 2 mm, in-plane resolution: 0.94 × 0.94 mm²), such that i-LNs could be separated from the fatty environment in which they are located. The MRI scans were co-registered to the treatment-planning CT, based on mutual information, and manually adjusted if necessary. Definition and delineation of target structures All target structures were contoured by a radiation oncologist, using the treatment-planning CT and MRI scans. The gross tumor volume (GTV) consisted of the primary tumor and was contoured on CT. Subsequently, the clinical target volume of the primary tumor (CTV p ) was created by adding a 5-mm margin to the GTV in all spatial directions, excluding air and bony tissue [12]. The corresponding primary planning target volume (PTV p ) was generated by expanding the CTV p with a margin of 3, 4 and 6-8 mm in the lateral, ventro-dorsal and cranio-caudal directions, respectively [13]. The conventional bilateral elective LN regions (CTV n ) of LN levels II-IV were contoured on the CT according to the guidelines published by the European Organization for Research and Treatment of Cancer (EORTC) [1]. The PTV n was generated by adding a uniform margin of 3 mm to the CTV n . All visible i-LNs were identified and delineated (CTV i-LNs ) on the T2-TSE MRI and were given a margin of 3 mm, in line with the conventional margins used for the PTV n , to create the PTV i-LNs . i-LNs were identified as structures with hyperintense signal inside the conventional nodal neck volumes. Delineation of OARs The OARs consisted of the parotid glands (PGs), submandibular glands (SMGs), oral cavity (OC), pharynx constrictor muscles (PCMs), carotid arteries (CA), thyroid and the body contour. All OARs were delineated on CT according to international consensus guidelines [14]. The skin, defined as the most superficial 5 mm of the body contour surface, was contoured as well in order to ascertain possible adverse effects on skin dose due to the static magnetic field inside the MRL. The absolute volume of the skin receiving 35 Gy or higher (V 35Gy ) was considered to be clinically relevant [15]. RT strategies Three treatment strategies were compared; in strategy C, only the individual lymph nodes (PTV i-LNs ) are targeted, without a 'background' dose prescription, and RT is performed on a 7 MV 1.5 T MRL by IMRT. Strategy A was intended as the clinical standard RT treatment. The new approach of i-ENI on an MRL was explored in strategies B and C. In strategy C, the maximum potential of OAR sparing was pursued by irradiating only the i-LNs, while strategy B was introduced to serve as an intermediate approach between strategies A and C.
The difference between B and C is the addition of a so-called background dose to the conventional PTV n in strategy B. Theoretically, (very) small i-LNs containing micrometastases could be missed on MRI. In order to treat these, a background RT dose of 36 Gy was prescribed in 35 fractions (33 Gy EQD2, α/β = 10). Treatment planning The plans were generated on the treatment-planning CT. The primary aim of the treatment planning was to achieve clinically acceptable plans for the three strategies. The volume of the PTVs receiving at least 95% of the prescribed dose (V 95% ) was aimed to be 98% or higher. Air inside PTVs was omitted from the structure to ensure sufficient target coverage. Target overdose (V 107% ) was set at a maximum of 1% for each PTV (Table 1). Other technical details on the methods used for treatment planning of all strategies can be found in supplementary material 1. Dose distributions and dose volume histograms (DVHs) were generated for each patient and strategy. Plan evaluation was performed by assessing dosimetric parameters in the OARs. The mean dose (D mean ) received by the OARs, and the V 35Gy in case of the skin, were determined. Statistical analysis Ordinal variables are reported as absolute values. Continuous variables are reported as median with interquartile range (IQR). Plan evaluation comparing the D mean received by the OARs between strategy B vs. strategy A and strategy C vs. strategy A was conducted using the Wilcoxon signed-rank test due to the relatively small sample size. All statistical testing was performed with SPSS (Version 25.0). A p-value < 0.05 was considered statistically significant. Table 1 Dosimetric target prescription and OAR constraints used for RT planning of all three strategies. PTV = planning target volume, OAR = organ at risk, V x% = relative volume receiving x% of the prescribed RT dose, D max = maximum dose, ALARA = as low as reasonably achievable. Soft constraints are recommended, but the dose may be higher in individual plans. Hard constraints are mandatory for plan approval. Results The mean numbers of i-LNs observed on the MR images on the right/left side were 18/17, respectively, whereas on CT only 12 i-LNs were identified on both the right and left sides of the neck (supplementary Table 1). The smallest size of delineated i-LNs on MRI was 3 mm measured over the longitudinal axis in the transversal plane. In Fig. 2, the difference in the conspicuity of i-LNs on CT and MRI is demonstrated. The resulting absolute volumes of PTV i-LNs were 85% smaller compared to the conventional PTV n . For all patients clinically acceptable plans were generated for strategies A, B and C in which OAR dose constraints, in terms of maximum dose (D max ) or mean dose (D mean ), were met (Fig. 3, supplementary Tables 2, 3 and 4). Dose reductions in OARs MR-based individual lymph node irradiation (i-ENI), with and without background dose (strategies B/C), resulted in significant reductions of D mean across all patients in the submandibular glands (-8.5/-10.6 Gy), parotid glands (-2.2/-4.0 Gy), pharynx constrictor muscles (-2.8/-5.6 Gy), carotid arteries (-8.9/-11.8 Gy) and thyroid (-8.7/-18.0 Gy), when compared to conventional treatment (strategy A). Non-significant D mean differences between strategies B/C and A were found in the oral cavity (+0.4/-3.8 Gy). The absence of the background RT dose in strategy C resulted in an extra D mean reduction across all patients in all OARs ranging from -1.8 to -9.3 Gy, compared to strategy B (Table 2).
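The 36 Gy in 35 fractions background prescription quoted above corresponds to roughly 33 Gy EQD2; the conversion is not spelled out in the text, but it follows from the standard linear-quadratic formula with d = 36/35 ≈ 1.03 Gy per fraction and α/β = 10 Gy:

\[
\mathrm{EQD2} \;=\; n\,d\,\frac{d + \alpha/\beta}{2 + \alpha/\beta}
\;=\; 36~\mathrm{Gy}\times\frac{1.03 + 10}{2 + 10}
\;\approx\; 33~\mathrm{Gy}.
\]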
No disadvantageous effects on skin dose were observed due to the magnetic field in the MRL. Actually, compared to conventional elective RT by VMAT (strategy A), MR-based strategies B and C showed an average decrease of − 12.2 cm 3 and -33.0 cm 3 of skin V 35Gy , respectively ( Table 2). Inter-patient variation It was not possible to achieve a reduction of D mean in all OARs with strategies B and C, compared to A in every patient. Sparing of the CAs and thyroid was realized in all patients, while the SMGs, PGs, OC and PCMs structures received a slightly higher dose in some of the MR-based plans. Relatively large variation in D mean reductions in the OARs were observed between patients. The differences in D mean varied from − 12.3 to + 1.6 Gy in the submandibular glands (2 patients received a higher dose in the MR-based plans), from − 13.3 to − 6.8 Gy in the carotid arteries and from − 16.2 to -6.2 Gy in the thyroid with strategy B vs. A. These variations were even larger for strategy C compared with strategy A. The D mean reductions for the other OARs were smaller ( Table 2). Discussion Targeting i-LNs facilitated by MRI guidance is a promising new concept. Significant D mean reductions were achieved with MR-based i-ENI in the SMGs, PGs, PCMs, CAs and thyroid, compared to conventional treatment. Most notably, average D mean reductions>5 Gy were found in the SMGs, CAs and thyroid. In the SMGs however, these reductions were not achieved in all patients. Based on the results of this study we expect the concept of MR-based i-ENI has the potential to reduce RT toxicity for laryngeal patients without compromising the dose in the lymph nodes. As a result of the D mean reductions, advantageous effects on RTassociated toxicity could be expected for patients with laryngeal cancer who are treated with MR-based i-ENI. Based on the organ-specific Normal Tissue Control Probability (NTCP) model for the SMGs [16], the number of patients with salivary flow < 25% of the SMGs 1 year after RT could be expected to decrease by 12% and 16% in case of MR-based i-ENI, with and without additional background RT dose prescription, respectively. For hypothyroidism [17] this reduction amounts to 12% and 22%. Unfortunately no NTCP models are currently available for the CAs; however previous studies revealed that dose reductions in the CAs are associated with less carotid stenosis and cerebrovascular events [5,[18][19][20][21]. These studies imply that the dose reduction could lead to a clinically meaningful reduction in side effects in the majority of our patients. Previous studies described potentially increased dose depositions at skin-tissue interfaces due to the static magnetic field in the MRL [15,22]. This could lead to undesired radiation-induced toxicity. In this study no increase of dose in the most superficial 5 mm of the skin (V 35Gy ) was observed in the MR-based plans. On average 18 i-LNs were delineated in the elective neck volumes on each side per patient, varying from 12 to 31 i-LNs. A higher number of i-LNs (approximately 6 additional i-LNs per side) were identified on MR compared to CT. In a pathological study comparable results were described by Pou et al. who analyzed 118 elective neck dissections in which on average 21.15 LNs were counted per unilateral neck dissection. Nonetheless, 47.5% of all specimens contained < 18 LNs and 18.6% had even < 10 LNs [23]. The variation of the counted LNs could be due to the natural anatomical variation found in humans. 
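The plan-comparison endpoints used above and reported in Table 2 (D mean per OAR, the absolute skin V 35Gy, and paired Wilcoxon signed-rank tests across the ten patients) are straightforward to reproduce from a dose grid; a minimal Python sketch with placeholder values, not the study data:

import numpy as np
from scipy.stats import wilcoxon

def dvh_metrics(dose_gy, voxel_volume_cm3, threshold_gy=35.0):
    # Mean dose [Gy] and absolute volume [cm^3] receiving at least threshold_gy,
    # for a flat array of voxel doses belonging to one structure.
    d_mean = float(dose_gy.mean())
    v_abs = float(np.count_nonzero(dose_gy >= threshold_gy) * voxel_volume_cm3)
    return d_mean, v_abs

# Paired comparison of D_mean per patient between strategy B and strategy A
# (ten patients; the numbers below are placeholders, not the measured values).
dmean_a = np.array([28.1, 30.5, 27.9, 31.2, 29.4, 30.0, 28.8, 29.9, 31.0, 27.5])
dmean_b = dmean_a + np.array([-8.2, -9.1, 1.6, -10.4, -7.3, -6.8, -12.3, -5.9, -9.7, -8.8])
stat, p_value = wilcoxon(dmean_b, dmean_a)   # Wilcoxon signed-rank test
print(f"median change: {np.median(dmean_b - dmean_a):.1f} Gy, p = {p_value:.3f}")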
In the present study, a 3 T MR-scanner was used, as in the radiotherapy simulation phase, for optimal target and LN identification. Visibility of small i-LNs on the MRL might be problematic since a lower field strength (1.5 T vs 3 T) is applied and no dedicated head and neck coil is available. Therefore, in an ongoing study, we assess the sensitivity of a 1.5 T MRL for individual lymph node identification in comparison to the 3 T MR scanner (the first results are shown in supplementary material 2). The PTV margins for i-LNs were adopted from the PTV margins of the conventional elective neck volumes and are used to compensate for setup errors. These PTV margins do not include possible movement of i-LNs during RT treatment. Therefore, a separate study will be performed in which the intrafraction and interfraction movement of i-LNs will be determined with MR. The results from the intrafraction, interfraction and i-LN visibility studies will also indicate whether practical issues such as on-line delineation procedures and adaptive strategies will limit clinical implementation. Few i-LNs were found in the cranial and caudal parts of the conventional elective nodal volumes. As a consequence, when no background RT dose was prescribed with i-ENI (strategy C), the elective field sizes were reduced in the cranial and caudal directions. These reductions could partly explain the large D mean reductions for the thyroid found in strategy C. The sparing effect in the thyroid was smaller when an additional background RT dose was prescribed to the entire LN volumes (strategy B). Sparing of the CAs and thyroid was possible for every patient. However, the SMGs could not always be spared and in some plans received a slightly higher D mean due to variations between VMAT and IMRT planning. The highest sparing potential was observed in anatomical situations where the distance between the target volumes (primary tumor and/or i-LNs) and the SMGs was the largest. Other groups investigate de-intensification of ENI as well. Three previously published studies succeeded in decreasing the dose to the elective neck to 36-40 Gy [7,8] or excluded LN levels [24] without increasing the RR. Other ongoing studies are selecting fewer LN levels based on LN drainage patterns [25] or imaging parameters [26]. Our proposed concept of i-ENI is a different approach in which MRI guidance could enable a more delimited elective target definition, thereby potentially allowing for healthy tissue to be better spared. It is conceivable that two or more de-intensification approaches could be combined in future studies to further reduce RT-related toxicity. The background dose of 33 Gy (EQD2) used in strategy B is based on three considerations. First of all, the background dose is applied in patients who do not have clinical nodal involvement (N0) and therefore have a low probability of having occult metastasis. Secondly, all visible i-LNs and occult metastases in those i-LNs are irradiated with the conventional dose and do not have to be covered by the background dose. Thirdly, the background dose is only needed to cover the treatment of occult metastases not lying inside the visible i-LNs, and these will therefore be smaller than the smallest i-LN that is detected with MRI.

Table 2. Dosimetric parameters for all OARs for the three RT strategies A, B, and C, as values averaged over all plans. For all OARs but the skin, the D mean is displayed; the V 35Gy is listed for the skin. For strategies B and C, the difference compared to strategy A is also indicated (B vs. A or C vs. A), as an average difference. Abbreviations: SMGs = submandibular glands, PGs = parotid glands, OC = oral cavity, PCMs = pharynx constrictor muscles, CAs = carotid arteries.
Calculations by van den Bosch et al. [27] showed a regional tumor control probability (TCP) of 94% if patients received ENI with a total dose of 33 Gy (EQD2, α/β = 10), under the assumption that all occult metastases had a diameter smaller than 3 mm. In our study the smallest size of detected i-LNs was 3 mm. In addition to this rationale, we are convinced that a lower background dose is justified than the dose (36-40 Gy EQD2) used in other clinical studies [7,8] that investigated the de-intensification of ENI, since our background dose is only needed for small occult metastases (<3 mm) in N0 patients. Since MR-based i-ENI will have an impact on both patient burden and costs, it is reasonable to select only patients in whom substantial dose reductions in the OARs are expected. For this selection process, a plan comparison for each patient could be performed between different RT strategies. Since plan comparison is a time-consuming process, it could be more efficient to utilize the distance of the target areas relative to the OARs as a guideline to predict which patients are most likely to benefit from i-ENI. In patients with laryngeal cancer, significant D mean reductions in OARs were observed with MR-based i-ENI compared to conventional treatment. Even with the use of a 36-Gy background RT dose, large D mean reductions (>5 Gy) can be achieved in the thyroid and carotid arteries for all patients and in the submandibular glands for half of these patients. In selected patients, adapting elective treatment to the i-LNs could lead to less salivary gland dysfunction, carotid stenosis (i.e. stroke) and hypothyroidism. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
4,480
2021-10-01T00:00:00.000
[ "Medicine", "Physics" ]
Micro-Doppler Based Classification of Human Aquatic Activities via Transfer Learning of Convolutional Neural Networks Accurate classification of human aquatic activities using radar has a variety of potential applications such as rescue operations and border patrols. Nevertheless, the classification of activities on water using radar has not been extensively studied, unlike the case on dry ground, due to its unique challenge. Namely, not only is the radar cross section of a human on water small, but the micro-Doppler signatures are much noisier due to water drops and waves. In this paper, we first investigate whether discriminative signatures could be obtained for activities on water through a simulation study. Then, we show how we can effectively achieve high classification accuracy by applying deep convolutional neural networks (DCNN) directly to the spectrogram of real measurement data. From the five-fold cross-validation on our dataset, which consists of five aquatic activities, we report that the conventional feature-based scheme only achieves an accuracy of 45.1%. In contrast, the DCNN trained using only the collected data attains 66.7%, and the transfer learned DCNN, which takes a DCNN pre-trained on a RGB image dataset and fine-tunes the parameters using the collected data, achieves a much higher 80.3%, which is a significant performance boost. Introduction Increased demand for security, law enforcement, rescue operations, and health care has accelerated research in the detection, monitoring, and classification of human activities [1,2] based on remote sensing technologies. In particular, the unique micro-Doppler signatures from human activities enabled diverse and extensive research on human detection and activity classification/analysis using radar sensors [3][4][5][6][7][8][9][10][11][12]. More specifically, the authors of [6] extracted direct micro-Doppler features such as bandwidth and Doppler period, the authors of [7] applied linear predictive code coefficients, and the authors of [8] applied minimum divergence approaches for robust classification under a low signal-to-noise ratio environment. Furthermore, the authors of [9] suggested to use particle filters to extract features, the authors of [10] employed biceptrum-based features, the authors of [11] utilized orthogonal pseudo-Zernike polynomials, and features based on the centroid or the singular value decomposition (SVD) have been exploited in [12]. Compared to optical sensors, the electromagnetic radar sensors can operate in all weather conditions, regardless of lighting changes, and hence are competitive for the applications that require robust operation. So far, however, most of the research has focused on the classifications of human activities on dry ground. In addition to dry ground, the accurate classification of human activities on water (namely, the aquatic activities) has wide applications in rescue operations or coastal border patrols; for example, as monitoring human activities on ocean at night or on a foggy day using optical sensors can be extremely challenging, and the robust detection and classification using radar becomes desirable. However, for the activities on water, it becomes more difficult to design informative handcrafted features based on micro-Doppler signatures, as in [6]. The reason is because human motions on water tend to be more irregular than those on the dry ground, and the micro-Doppler signatures become noisier due to water drops and waves. 
Moreover, the radar cross section (RCS) of parts of a human subject on water is low so that the Doppler signatures become less apparent than those on dry ground. Therefore, collecting large-scale training data of high quality, which is crucial for the application of machine learning algorithms, has become more difficult and expensive. In this paper, we investigate whether the micro-Doppler signatures can be still utilized to the more challenging case of classifying human activities on water. First, we carry out a simulation study on the micro-Doppler signatures of swimming activities using the point scatterer model to understand whether the signatures for different activities can be discriminative. Then, we continue our preliminary study in [13] by applying deep convolutional neural network (DCNN) directly to the spectrogram for the classification of human activities on water. As has been widely proven in many applications [14][15][16][17], the motivation of applying the DCNN is clear: instead of handcrafting the features for a given classification task, the DCNN can automatically learn the features as well as the classification boundaries directly from the two-dimensional (2-D) spectrogram data. We show that the DCNN becomes much more powerful, particularly with the transfer learning technique, for situations in which collecting high-quality data and devising handcrafted features are more challenging, as in the case of classifying human activities on water. Applying deep neural networks (also known as deep learning) to the micro-Doppler signature-based classification has been attempted only recently. Namely, the authors of [17] were the first to apply a DCNN to the micro-Doppler signature-based human detection and activity classification, the authors of [18] utilized stacked auto-encoder for fall motion detection, and the authors of [19] applied a DCNN similar to that in [17] but with a limited dataset. To the best of our knowledge, leveraging the transfer learning of a DCNN has not been attempted before for the micro-Doppler signature-based activity classification. For our experiments, we used Doppler radar and collected spectrogram data of five human subjects performing five different activities on water: freestyle, backstroke, and breaststroke swimming, swimming while pulling a floating boat, and rowing. We implemented two versions of the DCNN and compared their performances with a baseline Support Vector Machine (SVM) that implements the handcrafted features in [6]. The first DCNN is the one trained from scratch using the collected spectrogram data, which exactly follows the approach of [13,17]. The second DCNN is the transfer learned DCNN, namely, we take a pre-trained DCNN, which is trained on a separate, large-scale RGB image classification dataset, ImageNet [20], and fine-tune the network parameters using the collected spectrogram data. Our result of the transfer learned DCNN significantly outperforming other schemes illustrates that the features learned by the DCNN for the RGB image classification can be successfully transferred to the micro-Doppler signature-based classification. In the following sections, we summarize our simulation study and data collection process, explain the DCNN training in more detail, and present the experimental results. Micro-Doppler Simulation of Swimming Activities It is an interesting research question whether it is possible to obtain meaningful micro-Doppler signatures for the human activities on water when a subject is illuminated by radar. 
To that end, we carried out a simulation study of micro-Doppler signatures for the swimming activities to understand their characteristics before collecting real measurement data. When a person is swimming, the major detectable parts of a human body from radar are the arms. Hence, if the arm motion of a person is properly modeled, we can simulate the expected micro-Doppler signatures, as similar work was done for human walking in [3,4]. In this section, we focus on two swimming styles, the freestyle and backstroke, and simulate the micro-Doppler to verify whether discriminative signatures could be obtained. Based on [21], we calculated the velocity of point scatterers of the upper and lower arms of a swimmer for each swimming style. The arms are modeled as a sum of point scatterers with a separation of one wavelength (λ), and we assumed the received signal becomes the linear superposition of Doppler shifts from all point scatterers. For simplicity, a single scattering model is employed while ignoring multiple reflections. For the freestyle, we modeled the motion as two rotating cylinders, in which the upper arm (r 1 ) rotates with the angular velocity of ω while keeping θ constant as shown in Figure 1a, and the lower arm (r 2 ) is assumed to be always on the x-z plane. In this case, the velocity of each point scatterer can be analytically calculated through trigonometry. We set r 1 as 0.28 m, r 2 as 0.42 m, and ω as 2π rad/s. With an operating frequency of 7.25 GHz and a sampling rate of 1 Ksps, the simulated spectrogram with additional Gaussian noise is presented in Figure 2a. For the backstroke, in contrast, we assumed the motion as a single rotating cylinder as shown in Figure 1b. We set the length (r) of the cylinder as 0.7 m and the angular velocity, ω, as π rad/s, since the rotation of the backstroke is typically slower than the freestyle. The resulting simulated spectrogram is shown in Figure 2b.
By comparing Figure 2a,b, we observe clear sinusoidal signatures in both figures. However, we also see that the signatures from the freestyle and backstroke are not identical and show a subtle difference. Such a difference, which is confirmed by the real measurement data in the next section, suggests that the micro-Doppler signatures for the activities on water can indeed be discriminative, and a powerful classifier may be necessary for the accurate classification of the activities.
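To make the point-scatterer model above concrete, the sketch below simulates the backstroke case (a single 0.7 m cylinder rotating at π rad/s, illuminated at 7.25 GHz and sampled at 1 ksps) and forms the spectrogram with a 256-point FFT and 20 ms steps, matching the STFT settings used for the measured data in the next section. The geometry is a simplified assumption, not the authors' exact model.

import numpy as np
from scipy.signal import stft

fc = 7.25e9                  # carrier frequency [Hz]
c = 3.0e8                    # speed of light [m/s]
lam = c / fc                 # wavelength [m]
fs = 1000                    # sampling rate [samples/s]
t = np.arange(0, 4.0, 1.0 / fs)

arm_length = 0.7             # rotating cylinder (arm) length [m]
omega = np.pi                # angular velocity [rad/s]

# Point scatterers spaced one wavelength apart along the arm; the received
# signal is the linear superposition of their returns (single scattering).
radii = np.arange(lam, arm_length, lam)
signal = np.zeros_like(t, dtype=complex)
for r in radii:
    los_range = r * np.cos(omega * t)                  # range along the line of sight
    signal += np.exp(-1j * 4.0 * np.pi * los_range / lam)

signal += 0.1 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))  # additive noise

# STFT: 256-point FFT with non-overlapping 20 ms steps (hop of 20 samples).
f, tau, Z = stft(signal, fs=fs, nperseg=256, noverlap=256 - 20, return_onesided=False)
spectrogram_db = 20.0 * np.log10(np.abs(Z) + 1e-12)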
Measurements of Human Activities on Water For the measurement of the five activities on water, we used the same setup as in [13] and collected the spectrogram data of the activities of five human subjects in a swimming pool. The average height and weight of the human subjects are 178 cm and 76 kg. The activities include freestyle, backstroke, and breaststroke swimming, pulling a floating object, and rowing a small boat. As we focused only on the human signatures on water, the measurement data was collected in a more controlled environment than that of a sea or a lake. Doppler radar, which operated at 7.25 GHz with an output power of 15 dBm, was used to capture human motions as each human subject approached the radar system. We used vertical polarization assuming that human motion, especially arm motion, effectively interacts with the illuminated electromagnetic (EM) waves. The received signal was processed with joint time-frequency analysis to investigate its time-varying micro-Doppler characteristics. In the short-term Fourier transform, the fast Fourier transform (FFT) size was set at 256, and the non-overlapping step size at 20 ms. Example pictures and spectrograms for each activity are presented in Figure 3. While we recognize that each activity indeed possesses unique micro-Doppler signatures, as suggested by the simulation study in Section 2, they are not as clear as those from dry-ground measurements because of the low RCS and the interference of water waves and drops. In order to construct the training and test data sets for our DCNN-based approach, we measured a single human subject five times for each activity. From each measurement, we randomly extracted five spectrograms with 2 s intervals (100 pixels), potentially overlapping with each other. In the cropped spectrogram, the Doppler frequency was between 0 Hz and 500 Hz (256 pixels). The negative frequency does not contain significant information because the human subject was approaching the radar during the measurement.
As a result, we have a total of 625 data samples (i.e., spectrograms), which consist of five actions with 25 samples for each action for every 5 subjects. The dimension of each spectrogram was 252 (frequency) by 100 (time). DCNN Trained from Scratch Recently, DCNNs are revolutionizing many applications that mainly involve 2-D data, e.g., image recognition. The key reason is due to their power of automatically learning hierarchical representations (i.e., features) for given classification tasks directly from the raw data input. Such a revolution was realized due to the explosion of data, the advent of high-performance computing processors such as the graphic processing unit (GPU), and continued algorithmic innovations. A more thorough overview on DCNNs and deep learning in general can be found in [22], and the references therein. The authors of [17] were the first to apply a DCNN to micro-Doppler signature-based human activity classification by casting the problem as an image classification problem. Applying a DCNN to the micro-Doppler signature directly achieved accuracy essentially on par with the handcrafted feature-based state-of-the-art scheme in [6]. In order to apply the framework of [17] to the classification of human activities on water, we can simply feed the spectrogram data obtained in Section 3 and train the parameters of the DCNN. Regarding the handcrafted feature-based scheme, however, we observe that the micro-Doppler signatures of the activities on water are more subtle compared to those of the activities on dry ground, as can be seen in Figure 3; hence, it is not clear whether the handcrafted features developed in [6] would also lead to high accuracy when classifying the activities on water. We tried two different DCNN configurations. The first model (DCNN-Scratch-I), depicted in Figure 4a, is identical to the one considered in [17]. That is, as shown in the figure, we used three convolution layers, in which each layer had 20 convolution filters with 5 pixels-by-5 pixels in size, respectively, followed by a Rectified Linear Unit (ReLU) activation function and a 2 pixels-by-2 pixels max pooling layer. We used 500 hidden nodes with ReLU activation for the fully connected layer, followed by a softmax classifier. The network has about 4 million parameters.
The second configuration (DCNN-Scratch-II) is inspired by the recent advances in the DCNN architectures [23,24] that use consecutive convolution filters before pooling as depicted in Figure 4b. The number of filters and filter sizes for each layer are given in the figure, and the network has about 55 million parameters. To train both models, we used the mini-batch Stochastic Gradient Descent (SGD) with momentum, with the learning rate 0.01 for DCNN-Scratch-I and 0.0005 for DCNN-Scratch-II, the momentum 0.9, and the batch size of 50. Dropout was used at the fully connected layer with a rate of 0.5, and the maximum iteration of the mini-batch SGD update was 5000. We also used zero padding at the boundary of the data.
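A compact sketch of the DCNN-Scratch-I configuration and training setup described above (three 20-filter 5 x 5 convolution layers, each followed by ReLU and 2 x 2 max pooling, a 500-node fully connected layer with dropout 0.5, and mini-batch SGD with momentum). The original models were implemented in Caffe; the PyTorch version below is only an illustration, with the flattened feature size following from the 252 x 100 spectrogram input.

import torch
import torch.nn as nn

class DCNNScratchI(nn.Module):
    # Illustrative re-implementation of the DCNN-Scratch-I configuration.
    def __init__(self, n_classes=5):
        super().__init__()
        def block(c_in):
            return nn.Sequential(
                nn.Conv2d(c_in, 20, kernel_size=5, padding=2),  # 20 filters, 5x5
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2))                                 # 2x2 max pooling
        self.features = nn.Sequential(block(1), block(20), block(20))
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(20 * 31 * 12, 500),  # 252x100 input -> 31x12 after three poolings
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(500, n_classes))     # softmax is applied inside the loss

    def forward(self, x):                  # x: (batch, 1, 252, 100) spectrograms
        return self.classifier(self.features(x))

model = DCNNScratchI()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
criterion = nn.CrossEntropyLoss()          # used with mini-batches of size 50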
Transfer Learned DCNN While the DCNN trained from scratch with the collected spectrograms could learn useful features and achieve high classification accuracy as in [17], the small amount of our training data for the activities on water (i.e., 625 samples) may not realize the full potential of the DCNN. Therefore, we also experimented with the transfer learned DCNN. Transfer learning [25] generally refers to the techniques that transfer the knowledge or models learned from a certain task to some other related, but different, task (i.e., a target task) that typically lacks sufficient training data. Such techniques commonly improve the accuracy of the target task provided that the two tasks possess some similarity in the data distribution. While various transfer learning techniques exist, the transfer learning of the DCNN can be done with the following simple procedure: take a DCNN that is already trained for some classification task that is related to the target task and possesses a large amount of training data, replace the output classification (softmax) layer with one that matches the target task, and fine-tune (i.e., update) the DCNN parameters with the limited amount of training data from the target task. By following the above procedure, we take a DCNN that is pre-trained with the ImageNet dataset and fine-tune the network parameters using the spectrogram data collected for the activities on water. ImageNet [20] is a large-scale benchmark dataset that consists of 1.5 million RGB training images that are 224 pixels-by-224 pixels, created for computer vision tasks such as image classification or object detection. The dataset was used for the annual ImageNet Large-Scale Visual Recognition Challenge (ILSVRC) and significantly accelerated the innovation for the DCNN-based algorithms. Furthermore, the ImageNet pre-trained DCNN has been successfully used as a base model for transfer learning to some other applications (i.e., target tasks) such as style classification [26] or earth observation classification [27] that have limited training data. However, most of the transfer learning schemes regarding fine-tuning the ImageNet pre-trained DCNN were applied to target tasks that still take RGB images as input. Therefore, since the characteristics of the natural RGB images in ImageNet and the spectrograms collected from the Doppler radar are completely different, it is not apparent at all whether our approach, i.e., the transfer learning of the ImageNet pre-trained DCNN to the spectrogram-based classification of activities, can be effective. In our experiments, we show such effectiveness of transfer learning with two seminal DCNN models pre-trained on ImageNet, namely, AlexNet [15] and VGG16 [23]. AlexNet, as is depicted in Figure 5a, has five convolutional layers and three fully connected layers with about 60 million parameters. The model is the winner of the 2012 ILSVRC challenge [15] and became the catalyst of recent research on the DCNN. Since the spectrogram image has a single channel, we simply copied the data for each of the three input channels for AlexNet. For fine-tuning the network parameters, the final softmax layer of AlexNet was replaced with a new softmax layer that has five classes, and the entire network parameters were updated with the spectrogram data. The architecture of the second base model, VGG16, is given in Figure 5b.
As can be seen in the figure, VGG16 has a much deeper architecture than the other models, i.e., 13 convolutional layers and 3 fully connected layers. The network has 138 million parameters and achieved about half the error rate of AlexNet on the ImageNet test set in the 2014 ILSVRC [23]. We follow the same transfer learning procedure for VGG16 as for AlexNet. We call the two transfer learned DCNNs DCNN-TL-AlexNet and DCNN-TL-VGG16, respectively. The hyper-parameters for the mini-batch SGD training of both models were identical to those of the DCNNs trained from scratch, except for the learning rates of 0.001 for AlexNet and 0.0005 for VGG16.
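The fine-tuning recipe described above (swap the softmax layer, then update all parameters) can be illustrated with torchvision's ImageNet pre-trained VGG16. This is only a sketch of the general procedure, not the authors' Caffe-based implementation, and it assumes the torchvision >= 0.13 weights API; the batch shown is a random stand-in for real spectrograms.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load the ImageNet pre-trained VGG16 and replace the 1000-way output layer with a
# new 5-class layer; all parameters remain trainable for fine-tuning.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(in_features=4096, out_features=5)

optimizer = torch.optim.SGD(model.parameters(), lr=0.0005, momentum=0.9)
criterion = nn.CrossEntropyLoss()

# The single-channel spectrogram is copied onto the three RGB channels, mirroring the
# handling of the single-channel input described above, and sized to the 224x224 input
# that the pre-trained network expects.
spectrogram = torch.randn(4, 1, 224, 224)   # small dummy batch for illustration
x = spectrogram.repeat(1, 3, 1, 1)
y = torch.randint(0, 5, (4,))

optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```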
Experimental Results
We followed the approach of [17] and carried out five-fold cross validation (CV) using the spectrogram data collected as in Section 3 to evaluate the performance of the compared methods. Each fold consists of the data from one human subject (i.e., 125 samples); thus, the classification accuracy measures how well the algorithms generalize across human subjects. Note that the preliminary study in [13] carried out five-fold CV using only the data from a single subject, which is why its accuracy was much higher. For the DCNN models, since the model configurations (e.g., the number of layers and the number of convolution filters) were fixed as explained in Section 4, the only hyper-parameter we chose via CV was the early stopping parameter; namely, we picked the SGD iteration that gave the best average test score. We used Caffe [28] to implement the DCNNs and utilized an Intel Xeon E5-2620-v3 processor and an NVIDIA GTX Titan X GPU for our experiments. Before applying the DCNNs, we implemented eight handcrafted features from the spectrograms, similar to the ones developed in [6], and applied an SVM as a baseline conventional method. The features include the torso Doppler, the Doppler bandwidth, the Doppler offset, the bandwidth without Doppler, the Doppler periodicity, and the variance of the Doppler energy distribution. Note that these features extract general information on micro-Doppler signatures and are not designed specifically for dry-ground activities. Due to the poor quality of the spectrograms for the activities on water, a few features occasionally could not be calculated; hence, we replaced a missing feature with the average value of the corresponding feature for the same activity and the same person. For the SVM, we used the Gaussian kernel and chose the best kernel width and regularization parameter for the slack variables among 2500 combinations via CV.
Table 1 summarizes the CV results. The baseline SVM that utilizes the handcrafted features achieves an accuracy of 45.1%. While this is certainly better than a random guess among the five activities (i.e., 20%), it clearly shows that the handcrafted features developed for the activities on dry ground [6] do not generalize well to the activities on water. On the contrary, DCNN-Scratch-I and DCNN-Scratch-II achieve accuracies of 61.9% and 66.7%, respectively, which are significantly better than the baseline SVM (a 40% improvement).
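A sketch of the subject-wise five-fold CV and the SVM baseline described above, using scikit-learn, is given below. The feature matrix is a synthetic stand-in for the handcrafted micro-Doppler features, and the 10-by-10 parameter grid is smaller than the 2500 combinations searched in the experiments.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: handcrafted micro-Doppler features, y: activity labels (0..4),
# groups: subject index (0..4) so that each CV fold holds out one subject.
rng = np.random.default_rng(0)
X = rng.normal(size=(625, 8))
y = rng.integers(0, 5, size=625)
groups = np.repeat(np.arange(5), 125)

logo = LeaveOneGroupOut()  # equivalent to subject-wise five-fold CV with five subjects
param_grid = {"svc__C": np.logspace(-2, 3, 10), "svc__gamma": np.logspace(-4, 1, 10)}
search = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel="rbf")),  # Gaussian (RBF) kernel SVM
    param_grid, cv=logo, n_jobs=-1)
search.fit(X, y, groups=groups)
print("best params:", search.best_params_, "CV accuracy:", search.best_score_)
```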
This result demonstrates the robustness of the DCNN for micro-Doppler signature-based classification; instead of designing a separate set of features for each classification task, the DCNN can learn the features directly from the raw data of the new task and achieve high accuracy. Furthermore, the transfer learned models, DCNN-TL-AlexNet and DCNN-TL-VGG16, achieve 74.6% and 80.3%, respectively, which are again significantly better than the DCNN models learned from scratch. Compared to the baseline SVM, DCNN-TL-VGG16 is 78% more accurate. From this result, we observe that the features learned by a DCNN for RGB image classification can be extremely useful even when transferred to a micro-Doppler signature-based human activity classification problem that has only a limited number of spectrograms for training. The reason is that the DCNN learns features in a hierarchical way; hence, the low-level features learned for RGB image classification, such as edge or texture detectors, can be reused and fine-tuned to detect useful micro-Doppler signatures for the classification.
Figure 6 shows the learning curves (averaged over the five folds) of the two DCNN models, DCNN-Scratch-I and DCNN-TL-VGG16. From the figure, we observe that DCNN-TL-VGG16 consistently dominates DCNN-Scratch-I with a significant gap in accuracy and converges quickly, attaining its best accuracy in around 100 iterations of the SGD updates. On the contrary, DCNN-Scratch-I, which learns the DCNN parameters from scratch, needs more iterations (around 400) to converge to the best accuracy it can achieve. From this result, we clearly see that transfer learning of the DCNN can be done very efficiently and effectively.
In Figure 7, we provide the visualization of a sample input spectrogram as it passes through the convolution layers of DCNN-TL-VGG16, the best model. Figure 7a is the raw spectrogram input for one of the "freestyle" motions, and Figure 7b is the visualization of the feature maps (i.e., the results of the convolution of each filter) after the first convolution layer. While there are many filters in the first layer, we only visualized the feature maps that showed the most contrasting characteristics. As stated in Section 3, the micro-Doppler signatures of the "freestyle" motions are quite noisy, so the handcrafted features in [6] may not be able to discriminate the activity well from the others. However, as we can see in Figure 7b, the convolution filters in DCNN-TL-VGG16 successfully capture various aspects of the input spectrogram, e.g., textures and edges, such that high classification accuracy can be achieved using those captured features, as shown in our experiments.
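The kind of first-layer feature-map visualization shown in Figure 7 can be reproduced with a forward hook. The sketch below uses torchvision's ImageNet pre-trained VGG16 as a stand-in for DCNN-TL-VGG16 and a random tensor in place of the actual "freestyle" spectrogram; layer indices refer to torchvision's VGG16, not necessarily the authors' Caffe model.

```python
import torch
from torchvision import models
import matplotlib.pyplot as plt

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

feature_maps = {}
def save_maps(module, inputs, output):
    # Store the output of the hooked layer for later inspection.
    feature_maps["conv1"] = output.detach()

# Hook the first convolution layer (features[0] in torchvision's VGG16).
model.features[0].register_forward_hook(save_maps)

# Stand-in spectrogram, copied onto three channels as in the fine-tuning step.
x = torch.randn(1, 1, 224, 224).repeat(1, 3, 1, 1)
with torch.no_grad():
    model(x)

# Plot a handful of the 64 first-layer feature maps.
maps = feature_maps["conv1"][0]
fig, axes = plt.subplots(1, 6, figsize=(12, 2))
for ax, fmap in zip(axes, maps[:6]):
    ax.imshow(fmap.numpy(), cmap="gray")
    ax.axis("off")
plt.show()
```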
Discussion and Conclusions
In this paper, we considered the problem of classifying human activities on water based on micro-Doppler signatures. We first carried out a simulation study suggesting that classification in such a scenario can be challenging. Then, with real measurement data, we applied several DCNN-based methods and achieved almost double the accuracy of the baseline SVM that uses handcrafted features developed for the activities on dry ground. Our contributions are as follows: (i) we carried out an initial, rigorous study on the classification of human aquatic activities, which can be applied to several applications; (ii) we showed the robustness of the DCNN-based classification framework for micro-Doppler signature-based activity classification; and (iii) we showed that transfer learning of an ImageNet pre-trained DCNN can be extremely useful when only a small number of Doppler radar-based spectrograms are available. These results show that the DCNN approach and the use of transfer learning are promising for further extensions to micro-Doppler signature-based detection and classification problems.
As mentioned in the introduction, human activity classification in the ocean would be one of the most important applications of this study. It should be noted, however, that when ocean waves are very strong, it becomes very difficult to accurately detect and classify a human activity. Ocean waves consist of several components, such as breaking waves, resonant waves, capillary waves, and gravity waves, and they produce different kinds of scattering, e.g., Bragg scattering, burst scattering, and whitecap scattering. Thus, when the waves are strong, these complex scatterings, as well as the large RCS of the waves, make it difficult to detect human signatures with radar. However, when the waves are less strong, such as on lakes, the micro-Doppler signatures of a human subject can still be identified and used, and a more systematic study of such situations is a direction for future research. In addition, another fruitful future research direction is a comparison of the performance of a DCNN against all combinations of the existing feature-based schemes [6][7][8][9][10][11][12] to identify the best-performing methods for micro-Doppler signature-based human activity classification.
9,419.2
2016-11-24T00:00:00.000
[ "Computer Science", "Environmental Science" ]
Habitat sampler—A sampling algorithm for habitat type delineation in remote sensing imagery
The management of habitats for the conservation and restoration of biodiversity in protected area networks requires appropriate monitoring to increase our understanding of processes and dynamics in managed ecosystems. Remote sensing offers a unique potential for the derivation of coherent spatiotemporal information to report on natural or management-induced habitat change. However, the methods used for the delineation of habitat types in remote sensing imagery depend on the extensive process of ground truth sampling as a reference to construct image classifiers. In fact, the number of required reference samples is intrinsically unknown in complex scenes due to the heterogeneity of varying habitat conditions. Thus, most classifiers are not transferable to retrospective image analysis or between different ecosystems, which is, however, required for the operational use of remote sensing-based monitoring systems.
| INTRODUCTION
The loss of biodiversity is currently recognized as one of the major global challenges that affect ecosystems worldwide. As a consequence, environmental policies have implemented the conservation and management of species and habitats in protected area (PA) networks (Aichi Target 11, CBD, 2010). A critical step for maintaining and promoting biodiversity through PAs is the regular monitoring of habitat status in terms of habitat extent, species composition and evolving pressures. The goal is to effectively control conservation measures for a target-oriented habitat management (Chape et al., 2005; Geldmann et al., 2018; Watson et al., 2014). Management effectiveness evaluations are thus implemented to increase the conservation performance of PAs, particularly with regard to impacts on biodiversity outcomes (Coad et al., 2015; Gray et al., 2016). However, management effectiveness evaluations are currently implemented for only 9% of PAs, reporting on 20% of global PA coverage (UNEP-WCM, IUCN, & NGS, 2018). Repeatable, standardized and replicable monitoring is still strongly demanded to rapidly reveal spatiotemporal trends, improve the environmental impact assessment of active habitat management and enforce legal control mechanisms on PA status and configuration (Kati et al., 2015; Lengyel et al., 2008; Watson et al., 2016). Biodiversity appears in the form of spatially and temporally structured vegetation patterns that integrate processes and functions of ecosystems. Systemically relevant patterns can be described by habitat types as fundamental ecological units. Variations in habitat type extent and composition allow for an explicit evaluation of species exchange between and within habitat boundaries at the scale of landscapes (Escudero et al., 2003; Loreau et al., 2003). Remote sensing technologies offer a unique perspective to measure states and dynamics of such habitat types (Kennedy et al., 2014). In particular, satellite imagery is the most suitable source to acquire coherent spatiotemporal information about the extent and configuration of habitat types within PA networks (Regos et al., 2017; Rose et al., 2015). In this regard, image time-series analyses provide insights into mechanisms driving ecosystem change and hence enable the derivation of pressures, exchange pathways and adaptation processes of habitats on the landscape scale (Kennedy et al., 2014; Pasquarella et al., 2016).
According to that, habitat characteristics tracked from space have the potential to increase our understanding of the ecology behind biodiversity and related ecosystem functioning (Pettorelli et al., 2018; Requena-Mullor et al., 2018; Vihervaara et al., 2017). The extraction of spatially explicit information from remote sensing imagery still requires appropriate calibration between ecological field references and image data. The collection of reference samples is the most time-consuming step and the one associated with the highest costs. For retrospective image analysis, it may in many cases be almost impossible to retrieve detailed reference data. At the same time, the amount and distribution of reference samples directly affect model training and predictive accuracy, particularly in ecological applications of remote sensing where the target classes (e.g. habitat types) often exhibit high intraclass variability in structure and species composition (Cingolani et al., 2004; Pouteau & Collin, 2013; Rocchini et al., 2013; Tuia et al., 2009, 2011a). This is due to the fact that in remote sensing imagery, vegetation units such as habitat types are neither spectrally (plant vigour, phenology and life cycle) nor spatially (plant species gradients) unique. In addition, the typology of habitats often mixes criteria from the phytosociological classification of plant communities with functional aspects of management targets or landscape units (Evans, 2006; Rodwell et al., 2018). In fact, terrestrial sampling of habitat type properties is prone to subjective bias (Vittoz & Guisan, 2007; Wang et al., 2013) and substantially affects the final spatial configuration of resulting maps (Stohlgren et al., 1997). In consequence, reference data sampling acts as a critical determinant for the accuracy of habitat type delineation in statistical classification approaches, which still impedes the implementation of operational remote sensing monitoring systems (Haest et al., 2017). It has already been shown that, given appropriate reference data availability, space-borne remote sensing can be used to map the spatial extent and quality of habitats over large areas (Álvarez-Martínez et al., 2018; Cohen & Godard, 2004; Corbane et al., 2015; McDermid et al., 2005). In this regard, recent, freely available multispectral earth observation systems such as Landsat or the European Sentinel-2 satellites provide a suitable data base for implementing regular monitoring (Macintyre et al., 2020; Turner et al., 2015). Comprehensive and representative reference sample collection further enables the modelling of the fuzziness of vegetation, particularly the mapping of ecological gradients in complex environments (Foody, 1999; Neumann et al., 2016; Rocchini et al., 2013; Schmidtlein et al., 2007), though in most studies habitat types, such as those in the definition of the European Natura 2000 sites (Förster et al., 2008; Haest et al., 2010, 2017; Stenzel et al., 2014), are discriminated using manually composed training sets (supervised classification) (Xie et al., 2008). The process of image classification can be decoupled from reference data sampling by applying unsupervised clustering approaches. Nevertheless, the resulting spectral clusters do not necessarily display ecologically meaningful units.
They need to be manually matched against habitat types on the basis of prior knowledge of the expected distribution of habitats (Hasmadi et al., 2017; Townshend & Justice, 1980) using, for example, post hoc spatial aggregation (Belward et al., 1990; Lark, 1995). In that respect, contextual features from the spatial-spectral domain improve classification accuracy; however, they need to be defined structurally at specific scales, which has recently been performed using deep learning networks (Cao et al., 2018; Zhang et al., 2017). The latter again introduces the demand for many training samples, whereas pixel-based clustering is limited in the case of spatially complex vegetation structures (Jain et al., 1999; Palylyk & Crown, 1984; Townshend & Justice, 1980). The strong dependence on ground reference data and the related poor model transferability of standard classification methods in remote sensing image analyses still create a gap between monitoring demands and conservation measures, although the concept of habitat types is well established as a way to systematically evaluate ecological representation and management effects in PA networks (Müller et al., 2018; Schmidt et al., 2017b). The challenge is to find an adequate spectral sampling and related models for habitat type delineation in complex scenes. In order to effectively support nature conservation planning, semi-automated self-learning and sampling procedures that are capable of representing the complexity of spatiotemporal habitat dynamics with minimal user interference can potentially advance relevant monitoring tasks. In this paper, a novel procedure, the Habitat Sampler, is introduced.
| METHODS
The proposed procedure is divided into (a) the simultaneous sampling of reference points and training of an ensemble of classification models M (sampling by training), and (b) model selection by habitat type prediction. Each model m_max is thereby related to a unique spatial distribution of references, M_n_max, that is sampled as independent point locations in an image (sampling by training). In the second step (b), the model ensemble M is tested against a set H of predefined habitat types that are represented by one habitat spectrum H_s per type. A habitat spectrum is hereinafter referred to as the specific composition of spectral wavebands (mono- or multitemporal) that are provided to characterize a habitat type in remote sensing imagery. H_s is used as input for all classification models i × m_max to consecutively predict the labels [1;2], which are subsequently compared to the reference labels in H. A final set of models M_fin is selected that maximizes the predictive power for one habitat type H_fin compared to all others. Finally, only the selected models in M_fin are applied to the input image, and the pixel-based predictions of class labels are summed up for a probability mapping of habitat type H_fin (selective prediction). By defining a threshold for the spatial probability distribution, the image is reduced by the pixels that represent H_fin, and the procedure starts from the beginning to find M models for a habitat type from the shortened set H_s = H − H_fin (reductive learning). The method is provided as a plain-language description for the algorithm (see Appendix S1) and for the workflow (see Appendix S2). Convergence is achieved due to two basic assumptions: (a) habitat types are spatially clustered, which implies that correctly classified pixels are more likely to be spatially adjacent to similar class pixels, and (b) habitat types can be spectrally resolved at the scale of the image pixel size. There are variables that can be set to control the processes of sampling and model building (Table S1.1).
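For illustration, a heavily simplified Python sketch of this loop is given below. It is not the released R implementation: the spatially guided "sampling by training" step is replaced by a crude pseudo-labelling, and the P_d-based model selection is reduced to counting perfect splits, purely to make the control flow of sampling, selective prediction and reductive learning concrete.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def habitat_sampler(pixels, habitat_spectra, n_models=50, n_samples=200,
                    prob_threshold=0.8, seed=0):
    """Schematic loop. pixels: (n_pixels, n_bands) image spectra;
    habitat_spectra: dict mapping habitat name -> (n_bands,) reference spectrum."""
    rng = np.random.default_rng(seed)
    remaining = np.arange(len(pixels))
    spectra = dict(habitat_spectra)
    extracted = {}
    while spectra and remaining.size:
        # (a) Train an ensemble M of binary models on randomly sampled references.
        # The spatial sampling-by-training step is replaced by a crude pseudo-labelling.
        models = []
        for _ in range(n_models):
            idx = rng.choice(remaining, size=min(n_samples, remaining.size), replace=False)
            labels = (pixels[idx, 0] > np.median(pixels[idx, 0])).astype(int) + 1  # labels in {1, 2}
            models.append(RandomForestClassifier(n_estimators=10).fit(pixels[idx], labels))
        # (b) Keep only models that perfectly split one target habitat spectrum ([2])
        # from all remaining habitat spectra ([1]); pick the habitat with most such models.
        best_name, best_models = None, []
        for name, spec in spectra.items():
            others = [s for n, s in spectra.items() if n != name]
            good = [m for m in models
                    if m.predict(spec[None])[0] == 2
                    and (not others or np.all(m.predict(np.stack(others)) == 1))]
            if len(good) > len(best_models):
                best_name, best_models = name, good
        if not best_models:
            break
        # (c) Selective prediction: sum the selected models into a probability map,
        # threshold it, and remove the assigned pixels (reductive learning).
        votes = np.mean([m.predict(pixels[remaining]) == 2 for m in best_models], axis=0)
        hit = remaining[votes >= prob_threshold]
        extracted[best_name] = hit
        remaining = np.setdiff1d(remaining, hit)
        spectra.pop(best_name)
    return extracted
```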
| Model selection by habitat type prediction
The requirement for step two (b.1) is the availability of a list of predefined habitat types H_1…n, which is commonly made available by experts for nature conservation purposes. There are two options to provide the related habitat spectra H_s for the model test: (1) the expert marks each habitat type by one spatial point location within a scene of the input image, or (2) the expert accesses a spectral library. The preferred option is (1), since no further pre-processing is required as the spectral predictors are extracted directly from the input image. In option (1), one representative image pixel per habitat type is generated. These are mostly composed of mono- or multitemporal spectral waveband stacks, for example in satellite imagery. Each model m_max from the first step (a) in M is subsequently used to predict the defined habitat types one after another (Figure S1.3). The habitat type that is predicted is hereinafter defined as the target habitat H_n. Its labels will be assigned the value [2]. If the target habitat is predicted as [2] and all the others H_ex as [1], the model produces a perfect split and is accepted as a classification model in M_fin. Thereby, the predictive distance P_d evaluates the number of perfect splits over all models in a cumulated ratio, starting with a failed prediction of the target habitat [2] and the others [2] (H_ex/H_n = 2/2), which results in P_d = 0. A perfect split always increases P_d by adding up the labels [1/2] in the next steps (e.g. +[1/2] -> H_ex/H_n = 3/4; P_d = 0.25 and +[1/2] -> H_ex/H_n = 4/6; P_d = 0.44). A failed split instead decreases the level of P_d by adding up [2/2] (e.g. +[1/2] -> H_ex/H_n = 3/4; P_d = 0.25 and +[2/2] -> H_ex/H_n = 5/6; P_d = 0.11). By adding up all model predictions, P_d asymptotically approaches the value of 1, depending on the number of perfect and failed splits. After testing all models, the target habitat is changed and the models from M are tested again until all habitat types H_1…n have been predicted once as the target habitat. P_d is then used as a criterion to assess the cumulated validity among the final model sets M_fin. In consequence, the one target habitat that maximizes P_d determines an optimal model set among all habitat types, which is used as the final model set M_fin. According to this, the models in the final set M_fin represent only perfect classifiers for the related target habitat. They are based on spatial point locations used as reference samples for model training in the input imagery.
| Probability mapping and pixel reduction
The models in the final set M_fin are finally applied to the input imagery to derive spatially explicit predictions of the selected target habitat. Each pixel is predicted as target habitat [2] or background [1], while all model predictions are summed up to generate a probability map (Figure S1.4). The more often a pixel is predicted as [2], the higher the probability of the related target habitat. Finally, the user has to decide which threshold to use for the extraction of the respective habitat type pixels. The procedure is repeated until all habitat types are extracted from the imagery. As a result, habitat type samples are selected independently from each other, as new point locations are sampled as references from the reduced image pixels. In each step, the user needs to re-decide which probability distribution of pixel values is appropriate for the representation of the current target habitat.
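A minimal sketch of the probability mapping and pixel reduction step is shown below, assuming the binary predictions of the selected model set M_fin are already available as an array; the threshold value and array sizes are arbitrary examples.

```python
import numpy as np

# predictions: binary labels from the selected model set M_fin for every image pixel,
# where 2 marks the target habitat and 1 the background (synthetic stand-in here).
rng = np.random.default_rng(1)
predictions = rng.integers(1, 3, size=(40, 10_000))   # 40 models x 10,000 pixels

probability = (predictions == 2).mean(axis=0)   # per-pixel fraction of models voting "target"
threshold = 0.8                                 # user-chosen cut-off on the probability map
target_pixels = np.flatnonzero(probability >= threshold)

# Reductive learning: remove the assigned pixels before sampling the next habitat type.
remaining_pixels = np.setdiff1d(np.arange(probability.size), target_pixels)
print(f"{target_pixels.size} pixels assigned, {remaining_pixels.size} remaining")
```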
The remaining pixels in the final step cannot be assigned to any habitat category and hence represent undefined surface properties. The study area is used as an example of abandoned dry heath (Calluna vulgaris) establishment and degradation processes in the continental biogeographical region, particularly for the mapping of resulting differences in Calluna life cycle phases (Gimingham, 1972; Watt, 1955) (Table S1.2). The samples for habitat type discrimination are sampled autonomously according to the spatial sampling procedure in step (a) (Figure 1, Habitat Sampler).
| Satellite imagery
Satellite imagery was taken from the Landsat series, which provides multispectral archive imagery from the Thematic Mapper (TM; 1992, 1997, 2002 and 2009) that utilizes ground control points for geometric alignment (Young et al., 2017) and atmospheric models for surface reflectance derivation (Masek et al., 2006; Vermote et al., 2016). Analysis at finer grain was performed on Copernicus Sentinel-2 A/B satellite imagery provided by the European Space Agency (ESA) in 2018. I used the same spectral regions as from the Landsat imagery, supplemented with the three red-edge channels between 0.703 and 0.779 μm. All bands were resampled to 10-m spatial resolution and geometrically aligned applying the automated co-registration procedure AROSICS (Automated and Robust Open-Source Image Co-Registration Software) (Scheffler et al., 2017). Sentinel-2 at-sensor radiance was finally transferred into top-of-canopy spectral reflectance via radiative transfer modelling in SICOR (Sensor Independent Atmospheric Correction) (Doxani et al., 2018; Hollstein et al., 2016). All bands were provided as multitemporal image stacks including only completely cloud-free scenes for each year (Table S1.2). Due to the short revisit interval of five days and the selection of only a small area extent, Sentinel-2 A/B provide n = 10 cloud-free dates for the construction of a time stack to extract reference habitat spectra. The small area within the 2018 Sentinel scene provides a known reference extent that is used once in order to mark valid point locations at which the spectral predictors can be extracted. These habitat spectra (one per type) were used in step (b) of the Habitat Sampler for model selection in each year. For this purpose, the spectra are resampled to the respective satellite sensor and reduced to the number of available dates in the respective year. The Sentinel-2 time series is thus used to construct a dense temporal representation of habitat spectra and to perform fine grain analyses for the delineation of management effects in 2018. The performance of autonomous habitat type sampling and prediction within the Habitat Sampler was validated against supervised classification, unsupervised image clustering and spectral unmixing, which are commonly used for pattern recognition in remote sensing imagery. As a benchmark to represent the case of many training samples, I used n = 100 random 75%/25% splits of image spectra extracted from validation samples to train and test random forest (RF) (Breiman, 2001) and support vector machine (SVM) (Boser et al., 1992; Vapnik & Lerner, 1963) classification. The accuracy of habitat type discrimination was estimated using the percentage of correctly classified habitat types, Overall Accuracy (OA) (Story & Congalton, 1986), and OA corrected for random predictions, Kappa (K) (Cohen, 1960; Kruskal & Goodman, 1954).
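The benchmark protocol (n = 100 random 75%/25% splits, OA and Kappa averaged over splits) can be sketched with scikit-learn as follows; the data, the number of bands and the classifier settings are placeholders, since they are not fully specified here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

# X: image spectra extracted at validation sample locations, y: habitat type labels 1..9
rng = np.random.default_rng(0)
X = rng.normal(size=(900, 12))
y = rng.integers(1, 10, size=900)

scores = {"RF": [], "SVM": []}
for seed in range(100):                       # n = 100 random 75%/25% splits
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=seed)
    for name, clf in (("RF", RandomForestClassifier(n_estimators=100, random_state=seed)),
                      ("SVM", SVC(kernel="rbf"))):
        pred = clf.fit(X_tr, y_tr).predict(X_te)
        scores[name].append((accuracy_score(y_te, pred),        # overall accuracy (OA)
                             cohen_kappa_score(y_te, pred)))    # Kappa (K)

for name, vals in scores.items():
    oa, k = np.mean(vals, axis=0)
    print(f"{name}: OA={oa:.3f}, Kappa={k:.3f}")
```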
The averaged benchmark accuracy was tested against Habitat Sampler results that are based on only one reference spectrum H_s per habitat type.
| Validation
The Habitat Sampler was trained using i × m_max random forest models for all associated habitat types H_1…9. In (a), the hypothetical case of available training data is simulated using a split of the digitized validation data, whereas in (b) model training is performed on only the 9 habitat spectra (reduced). In (c), single habitat type spectra H_s were defined as spectral endmembers (EM). Angles between the EM and pixel spectra were calculated to assign habitat types on the basis of spectral similarity (spectral angle mapper, SAM) (Kruse et al., 1993). In order to prove that habitat sampling does not simply reproduce image statistics as a criterion for cluster partitioning, in (d) a k-means clustering was applied (Jain et al., 1999) with H_1…9 classes that were manually assigned to habitat types.
| Performance evaluation
Averaged classification accuracies for habitat type discrimination are generally highest (OA > 87%; K > 0.87) for the three validation datasets that use random sample splits from image spectra (Table 1). In the retrospective Landsat analysis, supervised classification and linear unmixing using habitat spectra as input crucially decrease classification precision (K « 0.5), while unsupervised image clustering can still deliver moderate accuracies (k-means: 0.5 < K < 0.6). Sentinel-2 supervised classification generally leads to higher class agreement during validation, except for unsupervised k-means clustering. The validation performance of the Habitat Sampler is between 10.7% (2009) and 6.1% (2018 Sentinel-2) lower than that of the benchmark validation data splits, while the benchmark case uses 75% of the samples for training and the Habitat Sampler uses on average 3.8% of the validation dataset for autonomous sampling (one spectrum per habitat type).
TABLE 1: Performance metrics overall accuracy (OA) and Kappa (K), averaged for n = 100 validation sample splits and spectral profile validation, comparing different machine learning classifications; HaSa: Habitat Sampler, RF: random forest, SVM: support vector machine, SAM: spectral angle mapper; bench: benchmark training on validation spectra, reduced: training on habitat spectra.
| Spatiotemporal habitat type dynamics
The spatiotemporal evolution of heathland ecosystem patterns shows distinct gradients of heath life cycle development, succession and degeneration from 1992 to 2018 (Figure 3). All processes were delineated over associated habitat types (Figure 2).
| Spatial patterns of heathland management
Small-scale management patterns can be made visible by mapping the probabilities of the extracted habitat types (Figure 4). Pixels are only plotted for probabilities of the corresponding habitat types that are above an individually set probability threshold. The maximum probability for each habitat type is defined by the 2 × n models stored in M_fin, which represent pixels where all models deliver a unanimous prediction of H_fin. Each pixel is finally assigned to one habitat type above an allocated probability threshold. Habitat types mapped in this way can be associated over landscape processes for a better understanding of landscape effects on biodiversity dynamics (Brose & Hillebrand, 2016; Loreau et al., 2003).
Accordingly, the study states that ecologists and conservationists should jointly initiate new advances in remote sensing approaches for a more ecosystem-based design of habitat types that can be mapped and associated over specific processes of succession, life cycle traits and management-induced disturbance regimes.
| Application and usage
The Habitat Sampler is implemented in R (R Core Team, 2020) and can be executed as a single script or as an R package. It has been tested in Windows and Unix environments and makes use of Leaflet (Cheng et al., 2019) to generate interactive maps in a web browser. User inputs are provided via the R command-line interpreter. The proposed procedure makes no assumptions about the input image. There are no constraints on the spectral-temporal-spatial domain in which an image is sampled. The user is required to have information about the expected habitat types and patterns that can be delineated in the imagery, as the habitat spectrum is marked per point location or extracted from a spectral library a priori. Classifiers in M_fin can belong to any machine learning method. This takes into account that the relative performance of any classifier is specific to the unique features of an application (Khatami et al., 2016). Classifier convergence is usually fast (<10 steps), whereas divergent behaviour is suppressed by initiating new start configurations. Processing speed crucially depends on the input image size (pixel size, extent, number of layers). Computational efficiency is further determined by the maximum number of samples per step, the number of models saved in M, the sample buffer, the number of iterations and the selected classification algorithm itself, which have to be defined individually on the basis of the expected ecosystem complexity (Table S1.1).
FIGURE 4: Copernicus Sentinel-2 true-colour composite of managed heathland areas and assigned habitat types in 2018; maps of habitat type probabilities derived from the Habitat Sampler for the Calluna heath series based on a minimum threshold (grey); the Calluna heath series is spatially represented as different life cycle phases (b.1-b.4) and natural succession (b.4-b.5) that are reset after the implementation of management measures (b.5-b.1).
The proposed procedure autonomously generates a large number of labelled reference samples (see Figure 1 and Figure S2.3 for the sample distribution) to maximize the predictive power of the cumulated classifier outputs. Unknown or incomplete training samples due to a lack of ground truth data, particularly in complex scenes of natural habitats, are one of the major constraints for accurate image classification (Foody, 2010; Maxwell et al., 2018). Even sufficient amounts of reference data can lead to biased predictions due to random variations in the partitioning into training and test samples (Bickel et al., 2009; Lyons et al., 2018). The Habitat Sampler results are comparable to cases where model training is based on comprehensive and representative reference samples, while the final habitat type output is predicted as gradual probability maps. It is intended that the user individually defines a threshold for a discrete habitat type. Conventional habitat maps often fail to represent the full complexity of organism-environment relationships (Zlinszky & Kania, 2016). On the other hand, clear spatial information is required for communicating and controlling the effects of habitat management.
In that respect, probability maps enable the preservation of information on alternatives to the selected classes, particularly with regard to continuous species gradients and encroachment parameters that are involved in the design of expert-driven habitat quality assessment schemes (Mairota et al., 2015; Nagendra et al., 2013).
| Features
The Habitat Sampler can be classified into the statistical category of active learning. In this context, active learning has evolved as a promising tool for the extraction of independent training samples in remote sensing images (Bruzzone & Persello, 2009; Tuia et al., 2011b; Zhang et al., 2016). Therein, the process of sampling is optimized to fully cover the statistical distribution of a target class, while automatization is realized by iteratively improving the performance of the classifier. In remote sensing, habitat spectra are defined as spectral-temporal wavebands that are saved as reference datasets from imagery or in spectral libraries (Hueni et al., 2009; Milton et al., 2009). By this means, an external feature space can be generated for model training as a substitute for image training. Although spectral endmember analyses for hyperspectral airborne (Artigas & Yang, 2005; Dudley et al., 2015; Zomer et al., 2009) and Landsat time series (Hostert et al., 2003; Sonnenschein et al., 2011) data have demonstrated the applicability of spectral libraries for vegetation mapping, effects of spatial non-stationarity, acquisition scale and phenological shifts still hamper the transferability of externally calibrated models to spatially or temporally independent images (Feilhauer & Schmidtlein, 2011). In the proposed procedure, the steps of model training and matching a habitat spectrum are decoupled via the criterion of the predictive distance P_d. Here, the classifiers M are only used to predict H_s of all habitat types H_1…N for a comparative filtering. The predictive resemblance to a given habitat spectrum is used to select an optimal model ensemble M_fin that will be applied to the imagery (selective prediction). This way, external habitat spectra do not determine the constraints for image transfer, as they are not involved in model training itself. A target habitat type for which the model predictions M_fin are mapped will be extracted from the image; thus, the sample population is successively reduced by pixels that are already determined as a habitat type (reductive learning). As, in each step, the habitat distribution is sampled again uniformly from the remaining pixels, the procedure creates a balanced training set that overcomes inaccurate model representation due to random clustering of pixel classes in training data splits (Ali et al., 2015; López et al., 2014) or from manually selected field references (Wang et al., 2013).
| Spatiotemporal dynamics of continental dry heathlands
Remote sensing-based vegetation mapping needs carefully designed classification systems in order to adequately represent the complexity of vegetation (Xie et al., 2008). In this study, I propose the introduction of associated habitat type sequences for a process-based mapping of heathland vegetation. In this context, habitat types are connected over successional gradients and life cycle phases that reveal the temporal evolution of life history traits and the degradation of abandoned dry heaths in the continental biogeographical region.
The study shows that, under rainfall-limited conditions, Calluna senescence, as described in the mature phase, emerges much faster, after 10 years, than described for maritime heather, where the building phase typically lasts between 7 and 15 years (Gimingham, 1972; Watt, 1955). The fast progression of the Calluna pioneer and building phases towards senescent stands and shrub encroachment indicates that cyclic processes are highly variable according to growth forms and rates (Gimingham, 1988; Schellenberg, 2017). Further research is needed for an optimal implementation of habitat management according to varying environmental backgrounds, which essentially affect the actual restoration success (Fagúndez, 2012; Henning et al., 2017). Although the hierarchical classification of heathland age as well as grass and bush encroachment (Delalieux et al., 2012; Fenske et al., 2020; Siegmann et al., 2014; Thoonen et al., 2013), functional heathland signatures (Schmidt et al., 2017a) and species turnover in heath communities (Neumann et al., 2015) have proven to provide valuable information for habitat quality assessment, the concept of associated habitat types contributes towards a more process-based monitoring that can be utilized for effective restoration management. There are a few studies using multispectral imagery to classify structural patterns of heath stands (Fenske et al., 2020; Förster et al., 2008; Raab et al., 2018; Stenzel et al., 2014; Wood & Foody, 1989). Their classification accuracies are overall high (OA > 77%), which is comparable to the delineation of habitat types using the proposed procedure (OA 78%-84%). In fact, this study presents the first long-term retrospective analysis of heathland dynamics that is based on autonomous sampling of the required references. It demonstrates that habitat management has a substantial effect not only on the quantity of habitat types but also on the spatial configuration of the arising landscape patterns (see Figure 4). In that respect, spatial patterns of management activities have the potential to reflect differing life history traits and evolving patterns of functional diversity. For an operational use of the proposed Habitat Sampler, an optimization of computation time and an increased degree of automatization will be continuously developed and provided on GitHub. In this regard, a spectral time-series library will be made available that can be used for heathland vegetation mapping and will be extendable to habitat types in various other ecosystems.
| CONCLUSIONS
In this study, a novel procedure, the Habitat Sampler, is introduced that autonomously generates independent sets of reference samples for the training of habitat type classifiers in remote sensing imagery. It combines the principles of selective sampling and active learning with predictive modelling of habitat type probabilities. For that purpose, I show how the steps of model training and habitat type discrimination can be decoupled by the selective prediction of individual habitat spectra. Its accuracy outperforms supervised classification, spectral unmixing and image clustering when using only one reference spectrum per habitat type as training sample input. Thus, reference data dependencies can be substantially minimized. The procedure is provided as a tool for use in conservation monitoring and management planning for a better representation of habitat dynamics, particularly in retrospective image analyses where in most cases no reference data are available.
In a dry heathland area, I show that spatially explicit habitat probabilities can be delineated as associated habitat types that are connected over processes of ecological succession and cyclic life history dynamics. This way, the Habitat Sampler revealed the spatiotemporal evolution of pioneer grasslands, the degradation of heath vegetation as well as recovery dynamics, particularly under the influence of habitat management in open landscapes. There are no restrictions concerning the spatial, temporal or spectral image domain on which an image is sampled. The final distribution of generated reference samples is representative of the respective image itself and can be applied to any machine learning classifier. According to that, the Habitat Sampler has the potential to be used for an operational mapping of habitat dynamics and related landscape processes, particularly to implement effective restoration management in protected areas of various ecosystems.
ACKNOWLEDGEMENTS
The research is part of the F&U-NBS-Verbund project NaTec-KRH. It is based on the "Federal Program on Biological Diversity," a funding programme for the implementation of the "National Strategy
PEER REVIEW
The peer review history for this article is available at https://publons.com/publon/10.1111/ddi.13165.
DATA AVAILABILITY STATEMENT
The R scripts, a user manual and a test dataset with co-registered and atmospherically corrected Sentinel-2 satellite imagery are made available in a GitHub repository at https://github.com/carstennh/HabitatSampler. The Habitat Sampler is additionally provided as an R source package, including test datasets. It can be installed from https://github.com/carstennh/HabitatSampler/R-package.
6,895
2020-09-26T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Performance Evaluation of Open-Source Endpoint Detection and Response Combining Google Rapid Response and Osquery for Threat Detection
Detecting the latest advanced persistent threats (APTs) using conventional information protection systems is a challenging task. Although various systems have been employed to detect such attacks, they are limited by their respective operating systems. Furthermore, they are developed as closed platforms and cannot be customized to meet user environments. To overcome these limitations, open-source endpoint detection and response (EDR) techniques are needed. In this study, we construct an open-source EDR system that integrates the open-source security frameworks GRR (Google Rapid Response) and osquery. A threat-detection case study is conducted to validate the feasibility of the proposed open-source EDR system. Additionally, APT coverage for the proposed EDR system is analyzed using MITRE's Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) model. The assessment result shows that APT tactics with a high level of threat detection using non-customized osquery configurations comprise 28.5% of all detections, which is lower than the other response levels. The performance of open-source EDR can be increased by customizing osquery for specific purposes and environments. Open-source EDR combining GRR and osquery has the potential to provide a threat detection system with efficient detection coverage and has the advantage of flexible integration with other applications; it can also be developed for evolving system environments such as the cloud and the internet of things.
I. INTRODUCTION
Cyber-attack techniques constantly improve, and advanced persistent threats (APTs) cause serious security problems for companies and organizations [1]. It is difficult for existing information protection systems to detect the latest APTs because they attack their targets persistently for a prolonged period using intelligent, advanced hacking techniques in a high-density, high-capacity, and high-speed network environment [2]. The most representative method of responding to APTs is the cyber kill chain, first devised as a military security concept in 2011 by Lockheed Martin. It defines cyberattacks in multiple stages, identifies threats to organizational processes in advance, and analyzes, detects, and prevents cyberattacks and intrusions [3]. Its attack stages consist of reconnaissance, weaponization, delivery, exploitation, installation, command and control (C2), and actions-on-objective. Additionally, other cyber kill-chain models (e.g., the Mandiant attack lifecycle and the Bryant kill chain) have been developed [4]. Bahrami et al. [5] proposed a taxonomy of APT attacks based on the cyber kill-chain model, according to which tactics, techniques, and procedures for detecting APT attacks were identified. Although the cyber kill chain has emerged as a new framework for responding to APTs, it has the limitation of only presenting the threats in a single attack stage, and it relies on security solutions such as intrusion prevention systems, firewalls, and security information and event management tools for responses.
To respond effectively to APTs, behavior-based detection techniques must be applied alongside the kill-chain model [6]. A representative example of signature-based detection is the conventional anti-virus (AV) suite, which defeats malware by matching previously analyzed signatures against suspected samples. However, signature-based detection has the disadvantages of vulnerability to zero-day attacks, high false-alarm rates, and difficulty responding to attacks that bypass signature detection [7]. By contrast, behavior-based detection techniques analyze the behaviors of malware using artificial intelligence, big data, visualization, and cloud technologies. Refs. [8] and [9] proposed behavior-based malware detection systems for Android. In particular, the system proposed by [8] was based on the fact that new malware is usually a variant of an existing one, whose signatures can therefore be used to detect the new variants based on their behaviors, even under future obfuscation and transformation. However, existing behavior-based detection methods are limited by the operating system (OS) and are developed as closed platforms, making it difficult to customize them to user environments. Furthermore, it is impossible to effectively respond to quickly evolving cyberattacks. The larger the network and the more diverse the systems, the more severe the vulnerabilities. Endpoint detection and response (EDR) methods are designed to overcome the disadvantages of behavior-based detection techniques. Because EDRs support actions in various OSs and are developed using open sources, they allow experts worldwide to collaborate and prepare responses faster than the evolution speed of attacks. Gartner [10] classified threat detection technologies by detection point, i.e., the network, the endpoint, and the user, and classified the corresponding products, as shown in Table 1. Accordingly, EDRs are being developed into next-generation endpoint threat detection tools by integrating them with endpoint protection platforms. Furthermore, the total endpoint security market was forecast to grow at a compound annual growth rate of 7.6%, from USD 12.8B in 2019 to USD 18.4B in 2024 [11]. These trends in the endpoint and EDR markets highlight that the demand for and market size of EDR solutions are expected to grow steadily. EDRs should be used in conjunction with multiple solutions because they must detect and respond to all events occurring at network endpoints. Therefore, EDR solutions typically incorporate endpoint anomaly detection and response technologies alongside existing AV solutions and techniques. Unlike traditional security workflows, recent security solutions are increasingly built on open source. A representative solution is GRR Rapid Response (GRR), a remote live forensics tool for responding to enterprise intrusions. It also provides digital evidence collection for forensics and incident-response applications. Open-source security solutions, including GRR, have the advantage of extending their capabilities through other open-source solutions and application programming interfaces (APIs), thus enabling flexible utilization. There are several studies related to such open-source security tools, but they only analyze the functionalities of the tools and the feasibility of utilizing them in specific circumstances. In this study, we propose next-generation endpoint security-threat detection and response methods and evaluate the detection coverage of the suggested EDR system for the first time.
The contributions of this study are as follows:
• An open-source EDR system combining GRR and osquery is evaluated for threat detection, and incident detection experiments are conducted.
• Using MITRE ATT&CK, all available attacks are organized, and the detection coverage of the EDR environment is identified.
The remainder of this paper is organized as follows. Section II describes previous studies on open-source EDRs. Section III describes the structure and functions of the open-source EDR system implemented using GRR and osquery. Section IV conducts attack and detection experiments. Section V defines the detection criteria of open-source EDRs and analyzes detection coverage. Section VI presents the discussion and future research directions. Finally, Section VII concludes the study.
II. RELATED WORKS
In this section, we summarize the literature related to digital evidence collection and incident response using open source-based digital forensics and incident response (DFIR) tools and security solutions. Like this study, several others have attempted to detect, respond to, and analyze cyber-security limitations using GRR and osquery. Table 2 describes the related works. Reference [12] created memory corruption and persistence attacks using Metasploit on GRR clients and responded to them using GRR's hunt functionality (i.e., GRRScanMemoryHunt, NetworkStatusHunt, GRRRegistryFinderHunt). Using GRR's cronjob, hunt tasks were scheduled, and memory, network status, and Windows registry-key analyses were performed, resulting in a well-documented analysis of the limitation of slow detection caused by the hunt cycle taking a long time (∼5-10 min). During this period, attackers had enough time to complete their attacks and cover their tracks. Furthermore, [13] described the system structure and functionality of GRR, osquery, and Mozilla InvestiGator, comparing their performance by establishing representative features that each tool successfully handles. Although both studies point out the limitations of EDR, they do not specifically address solutions. In addition, although attempts have been made to identify the core functions of EDR, there is insufficient research on how to respond to cyber-attacks overall. References [14]-[17] applied GRR to several domains to collect data and attempted forensic analysis. Ref. [14] used GRR to respond to cyber threats in a healthcare system, demonstrating the benefits of operating the system simultaneously with remote forensics. Furthermore, they rendered the tool compatible with multiple frameworks, owing to its open-source nature. However, an additional security mechanism is needed for GRR to safely handle sensitive information in medical systems. Containers have also been frequently used to increase the convenience and efficiency of software engineers. Refs. [15]-[17] highlighted the need for digital forensics for web servers operating in containers (e.g., Docker and Kubernetes) and used them to respond to distributed denial-of-service (DDoS) attacks. In these studies, DDoS attacks were recognized by checking web server logs and IP addresses. However, this does not differ from existing network security techniques, as the remote tool is used only to collect logs, and attempts to automate attack detection using the advantages of remote live forensics have been lacking. Studies that attempted to integrate GRR with other analytical tools include [18] and [19].
Reference [18] integrated zeek (bro), an open-source network traffic analyzer, and conducted network forensics. Ref. [19] conducted network forensics using the open-source project DroidWatch, which is designed to collect and monitor data from Android devices, alongside GRR. Although these contributions showed that data could be collected through GRR even though it did not natively support Android, limitations persisted in that some log data failed to be collected, and rooting was used instead of normal methods to access logs. Furthermore, [20] analyzed and resolved the database limitations of GRR. Log data were stored in a database for incident response and forensic evidence collection, but the method had the limitation that data volumes increase with larger-scale services, resulting in resource shortages. Ref. [20] proposed a distributed data repository to overcome the limitations of GRR storage systems, improving scalability, processing speeds, and efficiency. However, it is necessary to study not only the data collection stage but also the performance of processing, transmitting, and utilizing the collected data. When using the collected log data, additional studies such as a performance comparison between the existing method and the distributed storage method can be conducted. In this paper, we propose a way to stream and utilize the collected log data by using the EDR in conjunction with Kafka. Most of the existing studies dealt with only a few attacks in specific situations and conditions, and were limited to basic explanations of the main functions and limitations of each EDR tool. However, in order to respond to APT attacks that are becoming more intelligent day by day, it is necessary to study an EDR system that analyzes attacks at each stage of the APT scenario, such as a cyber kill chain, and systematically responds to them. In this work, we further seek to construct a responsive EDR system with endpoint anomaly detection by processing the collected endpoint data used in extant studies, wherein GRR and osquery are used for DFIR. Also, the performance of the proposed EDR system is evaluated by analyzing its detection coverage for all APT stages. III. OPEN-SOURCE EDR SYSTEMS OVERVIEW This section describes the GRR and osquery systems and introduces the EDR environment that combines them. A. GRR GRR is an incident-response framework for remote live forensics and includes a client and a server [21]. GRR clients are distributed to systems that need to be monitored, and they periodically poll the server for work in the form of client actions, which include file finder, memory, network, OS, and osquery actions. The GRR client and server communicate via the hypertext transfer protocol (HTTP) by default, and each communication is encrypted using AES-256 (Advanced Encryption Standard). The GRR server comprises a front-end server, a worker, and a user interface, and its core functions include flows and hunts. The GRR server uses flows, a type of state machine, to solve resource problems. A flow is a core entity of the server, and it calls client actions. When the GRR server first enters a flow state, it requests the client action for that state. While waiting for a response from the client, the server releases its resources. When a response arrives, it fetches the appropriate resources and runs the flow state. In this way, GRR can avoid resource-hogging issues. A flow can be run on thousands of client machines, which is called a hunt. A hunt specifies which flow to run and on which machines to run it.
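As a concrete illustration of flows and hunts, the following is a minimal sketch of launching a FileFinder flow on a single client through GRR's Python API library (grr_api_client, described further below); the endpoint, credentials, client search term, and paths are illustrative assumptions, and argument fields may differ between GRR versions.

# Minimal sketch (assumed values, not the paper's code): start a FileFinder flow
# on one enrolled GRR client via the grr_api_client library.
from grr_api_client import api

# Connect to the GRR API endpoint over HTTP, as in the setup described here.
grrapi = api.InitHttp(api_endpoint="http://localhost:8000",
                      auth=("admin", "password"))

# Pick a client to task; SearchClients returns the matching enrolled clients.
client = next(iter(grrapi.SearchClients("host:victim-pc")))

# Build FileFinder arguments: download every file under /tmp (example path).
flow_args = grrapi.types.CreateFlowArgs("FileFinder")
flow_args.paths.append("/tmp/*")
flow_args.action.action_type = flow_args.action.DOWNLOAD

# Launch the flow; a hunt would schedule the same flow across many clients.
flow = client.CreateFlow(name="FileFinder", args=flow_args)
print("Started flow:", flow.flow_id)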
GRR stores forensic data collected from clients and abstracts and represents them using a virtual file system. Hence, analysts can inspect the client file system. The GRR artifact is a function that collects and manages data generated while using the OS or applications. It can be conveniently used in conjunction with other tools because the information collected from the host is stored in an external storage system in YAML (YAML Ain't Markup Language) format, a human-friendly data serialization standard for all programming languages. GRR-specific or private artifacts are stored locally. For other functions, GRR supports a Python library (grr_api_client) for automation and uses PowerShell for automation and scripting. Furthermore, it provides a function that interfaces with osquery during a GRR flow and obtains client information in structured query language (SQL) form for more effective analysis. B. OSQUERY Osquery is an operating system instrumentation framework that abstracts and exposes information for analysis and monitoring as SQL tables and can be used on macOS, FreeBSD, Linux, and Windows. As of October 2020, it supports 264 basic schemas and provides system information (e.g., processes, CPU, disk, device, memory, kernel, file, Wi-Fi, and network information) as well as information related to YARA malware research and detection, Docker containers, and Azure cloud services. New schemas can be defined directly by users as needed. Osquery supports a shell/console mode, osqueryi, and a host-monitoring daemon, osqueryd. In particular, osqueryd performs scheduling via the osquery configuration. As shown in Fig. 1, the interval for periodic monitoring can be set. Fig. 1 also shows sample code for scheduling and running "SELECT * FROM file_events;" every 300 s. Osquery supports logger plugins for various interfaces; in this study, the log results are transmitted through the kafka_producer plugin. Kafka is a distributed streaming platform. When a producer sends a message to a Kafka topic, consumers read it. This allows the construction of a reliable real-time data pipeline between systems and applications, with real-time streaming that reacts to data as it changes [22]. Thus, the detection data generated by GRR and osquery can be streamed using Kafka and used for real-time responses. Schemas can be managed and queried using the Confluent schema registry. When a producer registers a schema in the registry, serialized Avro/Protobuf/JSON data are sent to Kafka under the corresponding schema ID. Consumers retrieve the schema ID, look up the corresponding schema in the registry, and deserialize the data with it [23]. C. EDR SYSTEM ENVIRONMENT The experimental environment of the open-source EDR was set up by building a GRR server and client as virtual machines using VMware. First, we verified whether the detected data were properly transmitted to the EDR in an experimental environment consisting of the GRR server, client, and Kafka. The GRR server does not provide an osquery agent installer, so osquery's official release must be installed on the client machine. When osquery is installed on the client machine and placed on a path that can be accessed by GRR, GRR and osquery can be used together. Fig. 2 shows the EDR experimental setup comprising GRR, osquery, and Kafka. The GRR server is connected to the GRR client and uses HTTP for communication.
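To make the osqueryd scheduling and Kafka logging in this setup more concrete, the sketch below generates an osquery configuration that runs "SELECT * FROM file_events;" every 300 s and enables the kafka_producer logger plugin; the broker address, topic name, and monitored paths are assumptions for illustration, not values taken from the paper.

# Minimal sketch (assumed values): write an osquery configuration that schedules
# the file_events query every 300 s and forwards results to Kafka through the
# kafka_producer logger plugin.
import json

config = {
    "options": {
        "logger_plugin": "kafka_producer",
        # Kafka settings of the kafka_producer plugin (broker/topic are assumptions).
        "logger_kafka_brokers": "localhost:9092",
        "logger_kafka_topic": "osquery_results",
        # file_events requires file-event publishing to be enabled.
        "enable_file_events": "true",
    },
    "schedule": {
        "file_integrity": {
            "query": "SELECT * FROM file_events;",
            "interval": 300,  # seconds, as in Fig. 1
        }
    },
    # Directories fed into the file_events table (monitored paths are assumed).
    "file_paths": {
        "homes": ["/home/%%"],
    },
}

with open("osquery.conf", "w") as fh:
    json.dump(config, fh, indent=2)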
The GRR client in which osquery has been installed sends query results to the GRR server according to its requests. MySQL, the database of the GRR server, is linked to Kafka and can stream the transmitted detection results. The hunt function of GRR can cause performance degradation of running clients and of the GRR system because it places an extensive load on the system [24]. As a hunt expands its coverage, this makes it difficult to use for live forensics. When utilizing the hunt function, limits on resource usage should therefore be set to mitigate this problem. Alternatively, data can be accumulated in external systems, and forensics can be performed separately. Kafka can be used to transmit data from internal to external systems and to store, manage, and investigate them externally [25]. Kafka is a distributed streaming platform with proven big-data processing performance. In this paper, to resolve the abovementioned performance and scalability problems, a Kafka-based open-source EDR is studied. MITRE ATT&CK is a knowledge base of adversary tactics and techniques that can be used at an organizational level. It provides organizational security awareness, helps identify gaps in defense, and prioritizes risks [26]. The enterprise ATT&CK framework used in this study is based on the general attack procedure, which refers to actions performed from the intrusion stage to the goal-achievement stage. This framework sequentially describes 12 attack stages (i.e., initial access, execution, persistence, privilege escalation, defense evasion, credential access, discovery, lateral movement, collection, command and control (C2), exfiltration, and impact), which include a total of 184 techniques. IV. ATTACK AND DETECTION EXPERIMENTS The attack scenario for penetration testing consists of the initial access, execution, C2, and impact stages of ATT&CK. For penetration testing, we used Metasploit, an exploitation and vulnerability validation tool that assists penetration testing, and the open-source ransomware tool RAASNet. Wine was also used to run the Windows .exe file on the Linux OS. The attack scenario was set based on an APT. First, a Metasploit payload disguised as a normal file is generated and moved to a universal serial bus (USB) storage device connected to the victim's personal computer (PC). When the malicious payload is stored and executed on the victim's PC via USB, the attacker's C2 server and the victim's PC become connected. Hence, abnormal symptoms detected in the network traffic can be verified and responded to. Furthermore, the victim's PC is infected by the RAASNet ransomware; this can be responded to by detecting that files are encrypted with a specific extension (demon) and that file sizes and time properties are changed. Fig. 3 shows the attack-and-response environment of the experiment, which comprises a GRR server, a client, and an attacker. It shows the attack stages of initial access, execution, and C2 corresponding to the attack scenario, and the flow of the attack. The GRR server and client respond to attacks while communicating via HTTP. Table 3 shows the attack-and-response experimental conditions. Fig. 4 shows a flowchart of the attack-and-response experiment. The procedure and method of each attack-and-response stage are as follows. 1) INITIAL ACCESS The initial access-stage attack consists of techniques related to the attacker's network access attempts. The attacker can first access the victim's PC by inserting malicious exploit code or an attachment file into the web browser or e-mail, or by acquiring network access permission.
The attacker can also attempt malicious actions by copying malware to a mobile device and inserting it into the system. In this experiment, malware was stored on a USB mobile storage device and inserted into the victim's system. "SELECT * FROM usb_devices;" was used as the query statement to respond to the initial access stage. The information on the USB devices connected to the PC (class, model, model_id, protocol, removable, serial, subclass, usb_address, usb_port, vendor, vendor_id, and version) can be obtained, and traces of the USB connection at the time of the initial attack can be established. In particular, the ID and serial number in the USB device information can be used as critical information to detect unauthorized USB devices other than those registered by the organization. Furthermore, the files in a specified path inside the USB device can be identified using the query statement "SELECT * FROM file WHERE path LIKE '/media/account/USBname/%';". 2) EXECUTION The Metasploit payload injected via the USB in this experiment was a malicious executable file disguised as a normal file, and it bypassed security and AV programs. Among the detailed attack techniques of malware, user execution occurs when the user opens the file. When the payload is executed, a session is formed between the C2 server and the victim's PC, and network traffic is generated. Because the experimental environment of the victim's PC was Ubuntu, Wine was used to run the .exe file. 3) C2 A Trojan attack induces the internal system to voluntarily open a port to enable communication with external systems. In this case, it opened a specific port for C2-server communication. When such network traffic is detected, it can be judged as an abnormal symptom, and hacking can be suspected. In this case, the connection between the attacker and the victim's system was verified via a query designed to detect suspicious outbound network activity: "SELECT s.pid, p.name, local_address, remote_address, family, protocol, local_port, remote_port FROM process_open_sockets s JOIN processes p ON s.pid = p.pid WHERE remote_port NOT IN (80, 443) AND family = 2;". 4) IMPACT This step is the final step of the APT attack, and it interferes with the availability of systems, services, and resources, or it damages integrity. In this experiment, the victim's PC was infected with ransomware, thereby damaging the availability and integrity of the target folder. The characteristics that can be used to check the behaviors of the ransomware are the file extension, timestamp (time information), and change in file size. The properties of the folder, file extension and size, and timestamp can be checked using the query "SELECT * FROM file WHERE path LIKE '/home/user_name/ransomware-infected folder/%';". Additionally, we can use a query that only shows information regarding a specific extension, "SELECT * FROM file WHERE path LIKE '/home/user_name/ransomware-infected folder/%.extension';", or one that determines changes in time values caused by the ransomware infection by checking atime, mtime, and ctime via "SELECT filename, atime, mtime, ctime FROM file WHERE path LIKE '/home/user_name/ransomware-infected folder/%';". Furthermore, the information of file_events, a table related to file-integrity monitoring, can be output using the query "SELECT * FROM file_events;" to display the states of files as created, deleted, or updated. Table 4 shows the summary of the attack and detection experiment methods.
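For illustration, such checks can also be scripted around osqueryi; the sketch below runs the C2-detection query from this section in JSON mode and prints suspicious outbound connections (the wrapper itself, including the alerting logic, is an assumption and not part of the paper's implementation).

# Minimal sketch (assumed wrapper): execute the C2-detection query from this
# section through osqueryi and report suspicious outbound connections.
import json
import subprocess

C2_QUERY = (
    "SELECT s.pid, p.name, local_address, remote_address, family, protocol, "
    "local_port, remote_port "
    "FROM process_open_sockets s JOIN processes p ON s.pid = p.pid "
    "WHERE remote_port NOT IN (80, 443) AND family = 2;"
)

def run_osquery(sql):
    """Run a query with osqueryi in JSON mode and return the rows as dicts."""
    out = subprocess.run(["osqueryi", "--json", sql],
                         capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

if __name__ == "__main__":
    for row in run_osquery(C2_QUERY):
        # Every non-80/443 IPv4 connection is surfaced for analyst review.
        print("[suspicious] pid=%s proc=%s remote=%s:%s"
              % (row["pid"], row["name"], row["remote_address"], row["remote_port"]))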
It reflects the stages of MITRE ATT&CK, the attack techniques, and the osquery queries for all attacks. Queries that show the most representative characteristics of each attack were selected for this experiment, and the attacks were detected using them. B. RESULT OF DETECTION SCENARIO As a result of conducting the attack-and-response experiment assuming an APT attack scenario based on MITRE ATT&CK, various attacks, including the use of an unauthorized USB device, a Trojan horse, and ransomware, were detected using osquery query statements. GRR provided a client API that enabled Python scripting and automation, and osquery provided the osqueryd daemon, which recorded logs when events were detected. This was used to set response schedules so that the EDR system is activated at the established intervals. Therefore, the occurrence of events in a specific path or file can be detected in real time, and any abnormal symptoms can be responded to immediately by scheduling osqueryd. Through this process, automated detection and response tools that detect changes via specific events and investigate the corresponding artifacts become feasible. Furthermore, the detected data can be transmitted via Kafka, one of osquery's logger plugins, to be utilized for analysis and response. The results of this detection experiment show that several open-source security tools can be interconnected to form an EDR environment that detects, analyzes, and responds to endpoint security incidents. V. PERFORMANCE EVALUATION This section defines the detection criteria of the open-source EDR, tests its detection coverage, and analyzes the results. Additionally, a development direction for improving detection performance is presented. A. DEFINITION OF DETECTION CRITERIA In our research environment, the security-incident detection criteria were established by organizing the information and query-statement requirements needed to detect MITRE ATT&CK techniques. For example, for the drive-by-compromise technique of the initial access tactic, because the attack can be detected through website-access and script-execution logs, the query statement to verify these logs becomes a requirement for detection. Additionally, the valid-accounts technique can be detected from user account information, log-change permissions, etc., which require query statements to check whether new unauthorized local accounts have been created. It also requires queries to check for unusual changes related to privilege escalation and to check all appliances and applications for default credentials and secure-shell protocol keys. B. ANALYSIS OF DETECTION COVERAGE For the detection coverage analysis, we investigated and developed query statements collected from the osquery GitHub repository opened by Facebook, osquery query packs, the osquery-configuration GitHub repository, the osquery documentation, and other related research and publication websites. A total of 691 query statements were collected and developed, and 381 remained after merging and deleting duplicate query statements (160 from the osquery GitHub repository, 136 from the osquery query packs, 48 from websites, 31 from related research data, four from the osquery documentation, and two custom-produced according to the collection routes). The queries used in this experiment are all available on GitHub. Based on the ATT&CK framework, the distribution of detection data was generally proportional to the number of techniques at each stage.
To define detection coverage, the 381 query statements were analyzed to see whether they satisfied the requirements for detection. Then, the detection level of the GRR environment was identified by converting the ratio of query statements that satisfied the requirements into a percentage. The response levels for the attack techniques are outlined in Table 5. Depending on the percentage of satisfied requirements, techniques with fewer than 40 % were classified as low, 40 % or higher and less than 70 % as medium, and 70 % or higher as high. In addition, the importance of each technique was calculated alongside the classification criteria for the response levels given in Table 5, and a high-level analysis of detection coverage was performed. To this end, weights were calculated based on the Teach model, which defines the attack difficulty of each technique published by ATT&CK. We assumed that the lower the attack difficulty, the more frequently the attack and the resulting damage appear; hence, a higher weight was given to such techniques, as shown in Table 6. The detection coverage was then calculated from the response levels and the attack-difficulty weights and converted to a percentage using Eq. (1). C. ANALYSIS RESULTS The analysis results of the detection coverage based on ATT&CK are shown in Fig. 5. In the open-source EDR environment of GRR, the ratio of techniques having a high response level was found to be 28.5 %, the ratio of techniques having a medium response level was 35.1 %, and the ratio of techniques having a low response level was 36.4 %. Thus, the ratio of the high response level was lower than that of the others. The main reason for the low detection coverage is a lack of query statements for attack detection; coverage and performance can be improved through query statements custom-made for the environment and purpose rather than through basic query statements. Because osquery is based on relational databases, the necessary information can be obtained flexibly by joining multiple schemas. For example, in Table 4, the query statement for detecting Trojan attacks was generated by joining the process_open_sockets schema with the processes schema. To maximize the utilization of the GRR and osquery-based open-source EDR, such customized settings for user environments are required. Furthermore, we conducted in-depth research on the detection level and the ratio of detection data at each attack stage and on the stages having low detection levels, and summarized the performance and development directions of the open-source EDR. Fig. 6 analyzes the possibility of responding to the attack techniques in each of the 12 attack stages of MITRE ATT&CK, which represents the detection level of each attack stage. The average number of query statements for threat detection in each attack stage was 52.25. The top four stages with low response levels, where the number of queries is lower than the average, were initial access, execution, credential access, and lateral movement, with 17, 26, 29, and 17 queries, respectively. Fig. 7 lists the ratios of detection data of each attack stage in ascending order; the ratio was determined as the number of query statements for threat detection in a stage divided by the total number of query statements. The average ratio of the detection data was 5 %. The ratios were equal to or lower than the average in the four stages having an insufficient response: 3 % for initial access and lateral movement, 4 % for execution, and 5 % for credential access.
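The weighting scheme can be illustrated with a short sketch; since Eq. (1) itself is not reproduced above, the level scores, the example weights, and the normalization used here are assumptions chosen only to show the idea of a difficulty-weighted coverage percentage.

# Illustrative sketch of a difficulty-weighted detection-coverage percentage
# (assumed form; this is not the paper's exact Eq. (1)).
LEVEL_SCORE = {"low": 0.0, "medium": 0.5, "high": 1.0}  # assumed scores per response level

def detection_coverage(techniques):
    """techniques: list of dicts with 'level' (low/medium/high) and 'weight' (Table 6 style)."""
    weighted = sum(LEVEL_SCORE[t["level"]] * t["weight"] for t in techniques)
    total = sum(t["weight"] for t in techniques)
    return 100.0 * weighted / total if total else 0.0

# Toy example with three techniques and assumed weights.
example = [
    {"name": "T1566 Phishing",                  "level": "low",    "weight": 5},
    {"name": "T1059 Command Interpreter",       "level": "medium", "weight": 4},
    {"name": "T1486 Data Encrypted for Impact", "level": "high",   "weight": 3},
]
print("coverage = %.1f %%" % detection_coverage(example))  # -> 41.7 %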
Table 7 lists the techniques having high importance and low detection levels among the total of 184 techniques. High importance was defined as a weight of four or higher, referring to Table 6. Among all techniques, 64 had low detection levels, and 29 of these also had high importance, accounting for 45 %. This is approximately half and corresponds to approximately 15.7 % of all techniques. In other words, owing to their lower attack difficulty, these attack techniques have a higher probability of being used than other techniques, yet their detection level was low. In particular, the attack techniques corresponding to the four stages having the lowest detection levels in Fig. 7 (i.e., initial access, lateral movement, execution, and credential access) had the lowest relative detection levels among all attacks. Therefore, to improve the performance and coverage of the open-source EDR system, additional research and development of technologies that can respond to the techniques in Table 7 and to detailed attacks in the above four stages are required. VI. DISCUSSION AND FUTURE WORK The open-source EDR is convenient and effective for managing multiple devices, and its source code can be freely modified and improved to support various applications and OSs. Therefore, it is superior to conventional AV and forensic tools in terms of scalability, efficiency, and cost. In this paper, the detection coverage of the proposed EDR system combining GRR and osquery was evaluated, but other effective tools can also be combined to further improve detection performance. Remote monitoring can be performed effectively using an open-source EDR in the cloud, across the internet of things (IoT), and in large-scale service environments. Additionally, using the collected detection data to respond to attacks or for digital forensics can be considered. However, current digital forensics procedures require data collection while all network connections to the system are blocked, so that evidence can be collected with integrity. Because system data can be collected remotely using EDRs, and their integrity can be verified using hash functions, digital forensics procedures and regulations must be updated in accordance with attack trends. Furthermore, privacy and critical-information leakage can occur because real-time logs are collected while confidential information is processed [14]. Hence, information protection at the management server is needed when processing the detection data, responding to attacks, and conducting remote forensics using the open-source EDR. Furthermore, cloud and IoT usage increases with the acceleration of digital transformation and the development of wireless communication and network technology. Cloud and virtual system environments require data-integrity and digital-evidence analysis solutions, and IoT has security vulnerability problems resulting from its pursuit of low cost, light weight, and minimal performance overhead. Open-source EDR must evolve to meet these needs. As future work, the proposed framework will be applied to and evaluated in large-scale IoT and cloud environments. VII. CONCLUSION The open-source EDR is a cost-effective security tool with high expected value in terms of flexibility, utilization, and scalability, and it can be used in next-generation digital platforms that are becoming hyper-connected, hyper-intelligent, and globally scaled.
In this study, attack detection and coverage analysis were performed for all APT attack stages according to MITRE ATT&CK through an open-source EDR for the first time. A few stages showed a low detection rate owing to insufficient query statements for detecting the detailed attacks of each stage. An in-depth performance evaluation was conducted using the Teach model, and attacks with high importance and low detection levels were analyzed. To ensure the coverage of GRR- and osquery-based open-source EDRs, appropriate query statements tailored to the required environment and conditions must be created, especially for high-importance attacks with low detection levels. Also, other effective open source-based tools can be used to increase the performance of the EDR system. As future work, other open-source tools can be compared for their performance using the evaluation framework of this study.
7,723.8
2022-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
A Meta-Analysis of Retinoblastoma Copy Numbers Refines the List of Possible Driver Genes Involved in Tumor Progression Background While RB1 loss initiates retinoblastoma development, additional somatic copy number alterations (SCNAs) can drive tumor progression. Although SCNAs have been identified with good concordance between studies at a cytoband resolution, accurate identification of single genes for all recurrent SCNAs is still challenging. This study presents a comprehensive meta-analysis of genome-wide SCNAs integrated with gene expression profiling data, narrowing down the list of plausible retinoblastoma driver genes. Methods We performed SCNA profiling of 45 primary retinoblastoma samples and eight retinoblastoma cell lines by high-resolution microarrays. We combined our data with genomic, clinical and histopathological data of ten published genome-wide SCNA studies, which strongly enhanced the power of our analyses (N = 310). Results Comprehensive recurrence analysis of SCNAs in all studies integrated with gene expression data allowed us to reduce the candidate gene lists for 1q, 2p, 6p, 7q and 13q to a limited gene set. Besides the well-established driver genes RB1 (13q-loss) and MYCN (2p-gain), we identified CRB1 and NEK7 (1q-gain), SOX4 (6p-gain) and NUP205 (7q-gain) as novel retinoblastoma driver candidates. Depending on the sample subset and algorithms used, alternative candidates were identified, including MIR181 (1q-gain) and DEK (6p-gain). Remarkably, our study showed that copy number gains rarely exceeded a change of one copy, even in pure tumor samples with 100% homozygosity at the RB1 locus (N = 34), which is indicative of intra-tumor heterogeneity. In addition, profound between-tumor variability was observed that was associated with age at diagnosis and differentiation grades. Interpretation Since focal alterations at commonly altered chromosome regions were rare except for 2p24.3 (MYCN), further functional validation of the oncogenic potential of the described candidate genes is now required. For further investigations, our study provides a refined and revised set of candidate retinoblastoma driver genes. Introduction Retinoblastoma is a pediatric cancer of the retina. Although the disease is relatively rare, accounting for 2% of childhood cancers [1], retinoblastoma is the most common intra-ocular malignancy in children [2]. Retinoblastoma development is initiated by two sequential hits [3] in RB1 (RB1-/- patients) and, in a few cases, by amplification of MYCN (RB1+/+MYCNA patients) [4]. Hereditary patients carry a deleterious germ line mutation in one RB1 allele and therefore only require one somatic mutation in the wild-type RB1 allele for retinoblastoma to develop, while non-hereditary patients require two somatic mutations in RB1. However, while bi-allelic inactivation of RB1 can cause benign retinoma lesions, additional genetic alterations can be required for progression to retinoblastoma [5]. In addition, it has been demonstrated that there is profound variability in the total amount of genomic disruption by SCNAs between retinoblastoma tumors [15]. In several studies it was discussed whether and how the extent of genomic disruption relates to clinical and histopathological variables. However, due to strong correlation between the independent variables (like age at diagnosis, heredity, laterality and differentiation) and small sample sizes, explanations for the variability in genomic disruption in retinoblastoma remained inconclusive.
Our study aims to refine the set of putative driver genes of recurrent SCNAs and to gain insight into the variability in genomic disruption. Data from high-resolution genome-wide SNP-arrays of 45 human retinoblastoma samples matched with peripheral blood DNA were used and complemented with clinical and histopathological features. In order to increase the power of our study, results were analyzed together with the results of ten published genome-wide SCNA-profiling retinoblastoma studies [6][7][8][9][10][11]13,[15][16][17], adding up to a considerable number of 310 primary retinoblastoma samples. Good agreement in SCNA-frequencies between 11 independent studies Our study describes SCNAs of 45 primary retinoblastoma samples together with SCNAs from 10 published studies (S1 Table), adding up to 310 tumor samples (Fig 1; a cohort description is given in S2 Table). This allowed for a detailed genome-wide comparison of SCNAs determined from independent and international studies. For each HGNC-approved gene, the percentage of the cohort affected by gains/losses is visualized along the genomic coordinates, stratified by study (Fig 2). Percentages of SCNAs showed good agreement between studies, except for studies with small sample sizes and/or platform-related differences. In the two small studies (Gratias, N = 2; van der Wal, N = 13), SCNA percentages more easily reached high numbers. For example, in the Gratias study [13], gain of chromosomes 1 and 6 was observed in 100% of the cohort, yet this affected only two patients. In addition, some platform-specific differences between studies were reflected in the SCNA-percentages. For example, high-resolution studies (SNP-arrays; Kooi, Mol, Zhang) were able to detect small (< 100 Kb) SCNAs, which resulted in more spikes in SCNA-frequencies. Three notable SCNA-gain spikes at 7p14.1, 7q34 and 14q11.2, overlapping with the TCR-γ, TCR-β and TCR-α/δ gene clusters respectively, were likely not cancer related. It was shown that loss of these three regions is a frequent somatic event that occurs in lymphocytes [18]. As a result, the three spikes of SCNA-gains appeared in the tumor-blood matched SNP-array analyses. Most likely these gains were the result of lymphocyte-specific deletions, and therefore the TCR-α/β/γ/δ genes were omitted from further analyses in this study. Identification of candidate retinoblastoma driver genes A major challenge in the interpretation of SCNAs is to distinguish driver from passenger genes, since a single SCNA usually covers tens to hundreds of genes. To do so, SCNA-gain and SCNA-loss frequencies were used to compile a list of the most plausible candidate genes, which was integrated with micro-array gene expression data. For each approved HGNC gene, the number of patients with losses was subtracted from the number of patients with SCNA-gains, which we call the SCNA-gain-loss difference. By subtracting the number of patients with gene losses from the number of patients with gene gains, non-disease-associated SCNAs that arose from random genomic instability, which usually are both gained and lost in a cohort, were not prioritized. In addition to the SCNA-gain-loss difference, the percentage of patients affected by gain/loss of the respective gene (SCNA-percentage) was used as an extra threshold for candidate gene selection. We considered genes with an SCNA-percentage >10% (out of 310 retinoblastoma samples) as candidate genes.
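A minimal sketch of this selection step is given below; it assumes a per-gene, per-sample call matrix coded as +1 (gain), -1 (loss) and 0 (unchanged), and the toy data and variable names are illustrative, while the gain-loss difference and the >10% SCNA-percentage filter follow the definitions above.

# Minimal sketch (toy data): compute the per-gene SCNA-gain-loss difference and
# apply the >10% SCNA-percentage filter described above. calls[g, s] is +1
# (gain), -1 (loss) or 0 (unchanged) for gene g in sample s.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_genes, n_samples = 1000, 310                        # pooled cohort size
calls = pd.DataFrame(
    rng.choice([-1, 0, 1], size=(n_genes, n_samples), p=[0.05, 0.85, 0.10]),
    index=["GENE%d" % i for i in range(n_genes)],
)

n_gain = (calls == 1).sum(axis=1)
n_loss = (calls == -1).sum(axis=1)

gain_loss_diff = n_gain - n_loss                      # SCNA-gain-loss difference
scna_pct = 100.0 * (n_gain + n_loss) / n_samples      # % of samples with any SCNA

candidates = scna_pct[scna_pct > 10].index            # >10% threshold used above
print(len(candidates), "genes pass the 10% SCNA-percentage filter")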
This frequency threshold was empirically determined by plotting the number of genes that meet increasing SCNA-percentage criteria (S1 Fig). This figure shows that the number of genes passing the SCNA-percentage filter decreases rapidly but stabilizes at SCNA-percentages between 5 and 15%. Chromosomes that contained candidate genes included chromosomes 1, 2, 6, 7, 13, 16 and 19. For these chromosomes, the SCNA-gain-loss difference is visualized along genomic coordinates (Fig 3). For each chromosome, peaks were defined by the gene with the highest SCNA-gain-loss difference and the neighboring genes that showed at most a 1% decrease in SCNA-gain-loss difference relative to the peak gene, visualized by the dashed numbered rectangles (Fig 3). Genomic coordinates with annotated gene symbols are given (S3 Table). For the Mol study [15] and our current study, matching gene expression profiling was available for 56 samples (S4 Table) [19]. Using this dataset, gene dosage effects (more/fewer gene copies correlating with more/less gene expression) were quantified. For each peak region, genes with a significant gene dosage effect are listed in the last column of S3 Table. Whereas the candidate lists for chromosomes 1, 2, 6, 7 and 13 (relatively small peak regions) were narrowed down to a handful of genes, for chromosomes 16 and 19 (relatively large peak regions) many candidate genes remained. In S5 Table, more detailed information is given on the candidate genes, including the mean expression of the respective gene in the expression profiling cohort. For candidate genes on chromosomes 1, 2, 6, and 7, the genes with the highest mean expression were NEK7, MYCN and DDX1, E2F3 and SOX4, and CHCHD3, respectively. (Fig 1 legend: Searching Google Scholar for "retinoblastoma", "copy number", "(a)CGH", and "SNP-array", 11 studies were identified that performed genome-wide profiling of retinoblastoma, adding up to 290 samples. No duplicate samples were identified. For the Ganguly study [14], copy number results could not be linked to individual tumors and therefore this study, including 25 samples, was discarded. The remaining 265 samples were all included in the quantitative analysis and were complemented with 45 SNP-arrays from our current study, adding up to 310 samples.) For chromosome 13, containing deletions, RB1 had the lowest mean expression. For candidates on chromosomes 16 and 19 there were no genes that clearly showed the lowest or highest expression. Therefore, narrowing down retinoblastoma-associated genes any further than presented in S3 Table was considered too speculative for chromosomes 16 and 19. To test the robustness of the peaks identified by the described meta-analysis (S3 Table) and to identify smaller or additional peak regions, subset analyses were performed on the high-resolution SNP-array studies for which raw data were available (Mol, Zhang, Kooi; N = 111). In addition to DNAcopy (circular binary segmentation), genoCN (hidden Markov model) was used to infer the copy number states gain, loss and unchanged. The resulting gene-wise frequencies of gain and loss (S2 Fig) are visualized per study. The gain-loss differences determined by genoCN were strongly correlated (correlation test p-value < 2.2E-16, r = 0.95) with the gain-loss differences determined with DNAcopy (S5 Table). Yet for 1q and 6p, the gain-loss differences determined by genoCN reached a maximum at slightly different genes: for 1q at MIR181 and for 6p at KDM1B, DEK, RNA6-263P, RNF144B and MIR548A1 (S5 Table).
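The gene-dosage testing used here can be sketched as follows; the toy data and variable names are illustrative and assume per-gene vectors of segmented copy-number Log2-ratios matched to expression values for the 56 profiled samples, while the positive-slope and FDR-corrected p < 0.05 criteria follow the Methods described later.

# Minimal sketch (toy data): per-gene linear regression of expression on copy
# number with Benjamini-Hochberg correction, as used to call gene-dosage effects.
import numpy as np
from scipy.stats import linregress
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n_genes, n_samples = 200, 56                     # 56 samples with matched DNA/RNA data

log2_cn = rng.normal(0.0, 0.3, size=(n_genes, n_samples))               # segmented Log2-ratios
expr = 2.0 * log2_cn + rng.normal(0.0, 1.0, size=(n_genes, n_samples))  # toy dosage effect

slopes, pvals = [], []
for g in range(n_genes):
    fit = linregress(log2_cn[g], expr[g])
    slopes.append(fit.slope)
    pvals.append(fit.pvalue)

reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
dosage_genes = [g for g in range(n_genes) if reject[g] and slopes[g] > 0]
print("%d / %d genes show a gene-dosage effect" % (len(dosage_genes), n_genes))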
The DEK gene displayed the most significant gene-dosage effect (FDR-adjusted p-value 3.25E-05) and is one of the most highly expressed genes interrogated by the micro-array (rank 38/18,290). In addition to genoCN segmentation, GISTIC analysis was performed on the Mol, Zhang and Kooi subset (S3 Fig, S5 and S6 Tables). This algorithm uses not only SCNA frequency and recurrence but also the SCNA amplitude to identify significantly altered regions. GISTIC analysis confirmed the significance of arm-level gains at 1q and 6p and losses at 13q and 16q and also confirmed peak 2 (MYCN, 2p24.3), which was identified by the meta-analysis. While GISTIC analysis could not identify a clear focal peak at 6p, multiple peaks were identified at 1q and 13q, signifying the difficulty of single-gene identification. The highest GISTIC peak at 1q, at 1q32.1, included 39 genes but did not overlap with peak 1 identified by the meta-analysis (S3 Table). The GISTIC peaks at 13q overlapped neither with the corresponding peak 5 (13q) from the meta-analysis nor with the RB1 locus. In addition to the peaks identified by the meta-analysis (S3 Table), GISTIC identified focal gain of 14q22.3 including OTX2 (gene-dosage effect FDR-adjusted p-value 0.07), also discussed in the Mol study, loss of 1p36.32 (138 genes, S5 and S6 Tables), and loss of 17p13.3 (80 genes, S5 and S6 Tables). Of note, the GISTIC gain peaks at 7p14.1, 7q34 and 14q11.2 are the regions that overlap the TCR gene clusters described above and were therefore not considered cancer related. (Fig 3 legend: For chromosomes that contained commonly altered genes, the gain-loss difference (number of patients with gains minus the number of patients with losses) is plotted for each official HGNC gene along chromosomal coordinates. For each of these chromosomes, peak regions were defined (also see S3 Table), indicated by the dashed rectangles, and were used for retinoblastoma driver discovery.) Clustering of tumors based on SCNAs is mainly driven by total genomic disruption Previous studies have shown that there is profound variability in copy number profiles of retinoblastoma samples, associated significantly with age at diagnosis [7,8,12,15], heredity [15] and laterality [12,15]. In addition, profound differences in gene expression profiles between retinoblastoma samples were identified [19,20]. To identify genomic retinoblastoma subtypes, unsupervised hierarchical clustering (UHC) of retinoblastoma samples was performed using per-gene SCNA-calls, displayed by a heat map (Fig 4A). The resulting retinoblastoma sample clustering is visualized by the dendrogram on top of the SCNA heat map, together with corresponding color-coded sample information. The dendrogram was pruned to yield 4 UHC groups, optimizing for the difference between within-cluster and between-cluster distances. Yet, it is arguable whether these 4 UHC groups represent truly distinct molecular retinoblastoma subtypes. To test for mutual exclusivity between frequently altered chromosomes, correlation tests were performed. None of the frequently altered chromosomes were significantly anti-correlated (S4 Fig). Instead of mutually exclusive SCNAs, clustering (Fig 4A) could be mainly driven by gradual variability in total genomic disruption. Indeed, total genomic disruption (in our study defined as the number of genes affected by SCNAs) was significantly different between the four UHC clusters (Kruskal-Wallis p-value < 2.2E-16; ANOVA F-test p-value < 2.2E-16).
To further investigate a possible gradual variability in total genomic disruption, clustering (Fig 4A) was complemented by ordering the samples based on total genomic disruption (Fig 4B). There was a remarkable gradual increase in total genomic disruption that correlated with age at diagnosis (p-value < 2.2E-16). For each individual recurrent SCNA-gene, the frequency of occurrence increased with increasing total genomic disruption and age at diagnosis. Furthermore, recurrent 2p gains (mean age 25 months, standard error of the mean (SEM) 2.2 months) and 6p gains (mean 26, SEM 2.2 months) were observed in more stable and earlier-diagnosed tumors (one-way ANOVA p-value 0.01) than 1q gains (mean 30, SEM 1.9 months) and 16q losses (mean 34, SEM 2.3 months). Possibly, total genomic disruption could be a better descriptor of SCNA-profiles than the stratified groups identified by UHC. Genomic disruption increases with age at diagnosis, loss of differentiation, and SCNA-signal strength To assess the statistical significance of the association between total genomic disruption and clinical and histopathological variables, hypothesis testing was performed and is presented in Table 1. Data of variables that were significantly associated with total genomic disruption were also visualized (Fig 5). If the clinical variable was numeric (e.g. age at diagnosis), the tumors were stratified into 4 disruption quartiles (Q1 = 25% least disrupted tumors, Q4 = 25% most disrupted tumors, Fig 4B), each shown as boxplots of the clinical variable of interest. By definition, total genomic disruption differed significantly (p-value < 2.2e-16) between these disruption quartiles (Fig 5A). The average SCNA-amplitude per tumor was calculated from the segmentation means of SCNAs with amplitudes both below and above the used segmentation threshold. Total genomic disruption increased linearly with SCNA-amplitudes (Fig 5B). This panel also shows that the SCNA-amplitude rarely exceeds a change of one copy (copy number 3, Log2-ratio 0.58). This means that there must be sample heterogeneity, either by intra-tumor clonal heterogeneity or by contamination with non-cancer cells (e.g. retina or blood). For 40/66 (61%) samples in the Mol and our current study, DNA diagnostics identified loss of heterozygosity (LOH) at the RB1 locus as one of the disease-causing events (S7 Table). For these tumors, the homozygosity at the RB1 locus (mBAF values, S7 Table) is indicative of tumor cellularity. Tumor purity was estimated to be very high (mBAF >= 0.99) for 22/40 (55%) tumors, high (mBAF >= 0.90) for 14/40 (35%) and moderate (mBAF >= 0.74) for 4/38 tumors. Also in the tumors for which the tumor cellularity was estimated to be very high, the SCNA amplitudes rarely exceeded one copy. An example is given for tumor 101032-02 (S5 Fig), which clearly shows LOH at 13q and incomplete LOH at 16q. Since the tumor cellularity for the rest of the cohort (272/310, 88%) could not be determined, possible non-cancer cell contamination could not be ruled out for the majority of the cohort. Possibly, the gradual differences in SCNA-amplitudes are caused by differences in within-tumor heterogeneity. It has been demonstrated before that within the same tumor, fields of SCNA-devoid differentiated benign precursor lesions were located adjacent to more undifferentiated malignant retinoblastoma fields full of SCNAs [5]. In agreement, increases in SCNA-amplitudes and total genomic disruption correlated with decreased differentiation grades (p-value 0.04).
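To illustrate why such sub-integer amplitudes point to sample heterogeneity, the short sketch below computes the expected Log2-ratio of a single-copy gain as a function of the fraction of cells carrying it, under the simplifying assumption of a diploid background and no other alterations; at a fraction of 1.0 it reproduces the Log2-ratio of 0.58 quoted above.

# Minimal sketch: expected Log2-ratio of a one-copy gain when only a fraction f
# of the cells in a sample carry it (diploid background assumed).
import math

def log2_ratio_one_copy_gain(f):
    """Average copy number is 2*(1-f) + 3*f = 2 + f; the diploid reference is 2."""
    return math.log2((2.0 + f) / 2.0)

for f in (0.25, 0.5, 0.75, 1.0):
    print("fraction %.2f -> Log2-ratio %.2f" % (f, log2_ratio_one_copy_gain(f)))
# fraction 1.00 -> Log2-ratio 0.58, i.e. a full single-copy gain (copy number 3)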
Retinoblastoma cell lines showed high total genomic disruption Cell lines derived from primary retinoblastoma tissue are considered valuable model systems to study retinoblastoma in vitro. To assess the genomic resemblance of retinoblastoma cell lines to primary tumors, we determined genome-wide SCNA-profiles for 8 retinoblastoma cell lines and compared those to the 45 primary retinoblastoma samples. Furthermore, retinoblastoma cell cultures have been extensively selected for proliferation by in vitro culturing and might reveal focal SCNAs driving retinoblastoma proliferation that remained undiscovered in primary tumors. Only NUP205 showed a significant association (FDR-adjusted p-value 2.62E-5) between copy numbers and expression and was also identified as a candidate in the primary tumor data set (S3 Table). Gain of chromosome 19q was not observed in any of the cell lines. The mean number of genes altered in cell lines (5813, S.E.M. 1021) was significantly higher than in Q1 (p-value = 2.2E-06), Q2 (p-value = 3.7E-06) and Q3 (p-value = 1.3E-04), while not different from Q4 (p-value 0.65). This analysis indicates that retinoblastoma cell lines resemble primary tumors with high total genomic disruption, or that they are a representation of the genomically disrupted part of the original tumors. Discussion In our study, SCNA-profiles of 45 primary retinoblastoma samples were determined, which were analyzed together with SCNA profiles reported in ten published studies. In addition, the copy number data were integrated with publicly available matching gene expression data to further aid driver discovery. Candidate driver genes included CRB1, NEK7 (1q), MYCN (2p), SOX4 (6p), RB1 (13q) and numerous genes for 16q. By dedicated subset analysis with the high-resolution platforms and by using alternative analysis algorithms, MIR181 (1q) and DEK (6p) were identified additionally. Furthermore, our study shows examples of tumors in which SCNAs do not exceed a change of one copy despite little non-cancer cell contamination, indicative of intra-tumor heterogeneity. Also, our meta-analysis allowed for a comprehensive association of retinoblastoma genotypes with clinical phenotypes, which furthers our understanding of retinoblastoma. Candidate genes reported in previous studies Several studies aimed to identify genetic alterations promoting retinoblastoma development beyond loss of RB1. While most studies have focused on genetic alterations, it was also shown that epigenetic alterations might be important for retinoblastoma carcinogenesis [16]. Yet, the main focus of our current study is genetic alterations. Some of the previous retinoblastoma copy number alteration studies limited their discussion of SCNA-profiles to correlations with total genomic disruption (Mairal, van der Wal, Zhang), while other studies also provided suggestions for putative candidate genes beyond focally altered MYCN or RB1 (Chen, Herzog, Lillington, Zielinski, Gratias, Sampieri, Mol). Although our study showed that the detected genome-wide SCNA-profiles were in good agreement between these studies (Fig 2), there was clear variability in the suggested candidate genes between different studies (Fig 7). Of the previously proposed candidate genes for 1q, MDM4 and REN were located in the GISTIC peaks (S3 Fig, S5 and S6 Tables). In the meta-analysis, the most frequently gained genes with a significant gene dosage effect at chromosome 1q were ZBTB41, CRB1 and NEK7. The NEK7 gene showed the highest mean expression among these three candidates (S5 Table).
The NEK7 gene is part of the NIMA-related mitotic kinase gene family. In agreement, it was found that malignant retinoblastoma fields consist of mitotically active cells, in contrast to retinoma fields [21]. Also, overexpression of family member NEK6 was shown to antagonize p53-induced senescence in human cancer cells [22]. Interestingly, benign precursor retinoma lesions stained positively for the senescence-associated proteins p16INK4A and p130 [5]. Possibly, gain of NEK7 and subsequent protein overexpression can antagonize p53-induced cellular senescence and cause benign retinoma cells to progress through the cell cycle. Furthermore, the oncogenic potential of NEK6/7 has been recognized in various cancers, including breast cancer [23], gallbladder cancer [24], Wilms tumors [25] and head and neck cancers [26]. Using our integrative genome-wide approach, our study now also identified NEK7 as a novel 1q candidate gene potentially driving retinoblastoma progression. Next to NEK7, the second most highly expressed gene in peak 1, CRB1, is a noteworthy candidate driver gene as well. CRB1 is involved in the development of photoreceptor cells [27], which are suggested to be the cells of origin of retinoblastoma [28]. Expression of CRB1 interrupts naturally occurring apoptosis and the photoreceptor apoptosis required for proper retinal morphogenesis [29]. Mutations in the CRB1 gene cause an abnormally thick retina with abnormal lamination, in particular in the photoreceptor-dense area at the fovea [29]. Possibly, overexpression of CRB1 driven by copy number gains accelerates photoreceptor-derived retinoblastoma. Candidate genes identified by the meta-analysis Good concordance between our results and previously proposed candidate genes was observed for chromosome 6p gains. Out of 13 previously proposed candidates, E2F3, ID4, and SOX4 are located in the candidate peak 2 region. For ID4, no significant gene dosage effect was observed in this study. To compensate for possible residual RB1 activity, gain of E2F3 might be important for retinoblastoma to develop. However, SOX4 is also an interesting candidate gene. In hepatocellular carcinoma it was shown that SOX4 over-expression led to a significant repression of p53-induced Bax expression and a subsequent repression of p53-mediated apoptosis induced by gamma-irradiation [30]. Possibly, gain of SOX4 in retinoblastoma could be a relevant hit beyond loss of RB1, allowing RB1-inactivated cells to better escape p53-induced apoptosis or senescence. Similarly to NEK7, SOX4 has been identified as an important oncogene in a variety of other cancers, including endometrial cancer [31], brain cancer [32,33], breast cancer [34], bladder cancer [35], ovarian cancer [36], colorectal cancer [37], liver cancer [30] and leukemia [38]. Therefore we suggest that SOX4 should be considered as a serious candidate gene of the 6p gain region. In the case of chromosome 7q and 19p gains and 16q losses, assignment of the driving gene is more speculative. Since focal SCNAs are not observed at these genomic loci, the resulting candidate peak regions contain numerous candidate genes. Several previous studies proposed RBL2 (protein p130) as a candidate gene for the 16q loss (Fig 7), which is often seen in patients diagnosed at a later age. Since p130 is primarily expressed in G0 cells, restricting them from cell cycle entry [39], loss of p130 could prevent cellular senescence and promote the benign-to-malignant transition.
Our study showed that copy numbers of RBL2 are commonly reduced and are associated with decreased expression. Therefore, also in our analysis, it remains one of the many candidates for 16q loss. For chromosome 7q gains, out of 10 candidates proposed based on primary retinoblastoma samples, only NUP205 is included in a focal gain observed in cell line RB191. In lung cancer cell lines it was shown that, through TMEM209-mediated stabilization of NUP205, protein levels of MYC were increased and cell growth was promoted. Attenuation of TMEM209-mediated stabilization corresponded to blocked growth, indicating that the TMEM209-NUP205 complex might play a role in cell proliferation [40]. Limitations of candidate gene identification By integrating data from our DNA profiling study with published studies and with gene expression data, we present a comprehensive effort to identify retinoblastoma-driving genes. A potential disadvantage of data pooling is that the resolution of the pooled data is lower than the resolution of the three high-resolution SNP-array studies. Another disadvantage is that SCNA amplitudes are not available for all studies. This meant that in the pooled analyses, SCNA amplitudes were not taken into account. Therefore, the pooled analysis was complemented by a subset GISTIC analysis using the high-resolution Mol, Zhang and Kooi datasets only (S3 Fig, S5 and S6 Tables). While GISTIC analysis confirmed the significance of the 1q, 6p and 16q alterations and peak 2 (2p24.3, including MYCN) from the pooled analysis, the peak regions at 1q and 13q were slightly shifted in the GISTIC analysis. For 1q, GISTIC even identified multiple regions as significantly altered. Also when a different segmentation algorithm was used (genoCN, S2 Fig and S5 Table), the peaks at 1q and 6p shifted slightly, in this case towards the MIR181 and DEK genes respectively. Except for 2p, where only MYCN is included in all gained regions, multiple candidate genes remain for the commonly altered chromosomes. Therefore, functional assays are needed to empirically determine the oncogenic potential of the described candidate genes. Models explaining the association between total genomic disruption and age at diagnosis Previous studies showed that retinoblastoma tumors have profound variability in total genomic disruption. It was unclear whether this variability is dichotomous or gradual, suggestive of two subtypes or of gradual progression, respectively. The study of van der Wal et al. suggested that the variability in total genomic disruption was bimodal, although this was based on a small study (N = 13) and was not substantiated by any statistics. In the study of Mol et al. (N = 21), it was shown that unsupervised hierarchical clustering divided retinoblastoma samples into three branches with increasing total genomic disruption. Our study conclusively shows that total genomic disruption is gradual and that co-occurrence and/or mutual exclusivity of SCNAs is not apparent. Increasing total genomic disruption was related to decreasing differentiation grades, suggesting a de-differentiation process. Since our data include indications of intra-tumor heterogeneity of genomic alterations, possibly there was also heterogeneity in differentiation grades between cells within tumors. In agreement, examples have been described where fields of differentiated cells lie adjacent to undifferentiated cells [21].
It is interesting to consider why tumors with extensive genomic disruption and poorly differentiated cells were observed particularly in patients diagnosed at a later age. Possibly, in tumors where the second RB1 hit occurred at a later age, the resulting precursor lesion was less proliferative than lesions that developed earlier in retinal development. When SCNAs occur in these late-onset precancerous lesions, the initial SCNA-devoid cells are easily overgrown by the progressed, more proliferative cells. The hypothesis that the proliferative consequence of RB1 inactivation in the retina is age-dependent is underscored by the fact that retinoblastoma does not occur after the retina is fully developed. Additionally, diagnosis could have been delayed in patients with an older age at diagnosis, thereby allowing the tumors more time to acquire SCNAs and progress. In conclusion Our integrated approach allowed us to refine and improve the lists of putative retinoblastoma-driving genes. This limited set of genes can serve as leads for future studies on retinoblastoma progression and precision medicine. However, we also found that, for at least a subset of tumors, abnormal gene copy numbers were not always present in all tumor cells. Therefore, a multi-target treatment strategy might be required for efficient retinoblastoma treatment. Tissue collection Tumor samples were obtained from retinoblastoma patients after primary enucleation, and peripheral blood samples were collected at initial presentation before treatment. In the Netherlands, all patients are referred to the VU University Medical Center. Hence, a well-documented cohort of unselected primary enucleated eyes was available for molecular studies. Tumor samples were snap frozen in liquid nitrogen and stored at -80°C until further analysis. All patient samples and clinical and histopathological features were collected and stored according to local ethical regulations. All patients gave consent verbally, as this was the standard at the time the included patients were diagnosed. Since the genetic analyses of our study focused on tumor DNA and not germ line DNA, a waiver of informed consent was specifically given for genetic analyses by the Medical Ethics Review Committee of the VU University Medical Center, which is registered with the USA OHRP as IRB00002991. The FWA number assigned to VU University Medical Center is FWA00017598. A cohort description including clinical and histopathological information is given in S2 Table. RB cell lines RB1021, RB383, and RB247 [41] were kindly provided by the laboratory of Brenda Gallie, and cell lines RB191, RB176, and RB381 [42] by the laboratory of David Cobrinik. DNA isolation Genomic DNA from frozen retinoblastoma tumor specimens was isolated with the NucleoSpin Tissue kit (Macherey-Nagel, Düren, Germany) or the Wizard Genomic DNA Purification Kit (Promega, Madison, USA). DNA quality was analyzed for high-molecular-weight bands (>20 Kb) by agarose gel electrophoresis. DNA concentration and the OD 260/280 ratio were determined with the Nanodrop ND-1000 spectrophotometer (NanoDrop Technologies, Wilmington, USA). DNA yields and quality were within the same range for all samples. SNP-arrays Microarray-based DNA genotyping experiments were performed at ServiceXS (ServiceXS B.V., Leiden, The Netherlands) using the HumanOmni1-Quad BeadChip (Illumina, San Diego, USA), according to the manufacturer's instructions. The BeadChip images were scanned on the iScan system and the data were extracted into Illumina's GenomeStudio software v2010.1.
The software's default settings were used with the cluster file developed by Illumina for genotype calling. The resulting copy number estimates (Log2-ratios between tumor and matched blood) and B-allele frequencies were normalized with tQn normalization [43] and segmented with DNAcopy [44] with a minimum segment length of five markers. For loss-of-heterozygosity detection, segmentation of converted B-allele frequencies (mBAF) was performed using BAFsegmentation with an mBAF threshold of 0.8 [45]. A minimum of five consecutive markers was used for segmentation, together with a minimum mBAF amplitude of 0.6. In parallel to DNAcopy segmentation, genoCN segmentation was used with default parameters to infer the copy number states gain, loss and unchanged for the SNP-array datasets. To identify significantly altered regions, GISTIC analysis was performed (q-value < 0.05) using the combined segmentation (by DNAcopy) results of the Mol, Zhang and Kooi data sets. Gene expression data are available at GSE59983 (primary samples) and GSE77094 (cell lines). DNA copy number data are available at EGAS00001001715. Data collection and analysis By manual Google Scholar search, studies profiling SCNAs by CGH, array-CGH, SNP-array or NGS were identified (Fig 1). Studies that reported SCNAs by cytoband location were digitalized by manually looking up the current genomic coordinates of the reported cytobands (hg19). Careful examination of sample identifiers was performed to prevent duplicate records, and no duplicate records were identified. In cases where SCNAs were reported in genomic coordinates, they were converted to hg19 coordinates with the UCSC liftover tool [46]. For studies where raw data were available [13,15,16], copy number estimates were segmented with DNAcopy as described above. Since biopsies are uncommon in retinoblastoma, the included samples are possibly biased toward later-stage tumors that had to be enucleated. Furthermore, earlier studies used lower-resolution profiling than more recent studies, and therefore more subtle genomic alterations might have been more readily identified by later studies. All published SCNAs were concatenated with the SCNAs detected by our study. SCNAs were called in three states: loss, normal and gain, using copy number thresholds of 1.8 (Log2-ratio = -0.15) for losses and 2.2 (Log2-ratio = 0.14) for gains, for segments with a p-value < 0.05 (S6 Fig). For each official HGNC gene, the segmentation mean was calculated by overlapping the genomic coordinates of the genes with detected SCNAs using BEDOPS [47]. For hierarchical clustering, Ward's agglomerative clustering was performed using Euclidean distances. Statistical analysis and visualization were performed in R (version 3.1.2, "Pumpkin Helmet"). For hypothesis testing where both independent and dependent variables are numeric, Wilcoxon signed-rank tests were used. For 2-level categorical independent variables and numeric dependent variables, Wilcoxon rank-sum tests were used. For independent categorical variables with more than 2 unordered levels and numeric dependent variables, Kruskal-Wallis tests were used. For independent categorical variables with more than 2 ordered levels (e.g. differentiation grade low < medium < high) and numeric dependent variables, linear-by-linear association tests, implemented in the "coin" R package, were used. Two-sided p-values below 0.05 were considered statistically significant. Extreme p-values lower than 2.2E-16 could not be calculated and are reported as <2.2E-16.
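The three-state SCNA calling described above can be sketched directly from the stated thresholds. The snippet below is a minimal illustration only; the column names and the pandas-based interface are assumptions made for this sketch, and the published analysis itself was run in R, not Python.

```python
import pandas as pd

# Thresholds quoted in the text: copy number 1.8 (log2-ratio -0.15) for losses
# and 2.2 (log2-ratio 0.14) for gains, applied to segments with p < 0.05.
LOSS_LOG2 = -0.15
GAIN_LOG2 = 0.14
P_CUT = 0.05

def call_scna_states(segments: pd.DataFrame) -> pd.Series:
    """Assign each segmented region to one of three states: loss, normal, gain.

    `segments` is assumed to carry a column 'seg_mean' (segmented tumor/blood
    log2-ratio) and a column 'pval' (segment p-value); these names are
    illustrative, not taken from the original pipeline.
    """
    state = pd.Series("normal", index=segments.index)
    significant = segments["pval"] < P_CUT
    state[significant & (segments["seg_mean"] <= LOSS_LOG2)] = "loss"
    state[significant & (segments["seg_mean"] >= GAIN_LOG2)] = "gain"
    return state
```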
For the determination of gene-dosage effects, linear regression of continuous copy number estimates (segmented Log2-ratios) against RMA-normalized expression estimates was performed. The linear regression slope was tested for significance, and Benjamini-Hochberg multiple-testing-corrected p-values were calculated. Genes with a positive regression slope (the more DNA copies, the more gene expression) and multiple-testing-corrected p-values < 0.05 were considered to display a gene-dosage effect (a schematic sketch of this test is given after the supporting information legends below).
S4 Fig. Correlations between SCNA peaks. Pearson correlation matrix testing for co-occurrence and mutual exclusivity between peak regions containing retinoblastoma-driving candidate genes. The lower-left triangle is a color-coded (blue = mutual exclusivity, red = co-occurrence) representation of the upper-right triangle, which gives the Pearson correlations. Peak regions showed no mutual exclusivity and weak co-occurrence. The best correlation (0.42) was found between 1q gain and 16q loss, both events often observed in patients diagnosed at late age. (TIFF)
S5 Fig. Within-tumor heterogeneity of 16q loss. An example of a tumor sample without non-cancer cell contamination (100% LOH at RB1, chromosome 13), but with incomplete LOH of 16q. Only SNPs that were heterozygous in the matching blood sample were used for this analysis. (A) Overview of mirrored B-allele frequencies (mBAF) segmented with BAFsegmentation. This sample displayed 100% LOH of 13q, illustrated by mBAF ~1, indicating that this sample does not contain any detectable amount of non-cancer cells. In contrast, the mBAF of 16q was segmented at mBAF 0.65, indicating that this sample contained cells with 16q LOH (mBAF 1) and cells without 16q LOH (mBAF 0.5). (B) B-allele frequencies of SNPs that were heterozygous in the matched germ line sample, for chromosome 13 (complete LOH) and chromosome 16 (mixture of LOH and normal). Note that no data are available for the 13p region, since the DNA sequence of this region remains to be determined. (TIFF)
S1 Table. Description of SCNA studies. For each study included in the meta-analysis, it is described how SCNAs were determined and integrated in our meta-analysis. (XLSX)
S2 Table. Cohort description. Statistics about the number of included samples per platform and patient phenotype variables. (XLSX)
S3 Table. Determination of candidate driver regions by meta-analysis. For chromosomes that were altered in at least 10% (31/310) of the pooled cohort, peak regions were identified. The peak regions were defined by the maximum SCNA gain-loss index for that chromosome, with peak boundaries at 1% deflection from the peak. The peak regions were annotated with genes that displayed a gene-dosage effect and were considered candidate genes driving retinoblastoma oncogenesis. (XLSX)
S4 Table. Copy number and expression values. For 56 samples from the Mol and Kooi studies, both SNP-array and gene expression profiling were performed. For each gene that was profiled by both methods, the Log2-ratio (DNA) and the Log2-transformed normalized expression (RNA) are given. (XLSX)
S5 Table. Per-gene SCNA gain-loss values. For each approved HGNC gene, the SCNA gain-loss difference is given together with gene-dosage effect testing. In a subset analysis of the Mol, Zhang and Kooi datasets, the gain-loss difference was also determined using Hidden Markov Model-based segmentation (genoCN). Additionally, GISTIC analysis was performed in this subset and the per-gene q-values are given, indicating the per-gene significance of alterations. (XLSX)
S6 Table. GISTIC subset analysis results. Regions that were determined to be significantly altered (S3 Fig, q-value < 0.05). (XLSX)
Table. B-allele frequencies of RB1. For the Mol study and the current study, SNP-arrays were used and conventional DNA diagnostics for RB1 were available. Using these data, the tumor cellularity can be estimated. In cases where the second hit was LOH (40 tumors), the B-allele frequencies of the RB1 allele often (34/40 tumors) exceeded 0.95, indicating that non-cancer contamination was at most 10% for these tumors. (XLSX)
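The gene-dosage test referenced above regresses expression on copy number per gene and applies a Benjamini-Hochberg correction to the slope p-values. The sketch below is a schematic Python version under that conventional direction of regression; the data structures and function names are assumptions for illustration, and the published analysis was performed in R.

```python
import numpy as np
from scipy.stats import linregress

def benjamini_hochberg(pvals):
    """Standard Benjamini-Hochberg FDR adjustment of a vector of p-values."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    # q-values must be monotone non-decreasing with p
    q = np.minimum.accumulate(scaled[::-1])[::-1]
    adjusted = np.empty(n)
    adjusted[order] = np.clip(q, 0.0, 1.0)
    return adjusted

def gene_dosage_effects(copy_number, expression, alpha=0.05):
    """Per-gene regression of expression on DNA copy number.

    copy_number, expression : dicts mapping gene -> matched per-sample arrays
    (segmented log2-ratios and RMA-normalised expression).  A gene is reported
    as showing a dosage effect if the slope is positive and the BH-adjusted
    p-value of the slope is below `alpha`.
    """
    genes = sorted(copy_number)
    slopes, pvals = [], []
    for gene in genes:
        fit = linregress(copy_number[gene], expression[gene])
        slopes.append(fit.slope)
        pvals.append(fit.pvalue)
    qvals = benjamini_hochberg(pvals)
    return [g for g, s, q in zip(genes, slopes, qvals) if s > 0 and q < alpha]
```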
First star survivors as metal-rich halo stars that experienced supernova explosions in binary systems The search for the first stars formed from metal-free gas in the universe is one of the key issues in astronomy because it relates to many fields, such as the formation of stars and galaxies, the evolution of the universe, and the origin of the elements. It is still not clear whether metal-free first stars can be found in the present universe. These first stars are thought to exist among extremely metal-poor stars in the halo of our Galaxy. Here we propose a new scenario for the formation of low-mass first stars that have survived until today and for their observational counterparts in our Galaxy. The first stars in binary systems, consisting of a massive star and a low-mass star, are examined using stellar evolution models, simulations of supernova ejecta colliding with low-mass companions, and comparisons with observed data. These first star survivors will be observed as metal-rich halo stars in our Galaxy. We may have identified a candidate star in the observational database where elemental abundances and kinematic data are available. Our models also account for the existence in the literature of several solar-metallicity stars that have space velocities equivalent to the halo population. The proposed scenario demands a new channel of Introduction The first stars in the universe have drawn much attention in relation to the star formation history of the early Galaxy. Such attention involves the critical questions: do they still exist, and how can they be found? Answers to these questions have a strong impact on the study of how our Sun was formed and where the terrestrial elements originated. This is because the first stars give a hint towards the formation and evolution of stars, and the synthesis of elements in stars, starting from elemental abundances equivalent to those produced in Big Bang nucleosynthesis. The existence of the first stars in the current universe is still controversial. The basic understanding is that they are typically too massive to survive for more than 10 Myr after their birth. The preference for the formation of massive stars is due to the lack of elements that work as coolants to compress star-forming gas (Bromm & Larson 2004). However, there are some arguments about the initial mass of the first stars. There is a possibility of the formation of a star with an initial mass below the solar mass that has a lifetime comparable to or longer than the age of the universe (Clark et al. 2011; Susa et al. 2014; Stacy et al. 2016). In particular, recent simulation studies of the first stars focus on the formation of such low-mass first stars around massive stars (Susa et al. 2014) or on the formation of the first binary systems (Stacy et al. 2016; Sugimura et al. 2020). The effort to find the first stars in our Galaxy has revealed hundreds of stars with [Fe/H] ≲ −3 (Bond 1980; Beers et al. 1985; Ryan et al. 1991; McWilliam et al. 1995; Ryan et al. 1996; Fulbright 2000; Norris et al. 2001; Carretta et al. 2002; Johnson 2002; Cayrel et al. 2004; Cohen et al. 2004; Honda et al. 2004; Barklem et al. 2005; Aoki et al. 2007; Lai et al. 2008; Caffau et al. 2013; Norris et al. 2013; Roederer et al. 2014b; François et al. 2018). These stars are called extremely metal-poor (EMP) stars; among them, the currently most iron-poor star has [Fe/H] < −7 (Keller et al. 2014). The origin of EMP stars has been controversial, mainly in explaining the large fraction of carbon-enhanced metal-poor (CEMP) stars.
Previous scenarios can be classified into three theoretical models, all of which try to explain the abundance patterns of CEMP stars. In the three major scenarios, CEMP stars are formed (1) by binary mass transfer from former AGB stars (Suda et al. 2004), (2) from gas in the interstellar medium (ISM) contaminated by the ejecta of first-generation supernovae (Umeda & Nomoto 2003), and (3) from gas ejected from fast-rotating massive stars (Meynet et al. 2006). In this paper, we explore another possibility for the formation of the first stars that can be verified by observations, i.e., the evolution of the first stars in a binary system consisting of a massive star and a low-mass star. In such a system, the ejecta of the supernova explosion of the massive star collide with the low-mass companion and will either strip its surface away or be gravitationally confined to it, or both may happen, owing to the wide range of ejecta speeds. This scenario provides a new pathway to look for evidence of polluted first stars among known halo stars in the Milky Way Galaxy, where there are more than 500 stars with detailed chemical abundances derived from high-resolution spectra. The paper is organized as follows. In the next section, we provide the overview of our scenario. The details of the models are described in section 3. The results of our simulations are discussed in section 4. Section 5 provides the implications of our scenario. Conclusions follow in section 6. New scenario to identify the survivors of the first stars We propose that some low-mass first stars should have survived supernovae in binary systems. They can be observed in our Galaxy as metal-rich halo stars if they were contaminated by the supernova ejecta in close binary systems. A sufficiently small binary separation is required for surviving stars to change their surface chemical composition by the accretion of the ejecta. This is not necessarily guaranteed because typical massive stars evolve to red supergiants, with radii of up to ∼1000 R☉ before core collapse. If the separation between the two stars is shorter than the radius of the evolved massive star, the binary system will undergo the common envelope phase, in which the companion star is embedded in the envelope of the massive star. Such a system will have a short life span due to the mass transfer and cannot be a candidate for the first-star binaries. The first stars can avoid the evolution to red supergiants owing to the initial lack of metals in the centre, more specifically CNO elements. The final radius of a metal-free massive star is reported to be of the order of 10 R☉ (Heger & Woosley 2010; Tanikawa et al. 2020). This result is supported even if stellar rotation is taken into account (K. Takahashi & T. Yoshida 2020 private communication). (Fig. 1 caption) The data points are compiled from the abundance data from selected literature using the SAGA database (see appendix 1). However, we made corrections to the abundances by considering the effect of dilution by the convective envelopes in subgiants and red giants, in order to compare models and observations with lithium abundances at the main-sequence phase. The plotted data are classified and counted (shown by the numbers next to the labels) according to the evolutionary status as discussed in the text. The data points with circles denote the model results in table 1. (Color online)
The upper boundary of the metal content for stars not to evolve into red supergiants depends on the initial mass; the threshold value is [Fe/H] ∼ −7 for 12 M☉ and ∼ −2 for 20 M☉ (see subsection 3.1). It is to be noted that less massive stars are more abundant than more massive stars, and hence metal-free stars are dominant among massive stars with small final radii. The evidence of the first supernova binaries can be checked observationally by the lithium and iron abundances of companion stars. Figure 1 shows the lithium abundances of known metal-deficient stars as a function of metallicity. Lithium is a good tracer of changes in the surface chemical composition because it is easily destroyed in layers with temperatures above 2.5 × 10^6 K, which means that the total lithium content is small. Also, the depletion or enhancement of lithium is easily identified by observations because stars in the main-sequence phase have a typical lithium abundance of A(Li) = 2.1, which is called the Spite plateau (Spite & Spite 1982). Once the surface chemical composition is altered by external pollution such as stripping and/or accretion due to the collision of supernova ejecta, the surface lithium abundance is depleted by factors of several or more from the original value. Observationally, a non-negligible fraction of metal-poor stars shows lithium depletion, which is not explained by the standard stellar evolution models. In this study, we explore the possibility of lithium depletion. Simulations of the collisions between supernova ejecta and a binary companion are performed using a 3D smoothed particle hydrodynamics (SPH) code, ASURA (Saitoh et al. 2008). The initial conditions for the simulations are calculated by a 1D Lagrangian hydrodynamics code calibrated against SN 1987A (Shigeyama & Nomoto 1990), so that the model reproduces the light curve of the supernova well. The initial masses and explosion energy are set at 15, 20, and 25 M☉ and 10^51 erg, respectively. The remnant mass of the progenitor is assumed to be 1.3 M☉, following the results of Heger and Woosley (2010). Stellar mass loss before the explosion is ignored, following the previous studies (Heger & Woosley 2010; Tanikawa et al. 2020). The initial condition for the explosion is mapped on to the 3D simulations by setting the pressure, density, and kinetic energy in the primary star. The structure of the secondary star is constructed by mapping a 1D model star computed with a stellar evolution code (Suda & Fujimoto 2010). We assumed that the surface convective zone of the model mixes the materials homogeneously and does not change its mass and structure by the collision of supernova ejecta. This assumption may affect the estimate of the final chemical composition by the simulations. Stellar rotation is ignored for the sake of simplicity. In our simulation setups, binary components should be tidally synchronized, so that the effect of rotation is not significant. Only the separation is a relevant binary parameter in our simulations. This is because the orbital velocity is much smaller than the impact velocity of the ejecta and because the simulation time is much shorter than the orbital period. Here we presume that the eccentricity is close to zero, to avoid the passage of the companion star into the envelope of the primary.
We fixed the supernova model in this study for the following reasons: (1) simulations changing the velocity distribution and/or the total kinetic energy of the ejecta would be essentially the same as the models with different binary separations, (2) it would be a large computational cost to run more simulations with different inputs for the supernova models, and (3) this project is still at an early stage and it is beyond our scope to test unusual types of supernovae, such as asymmetric explosions and mixing-and-fallback models. The impact of variations in the supernova models is addressed later in this paper, and such simulations are left for future work. In the next section, we describe the details of the models and assumptions used to perform the numerical simulations and compare the results with observations. Models and assumptions 3.1 Evolution of massive metal-free and metal-poor stars The characteristic of the evolution of metal-free and metal-poor stars is their smaller radii at the ends of their lives compared with stars that evolve through the red supergiant phase. Metal-deficient massive stars are supported by hydrogen burning in the centre at the early phase without CNO elements (the p-p chain reactions), which requires higher temperatures than hydrogen burning with CNO catalysts (the CNO cycles). The higher the temperature of the nuclear burning regions, the faster the progress of the nuclear burning stages. Therefore, the final nuclear burning finishes before the stars pass through the Hertzsprung gap, where the radii increase rapidly. We have computed the evolution of massive stars with various masses and metallicities, which is displayed in figure 2. We use the same stellar evolution code (Suda & Fujimoto 2010) with the addition of carbon-burning reactions. Cross-sections of ^12C + ^12C and ^12C + ^16O are taken from the fitting formulae provided in Caughlan and Fowler (1988). The computations are terminated when the mass fraction of carbon in the centre becomes smaller than 0.02. (Fig. 2 caption) The two symbols on the evolutionary tracks correspond to the onset of helium (smaller symbols) and carbon (larger symbols) burning, respectively. Computations were terminated at the end of the carbon-burning phase, where the central mass fraction of carbon is below 0.02. (Color online) The locations of the onset of the helium- and carbon-burning phases on the Hertzsprung-Russell (H-R) diagram are shown by smaller and larger circles, respectively. For the sake of simplicity, carbon and oxygen in these reactions are converted to ^25Mg. This approximation is sufficient to see the final location of model stars on the H-R diagram. In some models of low-metallicity stars, we found numerical instabilities caused by hydrogen ingestion into helium-burning layers where the temperature is of the order of 10^8 K. For instance, 15 M☉ models with metallicities of [Fe/H] ≤ −8 experience the inward extension of the hydrogen-burning convection into the helium core before the onset of carbon burning. This results in a hydrogen flash with a hydrogen-burning luminosity exceeding log(L/L☉) > 10, for which the stellar evolution calculation does not converge. It is not clear why this mixing occurs, and it should be studied in a separate paper. The same phenomenon is reported in previous studies for rotating models with M = 160 M☉ (Takahashi et al. 2018) and M > 200 M☉ (Yoon et al. 2012). We tested models with different time steps and rezoning during these evolutionary phases, and we always encountered the same instabilities.
We also tested other models using different stellar evolution codes and found the same phenomena. To allow the computations of these models to proceed, we artificially prohibited the convective mixing into the helium core. This can be justified for the evolution of the star itself because the overall evolution is controlled by the nuclear burning at the centre, not by the shell burning. The evolutionary tracks in figure 2 are apparently consistent with the previous studies (Heger & Woosley 2010; Tanikawa et al. 2020). The computations were also made for models of low-mass stars with zero and low metallicities. The model of M = 0.8 M☉ with the mass fraction Z of elements heavier than helium set to zero was constructed at an age of 10 Myr from the zero-age main sequence and was used as the target star in the 3D simulations. Other models of M = 0.82 M☉ with various metallicities were computed to check the effect of surface pollution on the low-mass stars. In the following, we present the models of 0.8 M☉ and 0.82 M☉ stars in the same plot, but there are only minor quantitative differences between these models. Figure 3 shows the evolution of the mass of the surface convective zones from the main-sequence phase to the beginning of the red giant phase. To confirm the effect of surface pollution, we computed the models by covering the surface convective zones with metal-rich (Z = 10^−4 and Z = Z☉) material. The depth of the convective zone is almost the same as in the models without pollution because the structure of the surface convective zone is determined by the nuclear burning in the centre. Sample selection from the database The data have been taken from the 2019 December 11 version of the Stellar Abundances for Galactic Archaeology (SAGA) database (Suda et al. 2008, 2011, 2017; Yamada et al. 2021). The differences in the adopted solar ... We have removed sample stars that are likely affected by extra mixing during the RGB phase. There is increasing evidence of lithium depletion by extra mixing in red giants above the RGB bump (Charbonnel et al. 2020). These stars are removed from the sample by defining the boundary of the RGB bump, which is estimated by the comparison of stellar models with the observed effective temperature, luminosity, and metallicity. We identified the loca ... ; the majority of extra-mixing stars show lithium depletion after the correction with stellar evolution models (see figure 1). SPH simulations We conducted a series of hydrodynamical simulations of binary systems of the first stars to estimate the evolution of the surface lithium abundances and metallicities. In this section, we describe the models, numerical methods, and some comparison results, which are necessary to select the fiducial models. Table 1 provides the model parameters used in this study. The model H15F is the fiducial model. The reasons for the choice of our numerical setups can be found in the following subsections and the Appendix. The model names that use "S" (for "separation") are used to investigate the dependence on binary separation. The models named with "R" (for "resolution") refer to resolution studies, and the model with "SSPH" corresponds to the investigation with the standard SPH method. The model with "W," which means "whole," adopts the highest resolution in the whole simulation volume for the ejecta without using an anisotropic particle-mass distribution, as described in sub-subsections 3.3.3 and 3.3.4.
See the appendix for our feasibility studies on resolution and computational methods, corresponding to the models named with "R," "W," and "SSPH." Numerical simulation code Numerical simulations shown here were carried out with the parallel code ASURA (Saitoh et al. 2008, 2009), which was originally developed for simulations of galaxy formation. The hydrodynamic equations are solved by an SPH method (Lucy 1977; Gingold & Monaghan 1977) with an improved version of SPH [the density-independent formulation of SPH; DISPH] implemented. The conventional formulation of SPH assumes the smoothness of the density field and cannot handle fluid instabilities, due to the unphysical surface tension that appears at contact discontinuities. We employed a simple equation of state (EoS) that assumes an ideal, mono-atomic gas (i.e., the specific heat ratio is γ = 5/3). Chemical reactions in the ejecta and the star are not taken into account. Self-gravity is solved by the tree method (Barnes & Hut 1986). We only considered the monopole moment, with the tolerance parameter θ = 0.5. The interactions between particle-particle and particle-monopole moments are computed with Phantom-GRAPE (Tanikawa et al. 2012, 2013). (Table 1 notes) * The columns represent the model name, the number of SPH particles for the companion star and for the supernova ejecta, the binary separation, the unbound mass due to stripping, the bound mass due to accretion, and the final iron and lithium abundances without and with lithium-rich ejecta on the surface of the companion star. See section 5 for the details of estimating the abundances. † The accreted mass is not estimated because we stopped the simulation too early to estimate the effect of accretion. Figure 5 shows a schematic picture of our model of a binary system. It consists of a low-mass star and supernova ejecta. 1D models are mapped on to the particle distributions in a 3D volume. We fixed the mass of the low-mass star and used three different supernova ejecta, named H15, H20, and H25, which represent 15, 20, and 25 M☉ progenitors, respectively, from Heger's pre-supernova models (Heger & Woosley 2010). We tested four different separations to investigate the dependence of the stripping and accretion at the surface of the low-mass star on the collision of the ejecta. The impact of the initial separation on the evolution is studied for the H15 model by adopting four different separations. The separations of the H20 and H25 models are the minimum allowable, at which the low-mass star and the supernova progenitor are in contact with each other. The minimum separation in the H15 model also has this configuration (see sub-subsection 3.3.5 for further details). We assumed that the surface convective zone of the model mixes the materials homogeneously and does not change its mass and structure by the collision of supernova ejecta. This assumption may affect the estimate of the final chemical composition by the simulations. We ignored the rotation of the low-mass star with respect to the supernova progenitors because its timescale is much longer than those of the expanding ejecta. Mapping the 1D stellar model on to the 3D space We used the result of the numerical simulation of 0.8 M☉ with Z = 0 as described above. Figure 6 shows the radial profiles of the 3D model star taken from the 1D model at t = 10 Myr from the zero-age main sequence.
To construct a 3D model with particles, we put the particles in individual shells so that the mass contained in (r_in, r_out) satisfies the condition ∫_{r_in}^{r_out} 4π r^2 ρ(r) dr = m(r_in), where ρ(r) is the density profile and m(r_in) is the particle mass at r_in. The exact position of a particle in the shell is determined by R(r_out − r_in) + r_in with a random number R ∈ [0, 1]. The values of the density at individual positions are given by linear interpolation between ρ(r_in) and ρ(r_out). The angular positions, i.e., the polar and azimuthal angles, are also randomly chosen. The central particle is distributed by finding r_out from the above equation for r_in = 0 and m(0) = 32 m_p,*, where m_p,* is the minimum mass of an SPH particle and is set at 3 × 10^−7 M☉. For the rest of the mass distribution m(r), we adopted a prescription with R_edge ≈ 0.64 R☉ to reduce the computational cost. The outer layers have finer mass resolutions. The total number of gas particles is 1023991. The star has the chemical composition X = 0.767, Y = 0.233, Z = 0, where X, Y, and Z represent the mass fractions of hydrogen, helium, and other elements, respectively, at the age of 10 Myr from the onset of hydrogen burning in the centre. (Fig. 5 caption) (a) The circle at the top is the low-mass star companion in the binary system. The arrows represent the ejection of matter from the massive star. The initial separation between the two stars is measured by the distance between the centre of the supernova ejecta and the low-mass companion. (b) The initial structure of the star. The inner region consists of coarse particles to reduce the computational cost. (c) The initial structure of the ejecta. The ejecta are divided by direction so that the region has the finest resolution for colliding angles. The internal energy of each particle is computed by linear interpolation of the temperature profile, with the assumption of an ideal gas with the adiabatic index γ = 5/3 and a mean molecular weight of 0.6. Before starting the simulations of ejecta collisions, we relaxed the particle distribution of the 3D model star following the simulations for Type Ia SNe (Rimoldi et al. 2016). We added a damping term in the hydrodynamical force and integrated the star for 2 hr, which is sufficient for the relaxation because the dynamical time of the system is estimated to be 30 min. This technique enables us to avoid oscillations of the stellar surface, which are not suitable for our simulations because they change the cross-section of colliding particles. Mapping the 1D ejecta model on to the 3D space The 1D profiles of the pre-supernova models with Z = 0 are taken from the literature (Heger & Woosley 2010). We have employed the 15, 20, and 25 M☉ models. Figure 7 shows the radial profiles of the three supernova ejecta models. The outer edges of the ejecta are 10, 14, and 24 R☉, and the corresponding times from ignition are 1747, 2195, and 5705 s, respectively. The model of 15 M☉ is prescribed following the model of SN 1987A (Shigeyama & Nomoto 1990).
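The shell-by-shell particle placement for the companion star described above can be sketched as follows. This is a simplified illustration under the stated condition that each shell encloses exactly one particle mass; the uniform radius within the shell follows the text, while the isotropic angular sampling, the integration step, and the function names are our own choices for this sketch.

```python
import numpy as np

def place_shell_particles(rho, r_star, m_particle, dr=1e-4, seed=0):
    """Fill a spherically symmetric star with equal-mass particles, shell by shell.

    rho        : callable, density as a function of radius (units consistent with r_star)
    r_star     : stellar radius
    m_particle : target mass per particle
    A shell (r_in, r_out) is grown until 4*pi*int(rho r^2 dr) reaches m_particle;
    one particle is then placed at a uniformly drawn radius inside the shell with
    a randomly drawn direction, and the next shell starts at r_out.
    """
    rng = np.random.default_rng(seed)
    positions = []
    r_in, shell_mass, r = 0.0, 0.0, 0.0
    while r < r_star:
        r_next = min(r + dr, r_star)
        # trapezoidal estimate of the mass in the thin sub-shell [r, r_next]
        shell_mass += 0.5 * (4 * np.pi * r**2 * rho(r)
                             + 4 * np.pi * r_next**2 * rho(r_next)) * (r_next - r)
        r = r_next
        if shell_mass >= m_particle:           # shell closed: emit one particle
            radius = r_in + rng.uniform() * (r - r_in)
            costheta = rng.uniform(-1.0, 1.0)  # isotropic direction (our choice)
            phi = rng.uniform(0.0, 2 * np.pi)
            sintheta = np.sqrt(1.0 - costheta**2)
            positions.append(radius * np.array([sintheta * np.cos(phi),
                                                sintheta * np.sin(phi),
                                                costheta]))
            r_in, shell_mass = r, 0.0
    # any residual mass in the outermost, unfinished shell is ignored in this sketch
    return np.array(positions)
```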
We have developed a new technique to ensure a high resolution for the colliding particles by changing the particle mass depending on the direction of the ejecta particles. Here we define the direction of the ejecta particles by the angle θ from the centre of the ejecta along the z-axis connecting with the centre of the companion star. The particle mass depends on the initial location of the ejecta in the (θ, z) coordinates through a prescription in which m_p,ej is the minimum mass of an ejecta particle. The positions of the particles are determined in the same way as described in the previous subsection. The radial velocities, internal energies, and densities of the individual particles are assigned to reproduce the profiles in figure 7. We tested four different choices of m_p,ej, i.e., 1 × 10^−6, 3 × 10^−7, 1 × 10^−7, and 3 × 10^−8 M☉. The third choice (H15) is adopted as the fiducial resolution because its result converges with the result using the highest mass resolution (H15Rc). The corresponding numbers of particles for the four models are 479442, 1599777, 4800569, and 16001929, respectively. Those for H20 and H25, with the fiducial mass resolution, are 6487398 and 8189647, respectively. We have confirmed the validity of using particles with different masses by running a test simulation using equal-mass particles and comparing it with the direction-dependent mass model for the same collision condition. It is confirmed that the stripped mass from the companion star and the accreted mass are consistent with each other. We found that we would need ≈ 140000000 particles in the equal-mass model to achieve a similar number of effective particles (tan θ < 0.25, z ≥ 0); i.e., our new technique succeeds in reducing the number of particles by a factor of about 30. Thanks to our improved method for the assignment of mass to the SPH particles, more than 130000 particles effectively interact with the companion star despite the small visual angle of 3.5° in the H15 model. The consistency between the 1D and 3D models is confirmed by the total kinetic energy just before the impact of the ejecta. We also checked the stripped mass of the companion with the setup of Type Ia supernovae and compared it with a previous study under the same initial mass and separation (Pakmor et al. 2008). Initial conditions for a binary system In the fiducial model, we adopt an initial separation of 0.1 au (21.5 R☉), which is larger than the radius at the supernova explosion according to metal-free star models (Heger & Woosley 2010) and larger than the critical radius of Roche-lobe overflow based on the empirical formula of Eggleton (1983). We also computed the cases with 0.2, 0.4, and 0.8 au to investigate the dependence on the initial separation. To compare various progenitors, we computed the models of 15, 20, and 25 M☉ stars with the minimum separations, which are characterized by the sizes of the progenitor stars. These correspond to 0.048, 0.064, and 0.112 au. The orbital motion of the binary system was ignored in this study for the sake of simplicity. This is reasonably validated by the short timescale (∼10 hr) of our simulations compared with the orbital period (a few days to a few months). The consideration of the orbital motion may have some effect on the amount of accretion at the separation of 0.1 au, while the stripping of the surface layers will not be affected because the stripping is dominated by the massive, fast ejecta on a shorter timescale (a few hours). It is worth investigating the case of accretion with the motion of the binary orbit taken into account, but this is beyond the scope of this paper.
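For reference, the Eggleton (1983) approximation cited above can be evaluated directly. The sketch below checks that, for an illustrative 15 M☉ + 0.8 M☉ pair at the fiducial 0.1 au separation, the progenitor's Roche-lobe radius exceeds the ∼10 R☉ final radius quoted for metal-free massive stars; the specific numbers are illustrative choices, not values taken from the paper's tables.

```python
import numpy as np

def eggleton_roche_radius(a, m_star, m_companion):
    """Eggleton (1983) fit for the Roche-lobe radius of `m_star`.

    a           : orbital separation
    m_star      : mass of the star whose Roche lobe is evaluated
    m_companion : mass of the other star
    Returns the Roche-lobe radius in the same units as `a`.
    """
    q13 = (m_star / m_companion) ** (1.0 / 3.0)
    return a * 0.49 * q13**2 / (0.6 * q13**2 + np.log(1.0 + q13))

# Illustrative numbers only: a 15 Msun progenitor with a 0.8 Msun companion
# separated by 0.1 au (about 21.5 Rsun), as in the fiducial H15 setup.
a_rsun = 21.5
r_lobe = eggleton_roche_radius(a_rsun, 15.0, 0.8)
print(f"Roche-lobe radius of the progenitor: {r_lobe:.1f} Rsun")
# ~13 Rsun, i.e. larger than the ~10 Rsun final radius of metal-free massive
# stars, so the progenitor stays inside its Roche lobe at this separation.
```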
By ignoring the centrifugal force of the orbital motion, a gravitational pull is exerted on the ejecta and the star, which results in an unwanted motion of the system. Therefore, we filtered the long-range gravitational force on the companion star by ignoring the gravitational interaction with ejecta at distances of more than 10 R☉ from the centre of the star. This prescription mimics the orbital motion and is sufficient to avoid the strong gravity caused by the ejecta at their initial position and to follow the effective accretion process after the collision. Mass loss from the progenitor stars is not considered, which is justified by the metallicity dependence of stellar winds. The rotation of the star and the ejecta is not considered. The consideration of rotation would be more realistic, especially for the ejecta, but it is too complicated to implement in the simulations. On the other hand, fast rotation of the companion stars will not be mandatory in the binary systems studied. We also employed a particle-splitting method (Kitsionas & Whitworth 2002; Martel et al. 2006) to retain a high resolution even at large separations. In the run of the H15Sa model, the ejecta particles within 6 R☉ from the centre of the low-mass star were split into eight smaller particles. In the runs of the H15Sb and H15Sc models, those within 12 R☉ were split into eight smaller particles, and those smaller particles were divided into eight further smaller particles when they came within 6 R☉. Figure 8 shows the result of the simulations for a binary system consisting of 15 and 0.8 M☉ stars separated by 0.048 au (the H15 model in table 1). The impact of the ejecta on to the binary companion is simulated for 24 hours after the explosion for this model. Once the ejecta material collides with the surface of the companion star, a bow shock is formed near the surface. The basic picture of the outcome of the collision is consistent with previous studies (Pakmor et al. 2008; Liu et al. 2013; Hirai et al. 2018). Results There are two possible impacts on the surface of a binary companion induced by the collision of supernova ejecta. The fast-moving outer ejecta, with velocities of as much as 10000 km s^−1, strip the surface layers of the companion star, while a part of the slowly moving inner ejecta, down to 3000 km s^−1, is accreted by the companion. It is found that the ejecta do not strip the whole of the surface convective zone. Therefore, the surface chemical composition does not change due to this effect (top panel in figure 9). The mass of the accreted matter is estimated by calculating the total energy of the individual particles representing the ejecta (bottom panel in figure 9). We found that the particles near the surface of the companion star experienced shock heating that gave rise to high temperatures of 10^8 K. At this temperature, ^7Li can be destroyed quickly even during this short-timescale event. The density of the shocked region is of the order of 10^−4 g cm^−3, which is lower than the density, ∼1 g cm^−3, of the shell where ^7Li burns in the envelope at T ≈ 3 × 10^6 K. On the other hand, the nuclear reaction rates are much larger at 10^8 K than at 3 × 10^6 K, by 16 orders of magnitude (Caughlan & Fowler 1988). The nuclear timescale for lithium destruction by proton-capture reactions is estimated to be ∼300 s. However, the particles affected by shock heating do not accrete on to the surface of the companion star and escape from the system. Therefore, it is justified to ignore the change of the surface chemical composition by nuclear reactions in our hydrodynamical simulations. Figure 10 shows the time evolution of the mass of bound ejecta particles grouped by chemical composition.
The inner ejecta, such as the nickel- and silicon-rich layers, result in more efficient accretion than the outer ejecta, such as the hydrogen- and helium-rich layers. This can be interpreted as a strong velocity dependence of the accretion rate, where the amount of accretion is larger for slower ejecta. If the ejecta shells were unmixed and retained their chemical composition, the ejecta shells within the oxygen, neon, and magnesium layers would be accreted on to the companion. This means that the final surface abundances of the companion stars would be very metal-rich with unusual abundance patterns. However, supernova ejecta are thought to be well mixed during the shock propagation in the envelope by the Rayleigh-Taylor instability, as inferred from the timing of the detection of line gamma-rays and X-rays in SN 1987A (Arnett & Fu 1989; Kumagai et al. 1989). It is to be noted that there are multi-dimensional simulations of the SN 1987A explosion (Utrobin et al. 2019), but the yields of supernova ejecta are not available from their results. Although our models are not able to predict the abundance patterns, it will be interesting to specify the progenitors by characteristic abundance patterns of the first supernovae. If the observed stars reflect abundance patterns produced by core-collapse supernovae, we may observe deficiencies of odd-Z elements. Also, the abundance ratios [Na/Mg] or [Ca/Mg] can be diagnostics of pair-instability supernovae (Takahashi et al. 2018), which can be used to discriminate our scenario from second-generation stars formed out of the ejecta of pair-instability supernovae. The accretion process is subject to uncertainties in estimating the bound mass. We estimated the expected amount of the bound mass by considering the balance between the gravitational binding energy and the kinetic energy of the individual particles of the ejecta. This is because the accretion process is complicated and the phenomenon happens over a much longer timescale than the time covered by the simulations. To estimate the effect of accretion more accurately, we iterated the computation of the gravitational force exerted on the individual particles, removing the particles that are going to escape from the system, until no particles escape any more. It is highly uncertain how the accreted gas particles mix with the convective envelope of the companion star. In our simulations, we do not follow the accretion process, which proceeds on a thermal timescale, because it is much longer than the dynamical processes considered in this study and because of the technical difficulty of simulating gas accretion on to a stellar surface. The detailed modelling of the accretion process is poorly understood, owing to the complexity of the formation of accretion discs and the considerations of radiation pressure, magnetic force, and other physical processes at a stellar surface (Marietta et al. 2000). It is still to be established how the convective envelope is reconstructed after the gas accretion. There are no adequate prescriptions to implement the mixing of matter into convective envelopes, either in 3D hydrodynamical simulations or in a 1D hydrostatic stellar evolution code.
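A schematic version of the iterative bound-mass bookkeeping described above is given below. It treats the companion plus the currently bound gas as a single point mass and assumes velocities are given in the frame of the companion, both simplifications of the full particle-by-particle gravity used in the simulations; the function and variable names are assumptions for illustration.

```python
import numpy as np

G = 6.674e-8  # gravitational constant in cgs units

def bound_ejecta_mass(pos, vel, mass, m_star, pos_star, max_iter=100):
    """Iteratively estimate the ejecta mass that remains bound to the companion.

    pos, vel, mass : (N, 3), (N, 3) and (N,) arrays for the ejecta particles,
                     in cgs units and in the rest frame of the companion star.
    m_star         : mass of the companion star.
    pos_star       : (3,) position of the companion star.
    A particle is provisionally bound if its kinetic energy is smaller than its
    binding energy to the companion plus the currently bound gas; unbound
    particles are removed and the test is repeated until the set converges.
    """
    bound = np.ones(len(mass), dtype=bool)
    r = np.linalg.norm(pos - pos_star, axis=1)
    kinetic = 0.5 * np.sum(vel**2, axis=1)
    for _ in range(max_iter):
        m_central = m_star + mass[bound].sum()
        total_energy = kinetic - G * m_central / r
        new_bound = total_energy < 0.0
        if np.array_equal(new_bound, bound):
            break
        bound = new_bound
    return mass[bound].sum(), bound
```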
To consider the effect of mixing suggested by the observations above, we assumed that the bound ejecta particles are well mixed in the convective envelope of the companion star. We obtained [Fe/H] = −0.2 for well-mixed ejecta in the case of model H15, where the maximum amount of accretion is achieved. The lithium abundance will be reduced by 0.86 dex, estimated from the accreted mass and the mass of the surface convective zone of a metal-free 0.82 M☉ model at the time of impact, when 10^7 yr have passed since the onset of hydrogen burning. For longer separations, lithium depletion is much smaller. In the 15 M☉ case, lithium depletion is negligible at 0.2 au. At larger separations, the amounts of stripping and accretion decrease very rapidly. The results for all the models are summarized in table 1. The lithium abundances will not change after the accretion event, since the surface convective zone of low-mass stars recedes in mass during the main-sequence evolution (figure 3) and will not mix with the lithium-containing layer. Discussions The values of [Fe/H]_mixed in table 1 are calculated from the accreted mass M_acc, taken from the simulation results in table 1, together with parameters taken from the stellar models: M_ini, M_rem, and M_conv are the initial mass, the remnant mass, which is set at 1.3 M☉, and the mass of the surface convective zone of the low-mass companion, which is set at 3.73 × 10^−3 M☉, respectively. The iron yields from supernovae, Y_Fe, are taken from the mixed models with an explosion energy of 1.2 × 10^51 erg in table 6 of Heger and Woosley (2010). The abundance parameters X_Fe,☉ and X_H,☉ are taken from the literature (Asplund et al. 2009). The hydrogen abundance X_H at the surface of low-mass first stars is assumed to be the solar value. The final lithium abundances are estimated from M(Li), A_Li, Y_Li, M_ej, and A(Li)_ini, which denote the lithium mass in the surface convective zone, the mass number of ^7Li, the lithium yield from a supernova, the total mass of the supernova ejecta, and the initial lithium abundance, respectively. The values of Y_Li and M_ej are taken from the same models as above. The initial value of the lithium abundance is assumed to be 2.1. The free parameter f describes the contribution of the lithium yield in the supernova ejecta. In table 1, we provide the final lithium abundances without lithium yields, A(Li)_woLi, obtained by setting f = 0, and those with lithium yields, A(Li)_wLi, obtained by setting f = 1. It is to be noted that the above estimates do not include the stripped mass. This is because the mass of the lithium-containing layers is always smaller than the mass of the surface convective zone of the low-mass companion in our simulations. For instance, the interaction between the ejecta and the companion can strip an envelope mass of 3.3 × 10^−3 M☉ at most, as shown in figure 9 and table 1. The stripped mass is smaller than the mass of the surface convective zone of a 0.82 M☉ star at the age of ∼10 Myr, which is the lifetime of a 15 M☉ star. In this case, the stripping event will not reduce the surface lithium abundances, although this depends on how we treat the mixing in the surface convective zones, for which there are no reliable models or theories. Here we expect that the surface convective zones will be reconstructed after the accretion event rather than after the stripping event. The duration between the stripping and the accretion is much shorter than the time for the surface convective zone to be reconstructed. Therefore, the accretion of ejecta overwrites the effect of the stripping.
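As a concrete illustration of the estimates defined above, the following sketch adopts one plausible form of the mixing calculation: the accreted iron is taken as the well-mixed ejecta fraction M_acc Y_Fe/(M_ini − M_rem) diluted over M_conv + M_acc, and the lithium abundance follows from diluting the envelope lithium, with an optional fraction f of the supernova lithium yield. This is an assumed reconstruction, not necessarily the exact expressions used to build table 1, and the solar reference values are approximate.

```python
import numpy as np

# Approximate solar hydrogen and iron mass fractions (Asplund et al. 2009).
X_H_SUN, X_FE_SUN = 0.7381, 1.3e-3

def feh_mixed(m_acc, y_fe, m_ini, m_rem, m_conv, x_h=X_H_SUN):
    """[Fe/H] of the polluted envelope, assuming fully mixed ejecta.

    All masses in the same units (e.g. solar masses); y_fe is the iron mass in
    the ejecta.  This is a plausible form consistent with the definitions in
    the text, not the authors' published equation.
    """
    fe_accreted = m_acc * y_fe / (m_ini - m_rem)     # iron mass that is accreted
    x_fe = fe_accreted / (m_conv + m_acc)            # diluted iron mass fraction
    return np.log10(x_fe / x_h) - np.log10(X_FE_SUN / X_H_SUN)

def a_li_mixed(m_acc, m_conv, a_li_ini=2.1, y_li=0.0, m_ej=1.0, f=0.0, x_h=X_H_SUN):
    """Surface A(Li) after mixing accreted matter into the convective zone.

    With f = 0 the accreted gas carries no lithium and the result reduces to
    pure dilution: A(Li)_ini + log10(M_conv / (M_conv + M_acc)).
    """
    n_li_per_h = 10.0 ** (a_li_ini - 12.0)           # from A(Li) = 12 + log10(N_Li/N_H)
    m_li = n_li_per_h * (x_h * m_conv / 1.0079) * 7.016   # initial envelope Li mass
    m_li += f * m_acc * y_li / m_ej                   # optional SN lithium contribution
    m_h = x_h * (m_conv + m_acc)                      # hydrogen mass after mixing
    return 12.0 + np.log10((m_li / 7.016) / (m_h / 1.0079))
```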
Our models predict solar-metallicity stars in the halo of our Galaxy for shorter binary separations. Figure 1 shows the comparisons of the lithium abundances. Provided that the first massive stars explode with small radii (figure 2), it is more likely that solar-metallicity stars in the halo are survivors of the first stars in the universe, originally having a chemical composition resulting from Big Bang nucleosynthesis. In the case of longer binary separations, our models may account for metal-poor ([Fe/H] ∼ −3) lithium-depleted stars and for lithium-rich [A(Li) > 3] stars. Metal-poor ([Fe/H] < −5) stars without lithium depletion are also possible, although we cannot distinguish between our binary scenario and the case of single stars. The abundance patterns of all the measured elements should be checked individually, but they will vary according to the abundances of the yields and the degree of mixing in the ejecta. The observed lithium depletion around [Fe/H] = −3 is not explained by our models, because of the small amount of lithium depletion for longer separations and too much accretion for shorter separations. Instead, our models predict a small or negligible depletion at [Fe/H] < −5 and the existence of solar-metallicity stars among the halo stars in our Galaxy. To confirm the existence of the first metal-rich stars in the Galactic halo, we need to exclude the possibility of normal metal-rich halo stars. In our proposed scenario, we have excluded the possibility of stars that survived the common-envelope phase, because a companion star in the common-envelope phase will accrete sufficient mass to become much more massive than 0.8 M☉. Such stars can be easily excluded from their position on the H-R diagram. Even if there is a Wolf-Rayet star with a low-mass companion, its supernova ejecta must be too fast to be accreted on to its companion, owing to the absence of a hydrogen-rich envelope in the progenitor. There is also the argument that the evolutionary path of the SN 1987A progenitor is not established. The most promising scenario for the progenitor is that it is a binary merger (e.g., Morris & Podsiadlowski 2007). If this is the case, we need a third star to make our scenario work. This is obviously an unusual case, and such stars cannot be a major source of noise in the search for metal-rich halo stars. The formation of solar-metallicity stars in metal-free or metal-poor host halos could be another exceptional case to consider. According to hydrodynamical simulations of star formation triggered by supernovae with initial conditions taken from cosmological simulations, second-generation stars formed after the first supernovae have metallicities of up to [Fe/H] ∼ −1 (Wise et al. 2012; Smith et al. 2015; de Bennassuti et al. 2017; Chiaki & Tominaga 2020). It is not realistic that solar-metallicity stars would form from a single supernova in a metal-free or metal-poor host cloud. However, it may be possible if the kinetic energy of the supernova is very low and the swept-up mass is correspondingly small. It would be intriguing to compare the abundances of second-generation stars formed from low-energy supernovae with those of binary companion stars in close massive binaries. There is a possibility of the formation of solar-metallicity stars through chemical evolution in the progenitors of the Galactic halo. Simulations of metal enrichment in the Galactic halo by early generations of stars predict metallicities of up to [Fe/H] ∼ −0.5 (Komiya et al. 2014; Sarmento et al. 2017; Côté et al. 2018). For example, Côté et al.
(2018) found star-forming gas with solar metallicity at a redshift of z = 7.29 in their simulations of the formation of the most massive galaxy realised by Wise et al. (2012). Sarmento et al. (2017) found a highly polluted region with a gas metallicity of Z = 0.1 Z☉ close to the centre of its host halo in their simulations up to z = 5. These results may imply that the current chemical evolution models do not focus on the formation of the most metal-rich stars in the Galactic halo, i.e., we presume that solar-metallicity stars would not form or survive in the halo. If metal-rich stars are formed in the halo by consecutive star formation, their abundance patterns should be dominated by Type II supernovae, not by Type Ia supernovae. We will be able to identify the progenitors of such stars by their larger [α/Fe] compared with disc stars of the same metallicity, if they exist. It is difficult to follow the formation of close binary systems in a metal-free environment using hydrodynamical simulations. Since our scenario focuses on binaries with separations of less than ∼1 au, we need simulations with very high resolution. All of the current simulations of first-star binaries deal with protostellar binaries much wider than 1 au (see, e.g., Sugimura et al. 2020; Susa 2019). It is not clear whether the formation of first-star binaries with small separations is supported by numerical simulations. However, it is interesting that higher-resolution studies found more possible close binaries as the minimum spatial resolution of the simulations was decreased from 20 au (Stacy & Bromm 2013) to 5 au (Stacy et al. 2016). We anticipate more applications of our scenario to the origin of known peculiar stars, namely lithium-enhanced stars. There is an argument concerning the production of lithium in the hydrogen-rich envelopes of massive stars by neutrino processes (Heger & Woosley 2010). If we simply apply the well-mixed lithium yield to the accretion on to the low-mass companion, the maximum surface lithium abundance can be A(Li) ∼ 3.0. This may account for some of the extremely lithium-enhanced stars, with values as large as A(Li) ∼ 4.0, among main-sequence stars (Li et al. 2018). Such lithium-enhanced stars are very rare, comprising 0.1 percent of the total population over almost the entire metallicity range (Kirby et al. 2016). This fact implies that the ejection of lithium-rich ejecta and the accretion on to the companion involve complicated physics. Theoretical models are not able to explain lithium enhancement at the surface of low-mass stars. The transport of ^7Be and its decay to ^7Li in the envelope require a high temperature at the bottom of the convective envelope, and hence only AGB stars with M ≳ 4 M☉ can reach A(Li) ≳ 3.0 by self-enrichment (Karakas & Lugaro 2016). Low-mass red giants are thought to suffer from extra mixing at the red giant branch bump (see, e.g., Charbonnel et al. 2020) and experience strong lithium depletion. Even if the parameter for extra mixing is adjusted, it is too difficult to explain lithium-rich giants (Lattanzio et al. 2015). Nova outbursts produce a large amount of lithium (Starrfield et al. 2020). However, there are no known mechanisms that form low-mass stars in the Galactic halo from nova yields. If there exist metal-rich and lithium-rich halo stars whose ingredients are dominated by nova yields, we may distinguish them by the abundance ratios of CNO isotopes (José & Hernanz 1998). Lithium enhancement at super-solar metallicity is also controversial.
Theoretical models predict that lithium abundances are enhanced at super-solar metallicity, while the observations reveal a decreasing trend of lithium abundances. According to the models by Karakas and Lugaro (2016), AGB stars with M ≥ 4.75 M☉ and Z = 0.03 evolve into lithium-rich stars with up to A(Li) ∼ 5, while the observations show a decrease in A(Li) at [Fe/H] > 0 (Guiglion et al. 2016). Chemical evolution models for lithium in the Galactic disc can reproduce lithium abundances as high as A(Li) ≈ 3.5 (Prantzos et al. 2017). Therefore, the only possible contaminants are metal-rich stars born in the Galactic disc that fly out into the halo. An exploration of a different set of parameters and assumptions could be of interest. For instance, the assumption of a spherically symmetric explosion is not guaranteed. Aspherical explosions or jet explosions will change the amount of stripping and accretion (Tominaga 2009), although there is no convincing theory or observation of the geometry of supernova explosions. Still, there will be a way to produce metal-rich survivors with these models. A more eccentric orbit will produce a variation in the stripping and accretion depending on the orbital phase at the explosion, although this effect can be incorporated into the binary separation. It is difficult, and beyond the scope of this study, to estimate the overall uncertainties in these new parameters. Various situations and models may better explain the diversity of the chemical compositions of known extremely metal-poor stars. After the supernova event of the massive star, the binary system will be disrupted by the ejection. The remnant of the supernova will have a velocity of a few hundred km s^−1 (Cordes et al. 1993). In the circumstances of the first binary star formation, it may be difficult for the remnants of the supernovae to be confined in the gravitational potential of the low-mass host halo. The low-mass companion star can remain in its host halo if its orbital velocity is lower than or comparable to the velocity dispersion of the host halo. Currently, there is no direct observational evidence of metal-free and metal-poor binary systems such as we propose, because they are too old for the massive primaries to survive. Therefore, the search for observational counterparts must be made among nearby, metal-rich stars. We searched for low-mass companions in OB star binaries (Moritani et al. 2018). The analysis of the 10 target stars is still ongoing; among them, HD 164438 is reported to have an intermediate-mass companion with a mass ratio of q = M_2/M_1 = 0.1-0.2 (Mayer et al. 2017), where M_1 and M_2 are the initial masses of the primary and the secondary, respectively. Although we have not yet established the population of binary systems in the solar vicinity, we can expect some fraction of stars to be metal-rich counterparts of the first-generation massive binaries. There are no reliable models or observations of binary star formation over the entire range of metallicity for OB stars, and hence we encourage the exploration of binary star formation. To explore the possibility of observational counterparts in the solar vicinity, we have compiled a catalogue of OB stars from the literature to find binaries where the masses of the primary star and the secondary star are 10-20 M☉ and less than 1 M☉, respectively, with a separation of less than 1 au. This criterion is translated into a mass ratio q < 0.1 and a binary period of a few days to a few months.
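The translation of the separation criterion into orbital periods can be checked with Kepler's third law. The sketch below uses an illustrative total mass of 15.8 M☉ (a 15 M☉ primary with a 0.8 M☉ companion); the specific masses and separations are assumptions for this check, chosen only to show that separations below 1 au correspond to periods of days to a few months.

```python
import numpy as np

def orbital_period_days(a_au, m_total_msun):
    """Kepler's third law in convenient units: P[yr] = sqrt(a[au]**3 / M[Msun])."""
    return 365.25 * np.sqrt(a_au**3 / m_total_msun)

# Illustrative total mass: 15 Msun primary + 0.8 Msun companion.
for a in (0.05, 0.1, 1.0):
    print(f"a = {a:4.2f} au -> P = {orbital_period_days(a, 15.8):5.1f} days")
# roughly 1, 3 and 92 days, i.e. periods from about a day up to a few months.
```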
We investigated more than 20 binary catalogues and surveys to compile the existing observational data. These data include spectroscopic binaries (Pourbaix et al. 2004), eclipsing binaries (Malkov et al. 2006), astrometric binaries (Mason et al. 2001), and visual binaries (the Sixth Catalog of Orbits of Visual Binary Stars, http://www.usno.navy.mil/USNO/astrometry/optical-IR-prod/wds/orb6). We have collected binary parameters for the stars classified as Galactic OB stars using the online catalogue of Skiff (2014). Figure 11 shows the distribution of mass ratios taken from our compilation of 332 OB stars with known binary mass ratios and periods (1 yr or shorter) from the observations. There is an observational bias in the distributions of both periods and mass ratios in our sample. In particular, binaries with small mass ratios are difficult to identify, and hence the number of such binaries is underestimated. In the figure, no corrections to the distribution are applied, as discussed in other studies (Moe & Di Stefano 2017). (Fig. 11 caption) Mass ratio distribution function from the literature data for the binaries whose periods are 1 yr and shorter. Apart from the literature study, we have conducted radial velocity monitoring to specify binary systems that are the metal-rich counterparts of those relevant to the massive binary scenario. Medium-resolution spectra have been obtained with the Medium And Low-dispersion Long-slit Spectrograph (MALLS) (Ozaki & Tokimasa 2005) on the Nayuta 2.0 m telescope at the Nishi Harima Astronomical Observatory (NHAO). Also, high-resolution spectra have been obtained using two spectrographs: the High Dispersion Echelle Spectrograph (HIDES) (Izumiura 1999; Kambe et al. 2013) on the 188 cm telescope at the Subaru Telescope Okayama Branch Office (OBO), and the Gunma Astronomical Observatory Echelle Spectrograph (GAOES) (Hashimoto et al. 2006) on the 1.5 m telescope at the Gunma Astronomical Observatory (GAO). The observations and analyses are ongoing, and an initial report is given in a separate paper (Moritani et al. 2018). Candidates for the survivors of the first stars exist in our Galaxy. The massive binary scenario presented here predicts the existence of metal-rich ([Fe/H] ∼ 0) and lithium-depleted, or perhaps lithium-rich [A(Li) ∼ 3], stars in the Galactic halo. We find 78 stars with [Fe/H] > −1 and six stars with [Fe/H] > −0.5 that have the space velocities of halo stars in the SAGA database, although none of them has a reported lithium abundance (table 2). Among them, one star, BS 17587-021, is apparently a metal-rich halo star from its metallicity ([Fe/H] = 0.93; Lai et al. 2008), and its motion is different from that of the Galactic disc components, as estimated using data from Gaia Data Release 2 (DR2) (Gaia Collaboration 2018). We need a more careful inspection of this star because only one iron line, at 670.015 nm, was used to determine its metallicity. There are also a number of solar-metallicity stars among halo stars kinematically selected in searches for high-velocity stars ejected from the Galactic thick disc (Hawkins et al. 2015). The majority of such halo stars with −0.7 < [Fe/H] < −0.2 are argued to be the remnant of the thick-disc component resulting from the last major merger event (Belokurov et al. 2020). These studies imply that stars with [Fe/H] > −0.2 have different origins and have experienced a very efficient metal enrichment process, like the scenario proposed here.
However, it is unlikely that such an event produces even more metal-rich stars than the thick disc stars. To identify the observational counterparts of the proposed scenario, we performed a cross-match of Galactic stars between the SAGA database and Gaia DR2 (Matsuno et al. 2019). Figure 12 shows the sample stars on the Toomre diagram that have space velocities larger than 180 km s −1 . The velocity data are calculated from the proper motion data taken from Gaia DR2 (Gaia Collaboration 2018). The boundary between the disc and halo stars are defined by the heliocentric velocity, V total = √ U 2 + V 2 + W 2 = 180 or 220 km s −1 as shown by the dashed lines in the figure, following the prescription in the previous studies (Nissen & Schuster 2010;Bonaca et al. 2017). The positions of the stars are calculated based on the right ascension and declination and the distance from parallax in Gaia DR2 where we imposed the criterion for the data accuracy by parallax_over_error >5. The total number of data with V total > 180 km s −1 is 1332. The massive binary scenario is a supplementary scenario to the existing three major scenarios for the origin of the known EMP stars, i.e., the scenario is consistent with the current framework. The three scenarios-(1) the binary mass transfer scenario, (2) the mixing and fallback in the first supernovae scenario, and (3) the fast-rotating massive star scenario-can be tested by surface lithium abundances, especially in scenarios (2) and (3). The degree of the contribution of a previous generation star to the abundance patterns of an observed star is measured by the dilution factor (Meynet et al. 2010), which is defined by the mass ratio of the ISM to the ejecta in the previous-generation star. The massive binary scenario can reproduce a variation in the dilution factor, by estimating the ratio of the envelope mass to the accreted mass. Therefore, the models of peculiar supernovae and possibly rotating massive stars may fit with the proposed scenario to explain all the abundance patterns, including lithium. In particular, the effect of winds from rotating massive stars is worth investigating because the winds are much slower than supernova ejecta and hence the companions can accrete more materials from the winds. A connection with the binary mass transfer scenario is intriguing. One of the most iron-poor stars, HE 0107−5240, belongs to a binary system whose binary period is more than 10 yr (Arentsen et al. 2019), which is consistent with the theoretical prediction of the binary mass transfer scenario (Suda et al. 2004). If long-period binaries are responsible for CEMP stars at low metallicity, a variety of CEMP stars and other metal-poor stars are expected, depending on binary parameters. Also, we may imagine a more complex scenario for the origin of CEMP stars. If we try to apply the massive binary scenario to HE 0107−5240, we will need a triple system to complete the abundance pattern. The accretion of iron and lithium, initiated by the collision of a supernova explosion on to the observed stars, is followed by the accretion of carbon and possibly s-process elements, keeping the abundance of lithium and iron-group elements on the surface. Considering the third star to reproduce all the abundances, we need a fine-tuning of model parameters, while the formation of a triple system is not so rare, as seen from the observations of nearby main-sequence stars (Moe et al. 2019). 
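For readers who wish to reproduce the kinematic selection used for Figure 12, the following minimal sketch applies the V_total > 180 km s−1 and parallax_over_error > 5 cuts; the column names and the example rows are placeholders rather than the actual SAGA or Gaia field names.

```python
import pandas as pd

def select_halo_candidates(df, v_cut=180.0, plx_over_err_min=5.0):
    """Flag stars whose total heliocentric space velocity exceeds v_cut [km/s].

    Expects columns 'U', 'V', 'W' (space velocities in km/s) and
    'parallax', 'parallax_error' (Gaia-style, in mas); the column names are
    placeholders, not the actual SAGA or Gaia field names.
    """
    good = df["parallax"] / df["parallax_error"] > plx_over_err_min
    v_total = (df["U"]**2 + df["V"]**2 + df["W"]**2) ** 0.5
    out = df.loc[good].copy()
    out["V_total"] = v_total[good]
    return out[out["V_total"] > v_cut]

# Example with made-up rows: only the second star passes both cuts.
stars = pd.DataFrame({
    "U": [20.0, -210.0], "V": [-15.0, -160.0], "W": [5.0, 90.0],
    "parallax": [2.0, 1.5], "parallax_error": [0.1, 0.2],
})
print(select_halo_candidates(stars))
```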
The formation of binary systems in the early universe will modify the current framework of the history of star formation and chemical evolution in the early universe. So far, only the first supernovae are thought to have contributed to the early chemical enrichment. There are many arguments for the formation of second-generation stars associated with the simulations of the first supernovae in the early universe. The major concerns are the metal mixing of supernova ejecta with the ISM (Whalen et al. 2008;Smith et al. 2015;Chiaki & Tominaga 2020), the formation of the low-mass, second-generation stars with dust cooling (Yoon et al. 2016;Chiaki et al. 2017), the initial mass function of the first-and second-generation stars (Hirano et al. 2014), and the chemical yields from the first supernovae (Heger & Woosley 2010;Tominaga et al. 2007). All of these topics will influence the proposed scenario in favor or disfavor of it. Our proposed scenario also influences the argument of the initial mass function of the first-generation stars. Our scenario is consistent with most of the recent simulations on the formation of first-generation stars in which massive stars are dominant. It will be more important to consider the formation channel of low-mass companions around massive stars. It is in line with the argument that some low-mass first-generation stars are formed in the disc of the central massive stars (Susa et al. 2014). The formation of intermediate and massive stars in the early universe is also favored to explain the large fraction of CEMP stars among EMP stars under the binary mass transfer scenario (Komiya et al. 2007). It is proposed that massive stars dominate the EMP population as a byproduct of the need for many intermediate-mass stars to pollute low-mass companions. The proposed scenario may provide an important clue to the early chemical enrichment of neutron-capture elements, like barium in EMP stars. We argue that the first massive stars are the first polluters with metals of lowmass companions, which may include the enrichment of neutron-capture elements. Observationally, the majority of observed EMP stars exhibit detectable barium absorption lines. This means that EMP stars will possess the evidence of neutron-capture processes such as the r-process and the s-process in any metallicity range. Recent observations and theories prefer neutron-star mergers as the site of the r-process (Abbott et al. 2017) in reproducing the early chemical enrichment of neutron-capture elements in dwarf galaxies in the Local Group (see, e.g., Ji et al. 2016;Roederer et al. 2016 for observations andHirai et al. 2015;Safarzadeh & Scannapieco 2017;Tarumi et al. 2020 for simulations). However, it is not sufficient to reproduce the even lower abundances of neutron-capture elements at the lowest metallicity range (Tsujimoto et al. 2017). Because the s-process is not expected in supernovae, we may need the production of r-process elements in supernovae as argued earlier (Wanajo 2013). Conclusions We have proposed a new scenario to find the survivors of the first stars in the local universe through the simulations of the collision between the ejecta of a first supernova and a low-mass star in a close binary system. The supernova ejecta do not have a significant effect on the stripping of the surface layers of low-mass stars at a distance of 0.1 au. The effect of the accretion of ejecta on to the companion star strongly depends on its binary separation. 
If the separation is small enough, namely 0.1 au, the companion star can be very metal-rich, although it depends on the chemical composition of the yield and how the supernova ejecta mix with the convective envelope of low-mass stars. We have argued that survivors of the first stars can be found among metal-rich stars in the Galactic halo. This is due to the small radii of the first massive stars, which enables them to contaminate low-mass companion stars with supernova ejecta in binary systems. The existence of these stars in former massive binaries is in line with some theoretical models of the formation of the first stars. We also investigated the literature dealing with binary stars for OB stars in the Galactic disc to confirm the existence of short-period binaries consisting of massive stars and low-mass stars. Although nearby OB stars are metal-rich and the formation mechanism of binary systems must be different, we find potential candidates of the metal-rich counterparts of the progenitor binaries that match with our proposed scenario. We also looked for metal-rich halo stars in the sample of known Galactic stars with kinematic information taken from the Gaia DR2. There are several stars with [Fe/H] > −1 that have space velocities of a typical halo population. The available data suggest that we may already have a number of the first stars in our Galaxy, the surfaces of which are contaminated by the ejecta of the first supernovae. The exploration of various possibilities of binary formation in the early universe, first supernovae and/or other sources of ejecta from massive stars, like a wind from a fast-rotating massive star, and the dynamical process of the collision between the ejecta and stars, is encouraged. The detailed chemical abundances and kinematic information of metal-rich halo stars will provide a better constraint on the proposed models. Funding This work is supported by a Grant-in-Aid for Scientific Research (KAKENHI) (JP16K05287, JP15HP7004, JP16H02166, JP16H06341, JP19HP8019, JP20HP8012) from the Japan Society for the Promotion of Science. Figure 13 shows the time evolution of the mass of lithium-containing layers, which represents the unbound mass by the collision of supernova ejecta, for four models with different mass resolutions in their ejecta. The three best high-resolution runs (H15W, H15Rc, and H15F) produce very similar results, and hence the H15F model is selected as a fiducial model. We attempted using the highest-resolution run (H15W), but it took too much time, and we stopped the computation at ∼4 hr of simulation time. The mass of lithium-containing layers for the H15W model agrees well with the other two high-resolution models, although the comparison can be made only for the early phase of the simulation. This confirms that anisotropic particle-mass distribution is a good prescription to simulate the collision of ejecta with a target object. In the lowest-resolution case, the unbound mass is larger compared to the other runs, which may come from the coarse sampling of the ejecta. Therefore, we adopted the run with N p,ej = 4800569 and m p,ej = 1 × 10 −7 M⊙ as the fiducial resolution in this study. Appendix 3. Dependence on the SPH scheme Previous studies point out that the conventional formulation of SPH is not suitable to handle the contact discontinuity and is not able to detect some fluid instabilities (Agertz et al.
2007; Ritchie & Thomas 2001; Okamoto et al. 2003). To overcome the difficulties in treating the shock properly, many efforts have been made to improve the prescription and/or to invent a new scheme of SPH (Price 2008; Read et al. 2010; Hosono et al. 2013; Hopkins 2013; Yamamoto et al. 2015). We compared the results using different SPH schemes, i.e., the conventional SPH and DISPH, under the same configuration. Figure 14 compares the conventional (standard) SPH (SSPH) and the DISPH with the same initial condition. The amount of unbound mass due to the stripping is less in the conventional SPH model than in the DISPH model. This can be interpreted as a consequence of introducing an unphysical surface tension at the contact surface between the star and the ejecta, which suppresses the stripping of the SPH particles.
14,784
2021-03-25T00:00:00.000
[ "Physics" ]
Population Structure and Selection Signatures of Domestication in Geese Simple Summary The goose is an economically important waterfowl and is one of the first domesticated poultry species. However, population structure and domestication in goose are understudied. In this study, we found that Chinese domestic geese, except Yili geese, originated from a common ancestor and exhibited strong geographical distribution patterns and trait differentiation patterns, while the origin of European domestic geese was more complex, with two modern breeds having Chinese admixture. In both Chinese and European domestic geese, selection signatures during domestication primarily involved the nervous system, immunity, and metabolism, and genes related to vision, skeleton, and blood-O2 transport were also found to be under selection. In particular, we identified that two SNPs in EXT1 may plausibly be sites responsible for the forehead knob of Chinese domestic geese, and that CSMD1 and LHCGR genes may associate with broodiness in Chinese domestic geese and European domestic geese, respectively. Our study provides new insights into the population structure and domestication of geese. Abstract The goose is an economically important poultry species and was one of the first to be domesticated. However, studies on population genetic structures and domestication in goose are very limited. Here, we performed whole genome resequencing of geese from two wild ancestral populations, five Chinese domestic breeds, and four European domestic breeds. We found that Chinese domestic geese except Yili geese originated from a common ancestor and exhibited strong geographical distribution patterns and trait differentiation patterns, while the origin of European domestic geese was more complex, with two modern breeds having Chinese admixture. In both Chinese and European domestic geese, the identified selection signatures during domestication primarily involved the nervous system, immunity, and metabolism. Interestingly, genes related to vision, skeleton, and blood-O2 transport were also found to be under selection, indicating genetic adaptation to the captive environment. A forehead knob characterized by thickened skin and protruding bone is a unique trait of Chinese domestic geese. Interestingly, our population differentiation analysis followed by an extended genotype analysis in an additional population suggested that two intronic SNPs in EXT1, an osteochondroma-related gene, may plausibly be sites responsible for knob. Moreover, CSMD1 and LHCGR genes were found to be significantly associated with broodiness in Chinese domestic geese and European domestic geese, respectively. Our results have important implications for understanding the population structure and domestication of geese, and the selection signatures and variants identified in this study might be useful in genetic breeding for forehead knob and reproduction traits. Introduction Animal domestication is a process accompanied by many phenotypic and genetic changes. Detecting the selection signatures underlying domestication is important for understanding the genetic basis of phenotypic changes and will ultimately have enormous practical implications in animal breeding. In recent years, comparative population genomics has identified a number of selective signatures in sheep, chickens, ducks, and other livestock [1][2][3][4][5]. 
The goose is an economically important waterfowl in the world and is an excellent model for the study of disease resistance and fatty liver because of its low susceptibility to avian viruses and high susceptibility to fatty liver [6]. It is one of the first domesticated poultry: Chinese domestic goose was domesticated over 7000 years ago [7], and European domestic goose was domesticated approximately 5000 years ago [8]. It seems to be an indisputable fact that there are two origins for domestic geese [9][10][11]: Chinese domestic geese (except Yili geese) originate from the swan goose (Anser cygnoides), and European domestic geese and Yili geese originate from the greylag goose (Anser anser). However, these results are not conclusive because there are still many goose breeds not included in these studies. In fact, the origin of domestic geese is not so straightforward. It has been known that many modern European domestic breeds have admixed background with Chinese domestic goose [12]. Despite the goose's important and long history of domestication, genome-wide selection signatures during its domestication are still unclear. Compared to their wild ancestors, domestic geese exhibit changes in morphology, behavior, and physiology. For instance, a protuberant knob on the forehead is a prominent characteristic of Chinese domestic geese whereas it is very small or almost absent in their ancestors; meanwhile, both the swan goose and greylag goose exhibit broodiness behavior, but after the long span of domestication, this behavior is absent in some domestic breeds. These changes make the goose a good model for identifying the genetic basis of these phenotypes. Here, we sequenced whole genomes of geese from two wild ancestral populations, swan goose and greylag goose, five Chinese domestic breeds, and four European domestic breeds to investigate population-level genetic structure and identify selection signatures during goose domestication. Moreover, we employed comparative population genomics to study the genetic basis underlying the forehead knob trait and broodiness behavior. Sampling and Sequencing A total of 63 geese representing two wild species, five Chinese domestic breeds, and four European domestic breeds were collected for whole genome resequencing. The five Chinese domestic breeds are typical indigenous breeds: Huoyane goose (HY; n = 5), Wulong goose (WL; n = 5), Taihu goose (TH; n = 5), Lion Head goose (ST; n = 5), and Yili goose (YL; n = 7). The four European domestic breeds represent the very famous breeds: Roman goose (RM; n = 5), Rhine goose (RI; n = 5), Sebastopol goose (SV; n = 5), and Landaise goose (LD; n = 5). These domestic breeds represent various geographic breed origins and phenotypical diversity (Table S1). Samples were also collected from two wild species, the swan goose (SW; n = 5) and greylag goose (GR; n = 8). Genomic DNA was extracted from blood or feather samples following the standard phenol-chloroform extraction protocol. For each individual, at least 5 mg genomic DNA was used to construct a paired-end library with an insert size of 400 bp according to the manufacturer's instructions (Illumina, San Diego, CA, USA) and was then sequenced on the Illumina HiSeq platform. Sequence Mapping and SNP Calling Filtered reads were mapped to the goose reference genome (GooseV1.0) using BWA-MEM (version 0.7.12-r1039) with default parameters [13]. 
Sequencing data in SAM files were sorted using SortSam and duplicated reads were removed using the Picard software package (version 1.107). To enhance alignment around indels, sequences were locally realigned using the IndelRealigner tool from the Genome Analysis Toolkit (GATK) (version 3.8) [14]. SNPs were called using the Unified Genotyper implemented in GATK and filtered using the hard filtering process recommended by GATK. Population Genetic Analysis PCA based on whole-genome SNPs for all individuals was performed using GCTA v.1.24.2 [15]. A maximum likelihood (ML) phylogenetic tree was built for all samples using RAxML (version 8.2.10) [16]. Population structure analysis was performed using ADMIXTURE (version 1.23) with default settings [17], and the number of assumed genetic clusters ranged from 2 to 10 (K = 2 to 10). Identification of Divergent Regions To identify divergent regions between populations, we searched the genome for regions with high F ST and θπ ratio in 40-kb sliding windows with a 10-kb step size using VCFtools [18]. The average F ST and θπ ratio were calculated for the SNPs in each window. Genomic regions with the top 5% F ST and θπ ratio values were considered to be divergent regions. Functional classification according to GO categories and KEGG pathways was performed using the Database for Annotation, Visualization, and Integrated Discovery (DAVID, v6.8) [19]. Genotype Validation of Candidate Variations Genotypes of candidate variations were validated in another 62 individuals representing three Chinese indigenous breeds, Zhedong goose, Panshi grey goose, and Yongkang grey goose, and the swan goose. Target variations were amplified using PCR as follows: 5 min at 95 • C; 35 cycles of 95 • C for 30 s, 55 • C for 30 s and 72 • C for 40 s; and a final extension at 72 • C for 5 min. Primers used in the PCR are listed in Table S2. The anticipated PCR bands were purified using a Gel Extraction Kit (Qiagen, Hilden, Germany), and sequenced in 3730XL (Applied Biosystems, Foster City, CA, USA). Finally, results were analyzed using Sequence Scanner software (Applied Biosystems, Foster City, CA, USA). Genetic Variation from Genome Resequencing We performed whole−genome resequencing of 63 geese from two wild populations (swan goose and greylag goose), five Chinese domestic breeds, and four European domestic breeds (Figure 1a), with an average coverage depth of~9.74× for each individual (Table S3). Aligning the reads against the goose reference genome identified a total of 2,505,100 SNPs, with an average of 2.2 SNPs per kilobase. Functional annotation of the SNPs in protein coding regions identified 68,279 (2.73%) nonsynonymous SNPs and 149,646 (5.97%) synonymous SNPs. Independent Origins of Chinese and European Domestic Geese To explore the genetic relationships among the 63 individuals, we performed phylogenetic analysis using the maximum likelihood (ML) approach. The phylogenetic tree clearly separated into two clusters: one cluster comprising swan geese and Chinese domestic geese except Yili geese, and the other cluster comprising greylag geese, Yili geese, and European domestic geese (Figure 1b), confirming that European domestic geese and Chinese domestic geese (except Yili geese) were independently domesticated. The non-Yili Chinese domestic geese were further split into two sub-clusters that exhibited strong geographical distribution patterns and trait differentiation. 
Meanwhile, European domestic geese exhibited more complicated genetic relationships: Landaise geese, Roman geese, and Chinese Yili geese clustered together, separate from greylag geese. Additionally, there were two independent clades: one corresponding to Rhine geese, and other to Sebastopol geese. This phylogenetic pattern was also supported by principal component analysis (PCA) (Figure 1c). To explore population structure among the 63 individuals, we also conducted a structure analysis by using ADMIXTURE [10]. Partitioning these individuals into two groups gave the K value closest to true (K = 2) (Figure 1d, Figure S1), and clearly separated the samples into: (i) swan geese and non-Yili Chinese domestic geese, which were termed the Chinese group, and (ii) greylag geese, Yili geese, and European domestic geese, which were termed the European group. This is consistent with the results from phylogenetic analysis and PCA. Independent Selection Signatures in Chinese and European Domestic Geese In order to detect selection signatures associated with goose domestication, we searched the goose genome for regions with extreme coefficients of nucleotide differentiation (F ST ) and high differences in genetic diversity (θπ ratio) between populations of wild and domestic geese. As Chinese domestic geese and European domestic geese are derived from different origins, we analyzed selection signatures in each group separately. In the Chinese group, a total of 829 regions covering 397 genes were identified as having top 5% F ST and θπ (θπ(wild/domesticated)) values and were considered potential selective regions (Table S4, Figure S2). The genomic region NW_013185722.1: 52,001-56,001 stood out as the strongest candidate due to having the highest level of population differentiation (Figure 2a). This region contained 44 SNPs, most of which showed different genotypes between swan geese and domestic breeds ( Figure 2b). That is, most of these SNPs showed homozygous mutant genotype in swan geese but were fixed for homozygous reference genotype in all domestic breeds, suggesting this region to have been under hard selection during domestication. The region includes two genes, KIAA2022 and RLIM. KIAA2022 is reportedly associated with the nervous system [20], and RLIM is part of the "Innate Immune System" KEGG pathway. We selected the top 100 genes with high population differentiation, first selecting the top 50 genes by F ST values, and then selecting the top 50 genes by θπ ratio without overlapping with genes selecting using F ST method. Annotation of the top 100 genes revealed over-representation of functions associated with metabolism, immunity, and the nervous system (Table 1). It is worth noting that we also observed enrichment of genes functionally related to vision, the skeleton, and the hematological system. Functional enrichment analysis of all the 397 genes using Gene Ontology and KEGG identified overrepresentation of GO terms related to the nervous system and behavior, along with one KEGG pathway associated with reproduction (Table S5). In selection analysis of the European group, Rhine geese and Sebastopol geese were excluded due to those breeds comprising independent clades. In total, 736 putative selective regions covering 494 genes were identified as having top 5% values for both F ST and θπ ratio (Table S6, Figure S2). 
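The window ranking applied to both groups can be summarized by the following sketch; it assumes that per-window F ST and nucleotide diversity values have already been parsed into a table, and the column names are placeholders rather than the actual VCFtools output headers.

```python
import pandas as pd

def candidate_regions(windows, top_frac=0.05):
    """Return 40-kb windows in the top 5% of both Fst and the pi ratio.

    'windows' needs columns 'fst', 'pi_wild' and 'pi_dom' (mean nucleotide
    diversity of the wild and domestic populations per window); the names
    are placeholders for whatever the VCFtools output was parsed into.
    """
    w = windows.copy()
    w["pi_ratio"] = w["pi_wild"] / w["pi_dom"]
    fst_cut = w["fst"].quantile(1.0 - top_frac)
    pi_cut = w["pi_ratio"].quantile(1.0 - top_frac)
    return w[(w["fst"] >= fst_cut) & (w["pi_ratio"] >= pi_cut)]

def top_100_genes(genes):
    """Top 50 genes by Fst, then top 50 by pi ratio excluding the first set.

    'genes' needs columns 'gene', 'fst' and 'pi_ratio' (per-gene summaries).
    """
    by_fst = genes.sort_values("fst", ascending=False).head(50)
    rest = genes[~genes["gene"].isin(by_fst["gene"])]
    by_pi = rest.sort_values("pi_ratio", ascending=False).head(50)
    return pd.concat([by_fst, by_pi])
```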
The strongest candidate region (NW_013185683.1: 4,620,001-4,660,001) was found within the gene Teneurin transmembrane Protein 2 (TENM2) (Figure 2c), which has been reported to control brain development and neuronal wiring [21]. This region contained 17 SNPs, of which 13 presented genetic diversity in greylag geese but were fixed in the domestic breeds (Figure 2d). Inspection of the top 100 selected genes with high population differentiation, selecting using the method described above, revealed similar enriched categories of gene function as in the Chinese group. That is, metabolism, immunity, and the nervous system were the primary functional categories, and genes associated with bone development, vision, and hematopoiesis were also over-represented (Table 2). Functional enrichment analysis of these 494 genes revealed significant enrichment for GO terms involved in the nervous system, hemostasis, and muscle development (Table S6). Meanwhile, pathway analysis identified over-representation of three pathways, neuroactive ligand-receptor interaction, starch and sucrose metabolism, and calcium signaling (Table S7). Comparative analysis of candidate genes between Chinese and European groups identified only 22 genes shared by the two groups. These genes had functions associated with immunity, metabolism, nervous development, growth, and reproduction (Table S8). Selection Signatures Controlling Protuberant Knob Compared to their wild ancestors, Chinese domestic goose other than Yili goose has a protuberant knob on the forehead (Figure 3a). To identify candidate genes responsible for this trait, we inspected 397 genes selected in Chinese domestic breeds, of which two candidate genes caught our attention. The first was calcium voltage-gated channel subunit alpha1 I (CACNA1I), which was in the top 0.5% for both F ST and θπ ratio values (Figure 3b). CACNA1I is an important paralog of CACNA1H, which was previously reported to relate to protuberant knob in geese [22]. In the genomic region of CACNA1I, we identified four SNPs, three intronic and one exonic, that exhibit genotype differentiation between Chinese domestic breeds and their wild counterpart, the swan goose (Figure 3c). However, all four SNPs were excluded as candidate sites because their genotypes did not segregate with the phenotype when examined in another 62 individuals representing swan geese and three indigenous Chinese breeds (Table S9), suggesting that CACNA1I is not in fact associated with the protuberant knob trait. The other candidate gene was an osteochondroma-related gene, Exostosin glycosyltransferase 1 (EXT1), which also showed a relatively high level of population differentiation (Figure 3d). Allele frequency analysis of all SNPs in the selected region where EXT1 was located revealed that there were 15 SNPs with significant differences in allele frequency between Chinese domestic breeds and swan geese (Table S10). Genotype screening of these 15 SNPs identified four intronic SNPs in EXT1 to present genotype differentiation between populations (Figure 3c). Linkage analysis of the four SNPs revealed that two SNPs (NW_013185721.1: 4,792,818 and 4,793,508) were in complete linkage disequilibrium (LD, r2 = 1.0; Figure 3e). Genotype analysis of the four SNPs in another 62 individuals revealed the two linked SNPs to have perfect genotype segregation with protuberant knob ( Table 3, Table S9), suggesting that these two SNPs may be associated with the trait. 
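The pairwise linkage statistic reported above can be illustrated with the following sketch, which uses the genotype-correlation (composite) approximation of r2 on 0/1/2-coded genotypes; the example genotypes are made up for illustration and are not the goose data.

```python
import numpy as np

def genotype_r2(g1, g2):
    """Squared Pearson correlation between two SNPs coded 0/1/2.

    This composite-LD approximation is commonly used when phased haplotypes
    are unavailable; r2 = 1.0 means the two sites segregate together
    perfectly, as reported for the two linked EXT1 SNPs.
    """
    g1, g2 = np.asarray(g1, float), np.asarray(g2, float)
    r = np.corrcoef(g1, g2)[0, 1]
    return r ** 2

# Toy genotypes for illustration only (not the actual goose data):
snp_a = [0, 0, 1, 1, 2, 2, 2, 0]
snp_b = [0, 0, 1, 1, 2, 2, 2, 0]
print(genotype_r2(snp_a, snp_b))   # 1.0 -> complete LD under this measure
```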
Finally, from among the 397 candidate genes, another four selected genes, DIO3, PDGFD, TSHR, and FRZB, were previously identified to be associated with knob [22,23]. These mutations and genes provide candidates for genetic discovery of the protuberant knob trait in geese. Genetic Signatures Related to Broodiness Behavior To identify genetic signatures associated with broodiness behavior, we separately searched Chinese and European goose genomes for regions with high F ST and θπ ratio between populations exhibiting broodiness and non-broodiness. In the Chinese group, this analysis highlighted 695 regions covering 438 genes (Table S11). The highest level of population differentiation was observed for the region NW_013185662.1: 9,880,001-9,920,001, which contained 22 SNPs and was sited within the gene CUB and Sushi Multiple Domains 1 (CSMD1) (Figure 4a), previously implicated in chicken egg production [24]. Allele frequency analysis of all the 22 SNPs revealed that there were 12 SNPs with significant differences in allele frequency between populations (Table S12). Genotype analysis of the 12 SNPs indicated an A to C intronic mutation (NW_013185662.1: 9,881,517) which displayed perfect genotype segregation with the phenotype (Figure 4b). Additionally, the gene follicle-stimulating hormone receptor (FSHR), which was previously reported to be associated with broodiness behavior, also showed differentiation between broody and non-broody populations (Table S11). In the European group, signature analysis identified 461 regions covering 326 genes that showed a high level of population differentiation (Table S13). The region with the highest degree of differentiation overlapped with the luteinizing hormone/choriogonadotropin receptor (LHCGR) gene (Figure 4c), which is an important paralog of FSHR; coincidentally, FSHR was also identified as a candidate gene for this phenotype (Table S13). Allele frequency analysis of all SNPs in this region revealed that there were six SNPs with significant differences in allele frequency between populations (Table S14). Genotype analysis of the six SNPs identified that they presented genetic diversity in broody populations but were almost fixed in non-broody populations (Figure 4d). Linkage analysis of the six SNPs revealed that three SNPs (NW_013185792.1: 1,032,601, 1,032,941 and 1,032,971) were in complete linkage disequilibrium (LD, r2 = 1.0; Figure 4e), which suggest that they may candidate variations for broodiness behavior but this still needs further validation. Discussion The goose was one of the first domesticated poultry species, and is still economically important. In this study, we sequenced whole genomes of 63 geese from two wild populations, five Chinese domestic breeds, and four European domestic breeds, explored the population structure and domestication of Chinese and European domestic geese, and further explored genes associated with broodiness behavior and a protuberant forehead knob. Our study provides important implications for understanding the population structure and domestication of geese. Our population genetic analysis and selection analysis show that Chinese and European domestic geese are two separate groups, providing genomic evidence that Chinese domestic geese and European domestic geese were derived from different origins. 
Chinese domestic geese other than Yili geese are genetically closely related to swan geese, while Yili geese and two European domestic geese (Landaise geese and Roman geese) are genetically close to greylag geese; these findings suggest that Chinese domestic geese (except Yili) may originate from swan geese while Yili geese, Landaise geese, and Roman geese may originate from greylag geese. This is supported by previous population analyses [9][10][11]. In the European group, we note that the Rhine goose had almost half admixed background with Chinese domestic geese, and it constituted a separate population but was genetically closely related to Chinese domestic breeds. The Rhine goose is an improved breed developed by the French Creamer company. We speculate that the Chinese domestic goose may have been introduced in the earliest formation of the Rhine goose, but it is also possible that the Rhine goose was crossed with Chinese domestic geese after being introduced to China. After all, the samples we collected came from populations introduced to China, not from the breed's country of origin. We noted that Sebastopol geese also constituted a separate population and had admixed background with Chinese domestic geese, confirming that this breed is a hybrid between Chinese and European domestic geese, consistent with a previous study [12]. By contrasting domestic with wild samples, we identified 397 and 494 candidate genes that are under selection in Chinese and European groups, respectively. Functional annotation of the top 100 candidate genes revealed the nervous system as the most overrepresented functional category associated with domestication in both groups. In particular, strong selection signatures located in or within KIAA2022 and TENM2 implied intense selection relating to the nervous system. Selection signatures for the nervous system have been observed in many species, such as sheep, dingoes, and ducks [2][3][4], indicating that the nervous system is the first to be affected during domestication, leading domesticated animals to exhibit prosocial behaviors. In addition, immunity and metabolism were also identified as primary functional categories subjected to selection. Selection signatures for metabolism and immunity have been observed in other animals such as sheep, dogs, and ducks [1,3,25]. This may relate to adaptation to a new environment in the forms of diet and immune system changes. It is worth noting that a few selected genes were found to correlate to vision, the skeleton, and blood-O2 transport. Bird flight demands a high rate of O2 consumption; as an extreme example, the O2 consumption of bar-headed geese steadily flying in a wind tunnel at sea level ranges from 10-to 15-fold above resting levels [26]. Evolution of genes involved in blood-O2 transport in support of environmental adaption has been well documented in animals living in hypoxic high-altitude areas [27,28]. After being domesticated, geese live in a captive environment, leading to the most prominent of their phenotypic changes, namely, loss of flight ability. That genes involved in blood-O2 transport, such as BACH1, ABCB7, and HBS1L, are under selection in domestic geese suggests a genetic adaptation to the captive environment, which may be an adaptation to the loss of flight ability. 
Similarly, the process of domestication results in significant morphological changes to the skeleton, with key examples being a decline in skeletal robusticity, reduction in cranial size, shortening of limbs, reduction in molar size, and changes in body size [29]. In geese, domestication has caused larger body size, stronger leg bones, and shorter and broader wing bones [30]. In this study, genes related to the skeleton such as PAPPA2, TRAPPC3L, WWOX, and NT5DC1 were found to have undergone selection in domestic geese. This may be correlated with adaptation to the captive environment or directional selection by humans for body size. Compared to their wild ancestors, domestic animals exhibit many phenotypic changes, but a particularly interesting one is their comparatively weaker vision. Markedly weaker visual acuity relative to wild ancestors has been reported in dogs, horses, and chickens [31][32][33]. Like other domestic animals, domestic geese also harbor reduced visual acuity as compared to the swan goose or greylag goose. In this study, vision-related genes including APAF1, GRIK1, and ALDH8A1 were found to be under selection in domestic geese, which might have contributed to their reduced visual acuity. A knob on the forehead is a prominent trait of Chinese domestic geese, whereas it is very small or almost absent in swan geese and absent in greylag geese and European domestic geese [34]. It is characterized by thickened skin and protruding bone, and its size mainly depends on the breed, age, and sex of the goose. Morphology of the cranial appendage is tightly correlated with the physiology and reproduction of animals [35,36]. For example, Shelducks with large knobs have more advantages in competing for mates and territorial protection [37]. In chicken, rose-comb was found to be associated with reduced male fertility [35]. In production, a goose with a large knob seems to exhibit a higher social rank, better health status, and higher breast muscle weight [34]. Therefore, a welldeveloped knob is preferred by customers and has become one of the main breeding targets for geese in China. However, in stark contrast with the popularity of the knob phenotype, little is known about its genetic basis. In this work, we identified through population differentiation analysis that an osteochondroma-related gene, EXT1, exhibits a relatively high level of population differentiation between Chinese domestic geese and swan geese. Further genotype analysis in an expanded population revealed two EXT1 SNPs (at positions 4,792,818 and 4,793,508 in scaffold NW_013185721.1) to show genotype segregation with the knob trait. In humans, EXT1 has been linked to tricho-rhino-phalangeal syndrome (TRPS) and multiple osteochondromas (MO) [38,39]; TRPS is characterized by skeletal and craniofacial abnormalities, and MO is characterized by multiple cartilage-capped bony outgrowths of the long bones, resulting in a variety of complications such as skeletal deformity. Skeletal abnormality is also observed in the goose knob, which is obviously protruding. Therefore, the two SNPs may plausibly be sites responsible for the trait, and may be useful in genetic breeding for this trait. Broodiness behavior seriously affects egg production. To identify genes associated with this economically important trait, we performed comparative population genomics in the Chinese group and European group separately. In the Chinese group, we found CSMD1 to show the highest level of differentiation between broody and non-broody populations. 
This gene has been proposed to relate to reproduction; Csmd1 knockout in mice reduced fertility through altered regulation of spermatozoa production [40]. In the chicken, CSMD1 is considered potentially related to egg production [41]. Interestingly, we found an intronic mutation (NW_013185662.1: 9,881,517, A to C) in CSMD1 that displayed perfect genotype segregation with broodiness behavior. However, further validation is still needed to confirm the association between this SNP and the phenotype. Meanwhile, in the European group, an 8.2-kb region in LHCGR exhibited the highest differentiation between broody and non-broody populations. LHCGR is an important paralog of FSHR, a G-protein coupled receptor for follicle-stimulating hormone that plays a major role in reproduction; loss of its function results in pronounced disturbance of spermatogenesis and folliculogenesis [42,43]. It is very interesting that FSHR also showed differentiation between broody and non-broody populations in both the Chinese group and European group, suggesting that this gene may correlate with broodiness behavior. FSHR has been previously reported to correlate with broodiness in the chicken. Conclusions In this study, whole genome resequencing of geese from two wild populations, five Chinese domestic breeds, and four European domestic breeds was performed. It is the first selection analysis of goose domestication at the genome-wide level. Chinese domestic geese originate from a common ancestor, while the origin of European domestic geese was more complex, with two modern breeds having Chinese admixture. We also discovered many selection signatures of domestication, which primarily involved the nervous system, immunity, and metabolism. In particular, two intronic SNPs in EXT1 were found to be possibly associated with knob, and CSMD1 and LHCGR genes may associate with broodiness in Chinese domestic geese and European domestic geese, respectively. Collectively, these findings provide new insights into the population structure and domestication of geese, and the selection signatures and variants identified in this study might be useful in genetic breeding for forehead knob and reproduction traits.
6,127.8
2023-03-31T00:00:00.000
[ "Biology", "Agricultural and Food Sciences" ]
Monitoring Tools and Strategies for Effective Electrokinetic Nanoparticle Treatment Nanoparticles are increasingly being used by industry to enhance the outcomes of various chemical processes. In many cases, these processes involve over-dosages that compensate for particle losses. At best, these unique waste streams end up in landfills. This circumstance is inefficient and coupled with uncertain impacts on the environment. Pozzolanic nanoparticle treatments have been found to provide remarkable benefits for strength restoration and the mitigation of durability problems in ordinary Portland cement and concrete. These treatments have been accompanied by significant particle losses stemming from over-dosages and instability of the colloidal suspensions that are used to deliver these materials into the pore structure. In this study, new methods involving simple tools have been developed to monitor and sustain suspension stability. Turbidity measurement was introduced to monitor the progress of electrokinetic nanoparticle treatment. This tool made it possible to amend a given dosing strategy in real time, while it was still possible to make effective treatment adjustments. By monitoring the particle stability and using pH and electric field controls to avoid suspension collapse, successful electrokinetic treatment dosage strategies were demonstrated using 20 nm NALCO 1056 alumina-coated silica particles. These trials indicated that turbidity measurements could track the visually imperceptible phenomenon of particle flocking early on, at the inception of its development. Suspensions of these nanoparticles were successfully delivered into 5 cm diameter by 10 cm tall hardened cement paste (HCP) specimens by monitoring fluid turbidity along with the specific gravity and using these values to guide the active management of the treatment dosage and pH. Under this new strategy, these losses were reduced to 5% as compared to the 80% losses associated with other treatment approaches. The relationship between the turbidity and the specific gravity was found to be linear. These plots indicated regions of turbidity and specific gravity that were associated with particle flocking. The tools, guidelines, and strategies developed in this work made it possible to manage efficient (low-particle-loss) electrokinetic nanoparticle treatments by signaling in real time when adjustments to electric field, pH, and particle dosage increments were needed.
Background 1. Theory of Electrokinetic Nanoparticle Treatment Electrokinetic treatment can be utilized to transport charged species into or out of porous materials [1]. A given treatment process may exhibit several phenomena including electro-osmosis, electrophoresis, and ion migration, among others [2]. Many of these electrokinetic processes are observed when an electrical potential gradient is applied throughout a porous material that is saturated with a conductive liquid. This applied voltage tends to cause a current to flow through the fluidic pathways of the circuit. The overall current in the circuit consists of the drift of charged species (ions and colloidal particles) traveling between the electrodes that are provided for treatment. This drift current passes through the pores of the material and into the electrochemical reactions that may occur at each electrode. To achieve a successful and efficient treatment application, emphasis needs to focus on maintaining particle stability. In general, nanoparticle stability is governed by several particle and suspension fluid properties. However, electrokinetic treatment processes can change some of these parameters and thus tend to destabilize the system. In general, the stability of a given EN (electrokinetic nanoparticle) treatment is relatively sensitive to process parameters. Providing appropriate treatment settings for these parameters (electric field, pH, temperature, and others) would tend to enhance the effectiveness of particle transport, since the transport process is mainly governed by electrophoresis. Particle transport can also be influenced by electro-osmosis, diffusion, and pressure flows [1,3]. Regarding electrophoresis, several factors including the ionic strength of the fluid, the surface potential of the particle, and the dielectric constant of the fluid play important roles regarding the transport of suspended particles in an EN treatment. In this fluid suspension, the particles are surrounded by a cloud of ions attracted to an array of charges that are present on the nanoparticle surfaces. These conditions cause particles to exhibit a net charge. As the nanoparticles wander about due to Brownian motion, they tend to repel each other due to this net charge. This repulsion phenomenon is a critical stabilization mechanism that derives from the electrostatic interaction of these respective ion clouds. In theory, these ion clouds exhibit what is referred to as a double-layer structure that in turn exhibits a zeta potential, as shown in Figure 1 [4,5]. This zeta potential can be taken as a measure of how forcefully and effectively these particles repel each other. According to Derjaguin-Landau-Verwey-Overbeek (DLVO) theory, the stability of particles was found to be dictated by a balanced force system that includes attractive Van der Waals forces and repulsive electrostatic forces associated with surface charges [6,7]. The electrostatic repulsion from the surface charges on the particles helps them remain stable and separated in the suspension [8]. If two particles come relatively close together, attractive Van der Waals forces can overcome the repulsive electrostatic forces, thus causing the particles to stick together [5,9]. If this sticking behavior becomes widespread in a given system, it can lead to the unstable collapse of the particle suspension. This phenomenon can be a significant cause of charge carrier loss and the associated rise in circuit resistance during a given treatment. This can correlate to a lower effective dose of particles reaching the target area of a given treatment.
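To make the DLVO balance concrete, the sketch below evaluates a textbook sphere-sphere approximation (a Hamaker van der Waals term plus a constant-potential double-layer term); the Hamaker constant, zeta potential, and ionic strength used here are illustrative assumptions rather than measured properties of the particles used in this study.

```python
import numpy as np

# Physical constants (SI)
EPS0, KB, E, NA = 8.854e-12, 1.381e-23, 1.602e-19, 6.022e23

def dlvo_energy_kT(h_nm, radius_nm=12.0, zeta_mV=30.0, ionic_M=1e-3,
                   hamaker_J=1e-20, eps_r=78.5, T=293.0):
    """Total sphere-sphere DLVO interaction energy in units of kT.

    Uses the textbook Hamaker expression V_vdW = -A*a/(12*h) and the
    constant-potential double-layer term
    V_EDL = 2*pi*eps*a*zeta^2*ln(1 + exp(-kappa*h)).
    All parameter defaults are illustrative assumptions, not measured
    values for the colloid used in this work.
    """
    h = np.asarray(h_nm, float) * 1e-9
    a = radius_nm * 1e-9
    zeta = zeta_mV * 1e-3
    eps = EPS0 * eps_r
    kappa = np.sqrt(2.0 * 1000.0 * NA * E**2 * ionic_M / (eps * KB * T))
    v_vdw = -hamaker_J * a / (12.0 * h)
    v_edl = 2.0 * np.pi * eps * a * zeta**2 * np.log1p(np.exp(-kappa * h))
    return (v_vdw + v_edl) / (KB * T)

# Height of the repulsive barrier between 0.5 and 20 nm surface separation.
h = np.linspace(0.5, 20.0, 40)
print(float(dlvo_energy_kT(h).max()))   # a few kT for the assumed parameters
```

A lower zeta potential (for example after a pH shift) shrinks this barrier, which is the mechanism behind the suspension collapse described above.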
For a given EN treatment, the applied electric field (E-field) will generate an electrical potential gradient located within and adjacent to the treatment subject, which drives the charged particles into the target material [10,11]. When the electrode functions as the anode in a specified treatment, the prevailing electrolysis reaction is represented as follows: 2H2O → O2 + 4H+ + 4e−. This reaction generates H+ ions (hydrogen ions), contributing to a reduction in pH in the proximity of the anode. Conversely, the typical reaction transpiring at the cathode can be expressed as shown below: 2H2O + 2e− → H2 + 2OH−. As a result, hydrogen gas (H2) and hydroxide ions (OH−) are produced at the cathode, leading to heightened concentrations of these species in the vicinity of the cathode.
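The pH drift implied by these electrode reactions can be estimated from Faraday's law; the sketch below uses assumed example values for the current, duration, and bath volume rather than measured treatment currents.

```python
F = 96485.0   # Faraday constant [C/mol]

def hydroxide_mol(current_A, hours):
    """Moles of OH- generated at the cathode: n = I*t/F (one electron per OH-)."""
    return current_A * hours * 3600.0 / F

# Assumed example values (not measured treatment currents): 10 mA for 24 h
# into a 1-litre treatment bath initially adjusted to pH 3.5.
n_oh = hydroxide_mol(0.010, 24.0)        # ~9e-3 mol of OH- per day
initial_h = 10.0 ** (-3.5) * 1.0         # ~3.2e-4 mol of free H+ in 1 L at pH 3.5
print(n_oh, initial_h)
# The daily OH- output exceeds the free acid by roughly 30x, which is why
# periodic HCl additions are needed to hold the pH below the collapse threshold.
```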
The benefits achieved from a given electrokinetic treatment depend upon how successfully the driven nanoparticles can enter into the porous material [12]. Ben-Moshe and other researchers observed that a notable parameter that affects the electrokinetic mobility of a nanoparticle is the ionic strength of the suspension [13,14]. The composition of the ions in the suspension fluid also influences this mobility. Together, the composition and ionic strength strongly influence the zeta potential. A relatively large zeta potential can cause a particle to exhibit high mobility. Both the mobility and stability of a particle suspension can easily be manipulated by a wide range of external factors. These factors include the driving electrical field strength used during a treatment, the pH, the particle concentration, the chemical composition of the suspension, and the temperature [15,16]. Particle Destabilization Mechanisms Several studies have investigated various particle destabilization mechanisms [15,17]. Zhou and others examined suspended particle gelling behavior, which is also referred to as coagulation. The collapse of a particle suspension (noted earlier) often results in the formation of a coagulated gel. In these studies, the pH and ionic strength of the suspensions greatly influenced the nanoparticle gelling rates [18,19]. In other work, the authors observed that a high electric field strength can drive suspended particles into close mutual proximity as they approach a path bottleneck, such as pore openings on an ordinary Portland cement surface. When forced close together, they are now under a high risk of colliding and sticking together. By extension, should a large system of particles fall subject to this collision risk as they approach a porous surface, they may form an electro-coagulated gel [15]. Prior to suffering coagulation, the particles may become concentrated at a porous surface to which they are being driven (as they wait their turn to enter a pore) but remain stable for a short period of time. During this waiting period, pH changes may diminish the zeta potentials of these particles, causing them to approach each other, collide, and gel. Particles lost to a gel are considered lost since they can no longer be transported electrokinetically. Another study examined the behavior of polymeric particles that were configured with smaller particles adsorbed onto their surfaces, resulting in flocculation [20,21]. These flocs remained suspended. This demonstrated that particle losses caused by flocculation may not lead directly to coagulation; however, flocked particles tend to be too large to penetrate the surface pores of the cement during a given treatment. In work conducted by Hotze and Phenrat, it was observed that the larger surfaces associated with flocked nanoparticles led to a higher tendency for collisions [22].
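Putting an order-of-magnitude number on the mobility discussion above, the Helmholtz-Smoluchowski relation gives the drift velocity expected under the treatment field; the zeta potential in the sketch is an assumed value, and for 20 nm particles in dilute electrolyte the Huckel limit (2/3 of the Smoluchowski value) may be more appropriate.

```python
EPS0 = 8.854e-12   # vacuum permittivity [F/m]

def electrophoretic_velocity(zeta_mV, e_field_V_per_cm, eps_r=78.5, eta=1.0e-3):
    """Smoluchowski estimate: v = (eps*zeta/eta) * E, returned in micrometres/s.

    The Smoluchowski form assumes a thin double layer; for 20-nm particles in
    very dilute electrolyte the Huckel limit (2/3 of this value) may be closer.
    """
    mobility = EPS0 * eps_r * (zeta_mV * 1e-3) / eta       # m^2 V^-1 s^-1
    v = mobility * e_field_V_per_cm * 100.0                # E converted to V/m
    return v * 1e6                                          # micrometres per second

# Assumed zeta potential of +30 mV under the 0.4 V/cm treatment field:
v_um_s = electrophoretic_velocity(30.0, 0.4)
print(v_um_s, v_um_s * 86400 / 1e4)   # ~0.8 um/s, i.e. roughly 7 cm per day
```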
Turbidity of a Suspension Turbidity analysis entails the investigation of optical phenomena leading to the scattering and absorption of light in water, deviating it from a rectilinear transmission path. Turbidity manifests as opaqueness or diminished clarity in water [23]. The direction of transmitted light is altered upon interaction with particles within the water column. The quantification of suspended particle concentration, such as silt, clay, algae, organic matter, and microorganisms, within water is facilitated by the detection of light that is scattered by these entities [23]. To detect changes in suspension characteristics, other work in hydrology and geomorphology employed turbidity measurements to quantify suspended sediment [24,25]. Numerous studies have shown that the clarity of a suspension expressed in terms of suspended sediment concentrations can be predicted by using turbidimeter measurements [26][27][28]. The International Organization for Standardization (ISO) gave the most recent definition of turbidity as 'the reduction of transparency of a liquid caused by the presence of undissolved matter' in 2019 [29]. Turbidity results can be impacted by the particle size, shape, and composition, in addition to water color [30][31][32]. The forward light-scattering meter has been widely used for turbidity measurement [33,34]. The principle of operation of this type of meter involves measuring the ratio of LED light that is scattered over a range of angles with respect to the forward transmitted light. These values are calibrated against the same ratios for a standard suspension of Formazine [31,32]. Methodology and Experiment Setup The work contained in this section focused on examining the outcomes of several dosing strategies. These strategies were applied to electrokinetic nanoparticle treatments on cylindrical specimens; hardened cement paste (HCP) specimens were used in this study. Batching and Curing The binder used in this study consisted of low-alkali, ordinary Type I/II Portland cement (Ash Grove Cement Company, Little Rock, AR, USA) and deionized water in a 0.48 water-to-cement ratio. Portland cement is commonly used in constructing modern structures worldwide [35]. The compositions of the cement powder are shown in Table 1. The dimensions of the cylindrical HCP specimens used in this study are illustrated in Figure 2. Several electrokinetic test treatments were set up and conducted with cylindrical specimens as shown in this figure as well. Mixed-metal-oxide-coated titanium wire was embedded in each of the specimens. This wire is 1/16 inch (1.5875 mm) in diameter. It is manufactured by Corrpro (AEGION Corp, St. Louis, MO, USA) for cathodic protection applications. To provide an electric field that was relatively uniform throughout each part of the specimen, the length of the embedded wire was limited to 2 inches (50.8 mm). This constraint provided equivalent distances between the wire and both the bottom and the side surfaces of each specimen. The batching process complied with ASTM C 305 to make a relatively uniform performance batch [36]. A low-speed mixer (Kitchen Aid, Classic Model, Whirlpool Corporation, Greenville, Ohio) was used for batching. The specimens were demolded after 24 h and moist cured in lime water (2.5 g/L Ca(OH)2 solution) for 2 weeks.
The nanopozzolan particle used in this study was the NALCO 1056 colloidal suspension (NALCO Water, Bedford Park, IL, USA). It is a positively charged, 24 nm, aluminum-coated silica particle sol. NALCO 1056 has been studied by Cardenas and others, and researchers observed positive results in enhancing strength and providing chemical resistance. The silica and alumina content of this particle system exhibits pozzolanic reactivity that yields binder phases that are chemically similar to the binder material that is native to Portland cement [1][2][3][4][15,16]. The dosage was designed in terms of the volume percentage of particles available in a given treatment bath. This was conveniently managed in terms of the weight percentage (wt%) of particles available in the treatment fluid by simply monitoring the specific gravity of this fluid. Dosage values started at 0.04 wt% particle content (0.03% volume concentration). The applied electric field strength was kept under the electro-coagulation threshold value of the NALCO 1056 nanoparticles, 0.4 V/cm [15]. pH and Turbidity Measurement Each treatment trial was run for up to 4 days. The daily dosage particle increments varied for different trials. The pH of the suspension was monitored and adjusted by the addition of hydrochloric acid (HCl). pH monitoring was conducted at four locations, A-D, as shown in Figure 3. All treatments were performed at standard laboratory temperature (20 °C). Visual observation of treatment fluid appearance was recorded daily. Three other suspension stability parameters were monitored as well. The turbidity was measured via a Hach 2100p turbidimeter (Hach Co., Ltd, Loveland, CO, USA). The pH of the suspension was monitored using an OAKTON pH 11 series meter (Cole-Parmer LLC, Vernon Hills, IL, USA).
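The specific-gravity bookkeeping described above can be expressed with a simple ideal-mixing relation; the particle density in the sketch below is a nominal value for silica-based sols rather than a measured property of the NALCO 1056 product.

```python
def particle_wt_percent(sg_fluid, rho_particle=2.2, rho_water=0.998):
    """Convert suspension specific gravity to particle weight percent.

    Assumes ideal (volume-additive) mixing of water and particles:
        1/rho_f = w/rho_p + (1 - w)/rho_w
        ->  w = rho_p*(rho_f - rho_w) / (rho_f*(rho_p - rho_w))
    The particle density of 2.2 g/cm^3 is a nominal value for silica-based
    sols, not a measured property of the NALCO 1056 product.
    """
    w = rho_particle * (sg_fluid - rho_water) / (sg_fluid * (rho_particle - rho_water))
    return 100.0 * w

# Example reading: a specific gravity of 1.005 maps to roughly 1.3 wt%
# of suspended solids under these assumptions.
print(round(particle_wt_percent(1.005), 2))
```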
Results and Discussion

To determine an effective treatment approach that could minimize particle losses, a series of tests were conducted using the NALCO 1056, aluminum-coated silica sol.

Treatment Approaches Examination

Figure 4 shows the visual and parametric indications of particle treatment progress. This figure contains "top views" of beakers with the treated HCP specimens removed from the setup (see the setup in Figure 2 for reference). The beakers show the appearance of the particle suspension fluid as it changed during a 5-day treatment. The treatment was run continuously with a 0.4 V/cm electric field. The entire particle dosage was provided on the first day. During the first 2 treatment days, the suspension transitioned gradually from cloudy to clear. The Nephelometric Turbidity Unit (NTU) value of the fluid decreased during this period from 116 to 101. The specific gravity values also decreased, from 1.009 to 1.007. By Day 3, the rate of decrease in specific gravity was slowing down. Some evidence of particle flocking was observed in the fluid and will be examined in a later section. Meanwhile, the turbidity exhibited an increase. The visually inspected transparency on this day (Day 3) was cloudier than on Day 1.
The increased turbidity in the midst of decreasing particle content on Day 3 indicates that the interaction between light photons and particle surfaces had changed. A likely source of this change was the rising pH during this period. As pH rises, the magnitude of the zeta potential tends to decrease, which in turn decreases the electrostatic repulsion among the particles. This lower repulsion enabled more particles to collide and form flocks. Photons interacting with small flocks would tend to exhibit reduced transmission, since there would now be additional surfaces associated with each of these encounters.

Flocking Behavior Observation

As noted in the Background section, the embedded electrode of the cement specimen (see setup in Figure 2) was connected to the negative pole of the power supply to attract positively charged particles. As a result, OH− ions were being produced at the cathode and then diffusing into the surrounding fluid of the beaker shown in Figure 4. This continuous production of OH− ions would be sufficient to cause the pH value of the suspension to rise. On Day 3, the measured pH values were approaching the particle suspension collapse threshold of 5.5. The threat of suspension collapse comes from the negative impact that a pH shift can have on the zeta potential of the particles. A zeta potential of reduced magnitude would tend to diminish the repulsive electrostatic force that keeps particles separated. When that separation is diminished, some of the particles could approach each other (due to Brownian motion), start colliding, and then form flocks of relatively large suspended agglomerations. The combined surface charges of these flocks would have allowed them to remain suspended in the treatment fluid for a limited period. The minimum size of two flocked particles would be approximately 40 nm, which happens to be about the size of a relatively large capillary pore in HCP [37]. For this reason, even small flocks could tend to be too large for pore entry. The flocked particles were effectively considered "lost" because they were too large to penetrate the cement pores.

The turbidity meter measured the ratio of the side-scattered light intensity to that of the forward-transmitted light intensity. With the development of particle flocks, the incident light was probably blocked more effectively, because the flocks would tend to be larger and more closely spaced than the particles in the original suspension. This could have produced the increasing NTU values observed on Days 3-5 (Figure 4) while the specific gravity was decreasing. Based on these observations, flocculation was indicated when the turbidity started increasing rapidly during the treatment period, even as the specific gravity (and thus particle content) was declining.

The treatment associated with Figure 4 was halted on Day 5. The specific gravity showed a decline (from 1.006 to 1.005) over the last 48 h. During the treatment, the turbidity increased from 165 to 270 (with 116 being the initial value). The alphabet lettering sheet located beneath the beaker was no longer visible on Day 5. Over the course of the treatment period, the pH increased from 3.5 to 5.8. This ending value was above the pH-induced coagulation threshold (5.5 for NALCO 1056).
In general, when the suspension pH is above this threshold value, all the particles would tend to exhibit flocking, soon followed by coagulation (collapse). In this case, the treatment was halted on Day 5, since there was significant evidence of severe flocking that would tend to prevent successful pore penetration. Because the pH of the suspension was above the threshold value, a significant negative impact on the zeta potential of the particles was expected. A drop in zeta potential would tend to allow an increase in particle flocking, both in terms of occurrence and flock size. As noted earlier, the reduced zeta potential would tend to cause these "large" flocks to be more closely spaced and thus more effective in blocking light transmission. These trends would be expected to cause the turbidity to increase significantly due to flock formation and growth. The large turbidity increase observed from Day 3 to Day 5 (in Figure 4) appeared to support this notion. It was thus evident that preventing a rise in pH, and thus flocking, would be expected to benefit the efficiency of particle transport. To achieve an efficient treatment and avoid particle loss due to flocking or coagulation, pH adjustments appear to be necessary to support the stability and effectiveness of a given electrokinetic nanoparticle treatment.

pH Control Approaches

Similar to Figure 4, the observations of Figure 5 show measurements of suspension stability involving the same nanoparticle (NALCO 1056). This treatment was also conducted with the same threshold electric field strength (0.4 V/cm) and the same single dosage applied initially. The only difference in this case was that active pH control was applied. During the treatment period, the transparency of the suspension ranged from cloudy to nearly clear, and the turbidity value of the fluid decreased over this period from 116 to 41 NTU. The specific gravity values also decreased, from 1.009 to 1.004, during this period. Following the treatment, visual examination of the suspension fluid and the specimen (not shown) indicated that no coagulation or flocking of particles had occurred.

With the pH adjustment applied, the electric field apparently drove stable, suspended particles that were penetrating the pore openings. This was evident from the declining values of turbidity and specific gravity, which indicated that these suspended particles left the treatment fluid and entered the HCP as expected. Since no unstable (flocked or coagulated) particles were observed throughout the treatment period, this delivery process was considered efficient and successful, as the particle delivery rate (based on declining specific gravity values) increased by 25%. Based on these observations, it is evident that an effective and efficient treatment exhibiting successful particle transport into cement pores can be identified by the absence of particle loss (via flocking and coagulation) as well as by declining specific gravity and turbidity values.
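As an illustrative restatement of the pH-management logic described above (not the authors' procedure, just a compact sketch of it), the following Python snippet encodes the set points reported in this work: a starting pH of 3.5, a maintained band of roughly 3.5-4.8, and the 5.5 collapse threshold for NALCO 1056. The helper function and its use of the maximum reading across locations A-D are assumptions made for illustration only.

```python
# Illustrative sketch of the pH-control decision used during treatment monitoring.
# Set points come from the text (start pH 3.5, control band 3.5-4.8, NALCO 1056
# suspension collapse threshold 5.5); the function itself is not from the paper.

PH_START = 3.5          # pH that the bath is adjusted back to with HCl
PH_UPPER_LIMIT = 4.8    # upper end of the maintained control band
PH_COLLAPSE = 5.5       # pH at which the NALCO 1056 suspension tends to collapse

def check_bath(ph_readings: list[float]) -> str:
    """Evaluate the daily pH readings taken at locations A-D of the treatment bath."""
    worst = max(ph_readings)
    if worst >= PH_COLLAPSE:
        return "collapse threshold reached: flocking/coagulation likely, treatment compromised"
    if worst > PH_UPPER_LIMIT:
        return f"dose HCl until the bath returns to pH {PH_START}"
    return "suspension stable: continue treatment"

print(check_bath([4.2, 4.5, 4.9, 5.0]))  # above the control band -> adjust with HCl
```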
Specific Gravity Monitoring and Comparison

Figure 6 indicates that during the first 2 days of the treatment, the specific gravity dropped in value from 1.009 to 1.005. A slower rate of decrease was observed during the remaining 3 days. During the overall treatment period, the pH was monitored. Active pH control became necessary at Day 3 because the pH value of the suspension was approaching the particle collapse threshold value of 5.5. HCl was used to adjust the pH back to the starting value of 3.5.
Since OH− ions were being produced at the cathode (within the HCP specimen), this caused a rise in the pH of the suspension during the first 2 days. As shown in the previous case (Figure 4), an uncontrolled pH would be expected to keep rising beyond the particle suspension collapse threshold (5.5) within a matter of days. To prevent this negative impact on suspension stability, active pH adjustment was required to stabilize the zeta potential and thus preserve the electrostatic repulsion needed to sustain the particle suspension. The relationship between pH, zeta potential, and particle coagulating behavior has been well studied and established by Xiaoying and others [38]. Since the pH adjustment maintained the suspension stability, the treatment progressed well for the remaining 3 days and ultimately exhibited a relatively clear fluid. This fluid exhibited the lowest specific gravity observed (1.004) during this treatment; see Figure 5. Based on these observations, it appears that early pH adjustment can prevent treatment suspension instability by delaying the pH rise that causes flocking problems and suspension collapse.

As shown in Figure 6, after Day 3 the specific gravity of the suspensions stopped changing significantly. This trial was halted after the fifth treatment day, since the specific gravity and the turbidity values appeared to stop responding to continued treatment. Possible reasons for this behavior will be explored in a later section. For the pH-adjusted case, the fluid looked nearly clear on the last day of treatment. At the time these trials were stopped, the turbidity values indicated that some particles had remained in the suspension rather than entering the HCP pores. The final specific gravity value of 1.004 indicates that the particle concentration remaining after Day 5 was less than 50% of the starting value at Day 0.

At the end of this treatment, it was apparent that visual clarity inspection was not definitive for indicating treatment completion, since nearly 50% of the particles were still present in a fluid that appeared to be clear. The visual difference between two distinct (yet relatively low) particle concentrations was barely distinguishable. At such low resolution, a visual criterion could allow residual particles to remain unutilized after the treatment has ceased. In contrast, turbidity measurements such as the NTU values of 101 and 41 from Figure 5 exhibited a significant distinction between these otherwise similar-looking fluids. These NTU numbers clearly indicated that a significant proportion of the particles had not yet entered the HCP pores. This difference was further confirmed by the specific gravity values of 1.007 and 1.004 observed for these respective cases. Based on these observations, while visual inspection is a convenient way of assessing transport progress, including turbidity measurements can more definitively reveal important suspension content changes and the associated treatment progress.
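The arithmetic behind statements such as "less than 50% of the starting value" can be sketched as follows, assuming that the excess of the bath's specific gravity over that of the particle-free fluid (taken here as 1.000) scales linearly with particle content. This dilute-suspension approximation is an assumption for illustration, not a relation given in the paper.

```python
# Hypothetical helper: estimate the fraction of the initial particle load still
# suspended in the bath from its measured specific gravity, assuming the excess
# density over the particle-free fluid scales linearly with particle content.

RHO_FLUID = 1.000  # assumed specific gravity of the particle-free carrier fluid

def remaining_fraction(sg_now: float, sg_start: float) -> float:
    """Fraction of the initial particle load still suspended in the bath."""
    return (sg_now - RHO_FLUID) / (sg_start - RHO_FLUID)

# Values reported for the pH-controlled treatment (Figures 5 and 6):
print(remaining_fraction(1.004, 1.009))  # ~0.44, i.e. a little under 50% remaining at Day 5
print(remaining_fraction(1.005, 1.009))  # ~0.56, i.e. roughly half remaining after Day 2
```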
As noted earlier, it was evident visually, and confirmed by turbidity and specific gravity measurements (Figure 5), that not all particles were entering the pores over a 5-day treatment. The existence of particles remaining in the treatment fluid could be due to the low electric field being used. Since the particle concentration was reduced by half after the second day of treatment, the relatively low electric field might not have been strong enough to force the remaining particles toward the specimen. Suspended nanoparticles tend to wander randomly due to Brownian motion. Given the relatively low particle concentration and the low field, the associated chemical and electrical gradients may not have been sufficient to overcome the Brownian motion as needed to impose a net drift into the HCP pores.

Another possible reason may explain this limited transport. At the time these treatments started, the particle concentrations inside and outside of the HCP specimens were significantly different. The initial chemical gradient alone would have been sufficient to support transport into the HCP. As the treatment progressed, particle concentrations on each side of the specimen surface would have tended to balance. Eventually, the particle concentration inside the specimen would have become greater than that on the outside. At that point, the chemical concentration gradient would have been working against the electrical gradient. Under these conditions, the particle penetration rate would likely have been slowing down or possibly stopping, as evident in Figure 6. Meanwhile, some of the pore openings could have been irreversibly occupied by particles during the treatment. These blockages would not have permitted particle transport in either direction. Blockages occurring in the early stages of treatment would tend to leave fewer open pores available, so more particles would need to wait to penetrate the pores. These conditions could have delayed entry, increased the population of the dense cloud of delayed particles, and thus increased the chances of instability at the HCP surface. These circumstances may explain why the remaining particles were not able to successfully penetrate the specimen surface pore openings. In future work, it is recommended that treatments be dosed with the same particle concentration on each day to overcome the development of disruptive chemical gradients.

The preceding recommendation leaves in place the fact that any given treatment will conclude with particles left behind in the treatment fluid. It is conceivable that some beneficial use could be obtained from these leftover particles. To utilize these remaining particles expediently, and to further benefit the treatment results, conducting an induced electro-coagulation may be an appropriate option. Applying a relatively high electric field at the end of the treatment would tend to produce a relatively dense, coagulated particle skin on the specimen surface. In both laboratory and field tests, this skin was found to be difficult to remove and exhibited the capacity to significantly reduce the HCP surface permeability and increase the surface hardness. As an additional cost-efficiency measure, it is recommended that an electric-field-induced coagulation be considered to utilize the leftover treatment particles and to maximize the benefit to the HCP surface properties.
Flocking Behavior Plot of NALCO 1056 Particles

As noted earlier, the development of increased turbidity during treatment can indicate a flocking problem. It is important to be able to detect this problem before it advances to a costly extent. With this concern in mind, Figure 7 was plotted to determine the region where flocking behavior would be expected to appear. To investigate the transition from a stable suspension to a flocked suspension, the original particle colloid (NALCO 1056) was diluted to various particle concentrations. The key parameters plotted for this array of diluted suspensions were the specific gravity and the turbidity. As shown in Figure 7, broad-range dilutions of the NALCO 1056 particle suspension were created and analyzed. The region of anticipated flocking behavior is located above the trend line that correlates the specific gravity to the turbidity.

In Figure 7, it was observed that the turbidity of the suspension tended to increase as the specific gravity increased. This makes sense, because increasing the number of particles would be expected to progressively inhibit the transmission of light. The dashed line in Figure 7 represents the linear regression fit for the data. The R² value of 0.96 indicated a good correlation between the specific gravity and the turbidity at a 99% confidence level. Based on these findings, the relationship between the turbidity and the specific gravity for this particle suspension was approximately linear.

The flocking region identified in Figure 7 is the range in which the particles would tend to flock. The gap between the flocking region and the trend line can be referred to as a confidence interval gap. This gap was determined by the size of the error bars calculated for the data set. These error bars represent the uncertainty of the expected value corresponding to a 90% confidence interval for each turbidity measurement. Five trials were used to establish the mean value of each measurement. The calculation of the uncertainty of the expected value involves the mean and the standard deviation of each measurement, as follows [39].
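A standard form of this calculation, consistent with the description above (sample mean of n = 5 trials, sample standard deviation s, 90% confidence level), is given below; this is an assumed reconstruction of the relation rather than necessarily the exact expression of [39]:

u_{90} = t_{0.05,\,n-1}\,\frac{s}{\sqrt{n}}, \qquad \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i, \qquad s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i-\bar{x}\right)^{2}},

where the error bars are drawn as \bar{x} \pm u_{90} and, for n = 5, the Student coefficient is t_{0.05,4} \approx 2.132.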
This plot could provide a convenient means of assessing the stability of an ongoing treatment. For example, consider the treatment case presented in Figure 4. If one were to plot the Day 1 data (NTU = 101, ρ = 1.007) onto Figure 7, this point would be located in the confidence interval gap, just below the flocking region. This gap represents a region in which the treatment suspension is potentially unstable. In contrast, the visual observation on this day in Figure 4 was that of a clear treatment fluid that appeared stable. Since the figure indicates the potential for instability, this would be a good point to intervene by adjusting the pH of the suspension to avoid flocking or particle losses that were not yet visually evident. For Day 3, the parameter data (NTU = 165, ρ = 1.006) would plot to a point within the bottom edge of the flocking region. The flocking behavior had clearly appeared visually by this time (as shown in Figure 4) but was not yet serious. The visual inspection indicated only a slightly increased cloudiness compared to Day 1. At this time, even though the flocking threshold had been crossed, some adjustment to the pH would still have been possible. This would have stopped the flocking trend and thus reduced the potential for additional particle loss. The Day 5 data of Figure 4 (NTU = 270, ρ = 1.005) plot to the middle of the flocking region (in Figure 7), which would predict flocking. This correlates, as expected, with the dense cloudy fluid exhibited in Figure 4. As mentioned earlier, the size of flocked particle pairs (44 nm) was approximately the size of a relatively large capillary pore opening (50 nm). At this time, these flocked particles were presumably lost, since they would have been too large to enter most of the HCP pores. These findings show that identifying the flocking region of a given particle suspension may provide a convenient benchmark for assessing the risk of particle loss during a given treatment.

Flocking Behavior Plot of Grace CL Particles

Another commercially available nanoparticle suspension was evaluated as a potential treatment candidate. This relatively low-cost, positively charged, 20 nm particle (manufactured by W.R. Grace) was evaluated following the same procedures as the previous case (NALCO 1056 of Figure 7). The trade name for this alternative suspension is Grace CL. The relationship between turbidity and specific gravity for Grace CL was determined as shown in Figure 8. The lightly shaded region of expected flocking behavior is located above the trend line that correlates the specific gravity to the turbidity.

As shown in Figure 8, the turbidity of the suspension increased as the specific gravity increased. The R² of 0.95 in this case revealed that the relationship between turbidity and specific gravity was linear. This linear trend, as well as the location of the flocking region above the trend line, was similar to the behavior of the NALCO 1056 particles (see Figure 7). The error bars were calculated with the same equation shown in Section 3.5 for Figure 7, utilizing a coefficient applied to the standard deviation of the turbidity measurement for a 90% confidence interval. Based on these measurements and comparisons, the relationship between the specific gravity and the turbidity was approximately linear for the Grace CL (silica) particles and thus similar in nature to the NALCO 1056 (alumina-coated silica) particles.
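A compact sketch of how a Figure 7-style benchmark could be applied during a treatment is given below: fit the turbidity-versus-specific-gravity trend of the diluted stock suspension, add the confidence-interval gap above it, and flag any in-process reading that plots above the gap. The calibration points and the gap width are made-up placeholders, not the measured values behind Figure 7; only the flagged treatment readings (Days 1, 3, and 5 of Figure 4) are values reported in the text.

```python
# Illustrative use of a flocking-region benchmark like the one in Figure 7.
# The calibration series and the confidence-interval gap below are hypothetical
# placeholders; the Day 1/3/5 readings are the values reported for Figure 4.
import numpy as np

# Hypothetical dilution series of the stock suspension: (specific gravity, NTU)
sg_cal = np.array([1.002, 1.004, 1.006, 1.008, 1.010])
ntu_cal = np.array([30.0, 55.0, 80.0, 105.0, 130.0])

slope, intercept = np.polyfit(sg_cal, ntu_cal, 1)  # linear trend (R^2 was ~0.96 in the paper)
GAP_NTU = 25.0  # assumed width of the 90%-confidence gap above the trend line

def in_flocking_region(sg: float, ntu: float) -> bool:
    """True when a reading plots above the trend line plus the confidence gap."""
    expected = slope * sg + intercept
    return ntu > expected + GAP_NTU

for day, sg, ntu in [(1, 1.007, 101), (3, 1.006, 165), (5, 1.005, 270)]:
    print(f"Day {day}: flocking region = {in_flocking_region(sg, ntu)}")
```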
Conclusions and Discussion

In this study, it appears that particle flocking became evident when turbidity started increasing rapidly during a given treatment period while the specific gravity (and thus the particle content) was actually declining.

(1) To achieve an efficient treatment and avoid particle loss due to flocking or coagulation, pH adjustments appear to be necessary to support the stability and efficiency of a given EN treatment.
(2) The effective and efficient treatments obtained in this work exhibited successful particle transport into cement pores, which was identified by declining specific gravities and turbidities while the treatment particles remained in stable suspension.
(3) This work confirmed that periodically adjusting the pH of a particle suspension back to the starting level (during a long-term treatment period) may prevent treatment suspension instability by delaying the pH rise that can cause flocking and suspension collapse.
(4) While visual inspection is a convenient way of assessing particle transport progress, utilizing turbidity measurements can more definitively identify important particle suspension changes that confirm acceptable treatment progress.
(5) Identifying the flocking region of a given particle suspension may provide a convenient benchmark for assessing the risk of particle loss during a given treatment.
(6) The relationship between the specific gravity and the turbidity was approximately linear for the Grace CL (silica) particles and thus similar to the NALCO 1056 (alumina-coated silica) particles.

Figure 1. Illustration of the double-layer structure and zeta potential of a given particle.
Figure 2. Treatment circuit setup for electrokinetic nanoparticle treatment of hardened cement paste cylinder specimens. The cylinders were 3 inches (76.2 mm) tall by 2 inches (50.8 mm) in diameter.

Figure 3. pH monitoring locations, A-D.

Figure 4. Each of these four images shows the development of pH-induced particle flocking during treatment. The sequence involved the same voltage and particle dosage as the case shown in Figure 5 but without pH control. ρ is the specific gravity of the suspension fluid. In this case, the turbidity was rising while the specific gravity was dropping during treatment. Both flocking and gelling became increasingly evident as the 5-day treatment progressed.

Figure 5. Each of these three images shows a top view of a treatment beaker after the cement specimen was removed. The sequence shows how turbidity, measured in NTUs, changed as the treatment delivered nanoparticles driven by a field of 0.4 V/cm. The particles used here were 24 nm, alumina-coated silica sol (NALCO 1056). ρ is the specific gravity of the suspension fluid. No flocking of particles was observed during the 5-day, pH-controlled treatment period.

Figure 6. Specific gravity observations for a single-dosage EN treatment applied under a controlled electric field and pH. The field was maintained at 0.4 V/cm. The pH was maintained in the range of 3.5-4.8. A parallel case was run with no pH adjustment (see Figure 4 for pH values).

Figure 7. Broad-range dilutions of the NALCO 1056 particle suspension: turbidity versus specific gravity. The region of anticipated flocking behavior is located above the trend line that correlates the specific gravity to the turbidity.

Table 1. Mill test result of Type I/II (low-alkali) cement powder used in this study *.
Component: CaCO3, SiO2, Al2O3, Fe2O3, CaO, SO3, Na2O, K2O. * Cement manufactured by Ash Grove Cement Company, Little Rock, AR, USA.
A Review on Homogeneous Charge Compression Ignition and Low Temperature Combustion by Optical Diagnostics

Optical diagnostics is an effective method for understanding the physical and chemical reaction processes in homogeneous charge compression ignition (HCCI) and low temperature combustion (LTC) modes. Based on optical diagnostics, the true processes of mixing, combustion, and emissions can be observed directly. In this paper, the mixing processes formed by port injection and direct injection are reviewed first. Then, the combustion chemical reaction mechanism is reviewed based on chemiluminescence, natural luminosity, and laser diagnostics. After that, the evolution of pollutant emissions measured by different laser diagnostic methods is reviewed; the measured species include NO, soot, UHC, and CO. Finally, a summary and future directions for optical diagnostics of HCCI and LTC are presented.

Introduction

Homogeneous charge compression ignition (HCCI), as a new combustion mode for internal combustion engines, has been widely studied over the past 20 years. Originally, HCCI meant that a homogeneous charge, formed by port injection or early in-cylinder injection, autoignites once the temperature and pressure in the cylinder are high enough. Noguchi et al. [1] investigated the HCCI combustion process with a spectroscopic system in 1979 and found that the combustion chemical radicals appeared sequentially. For example, CHO, HO2, and O radicals were detected first, followed by CH, C2, and H radicals, and finally the OH radical. This combustion process differs from that of conventional gasoline engines, where all radicals are observed at nearly the same time. This work confirms that HCCI is initiated by the autoignition of the premixed mixture due to compression. Since then, with the development of HCCI, more optical diagnostic technologies have been applied to study this new combustion process. Meanwhile, researchers have found that although HCCI can achieve low NOx and soot emissions and high efficiency, its operating range is limited and the control of autoignition timing is difficult compared to conventional diesel and gasoline engines. Therefore, new strategies, such as active stratification of temperature and charge, changes in fuel properties, and different injection strategies, are used to overcome the disadvantages of HCCI. More new combustion modes, such as premixed charge compression ignition (PCCI) and diesel low temperature combustion (LTC), have been developed. In fact, all these new combustion modes are dominated by chemical reaction kinetics, and the combustion emits low NOx and soot emissions but high UHC and CO emissions.

Previous HCCI and LTC review papers, such as those by Yao et al. [2], Dec [3], Musculus et al. [4], and Komninos and Rakopoulos [5], have introduced how to extend the HCCI and LTC operating range and how to control the autoignition timing. In this paper, we focus on the physical and chemical reaction processes in HCCI and LTC as measured by optical diagnostics, which will help readers understand the combustion processes in HCCI and LTC and use different optical techniques to study these new combustion modes.

Figure 2: Chemiluminescence images with different mixing processes [8].
Optical Diagnostics for In-Cylinder Mixture Formation

The formation of the fuel/air mixture is a physical process, but it has a large effect on the subsequent combustion chemistry. Therefore, the mixing process is reviewed first, based on both port injection and in-cylinder direct injection. Finally, mixture formation combining port and in-cylinder injection is also reviewed.

Mixture Formation by Port Injection. Although the fuel distribution in an HCCI engine is macroscopically homogeneous owing to the quite long premixing time, microscopic inhomogeneity in fuel distribution and temperature remains and may affect the autoignition and subsequent combustion process. Richter et al. [6] investigated images of the fuel/air mixture using planar laser-induced fluorescence (PLIF) in an HCCI engine. Two different premixing procedures were used to obtain different degrees of homogeneity of the fuel/air charge. One was standard port injection to form the premixed charge, and the other added a preheated 20-liter mixing tank to prepare a more homogeneous charge. The PLIF measurements confirmed that the different fuel preparation strategies affected the fuel/air homogeneity and the spatial variation of the combustion process. In a further study [7], Richter et al. found that even when the PLIF results showed a high degree of homogeneity, local inhomogeneous fluctuations remained, as revealed by Raman scattering measurements, and were caused by cycle-to-cycle variations. Kumano et al. [8] investigated the effects of charge inhomogeneity on the HCCI combustion process. Chemiluminescence images were obtained using a framing camera on an optical engine, and dimethyl ether (DME) was used as the test fuel. A purpose-designed device was fitted further upstream in the intake manifold to form a more homogeneous charge, as shown in Figure 1, for comparison with the inhomogeneous charge. The whole combustion processes under homogeneous and inhomogeneous mixtures are shown in Figure 2. It can be seen that the combustion duration became longer with the inhomogeneous mixture, resulting in more moderate heat release and a lower maximum pressure rise rate. In contrast, the homogeneous charge produced a very fast combustion process. Therefore, they concluded that HCCI needs locally moderated combustion rather than simultaneous combustion throughout the cylinder.

Mixture Formation by In-Cylinder Injection. In fact, most optical diagnostics of in-cylinder mixture formation focus on direct injection. To extend the HCCI operating range at high load and to control the autoignition timing, some researchers introduce stratification in the cylinder rather than forming a fully homogeneous charge. To distinguish these approaches from HCCI, new terms such as stratified charge compression ignition (SCCI) [9][10][11][12] and premixed charge compression ignition (PCCI) [13][14][15][16][17][18] are used. Meanwhile, over the last 10 years, high-EGR-dilution low temperature combustion (LTC) [19][20][21][22][23][24][25][26] has been studied widely in diesel engines because it is more practical than HCCI. All of these combustion modes require direct injection, and thus the mixture preparation is more complicated than that of port injection.
Musculus [19] investigated the in-cylinder spray and mixing processes at LTC conditions with an oxygen concentration of 12.7%. The optical engine operated at a low load of 4-bar indicated mean effective pressure (IMEP). The start of injection (SOI) was set to −22° CA ATDC, and both naturally aspirated operation and a low boost pressure of 1.34 bar were tested. Mie scattering was used to capture the liquid-fuel penetration, while fuel fluorescence was used to measure the vapor jet. The results are shown in Figure 3. It can be seen that the maximum liquid-fuel penetration was between 45 and 50 mm for the naturally aspirated condition and between 40 and 45 mm for the low-boost condition. By contrast, the typical liquid-fuel penetration is about 25 mm under conventional diesel conditions [27,28]. In this work, the early-injection conditions resulted in lower ambient gas density and temperature than near-top-dead-center (TDC) injection in conventional diesel combustion. The longer penetration made the fuel impinge on the piston bowl and resulted in wetting of the piston.

Kashdan et al. [29] investigated the in-cylinder mixture distribution in an optically accessible direct-injection HCCI engine. A high-pressure common-rail injection system supplied a 1100-bar injection pressure. The injector had a 6-hole nozzle with a narrow included angle (less than 70°). Planar laser-induced exciplex fluorescence (PLIEF) imaging was used in this study, which allowed qualitative visualization of the mixture (liquid- and vapor-phase) distribution within the piston bowl through the use of exciplex-forming dopants. They found that when the start of injection (SOI) was −40° CA ATDC, liquid fuel typically appeared 2° CA later. At −33° CA ATDC, the liquid fuel impinged on the piston face, and the corresponding vapor-phase images were acquired at this crank angle. At −30° CA ATDC, a certain degree of fuel stratification and a fuel-rich region were seen in the center of the piston bowl due to fuel impingement. This stratification trend was intensified as the injection timing was retarded.

Fang et al. [30][31][32] investigated the liquid spray evolution by Mie scattering and the combustion processes in a high-speed direct-injection (HSDI) diesel engine. Keeping the IMEP constant, the injection timing was changed from −40° to −80° CA ATDC for both a conventional wide-angle injector and a narrow-angle injector intended to form a homogeneous charge. For the −40° CA ATDC injection, the air density and temperature were higher; the liquid spray tip impinged on the bowl wall, but only a little fuel film formed on the bowl wall, and thus the pool fire area was quite small. However, for the −80° CA ATDC injection, the liquid spray impinged on the piston top, and some fuel collided with the cylinder liner and then flowed into the crankcase without combustion, which would worsen fuel economy and dilute the oil. Although the narrow-angle injector could reduce the fuel deposited on the liner, it could still lead to fuel-wall impingement on the bowl wall and subsequent pool fires. Similar wall wetting was also observed in other studies, such as those of Liu et al. [33] and Kiplimo et al. [34].
Steeper and de Zilwa [35] investigated two gasoline direct injection (GDI) injectors in an HCCI engine at a stratified low-load condition. One injector had 8 holes with a 70° spray angle, and the other was a 53° swirl injector. Mie scattering and LIF were used to measure the spray development and fuel distributions, and the results showed that the probability density function (PDF) statistics of the equivalence ratio distribution were similar for the two injectors, but the 8-hole injector produced smaller and more numerous fuel pockets than the swirl injector.

Liu et al. [36] investigated spray penetration under different ambient temperatures (700-1000 K) covering both conventional diesel combustion and LTC conditions. Results showed that the liquid penetration lengths were reduced due to the heating caused by the downstream combustion flames. Compared to higher ambient temperatures, the lower ambient temperatures had smaller effects on liquid penetration length, as shown in Figure 4. Furthermore, compared to soybean biodiesel, the n-butanol spray showed only a small change in liquid penetration length, which should be due to the longer soot lift-off of n-butanol spray flames.

Mixture Formation Combined by Port and In-Cylinder Injection. Recently, dual-fuel injection combining port and in-cylinder injection has been studied widely to achieve high-efficiency and clean combustion [37][38][39][40][41][42]. With this dual injection, a homogeneous mixture can be formed by port injection of high-volatility fuels, while the in-cylinder injection is used to create different degrees of stratification by changing the injection timing. In addition, in a dual-fuel injection system, two fuels with opposite autoignition characteristics, such as one high-octane and one low-octane fuel, can produce different fuel reactivities in the cylinder, which can also control autoignition and extend the operating range of high-efficiency and clean combustion. Optical diagnostics of mixture formation in dual-fuel injection are limited; Figure 5 presents the charge stratification and reactivity stratification studied in [37].

From what has been discussed, it can be concluded that the direct-injection strategy has more advantages than port injection for HCCI autoignition control and operating-range extension. However, while an early direct-injection strategy helps to form a more uniform air-fuel mixture before ignition, fuel can impinge on the piston head or the cylinder liner, resulting in wall wetting and the dilution of oil. Some optimized methods have been proposed, such as using a narrow-angle injector [30][31][32], 2-stage or multistage injection [18,43,44], and very high injection pressures [45,46]; the reader can find detailed improvements of the mixing processes in these references.
Optical Diagnostics for Chemical Reaction Processes

3.1. Chemiluminescence Imaging/Natural Luminosity and Spectral Analysis. As stated in [47], chemiluminescence often starts from low temperature combustion due to the relaxation of excited combustion radicals to their ground states, which indicates the start of the exothermic chemical reaction and heat release. Generally speaking, natural flame emission from conventional diesel combustion includes two parts: chemiluminescence and soot luminosity. For diesel combustion, chemiluminescence often comes from the visible and near-ultraviolet bands due to OH, CH, CH2O, and C2 radicals [48]. However, the chemiluminescence signal is quite weak in diesel combustion, and an ICCD camera is needed to capture these non-luminous flames.

It should be noted that chemiluminescence exists throughout the whole diesel combustion process, but it is overwhelmed by strong radiation from the luminous flame after soot is generated in the flame. The soot luminosity in a GDI engine is also very strong, and thus the chemiluminescence from species of interest produced during combustion is disturbed. A similar problem arises in spectral analysis. Spectral analysis has been used as an in-cylinder diagnostic for many years [49]. However, due to the strong black-body radiation from soot particles, the signal-to-noise ratio is usually too low to detect specific species if the flame contains a large number of soot particles.

Most research involving spectral analysis has been applied to conventional gasoline engines or to diesel engines burning low-sooting fuels such as dimethyl ether (DME). The new combustion modes, such as HCCI, PCCI, and LTC, however, emit only very low soot emissions. Therefore, chemiluminescence imaging and spectral analysis are well suited to these new combustion modes. In this part, chemiluminescence imaging and spectral analysis of these new combustion modes are introduced. Soot luminosity optical diagnostics are introduced in the next section.

3.1.1. Chemiluminescence Analysis for HCCI. Hultqvist et al. [50] investigated the HCCI combustion process using chemiluminescence images and spectra, fueling blends of n-heptane and isooctane. Cool flames were found at about −20° CA ATDC with a weak and homogeneous distribution over the visible area, which is referred to as low temperature heat release (LTHR). After the cool flames, no luminosity could be captured until the main heat release started. During the high temperature heat release (HTHR), the fuel/air mixture begins to autoignite simultaneously at arbitrary points throughout the visible area. The peak light intensity at HTHR is one order of magnitude greater than that of LTHR. Kim et al. [51] investigated HCCI combustion with dimethyl ether in a single-cylinder engine using spectral analysis. Results showed that the cool flames in the LTHR derived from HCHO, according to Emeleus's bands, while the CO-O recombination spectrum was the main emission during the HTHR, and a strong correlation was obtained between the high temperature heat release and the CO-O recombination spectra. Augusta et al.
[52] investigated the effects of different engine operating parameters on the chemiluminescence spectra in an HCCI engine; the operating parameters included the intake temperature, the fuel supply method, and the engine load. Results showed that changes in the engine operating parameters led to different autoignition timings, but these parameters did not affect the reaction pathways of HCCI combustion once combustion started. Several distinct spectral peaks emitted by CHO, HCHO, CH, and OH could be observed, and all these spectra were superimposed on the CO-O continuum. Similar results have also been obtained in the studies of Liu et al. [53] and Murase et al. [54].

Mancaruso and Vaglieco [55] investigated the autoignition and combustion processes of HCCI in a diesel engine with a high-pressure common-rail injection system. Using the common-rail injection system, the total fuel mass per cycle was split into five injections. The chemiluminescence images and spectra showed that HCO and OH were homogeneously distributed over the visible area. Since a large amount of OH radicals was captured in the visible area, it was suggested that OH radicals contribute to soot reduction in the cylinder. The OH radical is a suitable marker to identify the start of HTHR and to phase the rate of heat release.

All in all, the HCCI combustion process can be described as follows. At LTHR, a weak, homogeneous light can be observed throughout the chamber, which is caused by HCHO chemiluminescence. At HTHR, stronger luminosity derives mainly from the CO-O continuum, and OH marks the start of the high temperature reaction. Between LTHR and HTHR, no luminosity can be captured.

Chemiluminescence/Natural-Luminosity Analysis of Stratified HCCI. Dec et al. [56,57] investigated HCCI chemiluminescence imaging in a single-cylinder optical engine using a high-speed intensified camera. Isooctane, as a surrogate for gasoline, was used as the test fuel, and the start of injection was set to −320° CA ATDC. The high-speed chemiluminescence images show that HCCI combustion progresses from the hot regions to the cold regions even when the fuel and air are fully premixed before intake, as shown in Figure 6. This result demonstrated that HCCI combustion is not homogeneous, and the authors attributed the inhomogeneities primarily to natural thermal stratification caused by heat transfer during compression and by turbulent transport in the cylinder. These inhomogeneities can slow the pressure rise rate (PRR) and are thus advantageous for high-load extension. It should be noted that this progression arises from sequential autoignition and does not take place through flame propagation, because the global propagation speed is much higher than that of even very fast turbulent hydrocarbon flames [58,59]. Similar HCCI combustion processes have also been reported by Hultqvist et al. [60]. Therefore, the HCCI combustion process also involves temperature or thermal stratification caused by heat transfer in the cylinder. If the charge or thermal stratification can be strengthened through active methods, such as different injection strategies, internal or external EGR, the geometry of the combustion chamber, and modulated intake temperatures, the HCCI operating range can be extended further and the combustion phasing can be controlled. Vressner et al.
[61] investigated the effects of turbulence on HCCI combustion; the turbulence was generated by two different combustion chamber geometries: one disc-shaped and the other with a square bowl in the piston. The chemiluminescence images demonstrated that combustion began in the square bowl and propagated to the squish volume. The combustion process was more stratified with the square-bowl geometry because of temperature inhomogeneities. The piston with a square bowl can generate stronger turbulence than the disc-shaped piston, and the variation of turbulence intensity then forms temperature stratification in the cylinder. Therefore, a 2-stage combustion, inside and then outside the square bowl, was observed, leading to a lower PRR compared to the disc-shaped combustion chamber, where the turbulence and temperature were more homogeneous and autoignition therefore occurred simultaneously throughout the chamber.

Liu et al. [62,63] formed different charge and temperature stratifications in HCCI combustion by modulating injection timings, intake and coolant temperatures, and combustion chamber geometries. Figure 7 shows the chemiluminescence images with different temperature stratifications. The higher intake temperature of 125 °C and lower coolant temperature of 55 °C formed larger temperature stratification in the visible area, and thus the combustion appeared more inhomogeneous than with an intake temperature of 95 °C and a coolant temperature of 85 °C, where the in-cylinder temperature stratification was lower. The larger temperature stratification resulted in a lower heat release rate and has the potential to extend the operating range to higher loads. Figure 8 presents the HCCI combustion process with different combustion chamber geometries. The various squish lip configurations shown in Figure 9 generated different turbulent motion in the chamber; therefore, the autoignition locations for the V-type and H-type geometries were more dispersed and closer to the chamber wall, while the autoignition of the A-type geometry always started in the center of the chamber, because the high turbulence intensity in the bowl resulted in larger heat loss through the chamber wall. The A-type geometry therefore induced higher turbulent kinetic energy and led to larger temperature inhomogeneities, which is more advantageous for reducing PRR and heat release rates. This proves that a change of piston geometry can induce different turbulence or temperature stratification, which will affect the HCCI combustion process even though HCCI is generally considered to be controlled by chemical kinetics.

Aleiferis et al. [64] generated charge and thermal stratification under HCCI conditions by different injection timings and by both inlet air heating and residual gas trapping (internal EGR). Combustion images showed that larger temperature inhomogeneities in the cylinder led to a slower autoignition front moving speed. These temperature inhomogeneities derived from the difference in injection timings in cases without EGR, or from the mixing between the fresh fuel/air charge and the trapped residual gases in cases with internal EGR.
Berntsson and Denbratt [65] investigated the effect of charge stratification on combustion and emissions under HCCI operating conditions. Port injection was used to create a homogeneous charge in the cylinder, while a GDI injector was used to form charge stratification. They compared the early autoignition process under both homogeneous and stratified conditions. From the appearance of autoignition to reactions taking place throughout the combustion chamber, the HCCI with homogeneous conditions took 4° CA, while the stratified condition took 8° CA. Furthermore, the combustion images showed that the combustion duration was lengthened because the local variation of equivalence ratio can moderate the rate of heat release and thus further extend the HCCI operating range. Kook and Bae [66] investigated premixed charge compression ignition (PCCI) combustion with a two-stage injection strategy in a diesel engine. The first injection (10 mm3) was set to −200° CA ATDC to generate a homogeneous and complete mixture of diesel and air. The second injection (1.5 mm3) was set to −15° CA ATDC as an ignition promoter and to control the autoignition process. The injection pressure was controlled at 120 MPa. Meanwhile, conventional diesel combustion was also tested for comparison with PCCI, in which the total fuel (11.5 mm3) was injected directly into the cylinder at −15° CA ATDC. The luminous flame could be observed due to the thermal radiation from soot, as shown in Figure 10. However, for the PCCI case, the luminous flames were quite weak and their distribution was quite limited, located only in the heterogeneous combustion regions formed by the second injection. Finally, the authors concluded that the first injection timing needed to be advanced earlier than −100° CA ATDC to obtain homogeneous and non-luminous flames (Figure 11).

Based on the above reviews of charge or thermal stratification through active methods, it can be found that stratification can reduce the maximum heat release rate and pressure rise rate and thus may extend the HCCI operating range. The combination of port injection and direct injection, or two-stage direct injection, is an effective technological measure to achieve charge stratification. For temperature or thermal stratification, however, the most direct measure is changing the intake and coolant temperatures, which is very hard to achieve in a real engine. Accordingly, internal EGR is a more practical method of forming temperature inhomogeneity in the cylinder; however, EGR affects HCCI combustion through chemical action, dilution, and temperature. Therefore, it is very hard to establish that the temperature stratification caused by internal EGR is the main factor affecting HCCI combustion. In addition, a specific piston geometry will also form different turbulence intensities and thus generate temperature inhomogeneity. In any case, charge and thermal stratification are effective methods of controlling HCCI combustion.

Chemiluminescence/Natural-Luminosity Analysis of LTC.
Since diesel fuel has low volatility, port-injection is not a practical approach without significant changes to the intake system, such as increasing the intake temperature. An early in-cylinder injection strategy can, to some extent, produce a fairly homogeneous charge before ignition. However, because of the lower charge density, in-cylinder pressure, and temperature at such early timings, liquid fuel impingement on the liner or piston wall is unavoidable, which leads to high HC and CO emissions and oil dilution. In the last ten years, high-EGR-dilution low-temperature combustion (LTC) has gained tremendous attention [67][68][69][70][71][72][73][74][75]. For LTC, the start of injection is near top dead center, so the injection timing can control the autoignition timing to some extent. Furthermore, the later injection timing does not cause fuel to impinge on the piston head or cylinder liner. However, late injection leads to incomplete mixing between diesel fuel and air, and thus there is a locally rich region in the mixture, similar to conventional diesel combustion. Soot formation can nevertheless be suppressed because the combustion temperature is kept low by large amounts of EGR, which keeps the mixture out of the soot-formation region. Akihama et al. [20] first showed, on an optical diesel engine in 2001, that high EGR dilution can suppress soot formation. Soot luminosity first increased with increasing EGR rate, but at higher EGR rates the soot luminosity decreased, and no luminosity was observed under very high EGR dilution. Simultaneously, NO x emissions can also approach zero because of the high EGR dilution and the resulting low combustion temperature. In addition, the injection characteristics (including injection pressure, timing, and multiple injections) influence the temperature during the ignition delay period, the peak flame temperature reached, and the degree of premixing. Finally, in order to maintain the power density and combustion efficiency of the engine at high EGR rates, high boost levels are required. Therefore, the control and optimization of EGR rate, injection characteristics, and boost are the keystones of LTC. Compared with the HCCI strategy, LTC offers additional benefits such as high efficiency over a broad load range, simple control of ignition timing, reduced pressure-rise rates, and high-load capability, besides low emissions of NO x and soot. This is the reason why LTC has been widely studied in recent years. Upatnieks et al. [67,71,72] measured flame lift-off lengths using in-cylinder images of natural luminosity. Results showed that soot incandescence could not be observed even for locally fuel-rich mixtures, whereas comparable stoichiometric combustion without EGR dilution invariably produced soot incandescence, as shown in Figure 12. Meanwhile, a blue flame could be seen under the LTC condition because of the very low flame temperature. Furthermore, the flame lift-off under the LTC condition was larger than that of conventional diesel combustion. Subsequently, Musculus et al. also investigated LTC with different laser diagnostics and proposed the LTC combustion concept in their review paper [4].
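In studies like these, the lift-off length is typically read from a single luminosity image as the distance from the injector tip to the first location along the spray axis where the flame signal exceeds a threshold. The following minimal Python sketch illustrates that kind of post-processing under stated assumptions; the function name, threshold fraction, calibration value, and synthetic image are illustrative and are not taken from the cited experiments.

```python
import numpy as np

def lift_off_length(image, injector_px, axis_vec, mm_per_px, thresh_frac=0.1):
    """Estimate flame lift-off length from one background-subtracted luminosity image.

    injector_px : (row, col) of the injector tip in pixel coordinates (assumed known)
    axis_vec    : unit step (d_row, d_col) along the spray axis
    mm_per_px   : spatial calibration of the imaging system
    thresh_frac : detection threshold as a fraction of the image maximum
    """
    thresh = thresh_frac * image.max()
    r0 = np.asarray(injector_px, dtype=float)
    step = np.asarray(axis_vec, dtype=float)
    # Walk pixel by pixel along the jet axis until the luminosity exceeds the threshold.
    for i in range(1, max(image.shape)):
        r = np.round(r0 + i * step).astype(int)
        if not (0 <= r[0] < image.shape[0] and 0 <= r[1] < image.shape[1]):
            break
        if image[r[0], r[1]] >= thresh:
            return i * mm_per_px        # first luminous pixel = lift-off length
    return np.nan                        # no flame signal found along the axis

# Synthetic example: luminosity starts 40 px downstream of the injector tip.
img = np.zeros((100, 200))
img[50, 60:] = 1000.0
print(lift_off_length(img, injector_px=(50, 20), axis_vec=(0, 1), mm_per_px=0.2))  # ~8.0 mm
```

In practice an ensemble of images is averaged and the threshold is tied to the camera noise level rather than the image maximum, but the geometric idea is the same.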
Liu et al. [68,73,74] investigated the natural luminosity under both conventional diesel combustion and low temperature combustion using different fuels, including diesel, soybean biodiesel, n-butanol, ethanol, and their blends. They found that the natural flame luminosity decreased with decreasing ambient oxygen concentration and ambient temperature. Furthermore, the flame distribution, or flame area, increased markedly at the low oxygen concentration of 10.5%, and much of the flame could be seen near the chamber wall. The difference between the high and low ambient temperatures is that at 1000 K the soot emissions increased even though the natural luminosity decreased with declining oxygen concentration, as shown in Figure 13, whereas at 800 K the natural luminosity and soot emissions decreased simultaneously, as shown in Figure 14 (natural flame luminosity and soot distribution for soybean biodiesel at 800 K ambient temperature [68]). Further analysis showed that the changes in oxygen concentration altered the soot formation and oxidation rates and thus resulted in different soot emissions. Based on the above reviews of natural luminosity in LTC, the combustion flame with the larger distribution lies closer to the cylinder wall, which means that the flame lift-off is larger than in conventional diesel combustion. With decreasing oxygen concentration, or with increasing EGR rate, the natural luminosity decreases monotonically, but the soot emissions first increase and then decrease after reaching a peak value. The studies reviewed above rely on direct imaging of chemiluminescence or natural luminosity. Although direct images are easy to acquire, some combustion intermediate species cannot be measured effectively this way. With laser diagnostics, specific species can be captured by tuning the laser wavelength. Therefore, laser diagnostics of HCCI and LTC combustion species are reviewed in this section. Laser Diagnostic Imaging on Chemical Reaction. Collin et al. [76] simultaneously measured OH and formaldehyde LIF on an HCCI engine using two laser sources at wavelengths of 283 and 355 nm, with two ICCD cameras collecting the LIF signals. A blend of isooctane and n-heptane was used as the test fuel and was injected in the intake port, and the compression ratio of the HCCI engine was set to 12. The width of the laser sheet was 40 mm, nearly half the cylinder bore. Results showed that formaldehyde could be captured at the start of the low temperature reactions, as shown in Figures 15 and 16. As the combustion reaction progressed, more formaldehyde was detected in the cylinder, and formaldehyde filled the entire visible area after the low temperature reactions ended. At the start of the high temperature reactions, holes appeared in the otherwise homogeneous formaldehyde signal, which demonstrated that formaldehyde was being consumed as combustion progressed. At about 6 °CA ATDC, OH-LIF was first captured, and the OH-LIF could only be observed in regions where formaldehyde was absent. Over a relatively long period of about 9 crank-angle degrees, LIF signals of OH and formaldehyde were captured simultaneously, but never in the same regions for these two intermediate species.
The OH-LIF intensity lagged the rate of heat release (RoHR) by about 8 crank-angle degrees, and the maximum OH intensity was captured once most of the fuel had been consumed, at about 15 °CA ATDC, close to the in-cylinder peak temperature. Therefore, the autoignition and combustion processes of HCCI can be tracked by visualizing the distributions of formaldehyde and OH radicals. Formaldehyde is formed through low temperature oxidation in an early phase of the ignition process and is then consumed later in the combustion process. Formaldehyde is therefore an indicator of the autoignition of low temperature heat release in an HCCI engine, and it also marks regions with low temperature reactions. The OH radical, by contrast, is formed in high-temperature flame regions, and there is a strong relationship between maximum combustion temperatures and maximum OH concentrations. Särner et al. [77] simultaneously acquired images of formaldehyde-LIF and fuel-tracer LIF in a direct-injection HCCI engine. A blend of n-heptane and isooctane was used as fuel, and toluene was added as the fluorescent tracer. The fuel-tracer LIF was excited by a Nd:YAG laser at a wavelength of 266 nm, and the fluorescence was captured by an ICCD camera in the spectral region of 270-320 nm. The formaldehyde-LIF was excited by a second Nd:YAG laser at 355 nm, and the fluorescence was captured by a second ICCD camera in the spectral region of 395-500 nm. An early injection timing (−250 °CA ATDC) was used to form a homogeneous charge, and the distributions of fuel-tracer and formaldehyde-LIF were quite homogeneous before the formaldehyde was consumed at the start of the high temperature reactions, as shown in Figure 17. For a late timing (−35 °CA ATDC), however, a stratified charge was formed and the distributions of fuel-tracer and formaldehyde-LIF were inhomogeneous in the visible area, as shown in Figure 18. Images from both early and late injection showed that the toluene and formaldehyde LIF signals have very similar distributions. That is to say, for fuels with higher boiling points, for which no suitable tracer is available, formaldehyde-LIF is a good alternative to fuel-tracer LIF. Zhao et al. [78] investigated the formaldehyde-LIF distribution in the HCCI combustion process with different primary reference fuels (PRFs). They found that formaldehyde formation was mainly affected by the charge temperature, while the fuel concentration had less effect. Even though the PRFs had different isooctane ratios, all fuels had formaldehyde formation timings similar to that of pure n-heptane, which means that the addition of isooctane did not appreciably influence the start of the low temperature reactions. Kashdan et al. [29] investigated the late-injection diesel-fuel HCCI combustion process at 45% EGR dilution. They found that formaldehyde-LIF images could be captured earlier than chemiluminescence in the early stages of the cool flame. Similar to the homogeneous conditions, formaldehyde was consumed quickly at the start of the high temperature reactions and was subsequently replaced by the appearance of OH-LIF. Because the late injection resulted in some locally high equivalence-ratio regions, soot precursors were also captured, demonstrated by the strong PAH fluorescence.
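A simple way to quantify the observation that OH-LIF and formaldehyde-LIF never occupy the same regions is to threshold the two registered PLIF images and compute their spatial overlap. The sketch below is a hypothetical illustration of such an analysis; the thresholds, image sizes, and overlap metric are assumptions rather than the procedure used in the cited studies.

```python
import numpy as np

def plif_overlap_fraction(oh_img, ch2o_img, oh_thresh, ch2o_thresh):
    """Fraction of signal-bearing pixels where OH-LIF and formaldehyde-LIF coexist.

    Both images are assumed background-corrected and spatially registered
    (same field of view and resolution).
    """
    oh_mask = oh_img > oh_thresh
    ch2o_mask = ch2o_img > ch2o_thresh
    both = np.logical_and(oh_mask, ch2o_mask).sum()
    either = np.logical_or(oh_mask, ch2o_mask).sum()
    return both / either if either else 0.0

# Synthetic example: OH fills the left half, formaldehyde the right half -> no overlap.
oh = np.zeros((64, 64)); oh[:, :32] = 1.0
ch2o = np.zeros((64, 64)); ch2o[:, 32:] = 1.0
print(plif_overlap_fraction(oh, ch2o, 0.5, 0.5))  # 0.0
```

An overlap fraction near zero throughout the image sequence would be consistent with formaldehyde marking unreacted low-temperature-reaction regions and OH marking fully reacted high-temperature regions.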
In his further study [79], they investigated the effects of split injection and EGR rates on HCCI combustion.They found that the start of formaldehyde-LIF signals was not affected by EGR rates, but the high temperature heat release was advanced with the decrease of EGR rates and ultimately reduced the formaldehyde lifetime and consequently increased the inhomogeneous state in the cylinder.As split injection was used, formaldehyde-LIF showed locally rich distribution like the reference of 73, which demonstrated that the split injection resulted in larger charge stratification.Furthermore, the lifetime of formaldehyde-LIF was prolonged and the whole combustion duration was also prolonged. Hildingsson et al. [80] investigated formaldehyde-and OH-LIF on a light duty diesel engine with different injection strategies of port-injection HCCI, direct-injection HCCI, and UNIBUS.The formaldehyde formation always began at about 20-25 ∘ CA BTDC no matter what injection strategies were used.But the intensity of formaldehyde-LIF was very fast for port-injection HCCI compared to UNIBUS and late-injection HCCI.This should be due to the fact that port-injection can supply more homogeneous charge than that of direct-injection HCCI and UNIBUS, and thus the whole chemical reaction rates are higher.Formaldehyde-LIF lifetime in the UNIBUS injection strategy was longer than that of port-or direct-HCCI because formaldehyde was formed from the dual injections of the fuel.Berntsson et al. [81,82] investigated the effects of sparkassisted stratified charge HCCI combustion processes.LIF diagnostics on fuel-tracer, formaldehyde, and OH were conducted on an optical single-cylinder direct-injection SI engine with negative valve overlap (NVO) and low lift to increase the thermal atmosphere to ensure the stable HCCI combustion.They found that the charge inhomogeneity was formed in the cylinder and the fuel injection timing and spark-assisted ignition timings were the primary parameters to affect the HCCI combustion phasing.The hightemperature reactions were influenced by injection timings and spark-assisted ignition timings, indicating different amounts of OH-LIF signals.Based on NVO, spark-assisted ignition, and charge stratification, HCCI combustion phasing could be effectively controlled and the operating range could be extended to lower and higher engine loads. Musculus [19] investigated the OH-LIF and chemiluminescence of low temperature combustion at the injection timing of −22 ∘ CA ATDC.He found that a distinct cool flame could be captured and overlapped with the liquid fuel spray, which would increase the rate of fuel vaporization.Compared to conventional diesel combustion, the OH-LIF distributions were different.For conventional diesel combustion, OH radicles could only be captured at the periphery of the diesel jet with a thin sheet structure.However, for LTC conditions, OH radicles could be detected throughout the jet cross section, which demonstrated that there was more complete mixing between liquid jet and ambient air.Furthermore, once autoignition occurred, OH radicle could be detected with broadening distributions, which demonstrated that the LTC process should be the volumetric autoignition and combustion, rather than flame propagation in conventional diesel combustion. 
Above studies show that the formaldehyde and OH are good markers of the HCCI combustion process at low temperature and high temperature reactions, respectively.Furthermore, the distribution of OH and formaldehyde is never in the same regions even if both of them can be detected simultaneously at a relatively long period.The timing of formaldehyde formation is unaffected by the EGR level, but the formaldehyde lifetime and the degree of homogeneity and subsequent high temperature ignition are influenced by EGR level.For a given EGR rate, a split injection strategy results in the charge stratification and prolongs the HCHO lifetime.Furthermore, the rising rate of formaldehyde-LIF intensity is more quickly under homogeneous conditions than that of stratified conditions.OH distributions in HCCI and LTC combustion processes are more broad than that of conventional gasoline spark-ignition or diesel compressionignition, which indicates that the whole combustion should be more close to volumetric combustion rather than flame propagation.Therefore, the LIF diagnostics are a quite effective method to reveal the HCCI and LTC combustion process with high spatial distributions. Optical Diagnostics for Emissions Evolution Due to the very low emissions of NO x and soot for HCCI combustion with port-injection or early-injection due to the quite homogeneous charge, the researches on emissions are very limited.But if the fuel stratification is introduced by late direct-injection, the emissions of NO x and soot will increase.So, the study on NO x and soot formation process is necessary to reduce them in new combustion modes.In this section, optical researches on these emissions mainly focus on the NO and soot. NO x Optical Diagnostics on HCCI and LTC. The spectroscopic structure of the NO molecule permits a number of excitation detection strategies and some of them have been utilized in engines.However, all of these strategies are more or less susceptible to the interference from oxygen [83], PAH, and CO 2 [84].Also, all techniques in varying degrees are the absorption of laser and signal light mainly by hot CO 2 and H 2 O [85].Furthermore, the signal is dependent on pressure, temperature, and burned gas composition.Advantages and disadvantages of different excitation/detection strategies have been discussed extensively in a series of publications [86][87][88]. NO-LIF images have been developed and applied over the last decade in conventional CI or SI engines and GDI engines by many researchers [88][89][90][91].These researches developed the theory of the NO formation.For example, Dec and Canaan [88] investigated the NO-LIF in a conventional diesel engine and found that NO was not produced by the initial premixed combustion which was fuel-rich but began around the jet periphery just after the diffusion flame formed.Then, NO formation increased progressively and NO was still confined to the jet periphery until the jet structure started to disappear toward the end of heat releases.After that, the LIF signals could also be captured until the end of heat releases, which demonstrated that NO formation continued in hot postcombustion gases.However, in new clean combustion modes, NO emissions are very low due to the quite low combustion temperature, which restricts NO formation.Therefore, there is little research on the NO-LIF in an HCCI combustion processes. 
Zilwa and Steeper [92,93] predicted the emissions of CO 2 , CO, HC, and NO x from HCCI engines using LIF fuel-distribution measurements. The method is based on the simplifying premise that each individual fuel-air packet burns as if in a homogeneous mixture at the same equivalence ratio. The relative success of the prediction method indicated a strong correlation between in-cylinder charge distribution and engine emissions. In particular, it encouraged the formulation of ideal fuel distributions to guide the development of advanced charge-preparation strategies in HCCI and LTC modes. Soot Optical Diagnostics on HCCI and LTC. Owing to the sufficient premixed combustion, the soot emission in HCCI can be negligible. (Figure 19: OH (green, OH-PLIF) throughout the jet cross section, with soot (red, soot luminosity) only at the head of the jet [19].) However, once charge stratification is introduced in HCCI, the soot emission can no longer be neglected under some operating conditions. In this section, the focus is the soot formation in the new combustion modes, especially PCCI and LTC. Singh et al. [94] and Huestis et al. [95] investigated the soot formation and oxidation processes by two-color pyrometry under LTC conditions. Nitrogen gas was used to achieve lower oxygen concentrations, and different injection strategies, including early injection, late injection, and double injection, were tested. They all found that the soot temperatures and luminosity images of LTC were lower than those of conventional high temperature combustion. Soot temperatures measured by two-color pyrometry were close to the adiabatic flame temperatures under LTC conditions. The peak soot volume for late and double injection was about 1.5 times higher than that for early injection. Under LTC conditions, there was enough time for the diesel fuel to penetrate and mix with the ambient air, and thus sooting combustion occurred mainly near the edge of the bowl. However, soot was formed farther upstream in the fuel jet under high temperature combustion conditions.
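Two-color pyrometry, as used in the studies above, infers a soot temperature and a KL factor from the apparent (brightness) temperatures measured at two wavelengths, commonly with the Hottel-Broughton emissivity model. The sketch below illustrates the inversion under the Wien approximation; the wavelengths, dispersion exponent, and example values are assumptions chosen for illustration rather than the calibration used in those experiments.

```python
import numpy as np
from scipy.optimize import brentq

C2 = 1.4388e-2     # second radiation constant, m*K
ALPHA = 1.39       # assumed Hottel-Broughton dispersion exponent for soot (visible range)

def _kl(true_T, apparent_T, lam):
    """KL factor implied by one apparent temperature (Wien approximation)."""
    emissivity = np.exp((C2 / lam) * (1.0 / true_T - 1.0 / apparent_T))
    return -((lam * 1e6) ** ALPHA) * np.log(1.0 - emissivity)

def two_color_temperature(Ta1, Ta2, lam1=550e-9, lam2=650e-9):
    """Soot temperature and KL that reproduce both measured apparent temperatures."""
    f = lambda T: _kl(T, Ta1, lam1) - _kl(T, Ta2, lam2)
    T = brentq(f, max(Ta1, Ta2) + 1.0, 4000.0)   # true T lies above the apparent T
    return T, _kl(T, Ta1, lam1)

def apparent_temperature(true_T, KL, lam):
    """Forward model: brightness temperature at one wavelength for a given T and KL."""
    eps = 1.0 - np.exp(-KL / (lam * 1e6) ** ALPHA)
    return 1.0 / (1.0 / true_T - (lam / C2) * np.log(eps))

# Round trip: generate apparent temperatures for T = 2200 K, KL = 0.5, then invert them.
Ta1 = apparent_temperature(2200.0, 0.5, 550e-9)
Ta2 = apparent_temperature(2200.0, 0.5, 650e-9)
print(two_color_temperature(Ta1, Ta2))   # ~ (2200.0, 0.5)
```

The measured quantity in an engine is a calibrated image intensity rather than a temperature, so a real implementation first converts counts to apparent temperature with a tungsten-lamp or blackbody calibration.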
Musculus [19] investigated the soot luminosity and soot laser-induced incandescence (LII) of low temperature combustion at an injection timing of −22 °CA ATDC. He found that soot formation was captured only in regions without OH radicals, so soot and OH should not lie in the same regions. Soot-LII and OH-LIF in conventional diesel combustion have shown that OH radicals could only be captured at the periphery of the diesel jet or the soot cloud, with a thin sheet structure, at the earlier combustion stage [96,97], as shown in Figure 19. Later, as combustion progressed, the OH-LIF could be captured with a broad distribution, but the soot and OH regions did not overlap spatially [98]. That is to say, OH and soot generally did not persist within the same regions. Both the soot luminosity and soot-LII images showed that soot is first observed far downstream of the spray jet, located at the head of the spray jet near the cylinder liner. As the spray jet continued to penetrate and develop in the cylinder, the soot-LII was mainly located at either "side" of the jet, in what is called the "head vortex" of the spray jet, as shown in Figure 19. Indeed, even when soot-LII could be captured upstream in the spray, it should be attributed to the impingement of sooting jets rather than to soot formed by the upstream spray jets. Therefore, the soot formation regions and distributions differ between LTC and conventional diesel combustion, where soot is formed farther upstream and throughout the jet cross section [27,99], as shown in Figure 20. Furthermore, soot was still formed in the head-vortex regions for conventional diesel combustion. Thus, it can be concluded that upstream soot formation is eliminated in the new LTC modes compared with conventional diesel combustion; the same upstream regions are marked by the white dotted circles in Figures 19 and 20. The reduction of soot formation in the head-vortex regions, however, remains a large challenge even when aiming for a relatively long premixed low temperature combustion process.
Liu et al. [68,73,74] quantitatively investigated the soot concentration by forward-illumination light extinction with a copper vapor laser under both conventional diesel combustion and LTC conditions. Meanwhile, soot models were improved to understand the soot evolution [75,100]. They found that, compared with a 21% oxygen concentration, the rates of both soot formation and oxidation increased simultaneously at 18% oxygen; however, the higher soot formation rate resulted in a higher soot mass during the combustion process. At 15% oxygen concentration, the rates of both soot formation and oxidation were reduced simultaneously; however, the soot mass in the combustion process increased further, which should be caused by the suppressed soot oxidation rates. With a further decrease in oxygen concentration, soot formation was suppressed dramatically and thus the soot emissions were reduced. At 1000 K, the soot mass increased with declining oxygen concentration, which should derive from the enlarged regions of high equivalence ratio and the increased formation of acetylene and soot precursors at lower ambient oxygen concentration. At 800 K ambient temperature, however, the soot mass decreased with declining oxygen concentration, which should be caused by reduced regions of high equivalence ratio and by reduced formation of acetylene and soot precursors. The soot distributions are shown in Figures 13 and 14. Therefore, the authors concluded that the transition in soot formation from 1000 K to 800 K should be the factor responsible for the different soot emissions caused by ambient oxygen dilution in conventional and low-temperature flames. Similar studies on ambient temperatures and oxygen concentrations have also been conducted by Zhang et al. [101,102] using two-color pyrometry and soot luminosity. These optical diagnostics have provided the distribution and mass concentration of soot emissions in LTC modes. Unlike conventional diesel combustion, which forms soot just downstream of the liquid spray and throughout the jet cross section, soot formation in LTC occurs much farther downstream of the liquid spray and only at the head of the jet, in the head vortex or near the edge of the bowl. Furthermore, even when the combustion temperatures are not low, as shown in Figure 13, the soot distributions are still concentrated farther downstream of the liquid spray and near the chamber wall regions. Therefore, it can be concluded that these specific soot distributions are caused by the quite low oxygen concentrations.
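The forward-illumination light-extinction measurement above is based on Beer-Lambert attenuation: the optical thickness KL = ln(I0/I) along the line of sight is converted to a path-averaged soot volume fraction using a dimensionless extinction coefficient. The minimal sketch below shows that conversion; the extinction coefficient and the example numbers are illustrative assumptions, not the values used by Liu et al.

```python
import numpy as np

def soot_volume_fraction(I_trans, I_ref, path_length, wavelength=511e-9, k_e=7.0):
    """Path-averaged soot volume fraction from one extinction measurement.

    I_trans     : transmitted intensity through the sooting region
    I_ref       : reference intensity with no soot in the path
    path_length : optical path length through the soot cloud, in m
    wavelength  : illumination wavelength, in m (511 nm green line of a copper vapor laser)
    k_e         : assumed dimensionless extinction coefficient (literature values ~5-9)
    """
    optical_thickness = np.log(I_ref / I_trans)      # KL = ln(I0/I)
    return wavelength * optical_thickness / (k_e * path_length)

# Example: 60% transmission over a 20 mm path at 511 nm.
fv = soot_volume_fraction(I_trans=0.6, I_ref=1.0, path_length=0.02)
print(f"{fv * 1e6:.2f} ppm")   # ~1.86 ppm
```

Uncertainty in k_e is usually the dominant error source, which is why such measurements are often reported as relative soot concentrations rather than absolute volume fractions.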
Unburned Hydrocarbons and CO Optical Diagnostics on HCCI and LTC. Although HCCI and LTC can achieve very low emissions of NO x and soot, they typically have increased emissions of unburned hydrocarbons (UHC) and CO. Musculus et al. [4] investigated overmixing and unburned hydrocarbon emissions under LTC conditions on a heavy-duty optical diesel engine. The equivalence ratio of mixtures near the injector was measured under non-combusting conditions by planar laser-Rayleigh scattering in a constant-volume combustion chamber and by LIF of a fuel tracer in an optical engine. The optical diagnostic images indicated that the transient ramp-down of the injector produced a low-momentum spray penetration at the end of injection and thus formed a fuel-lean mixture in the upstream region of the spray jet. Furthermore, the fuel-lean mixture persisted until late in the cycle. Therefore, the upstream fuel-lean mixture likely became too lean to achieve complete combustion, thus contributing to UHC emissions under LTC conditions. Ekoto et al. [103,104] and Petersen et al. [105] investigated the UHC and CO distributions on a light-duty optical diesel engine under both early- and late-injection LTC conditions. The LIF measurements of equivalence ratio, UHC, and CO all showed that most fuel accumulated in the inner bowl during the high temperature heat release, but much of it was transported into the squish volume by the reverse squish flow. The lean mixtures, combined with high heat transfer losses to the wall, then suppressed fuel oxidation in the squish regions. Therefore, the main UHC and CO distributions were captured in the squish regions. It should be noted that there are also large amounts of UHC and CO emissions in HCCI combustion processes; most studies focus on formaldehyde-LIF measurements, and there are few studies on CO distributions in HCCI. This should be because HCCI is controlled by chemical kinetics and the UHC and CO evolution can be explained well by chemical reaction mechanisms. LTC conditions, in contrast, are controlled not only by chemical kinetics but also by the mixing process between diesel fuel and air. Under LTC conditions, fuel-lean regions formed during the ignition delay period are likely a significant source of UHC and CO emissions for EGR-diluted LTC diesel engines.
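The equivalence-ratio maps referred to above are commonly obtained from fuel-tracer LIF by normalizing the tracer signal against a reference image of known mixture strength, assuming the signal is linear in fuel concentration. The sketch below shows that normalization in its simplest form; it ignores temperature and pressure corrections, and the reference value and images are hypothetical.

```python
import numpy as np

def equivalence_ratio_map(lif_img, lif_ref_img, phi_ref):
    """Pixel-wise equivalence-ratio map from fuel-tracer LIF.

    Assumes the tracer LIF signal is linear in fuel concentration and that
    temperature/pressure effects cancel against a well-mixed reference image
    acquired at a known overall equivalence ratio phi_ref.
    """
    return phi_ref * lif_img / np.mean(lif_ref_img)

# Example: a region with twice the reference signal maps to phi = 0.6 when phi_ref = 0.3.
img = np.full((4, 4), 200.0)
ref = np.full((4, 4), 100.0)
print(equivalence_ratio_map(img, ref, 0.3)[0, 0])   # 0.6
```

Quantitative studies add per-shot laser-sheet corrections and tracer photophysics models, but the lean upstream mixtures discussed above can already be identified from this kind of first-order map.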
Summary. Optical diagnostics is an effective method to understand the chemical reaction processes in homogeneous charge compression ignition and low temperature combustion modes. Based on optical diagnostics, the actual mixing, combustion, and emission processes can be observed directly. In this paper, the mixing processes with port-injection and direct-injection were reviewed first. Then, the combustion chemical reaction mechanisms were reviewed based on chemiluminescence, direct luminosity, and laser diagnostics. Finally, the evolution of pollutant emissions, including NO x , soot, UHC, and CO, was reviewed. The main conclusions are as follows. 5.1. Fuel-Air Mixing Process. Different port-injection strategies change the degree of charge homogeneity in the cylinder. Even when a high degree of homogeneity is observed in the cylinder, local inhomogeneous fluctuations caused by cycle-to-cycle variations remain. The direct-injection strategy has more advantages than port-injection for HCCI autoignition control and for extending the operating range. However, although an early direct-injection strategy helps to form a more uniform air-fuel mixture before ignition, fuel can impinge on the piston head or the cylinder liner, resulting in wall-wetting and oil dilution, which restricts the application of early injection to some extent. With dual-fuel injection, stratification of charge and fuel reactivity can be achieved flexibly, even though it requires an additional fuel tank. Combustion Chemical Reaction Processes. The HCCI combustion process can be described as follows. During the low temperature heat release, a homogeneous weak light, caused by formaldehyde chemiluminescence, can be observed throughout the chamber. During the high temperature heat release, stronger luminosity derived mainly from the CO-O continuum and OH marks the start of the high temperature reactions. Between the LTHR and HTHR, no luminosity can be captured. Both charge and thermal stratification can reduce the maximum heat release rates and pressure rise rates and thus may extend the HCCI operating range. Optical diagnostics show that combining port-injection with direct injection, or using two-stage direct injection in the cylinder, is an effective technological measure to achieve charge stratification. Changing the intake and coolant temperatures can form temperature or thermal stratification and affects the combustion chemiluminescence, but this method is very hard to implement in a real engine. A specific piston geometry will also generate different turbulence intensities and thus produce temperature inhomogeneity. For low temperature combustion, the combustion flame with the larger distribution is located near the cylinder wall, which means that the flame lift-off is larger than that of conventional diesel combustion.
Although the chemiluminescence or natural-luminosity images present a good time-resolved combustion process in HCCI and LTC, they only provide the results of lineof-sight and without presenting the spatial distributions.Therefore, laser induced fluorescence is used to give spatial distributions on combustion processes and results show that the formaldehyde and OH are good markers of the HCCI combustion process at low temperature and high temperature reactions, respectively.Furthermore, the distribution of OH and formaldehyde is never in the same regions even if both of them can be detected simultaneously at a relatively long period.The timing of formaldehyde formation is unaffected by the EGR level, but the formaldehyde lifetime and the degree of homogeneity, and subsequent high temperature ignition are influenced by EGR level.For a given EGR rate, a split injection strategy results in the charge stratification and prolongs the HCHO lifetime.Furthermore, the rising rate of formaldehyde-LIF intensity is more quickly under homogeneous conditions than that of stratified conditions.OH distributions in HCCI and LTC combustion processes are more broad than that of conventional gasoline sparkignition or diesel compression-ignition, which indicates that the whole combustion should be more close to volumetric combustion rather than flame propagation. Emission Evolution Processes. In HCCI and LTC, NO emissions are very low due to the quite low combustion temperature, which restricts NO formation.Therefore, there is little research on the NO-LIF in an HCCI and LTC combustion processes.There are little studies on soot evolution in HCCI due to the nearly zero soot emissions.In LTC conditions, the soot formation is much farther downstream of the liquid spray and only at the head of the jet, in the head vortex or near the edge of the bowl.Furthermore, even if the combustion temperature is not low, the soot distributions are still concentrated on farther downstream of liquid spray and near the chamber wall regions.Therefore, it can be concluded that soot specific distributions in LTC conditions are caused by quite low oxygen concentrations.There are also a large amount of UHC and CO emissions in HCCI combustion processes and some studies focus on the formaldehyde-LIF measurements to represent UHC distribution in the late cycle.But there are little studies on CO distributions in HCCI, which should be due to the fact that HCCI is controlled by chemical kinetics and the UHC and CO evolution can be explained well by chemical reaction mechanism.For LTC conditions, it is not only controlled by chemical kinetics but also controlled by mixed process between diesel fuel and air.Under LTC conditions, fuel-lean regions that formed during the period of ignition delays are likely a significant source of UHC and CO emissions for EGR-diluted LTC diesel engines. Future Direction. Based on previous works reviews, it can be found that there are some shortcomings in HCCI and LTC chemical reaction processes with optical diagnostics. 
Firstly, more intermediate species need to be measured. In current studies, the main measured intermediate species are formaldehyde, OH, and CO. Meanwhile, polycyclic aromatic hydrocarbons (PAHs) and H 2 O 2 have been captured in the cylinder [19,106], and there is the potential to distinguish PAHs with different numbers of rings [107], even though such studies are limited. Clearly, the more intermediate species are detected, the more detailed the combustion reaction mechanism that can be revealed. Therefore, other intermediate species, such as CH, NO, and PAHs with different ring numbers, need to be detected in the future to further understand HCCI and LTC. Secondly, high-speed and simultaneous multi-species measurements need to be improved in the future. In current studies, high-speed measurements only focus on chemiluminescence or natural luminosity, but these optical diagnostics have low spatial resolution. Laser diagnostics, on the other hand, have high spatial resolution but low temporal resolution. Therefore, combining high temporal and spatial resolution to detect the combustion process is a future development direction for HCCI and LTC. Meanwhile, optical diagnostics in HCCI and LTC need to capture more species within the same engine cycle. For example, simultaneous measurements of formaldehyde, OH, PAHs, and soot would give a more detailed and complete picture of the combustion chemical reactions. Thirdly, the combustion processes in HCCI and LTC modes are primarily controlled by chemical kinetics, and thus a large number of studies aim to propose different chemical kinetic reaction mechanisms. However, little attention has been paid to the effects of flow or turbulence on the combustion processes. For HCCI, even though it is nominally homogeneous combustion, the charge is still locally inhomogeneous; how the turbulence affects the local combustion is therefore still an open question. The same question applies to LTC conditions. Furthermore, the mixing process has a larger effect on LTC than on HCCI; accordingly, some recent studies on the effects of turbulence on combustion and emissions have been published by Wang et al. [108] and Perini et al. [109]. Clearly, more detailed measurements, especially of local turbulence, are necessary to clarify the effect of turbulence. Figure 1: The different mixing processes in the manifold [8]. Figure 3: Liquid fuel (blue) and vapor fuel perimeter (green) for naturally aspirated (a) and low-boost (b) conditions (the dashed line is the edge of the piston bowl rim) [19]. Figure 4: Liquid penetration lengths at different ambient temperatures for n-butanol and soybean biodiesel [36]. Figure 6: High-speed movie sequence of HCCI (the interval between frames as displayed is 100 µs (0.71 CAD), and the exposure time is 49 µs per frame) [56]. Figure 8: Chemiluminescence images, cylinder pressure, and rate of heat release with various piston bowl geometries at an intake temperature of 95 °C and a coolant temperature of 85 °C; the numbers below each image are the crank angle and light intensity [63]. Figure 9: Various piston bowl geometries with the same compression ratio, squish distance, and visible area [63]. Figure 15: Single-shot images from the onset of LTR combustion until the end of the main combustion; formaldehyde is shown in green and OH is shown in red [76].
Figure 17: Simultaneous images of formaldehyde and toluene at a start of injection of −250 °CA ATDC, giving the fuel sufficient time to mix with air and form a very homogeneous mixture before ignition [77]. Figure 18: Simultaneous images of formaldehyde and toluene at a start of injection of −35 °CA ATDC, forming a stratified mixture before ignition [77].
12,629.4
2015-08-03T00:00:00.000
[ "Physics" ]
Activation of Protein Kinase C Triggers Its Ubiquitination and Degradation ABSTRACT Treatment of cells with tumor-promoting phorbol esters results in the activation but then depletion of phorbol ester-responsive protein kinase C (PKC) isoforms. The ubiquitin-proteasome pathway has been implicated in regulating the levels of many cellular proteins, including those involved in cell cycle control. We report here that in 3Y1 rat fibroblasts, proteasome inhibitors prevent the depletion of PKC isoforms α, δ, and ɛ in response to the tumor-promoting phorbol ester 12-O-tetradecanoylphorbol-13-acetate (TPA). Proteasome inhibitors also blocked the tumor-promoting effects of TPA on 3Y1 cells overexpressing c-Src, which results from the depletion of PKC δ. Consistent with the involvement of the ubiquitin-proteasome pathway in the degradation of PKC isoforms, ubiquitinated PKC α, δ, and ɛ were detected within 30 min of TPA treatment. Diacylglycerol, the physiological activator of PKC, also stimulated ubiquitination and degradation of PKC, suggesting that ubiquitination is a physiological response to PKC activation. Compounds that inhibit activation of PKC prevented both TPA- and diacylglycerol-induced PKC depletion and ubiquitination. Moreover, a kinase-dead ATP-binding mutant of PKC α could not be depleted by TPA treatment. These data are consistent with a suicide model whereby activation of PKC triggers its own degradation via the ubiquitin-proteasome pathway. Tumor promotion by phorbol esters involves the selective amplification of cells previously mutated in an appropriate growth-stimulatory gene (3,17). Phorbol esters exert their effects on the protein kinase C (PKC) family of genes, which consists of genes that encode at least nine distinct isoforms that are responsive to tumor-promoting phorbol esters (9). Phorbol esters first activate phorbol ester-responsive PKC isoforms, but upon prolonged treatment, these isoforms are proteolytically degraded (16). Using a cell culture model system in which cells overexpressing c-Src were transformed by phorbol ester treatment, we recently demonstrated that the tumor-promoting effect of the phorbol ester 12-O-tetradecanoylphorbol-13-acetate (TPA) on these cells was due to the depletion of PKC ␦ (7). These data suggested that PKC ␦ may function as a tumor suppressor. Consistent with this hypothesis, PKC ␦ was inactivated by tyrosine phosphorylation in cells transformed by v-Src (19) and v-Ras (2). Thus, regulation of PKC ␦ at the level of activity and expression may be a very important cell growth control mechanism. PKC ␣ has been reported to become ubiquitinated in response to bryostatin 1, an activator of PKC that prevents tumor promotion in mouse skin by TPA (6). The ubiquitin-proteasome pathway is a nonlysosomal degradation system that controls the timed destruction of cell cycle-regulatory proteins, including the tumor suppressor p53; the cyclin-dependent kinase inhibitor p27; the cyclins; the oncogene products c-Myc, c-Jun, and c-Fos; and the transcription factors NF-B and E2F (reviewed in reference 13). This pathway involves the covalent tagging of proteins with ubiquitin, followed by proteasomemediated degradation of tagged proteins. Conjugation of ubiquitin to substrate proteins requires three enzymes: a ubiquitinactivating enzyme (E1), a ubiquitin-conjugating enzyme (E2), and a ubiquitin ligase (E3). 
Both the E2 and E3 proteins belong to large families of proteins, and it is believed that different combinations of E2 proteins with different E3 ligases define a high substrate specificity. In this study, we have investigated the role of the ubiquitin-proteasome pathway in the downregulation of PKC isoforms in response to the tumorpromoting phorbol ester TPA. MATERIALS AND METHODS Cells and cell culture conditions. Rat 3Y1 cells or rat 3Y1 cells expressing either v-Src or c-Src were maintained in Dulbecco's modified Eagle medium supplemented with 10% bovine calf serum (HyClone). Cell cultures were made quiescent by growing them to confluence and then replacing the medium with fresh medium containing 0.5% newborn calf serum for 1 day. Cells expressing the kinase-dead PKC ␣ were generated as described previously (7). The kinase-dead PKC ␣ clone was generated by a mutation to the ATP-binding site as described previously (15). Materials. The PKC inhibitors staurosporine, bisindolylmaleimide II, rottlerin, and Go6976 were obtained from Calbiochem. Monoclonal antibodies for PKC ␣, ε, and were obtained from Transduction Laboratories; a polyclonal antibody for PKC ␦ was obtained from Santa Cruz. A monoclonal antibody for ubiquitin was obtained from Zymed. Cell lysate preparation and subcellular fractionation. Cells grew to approximately 90% confluence in 100-mm-diameter culture dishes and were then shifted to Dulbecco's modified Eagle medium containing 0.5% serum for 24 h. Cells were washed three times with ice-cold isotonic buffer (phosphate-buffered saline, containing 136 mM NaCl, 2.6 mM KCl, 1.4 mM KH 2 PO 4 , and 4.2 mM Na 2 HPO 4 , pH 7.2). For subcellular fractionation, cells from 100-mm-diameter dishes were washed and then scraped into 2 ml of homogenization buffer (20 mM Tris-HCl [pH 7.5], 5 mM NaCl, 1 mM EDTA, 5 mM MgCl 2 , 2 mM dithiothreitol, 200 M phenylmethylsulfonyl fluoride, 10 g of aprotinin per ml, 10 g of leupeptin per ml). Cells were then disrupted with 20 strokes in a Dounce homogenizer (type B pestle), and the lysate was centrifuged at 100,000 ϫ g for 1 h. The supernatant was collected as the cytosolic fraction. The membrane pellet was suspended in the same volume of homogenization buffer with 1% Triton X-100. After incubation for 30 min at 4°C, the suspension was centrifuged at 100,000 ϫ g for 1 h. The supernatant was collected as the membrane fraction. For whole-cell lysates, cells were treated with 3 ml of homogenization buffer containing 1% Triton X-100 followed by centrifugation at 100,000 ϫ g for 1 h. The supernatant was collected and used as the whole-cell lysate. Immunoprecipitation and Western blot analysis. Extraction of proteins from cultured cells was performed as previously described (7) with a modified buffer consisting of 50 mM Tris-HCl (pH 7.5), 1% Triton X-100, 150 mM NaCl, 1 mM dithiothreitol, 0.5 mM EDTA, 0.1 mM phenylmethylsulfonyl fluoride, leupeptin (12 mg/ml), aprotinin (20 g/ml), 100 M sodium vanadate, 100 M sodium pyrophosphate, 1 mM sodium fluoride, 10 mM ethylmethylmaleimide, and 50 mM hemin. Cell extracts were clarified by centrifugation at 12,000 rpm, and the supernatants (1,500 g of protein/ml) were subjected to immunoprecipitation with anti-PKC ␦, ␣, and ε antibodies. After overnight incubation at 4°C, protein A-agarose beads were added and left for an additional 3 h. Immunocomplexes were then subjected to Western blot analysis as described previously (7). 
Western blot analysis with antiubiquitin antibody was performed with modifications described by Avantaggiati et al. (1). Proteasome inhibitors block depletion of PKC isoforms. To investigate whether the ubiquitin-proteasome pathway is involved in the downregulation of PKC in response to phorbol esters, we first examined the effect of proteasome inhibitors on TPA-induced PKC depletion. MG101 and MG132, which inhibit proteasome function (11,12), prevented the TPA-induced depletion of the α, δ, and ε PKC isoforms, the only TPA-responsive isoforms present in these cells (Fig. 1). E64, which shares with MG101 and MG132 the ability to inhibit calpain protease, but not the proteasome, had no effect on TPA-induced PKC depletion. We also examined the effect of these compounds on PKC ζ, a PKC isoform that is expressed in these cells but is not responsive to phorbol esters (9). As shown in Fig. 1, neither MG101 nor MG132 had any effect on PKC ζ. These data implicate the ubiquitin-proteasome pathway in the phorbol ester-induced depletion of PKC. PKC isoforms become ubiquitinated upon TPA treatment. The data in Fig. 1 demonstrate that compounds which inhibit proteasome function inhibit TPA-induced downregulation of PKC. Therefore, it is predicted that the affected PKC isoforms should become ubiquitinated in response to TPA. In Fig. 1, it was also observed that the anti-PKC δ antibody recognized several higher-molecular-weight species within 30 min after TPA treatment. The appearance of these higher-molecular-weight species of PKC δ is consistent with the rapid ubiquitination of PKC δ in response to TPA. To investigate directly whether PKC isoforms were being ubiquitinated in response to TPA, we performed Western blot analysis of PKC isoform immunoprecipitations with antiubiquitin antibody. As shown in Fig. 2, ubiquitination of PKC α, δ, and ε, but not PKC ζ, was detected within 30 min of TPA treatment. By 6 h, the ubiquitinated PKC isoforms were no longer detectable. However, when MG101 was used to inhibit the proteasome, the ubiquitinated isoforms were still present 6 h after TPA treatment (Fig. 2). Interestingly, 24 h of treatment with MG101 alone resulted in an accumulation of ubiquitinated forms, to a limited extent for PKC α and substantially for PKC ε (Fig. 2), suggesting that ubiquitination may occur in response to physiological stimuli as well as TPA. These data demonstrate that PKC isoforms α, δ, and ε rapidly become ubiquitinated in response to TPA treatment and that their disappearance is blocked by inhibition of the proteasome. Degradation and ubiquitination of PKC are dependent upon PKC kinase activity. To begin to investigate the mechanism for activation of ubiquitination and proteasome degradation, we asked whether the kinase activity of PKC was important for degradation. (Fig. 1 legend: Proteasome inhibitors prevent TPA-induced depletion of PKC. 3Y1 cells overexpressing c-Src were treated with TPA (400 nM) for the indicated times, and PKC depletion was monitored by Western blot analysis as described previously (7). The effect of MG101, MG132, or E64 (all at 50 µM) was determined by adding these compounds 30 min prior to the addition of TPA as shown. The levels of PKC δ, α, ε, and ζ were determined by using antibodies specific for these isoforms.) We first investigated the effect of PKC inhibitors on the TPA-induced PKC downregulation and ubiquitination. In Fig. 3A, it is shown that the PKC inhibitors staurosporine and bisindolylmaleimide II prevented downregulation of PKC isoforms α, δ, and ε.
Interestingly, Go6976, which specifically inhibits PKC ␣ (8), prevented TPA-induced downregulation of the ␣ isoform only, and rottlerin, a more specific inhibitor of PKC ␦ (4), prevented TPA-induced downregulation of the ␦ isoform only. We also investigated the effect of the PKC inhibitors on the ubiquitination of PKC isoforms ␣ and ␦, and as expected, the PKC inhibitors also prevented TPA-induced ubiquitination of these PKC isoforms with the same specificity observed for inhibition of downregulation (Fig. 3B). The PKC inhibitors did not inhibit translocation to the membrane of the PKC isoforms (Fig. 3C). Thus, the effects observed in Fig. 3A and B were not due to a lack of membrane association. These data indicate that TPA-induced downregula-tion and ubiquitination of the PKC isoforms require an active kinase activity. Consistent with a requirement for activation of PKC for downregulation, the inactive phorbol ester 4␣-phorbol 12,13-didecanoate, which does not activate PKC (14), did not lead to the downregulation of PKC (Fig. 3D), nor did it result in the ubiquitination of PKC ␦ (Fig. 3E). If PKC kinase activity is required for downregulation, then a kinase-dead PKC mutant should be resistant to downregulation in response to TPA. An ATP-binding site mutant of PKC ␣ (15) that was kinase dead was introduced into the c-Srcoverexpressing cell line, and the ability to downregulate PKC ␣ with TPA was examined. As shown in Fig. 4A, this PKC ␣ mutant was completely resistant to downregulation by TPA. Since the kinase-dead PKC ␣ mutant could still be stimulated to associate with the membrane in response to TPA (Fig. 4B), the lack of degradation was not due to lack of membrane localization. Since PKC ␦ and ε were both activated and downregulated in these cells, activation of the ubiquitin-proteasome pathway by these PKC isoforms was apparently specific for the activated isoforms only. These data further support the conclusion that activation of the kinase activity of PKC is necessary for ubiquitination and downregulation. PKC is ubiquitinated and downregulated in response to DG in a proteasome-and kinase-dependent mechanism. Phorbol esters bind to PKC at the site that binds the physiological activator diacylglycerol (DG) (9). As shown in Fig. 2, the proteasome inhibitor MG101 stimulated an increase in the ubiquitinated PKC isoforms ␣ and ε, suggesting that ubiquitination is a physiological response and not an artifact of phorbol ester treatment. We therefore wished to investigate whether ubiquitination and downregulation of PKC occur in response to DG. As shown in Fig. 5, the ␣ and ␦ isoforms and to a lesser extent the ε isoform were all downregulated in response to the DG dioctoylglycerol (DiC8). This downregulation was sensitive to both proteasome and PKC inhibitors (Fig. 5A). The PKC ␣-specific Go6976 prevented downregulation of the ␣ isoform specifically. We also wished to determine whether DG stimulated ubiquitination of PKC isoforms. We added DiC8 to the 3Y1 cells and examined ubiquitination as in Fig. 2. In Fig. 5B, it is shown that DiC8 stimulated ubiquitination of PKC ␦. The ubiquitination of PKC ␦ was inhibited by the PKC inhibitors staurosporine, bisindolylmaleimide II, and rottlerin but not by the proteasome inhibitor MG101 or the PKC ␣ inhibitor Go6976 (Fig. 5B). These data suggest that PKC isoforms become ubiquitinated and downregulated by the physiological stimulus of DG as well as by the tumor-promoting stimulus of TPA and that downregulation is dependent upon an active kinase. 
TPA-induced transformation of 3Y1 cells overexpressing c-Src is blocked by proteasome inhibitors. In cells overexpressing c-Src, TPA treatment causes the appearance of transformation that is due to the depletion of PKC ␦ (7). We therefore investigated whether inhibitors of the ubiquitin-proteasome pathway could prevent the transformed phenotype induced by TPA in the c-Src-overexpressing cells by preventing the depletion of PKC ␦. As shown in Fig. 6A, the proteasome-specific inhibitor MG101 prevented the morphological transformation of the c-Src-expressing cells induced by TPA, whereas the nonspecific protease inhibitor E64 did not prevent the morphological transformation induced by TPA. The proteasome inhibitors had no effect on the transformed phenotype induced by v-Src (Fig. 6A). The ability of MG101 to prevent the TPAinduced morphological transformation was not likely due to any effects that proteasome inhibition have upon cell cycle progression (10), since aphidicolin, which blocks cells at the G 1 /S boundary of the cell cycle (5), had no effect on the TPA-induced morphological transformation (data not shown). In addition, MG101 had no effect on the translocation of the PKC isoforms induced by TPA (Fig. 6B). Thus, the effect observed in Fig. 6A is not due to the inability to translocate PKC isoforms to the membrane. These data suggest that PKC ␦ is downregulated by the ubiquitin-proteasome pathway and that this pathway is critical for the TPA-induced tumor promotion, as reported previously (7). DISCUSSION In this report, we have shown that downregulation of PKC in response to tumor-promoting phorbol esters is via the ubiqutin-proteasome pathway. In response to TPA, PKC isoforms ␣, ␦, and ε all became ubiquitinated within 30 min and were degraded within 6 h in 3Y1 rat fibroblasts. Proteasome inhibitors prevented TPA-induced PKC downregulation but not ubiquitination of the PKC isoforms. Ubiquitination and downregulation of PKC isoforms were dependent on an active PKC kinase. We previously demonstrated that the downregulation of PKC ␦ was responsible for the tumor-promoting effects of TPA on 3Y1 cells overexpressing c-Src (7). Consistent with PKC ␦ downregulation being important for the tumorpromoting effects observed previously, the proteasome inhibitor MG101, which prevented PKC ␦ downregulation in response to TPA, also prevented the TPA-induced transformation of the c-Src-overexpressing cells. Thus, the data presented here implicate the ubiquitin-proteasome pathway in phorbol ester-induced tumor promotion. Interestingly, treatment of 3Y1 cells with MG101 induced the appearance of PKC polyubiquitinated forms, especially for PKC ε, which tends to be the most constitutively activated isoform in these cells (18). This suggested that ubiquitination of PKC is a physiological response and is not unique to the response to phorbol esters. Consistent with this hypothesis, ubiquitination and downregulation were observed in response to an exogenously provided DG. DG was less potent than TPA at inducing ubiquitination and downregulation of PKC; however, this was most likely because DG can be metabolically converted to other lipids such as phosphatidic acid and monoacylglycerol. The data presented here do not demonstrate the complete mechanism of activation of the ubiquitin-proteasome pathway; however, it is apparently regulated at the level of ubiquitination. Of special interest is the requirement for the kinase activity of the PKC isoforms. 
Compounds that inhibit activation of PKC prevented PKC downregulation and ubiquitination in response to TPA. Additionally, a kinase-dead PKC α was completely resistant to TPA-induced downregulation. Since phorbol esters still lead to the activation and downregulation of PKC isoforms δ and ε in cells expressing the kinase-dead PKC α, ubiquitination is apparently isoform specific, and the activation of one PKC isoform does not stimulate ubiquitination and downregulation of other, inactive PKC isoforms. Moreover, since the cells expressing the kinase-dead PKC α likely still express wild-type PKC α, which would be activated by TPA, it is not likely that PKC α activates a PKC α-specific ubiquitination system, because this would result in the degradation of the kinase-dead PKC α. Since the defect in the kinase-dead PKC α mutant that was not degraded in response to TPA was in the ATP-binding site, activation of the ubiquitin-conjugating system is likely stimulated by a conformational change in PKC that involves ATP binding or hydrolysis. This suggests a suicide model for regulation of PKC whereby, upon activation, PKC becomes ubiquitinated and thereby targeted for degradation in a negative feedback control mechanism.
3,983
1998-02-01T00:00:00.000
[ "Biology", "Computer Science" ]
Chemosynthetic alphaproteobacterial diazotrophs reside in deep-sea cold-seep bottom waters ABSTRACT Nitrogen (N)-fixing organisms, also known as diazotrophs, play a crucial role in N-limited ecosystems by controlling the production of bioavailable N. The carbon-dominated cold-seep ecosystems are inherently N-limited, making them hotspots of N fixation. However, the knowledge of diazotrophs in cold-seep ecosystems is limited compared to other marine ecosystems. In this study, we used multi-omics to investigate the diversity and catabolism of diazotrophs in deep-sea cold-seep bottom waters. Our findings showed that the relative abundance of diazotrophs in the bacterial community reached its highest level in the cold-seep bottom waters compared to the cold-seep upper waters and non-seep bottom waters. Remarkably, more than 98% of metatranscriptomic reads aligned on diazotrophs in cold-seep bottom waters belonged to the genus Sagittula, an alphaproteobacterium. Its metagenome-assembled genome, named Seep-BW-D1, contained catalytic genes (nifHDK) for nitrogen fixation, and the nifH gene was actively transcribed in situ. Seep-BW-D1 also exhibited chemosynthetic capability to oxidize C1 compounds (methanol, formaldehyde, and formate) and thiosulfate (S2O32−). In addition, we observed abundant transcripts mapped to genes involved in the transport systems for acetate, spermidine/putrescine, and pectin oligomers, suggesting that Seep-BW-D1 can utilize organics from the intermediates synthesized by methane-oxidizing microorganisms, decaying tissues from cold-seep benthic animals, and refractory pectin derived from upper photosynthetic ecosystems. Overall, our study corroborates that carbon-dominated cold-seep bottom waters select for diazotrophs and reveals the catabolism of a novel chemosynthetic alphaproteobacterial diazotroph in cold-seep bottom waters. IMPORTANCE Bioavailable nitrogen (N) is a crucial element for cellular growth and division, and its production is controlled by diazotrophs. Marine diazotrophs contribute to nearly half of the global fixed N and perform N fixation in various marine ecosystems. While previous studies mainly focused on diazotrophs in the sunlit ocean and oxygen minimum zones, recent research has recognized cold-seep ecosystems as overlooked N-fixing hotspots because the seeping fluids in cold-seep ecosystems introduce abundant bioavailable carbon but little bioavailable N, making most cold seeps inherently N-limited. With thousands of cold-seep ecosystems detected at continental margins worldwide in the past decades, the significant role of cold seeps in marine N biogeochemical cycling is emphasized. However, the diazotrophs in cold-seep bottom waters remain poorly understood. Through multi-omics, this study identified a novel alphaproteobacterial chemoheterotroph belonging to Sagittula as one of the most active diazotrophs residing in cold-seep bottom waters and revealed its catabolism. 
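Relative-abundance statements like the 98% figure above come from counting metagenomic or metatranscriptomic reads assigned to each taxon and normalizing by the total for the category of interest. The sketch below shows that bookkeeping on a toy table; the taxa, sample names, and read counts are invented for illustration and are not the study's data.

```python
import pandas as pd

# Hypothetical read counts assigned to diazotroph taxa in two samples.
reads = pd.DataFrame({
    "sample": ["HS", "HS", "HS", "MS", "MS"],
    "taxon": ["Sagittula", "ANME", "SRB", "Sagittula", "ANME"],
    "mapped_reads": [98500, 900, 600, 51200, 1800],
})

# Fraction of diazotroph-assigned reads contributed by each taxon, per sample.
totals = reads.groupby("sample")["mapped_reads"].transform("sum")
reads["fraction"] = reads["mapped_reads"] / totals
print(reads.pivot(index="taxon", columns="sample", values="fraction").round(3))
```

In real pipelines the read counts are usually normalized further (e.g., by gene or genome length) before comparing taxa, which this toy example omits.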
Bioavailable nitrogen (N), an essential element for cellular growth and division, is critical for biological productivity in marine ecosystems (1, 2). The production of bioavailable N is controlled by N-fixing organisms (i.e., diazotrophs) through the reduction of dinitrogen gas (N2) to ammonia (NH3), and the key enzymes catalyzing this process are nitrogenases encoded by the nifH and nifDK genes (3). Marine diazotrophs contribute to nearly half of the global fixed N (4) and perform N fixation in various marine ecosystems. While the ecological and biogeochemical importance of diazotrophs in the sunlit ocean and oxygen minimum zones (OMZs; the water columns of several restricted regions of the ocean basins where oxygen concentrations are low) has been well documented (5-7), recent studies have revealed the phylogenetic and catabolic diversity of diazotrophs in previously overlooked environments such as the deep-sea abyssal plain (8) and cold-seep sediments (9).

Cold seeps are extreme deep-sea environments where methane-rich fluids from subsurface reservoirs leak to the seafloor due to gravitational and tectonic forces. These seeping fluids introduce abundant bioavailable carbon (i.e., methane) but little bioavailable N into cold-seep ecosystems, resulting in most cold seeps being inherently N-limited and making them hotspots of N fixation (10). Over the past few decades, thousands of cold-seep systems have been detected at continental margins worldwide (11), highlighting the significant role of cold seeps in marine N biogeochemical cycling.

The key diazotrophs in cold-seep sediments are anaerobic methanotrophic archaea (ANME) and their sulfate-reducing bacterial partners (SRB) (9, 12). ANME-SRB consortia perform N fixation while anaerobically oxidizing methane and reducing sulfate. The capability of N fixation in ANME-SRB consortia has been demonstrated through NanoSIMS analysis (12, 13), and N fixation rates in cold-seep sediments are almost three times higher than those in background deep-sea sediments (14). Recently, multi-omics approaches have been used to investigate the diversity, distribution, and in situ activity of diazotrophs in cold-seep sediments (9), identifying phylogenetically diverse nitrogenase genes and expanding the diversity of cold-seep diazotrophic lineages. Although approximately 90% of the methane from deep marine sediments is consumed via anaerobic oxidation of methane (AOM) before reaching the seafloor (15), leaking methane in the water column can still reach up to 100 m above the seepage sites (16, 17). Methane seepage in bottom waters fuels free-living and symbiotic aerobic methane-consuming microbes, resulting in significantly higher benthic oxygen uptake at cold seeps than at non-seeping seafloor (11). In addition, the seepage intensity strongly impacts the community structures of benthic animals and prokaryotes (18). With the continuous input of bioavailable carbon, cold-seep bottom waters can also be N-limited environments that select for diazotrophs. This raises the question of whether, in cold seeps, the N fixation process is coupled with carbon-related chemosynthesis. However, little is known about N fixation in cold-seep bottom waters compared with other marine ecosystems.

To address this knowledge gap, we investigated the phylogenetic and functional diversity of diazotrophs in cold-seep bottom waters through metagenomic analysis. We also examined the in situ activity of diazotrophs through metatranscriptomic data.
In addition, we compared diazotroph abundance and community composition among seep sites with different seepage activities, as well as samples from the euphotic and aphotic layers of the water column above the cold seeps, to elucidate key factors controlling diazotroph distribution and identify the niches for cold-seep diazotrophs. Our study highlights that deep-sea cold-seep bottom waters are overlooked hotspots of N fixation and provides insights into the functional adaptation of diazotrophs to cold-seep bottom waters.

Sample collection and geochemical analysis

We conducted a research cruise at the Haima cold seep (16°43′N, 110°28′E) in the South China Sea using R/V Haiyangdizhi VI in May 2022. Bottom waters (~1,400 m depth) were collected from four sites: three seep sites (i.e., a high-intensity seepage [HS] site with a mussel bed and continuous bubbling of methane gas, a medium-intensity seepage [MS] site with live and dead mussels and live tubeworms, and a low-intensity seepage [LS] site with a clam bed and live tubeworms) and one control site (i.e., a non-seepage [NS] site without any cold-seep-specific benthic animals and far from the three cold-seep sites). Water and sediment samples were collected using the remotely operated underwater vehicle (ROV) "Haima." We also collected water samples using a "Sea-Bird 911" conductivity-temperature-depth (CTD; General Oceanics, Miami, FL, USA) rosette system from the euphotic (0, 50, and 100 m depth) and aphotic (600, 900, and 1,200 m depth) layers of the water column above the seep sites. For metagenomic samples, approximately 8 L of water was sequentially filtered onto 3-µm-pore and 0.22-µm-pore polycarbonate membranes (GVS, Roma, Italy) to collect particle-attached and free-living microbes, respectively. For metatranscriptomic samples, approximately 15 L of water was filtered onto 0.22-µm-pore polycarbonate membranes (GVS, Roma, Italy). Following filtration, the membranes were flash-frozen in liquid nitrogen immediately and stored at −80°C until further use.
To confirm the differences in seepage intensity among the three seep sites, we collected three push cores from each seep site using the ROV "Haima" for geochemical analysis. On board, in a cold room at 4°C, subsamples of the water-sediment interface (0-2 cm surface sediment) were separated from the push cores, and the porewater of the water-sediment interface was extracted using Rhizon samplers (Rhizosphere Research Products, Wageningen, Netherlands). The concentrations of methane and sulfide were measured using an Agilent 6850 Series II GC (Agilent, Santa Clara, CA, USA) and a SmartChem200 Wet Chemistry Analyzer (KPM Analytics, Westborough, MA, USA), respectively. The stable carbon isotopic composition of DIC (δ13C-DIC) was measured using a Delta V Advantage mass spectrometer (Thermo Fisher Scientific, Poway, CA, USA) linked to a GasBench II device (Thermo Fisher Scientific, Poway, CA, USA). The GasBench II device was equipped with a PAL GC autosampler (CTC Analytics AG, Zwingen, Switzerland) and a PoraPlotQ (30 m × 0.32 mm) GC column (Agilent, Santa Clara, CA, USA). The mass spectrometer was run at room temperature (25°C). The CO2 yielded was carried into the mass spectrometer with the aid of helium gas, and the δ13C value was measured. The helium flow was 0.5 mL/min, and the GC column was held at 70°C. For each sample, five replicates were sequentially injected, and the average value of the last three injections was recorded. The results are expressed in the standard delta (δ) notation per mil (‰). The δ13C values are reported relative to Vienna Pee Dee Belemnite (VPDB). Two carbonate standards, NBS-18 and IAEA-CO-8, were measured to determine the optimal extraction procedure.

Nucleic acid extraction and sequencing

Total DNA and RNA were extracted using DNeasy PowerWater Kits (Qiagen, Hilden, Germany) and RNeasy Plus Kits (Qiagen, Hilden, Germany), respectively, according to the manufacturer's protocols. DNA quality was measured using the Qubit dsDNA Assay Kit in a Qubit 2.0 Fluorometer (Thermo Fisher Scientific, Waltham, MA, USA). RNA quality and integrity were measured using a NanoDrop spectrophotometer (Thermo Fisher Scientific, MA, USA) and the RNA Nano 6000 Assay Kit in conjunction with the Agilent Bioanalyzer 2100 system (Agilent Technologies, CA, USA), respectively. Qualified DNA and RNA samples were assigned for metagenomic and metatranscriptomic sequencing using the NovaSeq 6000 system (Illumina, San Diego, CA, USA), and 150 bp paired-end reads were generated.

We collected 18 cold-seep bottom-water samples (four from each seepage site, six from the non-seepage site) and 36 water-column samples (12 from each seepage site) for DNA extraction (see Fig. S1 in the supplemental material). However, qualified DNA was only successfully extracted from some samples, and we eventually assigned 35 qualified DNA samples (16 from bottom waters, 12 from euphotic layers, and 7 from aphotic layers) for metagenomic sequencing (see Fig. S1 in the supplemental material). Due to the high demand for water samples and limited ROV diving opportunities, we could only collect water samples for RNA extraction at one seepage site. Therefore, only three qualified RNA samples from the MS site were assigned to metatranscriptomic sequencing.

Profiling diazotroph relative abundance and community

The nifH gene has been commonly used as a marker to assess the distribution and community composition of diazotrophs (7, 19, 20). However, recent work by Mise et al.
(21) has shown that approximately 20% of genomes containing the nifH gene lack the nifDK genes, which encode essential subunits of nitrogenases (21). This suggests that nifH alone is not necessarily a reliable indicator of diazotrophs. To address this issue, we defined a genome as a diazotroph in this study only if it harbored all three nitrogenase genes (nifHDK).

To facilitate the identification of diazotrophs, we developed a pipeline called "Diaiden" (https://github.com/jchenek/Diaiden). In this pipeline, coding sequences (CDS) of genomes are predicted using Prodigal v2.6.3 with the "-p meta" parameter (22). Then, CDS are annotated using diamond v2.1.6 (23) with parameters "--sensitive -k 1 -e 1e-100 --id 50 --query-cover 75 --subject-cover 75" against nifHDK sequences retrieved from the Kyoto Encyclopedia of Genes and Genomes (KEGG) database (24). Lastly, genomes are identified as diazotroph genomes if all three catalytic genes (nifHDK) are detected. We applied the Diaiden pipeline to GTDB release R214, which comprises 85,205 prokaryotic genomes (25), resulting in 3,316 diazotrophs detected. We also collected the 48 diazotroph metagenome-assembled genomes (MAGs) recently recovered by Delmont et al. (7) from the global sunlit ocean and customized a diazotroph database containing 3,364 genomes. Furthermore, we extracted nifH sequences from these diazotrophs and created a nifH database for subsequent analysis. In addition, to determine the abundance of prokaryotes in each sample, we developed a customized 16S ribosomal RNA database by removing chloroplast and mitochondria sequences from the SILVA 16S database v138 (26).

We employed Trimmomatic v0.39 (27) to trim the 35 metagenomic datasets. The resulting clean reads were aligned to the customized nifH and 16S databases using CoverM v0.6.1 (https://github.com/wwood/CoverM) under "contig" mode with parameters "--methods reads_per_base --min-read-percent-identity 95 --min-read-aligned-percent 75." To represent the relative abundance of nifH sequences in the prokaryotic community of each sample, we normalized the reads-per-base value of nifH sequences by the reads-per-base value of 16S sequences [(reads per base of nifH / reads per base of 16S) × 10^6]. Furthermore, we transformed the reads-per-base value of each nifH sequence into transcripts per kilobase million (TPM) to represent the diazotroph community and visualized it using the ggplot2 R package v3.5.0 (28).

Weighted correlation network and statistical analyses

We implemented a weighted correlation network analysis (WGCNA) using the WGCNA R package v1.71 (56) with a "signed" network type to identify potential correlations among microbes in cold-seep bottom waters. The input data matrix comprised the relative abundance of recovered MAGs in 16 bottom-water samples. Relative abundance was calculated using CoverM v0.6.1 (https://github.com/wwood/CoverM) under "genome" mode with parameters "--methods relative_abundance --min-read-percent-identity 97 --min-read-aligned-percent 75." We calculated soft thresholds using the "pickSoftThreshold" function based on a weighted correlation matrix.

The Shapiro-Wilk test was implemented using the "shapiro.test" function in R software (57) to test whether the data were normally distributed. We applied non-parametric tests using "wilcox.test" to evaluate differences among groups with non-normal distributions, and "t.test" to evaluate differences among groups with normal distributions.
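A minimal Python sketch of three pieces of this workflow follows: the nifHDK screening rule, the nifH/16S normalization, and the normality-dependent choice of test. It is an illustration only; the actual Diaiden pipeline wraps Prodigal and DIAMOND, the statistics in the study were run in R, and all gene sets and values below are hypothetical.

```python
# Sketch (not the actual Diaiden code); gene labels, read counts, and group values are hypothetical.
from scipy.stats import shapiro, mannwhitneyu, ttest_ind

REQUIRED_NIF_GENES = {"nifH", "nifD", "nifK"}

def is_diazotroph(annotated_genes):
    """A genome counts as a diazotroph only if all of nifH, nifD, and nifK
    were annotated among its CDS (e.g., parsed from DIAMOND best hits)."""
    return REQUIRED_NIF_GENES.issubset(annotated_genes)

def nifh_relative_abundance(nifh_reads_per_base, ssu_reads_per_base):
    """Normalization described in the text:
    (reads per base of nifH / reads per base of 16S) * 1e6."""
    return 0.0 if ssu_reads_per_base == 0 else nifh_reads_per_base / ssu_reads_per_base * 1e6

def compare_groups(x, y, alpha=0.05):
    """Shapiro-Wilk on each group, then Mann-Whitney (the two-sample wilcox.test
    analog) if either group is non-normal, otherwise a t-test."""
    normal = shapiro(x).pvalue > alpha and shapiro(y).pvalue > alpha
    return ttest_ind(x, y) if normal else mannwhitneyu(x, y)

# Hypothetical usage
print(is_diazotroph({"nifH", "nifD", "nifK", "xoxF"}))   # True
print(nifh_relative_abundance(0.8, 1250.0))               # 640.0
bw  = [3.1, 2.8, 4.0, 3.5, 2.9]   # e.g., diazotroph relative abundance in bottom waters
aph = [1.2, 0.9, 1.5, 1.1]        # e.g., relative abundance in aphotic layers
print(compare_groups(bw, aph))
```

Swapping the toy lists for the per-sample abundances produced by the normalization step reproduces the kind of group comparison reported in the results below.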
Diazotroph relative abundance and community in cold-seep bottom waters

We collected water samples from three sites with varying seepage activities in the Haima cold seep (Fig. 1a). At the HS site, mussel beds and continuous gas bubbling were frequently observed on the seafloor. At the MS site, both live and dead mussels were present, and only a few gas-bubbling points were observed. The LS site had no live mussels, being dominated by clams, with no gas-bubbling points observed. In situ images of the cold-seep landscapes can be found in our previous work (58, 59). Methane concentrations in the water-sediment interface showed a clear gradient among the three seepage sites (HS site: 1857.6 ± 1169.3 mg/L; MS site: 484.6 ± 204.9 mg/L; and LS site: 203.1 ± 24.9 mg/L), consistent with the δ13C-DIC values in the water-sediment interface (HS site: −35.6 ± 3.5‰; MS site: −11.9 ± 3.8‰; and LS site: −7.8 ± 1.3‰, VPDB) and bottom waters (HS site: −5.2 ± 2.1‰; MS site: −3.6 ± 1.2‰; and LS site: −2.4 ± 0.8‰, VPDB), indicating that the HS site had significantly higher methane-oxidizing activity than the MS and LS sites (P-value < 0.05) (Fig. 1b). Overall, both the landscapes and the environmental factors indicated that the three sampling sites could be distinguished by their seepage activities.

In cold-seep sites, we compared the bottom waters (BW; n = 11) with the euphotic (Euph; n = 12) and aphotic (Aph; n = 7) layers. In addition, we also compared cold-seep bottom waters (BW; n = 11) with bottom waters at non-seep sites (NS; n = 5). Our results showed that cold-seep bottom waters had the highest relative abundance of diazotrophs in the prokaryotic community compared with the other water layers (P-value < 0.05) (Fig. 1c; see Fig. S2a in the supplemental material). In addition, the relative abundance of diazotrophs was significantly higher at the HS site than at the MS and LS sites (P-value < 0.05) (see Fig. S2b in the supplemental material). These findings support the hypothesis that carbon-dominated cold-seep environments select for diazotrophs. The diazotroph communities in cold-seep bottom waters were distinct from those in the sunlit ocean. One of the most notable differences was that the archaeal classes Methanosarcinia and Syntropharchaeia, which are absent in the global sunlit ocean (7), were prevalent in cold-seep bottom waters (Fig. 1d). In addition, Gammaproteobacteria mostly dominated the diazotroph communities in the global surface ocean (7, 60), while the predominant bacterial diazotrophs in cold-seep bottom waters belonged to Alphaproteobacteria (Fig. 1d). Seepage activity also affected the diazotroph communities in cold-seep bottom waters. For example, Alphaproteobacteria predominated the diazotroph community only in the MS and LS sites (contributing ~1% in the HS site, 27%-30% in the MS site, and 23%-51% in the LS site), while the abundance of archaeal diazotrophs was highest in the HS site (contributing 66%-81% in the HS site, 34%-42% in the MS site, and 49%-52% in the LS site).

For phylogenetic analysis, we retrieved nifH sequences from the three metagenomic assemblies: one from euphotic layers, one from aphotic layers, and 21 from bottom waters. The phylogenetic tree (Fig.
1e) showed that among the 21 nifH sequences from bottom waters, 17 belonged to archaea, 3 belonged to Alphaproteobacteria, and 1 belonged to Desulfobacteria. The 17 archaeal nifH sequences were affiliated with two orders, namely ANME-1 and Methanosarcinales (ANME-2 cluster archaea). Interestingly, the ANME-1 nifH sequences were grouped into two separate clusters. Cluster-1 was closely related to ANME-2 nifH sequences, while cluster-2 was distant from all other nifH sequences, indicating that the nifH genes in ANME-1 might have different evolutionary origins. The nifH gene of Desulfobacteria showed high similarity to the nifH of strain ETH-SRB1 (61), which frequently forms consortia with ANME. These ANME-SRB consortia are active diazotrophs in cold-seep sediments (9, 12-14), and their potential roles in oxic cold-seep bottom waters will be discussed below. The three alphaproteobacterial nifH sequences were affiliated with the genera Bradyrhizobium, Sagittula, and Salipiger. Species from Bradyrhizobium are well-known symbiotic nitrogen-fixing bacteria associated with plants (62). For the genus Sagittula, N-fixation capability has been reported in two strains, namely P11 (63) and MA-2 (64). P11 was isolated from the OMZs off Peru, and MA-2 was isolated from a coastal marine bacterial consortium in which gentisic acid was the sole carbon and energy source. Salipiger strains have been isolated from deep-sea waters (65) and mangrove sediment (66), but their N-fixation capability has been less studied.

Multiple lines of evidence have demonstrated that ANME are active N-fixers in cold-seep sediments (9, 12-14). However, ANME are strict anaerobes whose activity is inhibited in the presence of oxygen (67). The oxygen concentration of bottom waters in the Haima cold seep was approximately 103-109 µM (18), making it an oxic habitat suitable only for aerobes. We compared the ANI (average nucleotide identity) among the ANME MAGs retrieved from both the bottom waters and the sediments of identical cold-seep sites (68) and observed a high degree of similarity among these ANME MAGs (Fig. 2). Therefore, the ANME in bottom waters were likely sourced from surface sediments and/or water-sediment interfaces due to water-current disturbance and fluids associated with methane seepage. Since ANME were not active in cold-seep bottom waters and the catabolism of ANME has been well documented previously (9, 68), we do not further discuss Seep-BW-D2 and Seep-BW-D3 in this study.

Comparative genomic analysis among alphaproteobacterial diazotrophs in the ocean

The first Sagittula strain, E-37, was discovered in a coastal marine bacterial consortium in 1997 (69) and sequenced in 2018 (70). This strain was characterized by its lignin-degrading ability. In 2018, the first complete genome of Sagittula was obtained from strain P11, a diazotroph isolated from the OMZs off Peru (63). Recently, another Sagittula strain, MA-2, was isolated from a coastal marine bacterial consortium. This strain grows on gentisic acid as the sole carbon and energy source, and its complete genome was successfully sequenced (64). So far, the Seep-BW-D1 recovered in this study is the only Sagittula genome obtained from deep-sea waters. Based on ANI analysis, the three Sagittula diazotrophs, Seep-BW-D1, MA-2, and P11, were found to be affiliated with the same subspecies due to their high nucleotide similarity (ANI ≥97%) (Fig. 2).
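The subspecies call above rests on a simple pairwise-ANI cutoff. Below is a minimal sketch of that grouping logic, assuming a precomputed pairwise ANI table (e.g., from a tool such as fastANI); the genome names and values are hypothetical and do not reproduce the study's actual ANI results.

```python
# Sketch: group genomes whose pairwise ANI meets a threshold (97%, the
# subspecies-level cutoff used in the text). ANI values are hypothetical.
ANI_THRESHOLD = 97.0

ani = {  # symmetric pairwise ANI (%), e.g., parsed from a fastANI output table
    ("Seep-BW-D1", "MA-2"): 97.8,
    ("Seep-BW-D1", "P11"): 97.3,
    ("MA-2", "P11"): 98.1,
    ("Seep-BW-D1", "E-37"): 91.2,
}

def same_group(a, b):
    """True if the two genomes are linked at or above the ANI threshold."""
    if a == b:
        return True
    value = ani.get((a, b)) or ani.get((b, a))
    return value is not None and value >= ANI_THRESHOLD

genomes = ["Seep-BW-D1", "MA-2", "P11", "E-37"]
groups = []  # single-linkage clustering over the threshold graph
for g in genomes:
    placed = False
    for group in groups:
        if any(same_group(g, member) for member in group):
            group.add(g)
            placed = True
            break
    if not placed:
        groups.append({g})
print(groups)  # expected: one cluster with Seep-BW-D1, MA-2, P11; E-37 alone
```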
In addition to Sagittula relatives, we collected eight alphaproteobacterial heterotrophic bacterial diazotrophs (HBDs) recovered from the global sunlit ocean (7) and conducted comparative genomic analyses among them. The phylogenetic tree showed that none of the eight alphaproteobacterial HBDs from the sunlit ocean were affiliated with the genus Sagittula (Fig. 3; see Table S3 in the supplemental material). This indicates that the Sagittula diazotroph has distinct genomic adaptations that facilitate its predominance in deep-sea cold-seep bottom waters. The closest relative HBDs to Sagittula were HBD-Alpha-02 and HBD-Alpha-07, both belonging to the genus Marinibacterium. In addition, both Sagittula and Marinibacterium belong to the family Rhodobacteraceae.

One remarkable genomic feature shared by Sagittula diazotrophs was the ability to perform chemosynthesis (Fig. 3 and 4). Sagittula diazotroph genomes encoded enzymes for oxidizing various C1 compounds (methanol, formaldehyde, and formate), including lanthanide-dependent methanol dehydrogenase (xoxF) for the oxidation of methanol, S-(hydroxymethyl)glutathione synthase (gfa) for the oxidation of formaldehyde, and formate dehydrogenase (fdo and fdw) for the oxidation of formate. Sagittula diazotroph genomes also encoded enzymes involved in oxidizing reduced sulfur compounds (H2S, S2O32−, and SO32−), including the sox enzyme complex (soxABCDXYZ) for oxidizing thiosulfate (S2O32−) to sulfate (SO42−); sulfide:quinone oxidoreductase (sqr), the cytochrome subunit of sulfide dehydrogenase (fccA), and sulfide dehydrogenase (fccB), mediating the oxidation of sulfide (HS−) to elemental sulfur (S0); and sulfite dehydrogenase (soeABC), mediating the oxidation of sulfite (SO32−) to sulfate (SO42−). In addition, Sagittula diazotroph genomes encoded [NiFe] hydrogenase (hyaABC) for H2 oxidation. Hydrogenase can facilitate N fixation in aerobic organisms by acting as an oxygen scavenger to protect nitrogenase from oxygen inhibition, preventing the inhibition of N2 reduction by the H2 generated by nitrogenase, and recycling the H2 produced by nitrogenase to provide reducing power (71). Considering that cold seeps are typical chemosynthetic ecosystems, Sagittula diazotrophs could benefit from the chemical energy derived from cold seeps via their chemosynthetic capability.

We examined the genomic potential for inorganic carbon (CO2) fixation in alphaproteobacterial diazotrophs. Our results showed that most of the tested alphaproteobacterial diazotrophs did not contain genes encoding enzymes for CO2 fixation, except for HBD-Alpha-08, which encoded ribulose-bisphosphate carboxylase (rbcL) involved in the Calvin-Benson cycle. Therefore, the source of organic carbon is crucial for most alphaproteobacterial diazotrophs. Acetate is a key organic carbon source in marine waters and sediments (72, 73). Since cold-seep sediments contain abundant acetate exported by methane-oxidizing microorganisms that potentially sustains microbial communities (74), the surface sediments and the water-sediment interface can be sources of bottom-water acetate. Our results showed that, compared with other alphaproteobacterial diazotrophs, Sagittula diazotrophs contained a higher copy number of genes encoding acetyl-CoA synthetase (acs), which converts acetate into acetyl-CoA (Fig.
5; see Table S3 in the supplemental material). This result indicates that Sagittula diazotrophs may utilize acetate better than alphaproteobacterial diazotrophs residing in the sunlit ocean. In addition, benthic animals can be important organic carbon sources in cold-seep bottom waters. Organic compounds, such as putrescine, spermidine, taurine, glycerol 3-phosphate, and glycerol, can be released into the water from decaying animal tissues. Our results showed that Sagittula diazotrophs distinctly encoded high-affinity transport systems to take up these compounds, including a spermidine/putrescine transporter (potABCD), a taurine transporter (tauABC), a glycerol 3-phosphate transporter (ugpABCE), and a glycerol transporter (glpQSVPT) (Fig. 5; see Table S3 in the supplemental material). Moreover, deep-sea cold-seep ecosystems can also receive organic compounds from the upper, photosynthesis-based ecosystem (75). The main organic compounds reaching the deep-sea seafloor are refractory organics, such as lignin, pectin, and aromatics (76, 77). Our results showed that Sagittula diazotrophs encoded additional genes encoding proteins involved in benzoyl-CoA degradation (boxABC) and pectic oligomer transport (togABMN and aguEG) (Fig. 5; see Table S3 in the supplemental material). Overall, compared with other alphaproteobacterial diazotrophs, Sagittula diazotrophs have a higher potential to utilize the kinds of organic compounds derived from methane-oxidizing microorganisms, cold-seep benthic animals, and refractory organics from surface waters.

FIG 5 Average copy number of genes among Sagittula diazotrophs and alphaproteobacterial diazotrophs from the sunlit ocean.

Transcriptional activity of Seep-BW-D1 in cold-seep bottom waters

We aligned metatranscriptomic reads to the Seep-BW-D1 genome to examine its transcriptional activity in cold-seep bottom waters (Fig. 4; see Table S4 in the supplemental material). Our results showed that the nifH gene was actively expressed in Seep-BW-D1, indicating that Seep-BW-D1 can fix nitrogen in situ. The nitrogenase encoded by nifHDK carries an iron-molybdenum cofactor (FeMo-co), which is one of the most complex metal cofactors known to date (3). Genes involved in FeMo-co biosynthesis, including nifENB, were encoded by the Seep-BW-D1 genome. Fe and Mo can be limiting factors controlling N fixation in the oligotrophic open ocean, but this limitation can be mitigated in cold-seep bottom waters, because cold-seep sediments are rich in Mo and Fe, and substantial amounts of metals can be released into cold-seep bottom waters through seeping fluids (78, 79). Each FeMo-co contains one Mo and seven Fe atoms, indicating a higher demand for Fe than for Mo in diazotrophs. Our results showed that the iron transporters, including afu and fbp, were actively expressed in Seep-BW-D1, which may be due to the high cellular Fe requirements. The activity of nitrogenase is inhibited under high intracellular oxygen levels (80). Seep-BW-D1 contained and expressed the gene encoding cytochrome bd terminal oxidase (cydA) (Fig. 4), which can decrease intracellular oxygen levels through uncoupled respiration and protect nitrogenase from oxygen (81). Hence, as revealed by the MAG and the metatranscriptome, Seep-BW-D1 is genetically capable of fixing nitrogen with respiratory protection in oxygenated cold-seep bottom waters.
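Returning to the group-level comparison summarized in Fig. 5 above (Sagittula diazotrophs versus sunlit-ocean HBDs), the sketch below averages per-genome gene copy numbers within each group. The gene table is hypothetical and only mimics the kind of annotation counts such a figure is built from.

```python
# Sketch: average per-group copy numbers of selected genes (hypothetical counts),
# mirroring the comparison shown in Fig. 5. Not the study's actual data.
copy_numbers = {
    # genome: {gene: copies}
    "Seep-BW-D1":    {"acs": 3, "potABCD": 2, "togABMN": 1},
    "Sagittula_P11": {"acs": 2, "potABCD": 2, "togABMN": 1},
    "HBD-Alpha-02":  {"acs": 1, "potABCD": 1, "togABMN": 0},
    "HBD-Alpha-07":  {"acs": 1, "potABCD": 0, "togABMN": 0},
}
groups = {
    "Sagittula": ["Seep-BW-D1", "Sagittula_P11"],
    "sunlit-ocean HBDs": ["HBD-Alpha-02", "HBD-Alpha-07"],
}
genes = ["acs", "potABCD", "togABMN"]
for group, members in groups.items():
    means = {
        gene: sum(copy_numbers[m].get(gene, 0) for m in members) / len(members)
        for gene in genes
    }
    print(group, means)
# In this toy table the Sagittula group shows higher average copy numbers for the
# acetate- and transporter-related genes, the qualitative pattern described above.
```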
We found that various genes for N uptake, including urt for urea, amt for ammonia, nrt for nitrate/nitrite, aap and liv for amino acids, and app and ddp for peptides, were actively expressed, indicating that Seep-BW-D1 had multiple N sources to fulfill its N demand. In addition, the expression level of the urt gene was 5-10 times higher than that of any other N transporter, indicating that urea is a preferred organic N source for Seep-BW-D1 in cold-seep bottom waters. For a diazotroph such as Seep-BW-D1, phosphorus can be a critically limiting element (2). Our results showed that the phosphate transporter gene pst and the phosphonate transporter gene phn were actively expressed, indicating that Seep-BW-D1 can utilize both inorganic and organic phosphorus to fulfill its phosphate demand.

Although Seep-BW-D1 contained genes for oxidizing various reduced compounds, not all of them were actively expressed. Our results showed that the most active chemotrophic process was the oxidation of formate catalyzed by fdo, followed by the oxidation of methanol (catalyzed by xoxF) and thiosulfate (catalyzed by the sox system) (Fig. 4; see Table S4 in the supplemental material). All three compounds can be generated by microbes through methane-oxidizing processes. For example, formate is one of the key intermediate compounds exchanged between ANME and SRB in the AOM process (82); methanol can be synthesized by methanotrophs in the water-sediment interface through aerobic methane oxidation (83); and thiosulfate can be a by-product of sulfate reduction coupled with AOM (82). These findings suggest that although Seep-BW-D1 cannot obtain energy from methane directly, its primary energy sources are still derived from methane, indicating that cold-seep ecosystems are ideal habitats for the Sagittula diazotroph Seep-BW-D1.

In addition to the energy sources, we also investigated the carbon sources of Seep-BW-D1. We found that abundant transcripts in the metatranscriptome were mapped to genes involved in acetate utilization (Fig. 4). Although acetate has been reported to be a microbial energy and carbon source in the water column (72), we only found genes for the assimilation, but not the oxidation, of acetate highly expressed in Seep-BW-D1, including acetyl-CoA C-acetyltransferase (ACAT), involved in the glyoxylate pathway, and acetyl-CoA carboxylase (acc) for fatty acid biosynthesis. Therefore, acetate is an important organic carbon source but not an energy source for Seep-BW-D1. In addition, many organic carbon transporters were found to be highly expressed, including transport systems for spermidine/putrescine (Pot and ABC.SP), glycerol 3-phosphate (ugp), glycerol (glp), oligogalacturonide (encoded by tog), and multiple sugars (msm and mal). Moreover, we screened the activity of carbohydrate-active enzymes (CAZymes) and peptidases (see Fig. S3 in the supplemental material). Gene expression profiles showed that some CAZymes were highly active in Seep-BW-D1, including GH102, GH103, and GH23, involved in the degradation of peptidoglycans. The most active peptidase family was C26 (gamma-glutamyl hydrolase), involved in the turnover of folyl poly-gamma-glutamates. In general, the Sagittula diazotroph Seep-BW-D1 actively utilized various kinds of organic compounds derived from methane-oxidizing microorganisms, cold-seep benthic animals, and refractory organics from surface waters.
Potential interactions between Seep-BW-D1 and its co-occurring microbes

Aggregate formation may be one behavioral strategy enabling diazotrophs to generate a low-oxygen microenvironment (84). Many diazotrophs can form aggregates and develop co-evolutionary mechanisms with their associated organisms (85, 86). Based on the black queen hypothesis (87), certain functions or products of diazotrophs can be "leaky," affecting or being used by associated organisms, and are therefore considered "public goods." Associated organisms that use these public goods may then experience positive selective pressure resulting in the loss of their own costly pathways responsible for those "public goods." Sagittula strain P11, a close relative of Seep-BW-D1, was observed to form aggregates and exhibited a complex relationship with its associated microbes (63). We also identified genes involved in aggregate formation in Seep-BW-D1, including genes encoding secretion systems (88) and extracellular polysaccharide synthesis (89) (see Table S2 in the supplemental material), suggesting that Seep-BW-D1 could exhibit close interactions with its associated microbes.

We applied WGCNA analysis and found that Seep-BW-D1 co-occurred with MAGs from the module ME-Blue (see Fig. S4a in the supplemental material). MAGs from this module belonged to different taxa, including the phyla Proteobacteria, Verrucomicrobiota, Myxococcota, Planctomycetota, and Actinobacteriota (see Table S5 in the supplemental material). We selected 11 medium- to high-quality MAGs (completeness > 80%) from ME-Blue and applied comparative genomic analysis to identify the potential "public goods" and the lost costly pathways (see Fig. S4b and c in the supplemental material). Our results showed that none of these MAGs could synthesize vitamin B12 (VB12), which is crucial for cell growth, while Seep-BW-D1 contained and expressed genes involved in the whole process of VB12 synthesis. By contrast, Seep-BW-D1 did not have the gene tauD for the last step of taurine utilization, while its associated microbes from ME-Blue contained genes encoding this enzyme. Moreover, the associated microbes from ME-Blue encoded various enzymes for pectin degradation but did not encode pectin oligomer transporters, while Seep-BW-D1 distinctly contained a pectin oligomer transport system. In general, we present molecular evidence that Seep-BW-D1 may be closely associated with some microbes in cold-seep bottom waters, and they might maintain their relationships via sharing "public goods" such as VB12 and various enzymes.

Identifying the niche of Seep-BW-D1

We observed niche partitioning across seepage activity among diazotrophs in cold-seep bottom waters. The methane-oxidizing diazotroph ANME dominated in the HS site. By contrast, the sulfur-oxidizing diazotroph Sagittula was more predominant in the MS and LS seepage sites (Fig. 1d). The heterogeneity of energy sources may explain this distribution pattern. The HS site has a significantly higher methane concentration (~10^6 µM in sediment, ~10^3 µM in water) than the MS and LS sites (~10^5 µM in sediment, ~10^2 µM in water) (Fig. 1b and 6) (18), which benefits the prevalence of methane-oxidizing diazotrophs. By contrast, the H2S concentration is higher in the bottom waters of the MS and LS sites (~0.4 µM) than in the HS site (~0.1 µM) (Fig.
1b and 6), which facilitates the prevalence of sulfur-oxidizing diazotrophs. Considering that the H2S in cold seeps is mainly synthesized in the sulfate-methane transition zone of sediments coupled with AOM, it is intriguing that the H2S concentration is higher in the MS and LS than in the HS bottom waters. In fact, not only H2S but also other inorganic nutrients, such as nitrate, nitrite, ammonium, and phosphate, have higher concentrations at the LS than at the HS site (18). Considering that these compounds are possibly sourced from deep fluids (90), we hypothesize that the crowded mussel bed and authigenic carbonates at the HS site block the surface seafloor and thus reduce the upwelling of H2S and C1 compounds. At the MS and LS sites, however, this biogenic barrier is reduced because of the lower methane concentration. In addition, the predominant benthic animals there, such as clams and tubeworms, dig into deep sediment and take up H2S through their feet or roots for chemosynthesis (91, 92). With less of a biogenic barrier and more vigorous animal behavior, the MS and LS sediments would likely release more H2S and C1 compounds into the bottom waters, making this environment select for the sulfur-oxidizing diazotroph Seep-BW-D1 (Fig. 6).

Sharp geochemical and redox gradients persist in cold-seep sediments and waters. For example, the H2S concentration in the anoxic cold-seep sediments is 10^2-10^4 times higher than in the oxic cold-seep bottom waters. By contrast, C1 compounds, such as methanol, are more stable in the oxic environment, and their concentration in seawater (up to 429 nM) is similar to that in sediment (up to 112 nM) (93). Therefore, with increasing distance from the seepage site, the chemosynthetic diazotroph Sagittula may rely more on C1 compounds than on reduced sulfur compounds. Nevertheless, since both the C1 and the sulfur compounds in cold seeps are sourced from the methane-oxidizing process, and methane gas can reach up to 100 m above the seepage site (16, 17), Seep-BW-D1 may be restricted to the bottom waters within the methane-seeping region.

Conclusions

In this study, we found that the relative abundance of diazotrophs in the bacterial community reached its highest level in the cold-seep bottom waters compared to the cold-seep upper waters and non-seep bottom waters, corroborating that carbon-dominated cold-seep environments are hotspots of N fixation. Moreover, our results showed that the most active diazotroph in cold-seep bottom waters is an Alphaproteobacterium belonging to the genus Sagittula, named Seep-BW-D1. To address the N limitation in cold seeps, Seep-BW-D1 has the capability to fix inorganic N and assimilate organic N.
As a diazotroph, Seep-BW-D1 contained catalytic genes (nifHDK) and biosynthetic genes (nifENB) for nitrogen fixation, and its nitrogenase-encoding genes were actively transcribed in situ. Moreover, Seep-BW-D1 expressed transport systems for various organic N sources, and its preferred organic N source was urea. As for carbon sources, although Seep-BW-D1 cannot fix inorganic carbon, it can assimilate various kinds of organic carbon that are abundant in cold-seep ecosystems, including acetate synthesized by methane-oxidizing microorganisms, spermidine/putrescine from the decaying tissues of cold-seep benthic animals, and refractory pectin from upper photosynthetic ecosystems. Seep-BW-D1 exhibited chemosynthetic capability and actively oxidized methane-derived compounds, such as C1 compounds (methanol, formaldehyde, and formate) and thiosulfate (S2O32−). Seep-BW-D1 was more abundant at the MS and LS sites than at the HS site. This may be because the weaker biogenic barrier and more vigorous animal behavior at the MS and LS sites facilitate the release of C1 and reduced sulfur compounds into the bottom waters, which benefits the growth of Seep-BW-D1. In general, we corroborate that carbon-dominated cold-seep bottom waters select for diazotrophs and reveal the ecological functions and metabolic strategies of a novel chemosynthetic N-fixing Sagittula in cold-seep bottom waters.

MAGs recovered from cold-seep bottom waters have been deposited in figshare (https://doi.org/10.6084/m9.figshare.25975903.v1). The "Diaiden" pipeline is available on GitHub (https://github.com/jchenek/Diaiden). The authors declare that all data supporting the findings of this study are available within the article and its supplemental information files or from the corresponding authors upon request.

ADDITIONAL FILES

The following material is available online.

FIG 3 Comparison of lineage-specific functions associated with environmental adaptation in oceanic alphaproteobacterial diazotrophs and close relatives of Sagittula. The maximum likelihood tree is constructed based on a multiple sequence alignment of 120 bacterial single-copy marker proteins. Alpha., Alphaproteobacteria.

FIG 4 Schematic representation of the metabolic capacities and activities of Seep-BW-D1.

FIG 6 Schematic representation of the niche of the alphaproteobacterial diazotroph Sagittula in cold-seep bottom waters.
8,024.8
2024-08-06T00:00:00.000
[ "Environmental Science", "Biology" ]
Construction of Nash Equilibrium in a Game Version of Elfving’s Multiple Stopping Problem Multi-person stopping games with players’ priorities are considered. Players observe sequentially offers Y 1 ,Y 2 ,... at jump times T 1 ,T 2 ,... of a Poisson process. Y 1 ,Y 2 ,... are independent identically distributed random variables. Each accepted offer Y n results in a reward G n = Y n r(T n ) , where r is a non-increasing discount function. If more than one player wants to accept an offer, then the player with the highest priority (the lowest ordering) gets the reward. We construct Nash equilibrium in the multi-person stopping game using the solution of a multiple optimal stopping time problem with structure of rewards { G n } . We compare rewards and stopping times of the players in Nash equilibrium in the game with the optimal rewards and optimal stopping times in the multiple stopping time problem. It is also proved that presented Nash equilibrium is a Pareto optimum of the game. The game is a generalization of the Elfving stopping time problem to multi-person stopping games with priorities. for department 2, and so on until the first acceptance. Candidates rejected for department i cannot be considered in the future. The aim is to select candidates with maximal expected "skills". So, one may say that each department acts as an independent player in a stopping game with priorities. We will formulate the problem as an m-person stopping game with priorities in which random offers are presented at jump times of a homogeneous Poisson process. Such a game has been considered in Ferenstein and Krasnosielska [8]. In this paper, we propose a new solution and we prove that a proposed strategy is a Nash equilibrium, which allows removing some assumption made in Ferenstein and Krasnosielska [8]. The difference between the solution proposed in this paper and those in [8] will be more thoroughly discussed at the end of the paper. The game considered is a generalization, to the case of several players, of the optimal stopping time problem formulated and solved first by Elfving [6], later considered also by Siegmund [18]. Various modifications of the structure of the reward in the Elfving problem were considered in Albright [1], David and Yechiali [3], Krasnosielska [12,13], Gershkov and Moldovanu [10], Parlar et al. [16]. Stadje [19] considered an optimal multi-stopping time problem in Elfving setting, in which the final reward is the sum of selected discounted random variables. Various stopping games with rewards observed at jump times of a Poisson process were considered in Dixon [4], Enns and Ferenstein [7], Saario and Sakaguchi [17], Ferenstein and Krasnosielska [9]. Stopping games were introduced in seminal paper by Dynkin [5] as an application of optimal stopping time problems, since then often referred as Dynkin games. An extensive bibliography on stochastic games can be found in Nowak and Szajowski [15]. Multiple Stopping Time Problem Let us recall the multi-stopping time problem presented in Stadje [19]. Let Y 1 , Y 2 , . . . be nonnegative independent identically distributed random variables with continuous distribution function F and E(Y 1 ) ∈ (0, ∞), Y 0 = 0. The random variables Y 1 , Y 2 , . . . are sequentially observed at jump times 0 < T 1 < T 2 < · · · of a homogeneous Poisson process N(s), s ≥ 0, with intensity function p(u) and T 0 = 0. Moreover, assume that the sequences {Y n } ∞ n=1 and {T n } ∞ n=1 are independent. 
Let r : [0, ∞) → [0, 1] be a right continuous, non-increasing function satisfying the conditions Assume that the set of points of discontinuity of r is finite. Note that without loss of generality we can assume that p(u) ≡ 1 because a nonhomogeneous Poisson process can be reduced to a homogeneous Poisson process with intensity 1 (see [2, pp. 113-114]). Note that the values of stopping times τ i are natural numbers and τ i = n means selecting an offer arriving at time T n . We are interested in finding an optimal m-stopping time for {G n } ∞ n=1 , that is, the m- and the optimal mean reward E(G τ 1,m k + · · · + G τ m,m k ). Interpretation The problem can be interpreted as a problem of selling m commodities of the same type, where the offers are received sequentially and must be refused or accepted immediately on arrival. Let {s 0 , . . . , s l }, where 0 = s 0 < s 1 < · · · < s l−1 < s l = U , l < ∞, contains all points of discontinuity of the discount function r. In the theorem below functions γ i determining the optimal expected and conditional expected total reward are obtained. . . , m}, the function γ i is continuous and has continuous derivative on The additional expected reward which can be obtained from selling i instead of i − 1 commodities is For k ∈ N and i ∈ {1, . . . , m}, define and The Markov time τ i,m k can be interpreted as a time of selling the ith commodity among m commodities for sale, if we start the process of selling at the time of the kth observation. Note that τ i,m k is the first time after the stopping time τ i−1,m k , at which the reward is not smaller than the optimal conditional expected reward from selling the ith commodity in the future if we have i instead of i − 1 commodities for sale. Therefore, γ i (T n ) determines the minimum acceptable offer of selling the m − i + 1-th commodity at time T n . Hence, γ i is a threshold below which it is not profitable to sell the m − i + 1-th commodity. Note that for each m ∈ N and in particular sup (τ 1 ,...,τ m )∈M 1 (m) Note that from monotonicity of the sequence {γ i (·)} m i=1 we get and from Theorem 2 we obtain Moreover, for k, m ∈ N and i ∈ {2, . . . , m}, we have Proof Immediately from (6) for i ≥ 2 and from (3) for i = 1. The Game Suppose that there are m > 1 ordered players who sequentially observe rewards G n at times T n , n = 1, 2, . . . . Players' indices 1, 2, . . . , m correspond to their ordering (ranking) called priority so that 1 refers to the player with the highest priority and m to the lowest one. Each player is allowed to obtain one reward at time of its appearance on the basis of the past and current observations and taking into account the players' priorities. More precisely, Player i, say, who has just decided to make a selection at T n gets the reward G n if and only if he has not obtained any reward before and there is no player with higher priority (lower order) who has also decided to take the current reward. As soon as the player gets the reward, he quits the game. The remaining players select rewards in the same manner, their priorities remain as previously. Model of the Game In this section, we make the same assumptions and denotations as in Sect Under the strategy profile ψ m , the reward of Player i, i ∈ {1, . . . , m}, is G σ i m (ψ m ) and the mean reward is Let Using Stadje's result from Theorem 2, we have that m selected rewards which maximize the expected total reward appear no later than τ m,m 1 . This motivates the searching of a Nash equilibrium strategy in the set D m 1 . 
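To make the threshold structure of this rule concrete, here is a minimal Monte Carlo sketch. It is not the paper's algorithm: the Uniform(0, 1) offers, the discount r(t) = e^(-t), the finite horizon, and above all the stand-in gamma function are illustrative assumptions. In the paper the thresholds γ_i solve differential equations, and the acceptance rule below (compare the discounted reward G_n with γ_i(T_n)) follows the verbal description; the formal definition, elided from this extraction, may differ in detail.

```python
import math
import random

def r(t):
    return math.exp(-t)  # assumed non-increasing discount function, r(0) = 1

def gamma(i, t, m):
    # Hypothetical stand-in threshold for accepting the i-th of m items at time t;
    # decreasing in t and in i, only mimicking the qualitative behavior described.
    return 0.5 * math.exp(-t) * (m - i + 1) / m

def simulate(m=3, horizon=10.0, seed=None):
    rng = random.Random(seed)
    t, accepted = 0.0, []
    i = 1  # index of the next commodity to sell
    while i <= m:
        t += rng.expovariate(1.0)       # next Poisson jump time T_n (unit intensity)
        if t > horizon:
            break                        # no further offers are considered
        y = rng.random()                 # offer Y_n ~ Uniform(0, 1)
        g = y * r(t)                     # discounted reward G_n = Y_n r(T_n)
        if g >= gamma(i, t, m):          # accept the first offer clearing the threshold
            accepted.append(g)
            i += 1
    return sum(accepted)

# Average total reward over many runs (illustrative only)
est = sum(simulate(m=3, seed=k) for k in range(10_000)) / 10_000
print(round(est, 4))
```

Replacing the stand-in gamma with the actual solutions of the γ_i equations would turn this simulation into an estimator of the optimal expected total reward discussed above.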
From Lemma 4, we get that {ψ m,i n } is a sequence of 0-1 valued {F n }-adapted random variables. Hence, from Lemma 3, we getψ m,i ∈ D 1 (13) for i ∈ {1, . . . , m}. According to the above profile, Player i in the m-person game will behave in the same manner as Player i in the i-person game, that is, Proof Proof uses induction on i and Lemma 6. It follows from Proposition 1 that, for i ∈ {1, . . . , m}, Player i stops playing in the mperson game at the same time as in the i-person game, that is, where τ i,m k are defined in (3). Proof Using (9), Proposition 1 and Lemma 7, we get Hence, from Theorem 2 and (4) we get the assertion. Note that according to (17), the expected reward of Player i in the m-person game with profileψ m is equal to the expected reward of Player i in the i-person game with profileψ i . Proof Assume that there exists ϕ m ∈ D m 1 such that V m,i (ϕ m ) ≥ V m,i (ψ m ), i = 1, 2, . . . , m and at least one of the inequalities is strict. Then, from Theorem 3 and (4), On the other hand, from (9), which with (20) gives a contradiction. In the proposition below, we will show that players in the m-person game will choose the same rewards as those optimal in the multiple stopping problem. However, note that the stopping time selected by Player i in the m-person game can be different from the ith stopping in the multiple stopping problem. In Theorem 6 below, we will show that presented Nash equilibrium is a sub-game perfect equilibrium. Let us remind that a sub-game perfect equilibrium is a Nash equilibrium if after any history all remaining players' strategy profile is a Nash equilibrium in the remaining part of the game. Let V k m,i (ψ m ) be conditional expected reward for Player i in the m-person game at time of the k-th offer, that is, Theorem 6 The profileψ m is a sub-game perfect equilibrium. Summary In Theorem 4, we have proved that the strategy profileψ m is a Nash equilibrium. According to the strategy, Player i in the m-person game behaves as Player i in the i-person game (Eq. (14)). Moreover, their selected stopping times and expected rewards are equal (Proposition 1 and Eq. (17)). Additionally, the expected reward of Player i in the m-person game in Nash equilibrium is equal to the expected reward from selling the ith good in the future, if there are i instead of i − 1 goods for sale (Lemma 8). Note that τ 3 (1) = τ 1,3 1 . Hence from (11) we have In a similar way, we obtain that Player 2 in three-person game will finish the game at one of the two stopping times: τ 1,2 1 , τ 2,2 1 , that is, Moreover, the player with the highest priority will finish the game at σ 1 1 = τ 1,1 1 . Note that the ith player's stopping time is different from the optimal time of selling the ith commodity, but his expected reward is equal to the optimal expected reward, which can be obtained from selling the ith commodity. Discussion In this section, we compare in detail the results obtained in this paper with those presented in Ferenstein and Krasnosielska [8]. Let us briefly present the solution obtained in [8]. There it is assumed that there exist continuous functionsV m,i (·) such thatV m,i (u) ≤V m,i−1 (u), i ∈ {2, . . . , m}, u ∈ R + 0 , whereV m,i (u) is an"optimal" mean reward of Player i in the m-person game with reward structure G n (u) = Y n r(u + T n ). Next, there are defined Markov timeŝ τ i (u) = inf{n ≥ 1 : G n (u) ≥V m,i (u + T n )}, i = 1, 2, . . . 
, m, which are used to construct the following strategy: Next, it is proved that Player i stops in the i-person game at the same time as in the m-person game, and in consequence the "optimal" expected reward of Player i in the m-person game is equal to the "optimal" expected reward of Player i in the i-person game, that is, , it was assumed V m,1 (u) = V 1,1 (u) and for i ≥ 2 where V 1,1 (u) is the optimal expected reward in the Elfving problem. Hence, as in the Elfving problem, for each i ∈ {2, . . . , m} the differential equations describing V i,i (·) have been obtained. Next, it was stated (without proof) that the equations obtained have exactly one solution. Moreover, some arguments were given suggesting that the profile is a Nash equilibrium. Note that the assumption given by (22) is a modification of those made in Elfving [6], and it was removed in Siegmund [18]. Note that the differential equations obtained in Ferenstein and Krasnosielska [8] are the same as those in Stadje [19], that is, V From (16), we have that the expected reward of Player i in the m-person game with profile ψ m is equal to γ i (0). Hence, from (16), (23), and Theorem 4 we get that the profile is a Nash equilibrium. The connections between the problem solved in Stadje [19] and the solution of the game presented in Ferenstein and Krasnosielska [8] are discussed in detail in the doctoral dissertation of Krasnosielska [14].

Proofs

Proof of Lemma 3 From (3), for i = 1, we have σ k . We will show that σ k i ≤ τ i,i k . From (11) and Lemma 2, we have The second inequality in Lemma 3 follows from (6).

Proof of Lemma 6 It is enough to show that σ k i = σ k j , j < i. The proof uses induction on i.

Proof of Lemma 7 The proof uses induction on m. Note that for each k ∈ N, we have σ k 1 = τ 1 (k) = τ 1,1 k . Hence, for m = 1, we get (15). Now assume that for each k ∈ N and given m ≥ 2, we have To prove that (15) is satisfied for k ∈ N, we will use the equality which will be proved later. From (27) and the induction assumption, we obtain where above, the middle equality follows from Lemma 2 and τ m (k) = τ 1,m k , which follows from (3). Now we will prove (27). From (11), we get where the last equality follows from (10) and (5). Adding the first and third summands appearing on the right side of the above equality, and using (5), we obtain Let i = 1, then from Lemma 5 and (11) we have σ k 1 = σ τ m (k)+1 where we used (15) where in the last but one equality we used (15).
3,537.8
2013-01-04T00:00:00.000
[ "Economics" ]
Intuition as Part of Informatics Creativity — The development of creativity is necessary for all professionals, and universities cannot be exempted from it. According to studies (Acentura, 2007), creativity is one of the skills that universities do not develop. Intuition is an important factor in creativity that has been studied by many authors, but the investigation is not concluded. This paper presents the characteristics of intuition as part of informatics creativity.

INTRODUCTION

Creativity has been studied from antiquity to the present day (Gonzalez, 2004; de la Torre, 2002), and many theoretical assumptions have been developed since then. All have agreed in recognizing the unity of intuition and logic in this process, although each author has given prominence to one or the other. However, regarding intuitive processes there is a stalemate that has lasted several years (Sinclair, 2011). Various psychological currents have addressed the issue from different angles; this article highlights the three most important for this study: cognitivism, humanism, and the historical-cultural approach. Furthermore, the formation process of the engineer has been treated in different ways in the scientific literature (Gonzalez, 2004), but the creative process, and especially the development of intuition, has not been treated. That is why this article is an approach to the study of intuition in computing and how to develop it in the process of training engineers.

II. DEVELOPMENT

The development of man as a social being imposes on him certain challenges in the educational context. It is in the historical process that one finds that work has been the generator of the greatest discoveries. In the process of social-historical development, man is reaffirmed as a social being and evolves depending on his conditions. His performance is explained by the relationship established between the subject and the society in which he expresses the relationship between the singular and the general. Men make history conditioned by the activity they engage in, and this activity is the basis for the discovery of new knowledge and ways of action in relation to society and nature. The author agrees with Dr. Marta Martinez Wheeler (1995) in stating that theoretical and practical assessment leads to the view that creativity is expressed in the socio-transformative essence of man, which is not to say that all men are creators, but that all can be potentially. There are countless ways to develop it. Many authors, among whom we can highlight David, Edward de Bono (1973), A. Mitjáns (1995, 1997, 2002), and America Gonzalez Valdes (1995, 1999), work in this area, and although they hold some conflicting criteria, common points are highlighted. One such point is the unity of the logical and the intuitive. Based on their analyses, one can conclude that they agree on the importance of logic in the verification stage. It is possible for the contrast between logic and intuition to reflect stages of the same process: creation. Intuition presents new ways to solve known problems and streamlines the work, but the ideas generated in the process should be filtered through logical forms of thought, with intuition acting as a satellite operation. The creative process is long and complex, and both intuitive assumptions and carefully planned research play an important role in it. With intuition, man can find solutions to his professional problems, and these may be right or wrong. So, intuitive solutions should be verified through logic. The linkage between the logical and the intuitive occurs throughout the creative process and in human activity in general.
It can be said that intuition has a strong foundation in the subject's experience in a particular field of activity. This sudden intuition also appears in creative scientific work, sometimes hypothetical tasks whose solutions are easier than the methods or paths leading to it, ie when the result, the end point, which obviously must lead the thinking can anticipate Although the paths that could lead to it, are not yet sufficiently known. As is known these cases occur in science. Psychoanalysis has addressed the issue preferably motivational creation, claiming that this is rooted in the unconscious conflicts and is an embodiment thereof through sublimation. L. Kubie (2001), points out the value of preconscious processes for human activity. Meanwhile E. Kris (1999) provided ideas regarding that creativity is a phase and another elaboration inspirations. In the first, is temporarily lost control of thought processes, allowing regression preconscious levels leading to greater receptivity to manage ideas and impulses unrelated. With the onset of dual processing theories (Evans, 2007;Stanovich & West, 2000;Dijksterhuis, 2004;Dijksterhuis & Aarts, 2010), there has been a growing consensus That information is processed by two independent systems interact seamlessly That -until we consciously Intervene. Significantly, the analysis of the conception from building products (Mintzberg, 1998;Sinclair & Ashkanasy, 2005;Ponomariev, 2002). For this author at the time of rest at a subconscious level are produced free associations, based on the experience of the subject, which go through a decanting at this level. These associations are gathering and are forming a solution of the problem starts at the moment, to be felt by the individual as a feeling of wellbeing. Also interesting studies about the parts of the brain that are stimulated in intuitive processes (Volz and Cramon, 2006; KIRSTEN G. Volz, Rubsamen and YVES, 2008; Reimann & Bechara, 2010). When this process is PAPER INTUITION AS PART OF INFORMATICS CREATIVITY finished and the individual believed to have "dictated" the solution of the problem. However, it is recognized that this process is not the same for everyone and shaped into different styles and the way it appears, recognizing the role of affect in the process intuitive (Dane & Pratt, 2009;Dorl er, 2010;Sinclair , 2011;Slovic, Paul and Västfjäll, Daniel, 2010). In this case seen by the author to the affective approach in the study of intuition which is insufficient for understanding and leaves several questions. However, it should be noted in the analysis of intuition by cognitivism, it is necessary to consider the free associations or divergent (Sinclair, 2011) but not offered the way down, the creative intuition as a style used in decision-making ( . Although it appears that Vygotsky has been busy of intuition can peer into his work elements that could lead to explain the origin of free associations expressed above. For Vygotsky (1934:50) in children may find the language that defines complex as follows: "The thinking is complex and coherent thought and purpose, but does not reflect the objective relations just as conceptual thinking" , from which it follows that there are differences between the author called conceptual thinking. Subsequently, the author continues differentiating these concepts "... 
A complex, therefore, is first and foremost a concrete grouping of objects connected by factual links; since it is not formed on the plane of abstract logical thought, the bonds that create it, as well as the bonds it helps to create, lack logical unity and may be of many different kinds. Any factually present connection can lead to the inclusion of a given element in a complex. The fundamental difference between a complex and a concept consists in the following: while the latter groups objects according to one attribute, the links that connect the elements of a complex with the whole, and with each other, can be as diverse as the actual contacts and relationships of the elements" (Vygotsky, 1934: 50). From this paragraph we can infer a great similarity between the free associations analyzed by Ponomariev, the divergent associations analyzed by Sinclair (2011), and the complex thinking addressed by Vygotsky. Following Vygotsky's ideas, complexes are classified into five types (Vygotsky, 1934), in which we can appreciate features relevant to intuition, such as variable criteria in the grouping of objects and selection by contrast and similarity. Another similar device is found in the chain complex: "... The decisive attribute keeps changing during the entire process. There is no consistency in the type of the links or in the manner in which a link of the chain is joined with the one that precedes and the one that follows it, and the original sample has no central significance. Each link, once included in a chain complex, is as important as the first, and may become the magnet that attracts a series of other objects" (Vygotsky, 1934: 52). In the same way he refers to the diffuse complex: "... a complex characterized by the fluidity of the very attribute that links its isolated elements. Through fuzzy and indeterminate ties, groups of perceptually concrete objects or images are formed" (Vygotsky, 1934: 53). Between thinking in complexes and conceptual thinking this author places pseudo-conceptual thinking. On the same page 53 Vygotsky says: "To complete the scheme of thinking in complexes, we describe one last type, the bridge, as it were, between the complexes already described and the final, highest stage in the development of concept formation. To this type we have given the name of pseudo-concept ...", and he continues on page 54, stating: "In the experimental setting, the child produces a pseudo-concept every time he surrounds a sample with objects that could just as well have been assembled on the basis of an abstract concept." In this way a transition is made toward conceptual thinking which, however, does not eliminate thinking in complexes but keeps shaping the underlying inner speech. "In inner speech, the phenomenon reaches its peak. It is not uncommon for egocentric speech to be unintelligible to others; Watson says that inner speech would be incomprehensible ..." (Vygotsky, 1934: 109). "But while in external speech thought is embodied in words, in inner speech words die as soon as the thought has passed. Inner speech is to a large extent thinking in pure meanings; it is dynamic and unstable, fluctuating between word and thought, the two roughly outlined components of verbal thought. Its true nature and place can only be understood after examining the next plane of verbal thought, one still more internal than inner speech ... That plane is thought itself" (Vygotsky, 1934: 110). Many of the authors dedicated to intuition ( ) explain it in a similar way, as a process whose ideas must be noted down immediately in order to be kept. The main limitation of Vygotsky's work, with respect to the key elements related to intuition addressed here, is that it does not indicate how to develop them in students on the basis of his ideas.
As seen in this sketch of the fundamental conceptions, intuition is a multifactor process whose causes have not yet been fully clarified scientifically. It also follows from the outline above that it is a process with a high degree of uncertainty involving non-conscious processes. That is why this author believes that it is a process characterized by complexity. On the other hand, several authors recognize the role of affective processes ( ). In Gonzalez (2007) the subjective sense is defined as the developing psychological unit that inseparably integrates symbolic processes and emotions, so that the emergence of one evokes the other without being its cause, and without there being any linearity in the subsequent unfolding of these processes, during which new psychological features and new subjective meanings keep appearing. Subjective senses are a human production that takes place in experience, but they take on dynamic forms of organization, both within the personality, providing the basis for a redefinition of that concept, and in the different social spaces within which different human activities develop. That is why the author considers that intuition can be regarded as a process that forms part of the subjective sense and develops on the basis of experience ( ). From the variety of existing criteria, and integrating the elements exposed above, for this author intuition is the complex process without conscious regulation that leads to appropriate forms of activity depending on the individual's motivational formation, and that is based on accumulated experiences which form free associations in a particular historical and social context, integrated into the subjective sense. The immediate task is to clarify the relationship between intuition and creativity in computer science. Research on the elements needed to decide what is creative, and whether a person is creative or not, is still intense and controversial, especially in the informatics context. The creative process of people dedicated to solving computing problems or to teaching computing has characteristics that differ from those of other sciences, since it is permeated by the features of the branch of human knowledge in which it develops. The development of creativity in a computing context plays out in three fundamental aspects expressed by Dr. Carlos Exposito (2009): 1. Protection of information. 2. Transmission of information. 3. Conservation of information. It is the author's opinion that working with different systems, and also creating them, is one of the edges where the computer scientist expresses creativity, especially when these systems lead to a paradigm shift. The definitions of creativity by Cuban authors discussed earlier highlight different elements that, in the opinion of the author, must be analyzed in the context of teaching programming or producing software. The author believes that novelty in computing is found in the process of producing concepts, methods, models, systems and/or computer algorithms that have not been produced previously in a given social context. The novelty in this case ranges from software developers and companies that set guidelines for computing work, to the student preparing for software production or computing education. However, it is important to note that the process proceeds in a different manner in each of the cases mentioned above. A remarkable element is the socio-historical context in which the individual develops, as Dr. Mitjáns says
in the definition discussed above, since it covers everyone from the student, to the creator of algorithms and producer of software (occasionally commercial), to the programmer in software-producing companies. In the analysis of creativity in the context of information technology it is essential to consider the conditions under which it develops and the resources available in each case, since these determine the platform used to develop the systems. Consideration of social demands is therefore an important element in the development of creativity in the context of teaching programming. While the whole person must express both the cognitive and the affective, the emotional process is vital in the case of informatics. Motivation for carrying out activities related to solving computing problems, and the contradictions these contain, induces a favorable performance of the cognitive actions needed to resolve them, as expressed by authors such as Shari Park-Gates (2001), Saturnino de la Torre (2002) and Marta Martinez Wheeler (1999), among others. Cognitive activity in informatics is preceded by intense motivation, and the contradictions arising from problematic situations are expressed in the drives of the individual who is building the computing solution. For this author, creativity in computer science is defined as the complex process of producing concepts, methods, models, systems and/or computer algorithms related to information technology development in order to meet social demands, characterized by generation, extension, flexibility and autonomy. The characteristics listed by America Gonzalez (1999) fit the object of study, so it is necessary to recast their dimensions in the computing context, which the author proposes as follows:
Generation: original production itself, which relates to the resourcefulness and discovery needed to act independently and reach creative transformation. Its indicators are: 1. Production of various concepts, models, algorithms or codes for the solution of a problem. 2. Determination of the algorithms or concepts to be applied in the solution of a problem.
Extension: the production of ideas, questions, problematizations and solutions that advance one's own knowledge and experience and/or that of others. In the author's opinion, in computing this refers to obtaining algorithms not studied in class, programs and models that integrate the complex data types discussed into new data types, or new problems to solve. Its indicators are: 1. Design and development of new systems, concepts, algorithms or codes that solve a problem. 2. Ideas to improve existing systems, concepts, algorithms or codes that solve a problem. 3. New problems arising from practice that can be computerized.
Flexibility: the ability to give a variety of responses, modify ideas and overcome rigidity. The author considers it to be expressed in the analysis of the different data models, algorithms and codes used to represent the problem and to determine its solution and coding. Its indicators are: 1. Determining the possible solutions to the problem from the computing resources available. 2. Observing and changing one's opinions, when necessary, by comparison with existing solutions of the problem. 3. Collaboration with the people involved in the process of building the software.
Autonomy: thinking for ourselves and making our own decisions without belittling other people's judgments.
In computing, this means determining which of the algorithms obtained is the most efficient and which model best represents the relationships contained in the problem. Its indicators are: 1. Using one's own criteria in determining the algorithms or concepts to be applied in the solution of a problem. 2. Using one's own experience in developing algorithms, codes or systems. The computing creativity defined above expresses the creation process of a given computer scientist as a function of the individual's personality, together with the attention to be given to the creative process itself. Having defined creativity in computing, intuition in computing should be characterized according to the activity performed, as defined above. That is why the author considers it to be the complex process without conscious regulation that yields the appropriation and/or obtaining of concepts, methods, models, systems and/or computer algorithms, with the project as the key activity, depending on the subjectivity of the individual and based on accumulated experiences that form free associations in a particular historical and social context. Each of these results of intuition in computing activity must then be verified logically, and in this sense the hardware to be used plays an essential role as the ultimate criterion of truth. It is necessary to address this latter point because, in the opinion of the author, the hardware-software dialectic is contextualized in the concrete historical situation of the computing problem to be solved. Still, in the opinion of this author, the issues raised about the development of intuition in computing as part of computing creativity cannot be considered resolved if pathways for that development are not addressed. When commenting on Vygotsky's ideas above, important aspects were discussed that should not be left untreated. It is the author's opinion that placing individuals in situations that allow solutions with high variability, and having them form associations while considering the dialectical relationships between them, could be a solution to this problem. Mistakes should also be noted and included as part of learning, assuming that error is necessary and providing individuals with a holistic and comprehensive training. On the other hand, several authors (Liberman, 2000; Pretz and Totz, 2007; Sinclair, 2010, 2011) point to the need for expertise and personal experience obtained with a high degree of emotional involvement to be taken up by the intuitive processes. For these authors, contradictions underpin the development of intuition, and in the opinion of a large number of authors the teaching problem represents the best opportunity to feed them into the teaching process. It is therefore necessary to characterize the teaching problem in computing education. Within the categorical system of the teaching problem, the problem situation stands out. The problem situation represents the contradiction that produces the student's 'surprise', a state of bewilderment in which he knows that "something" is wrong, that it does not fit the system of knowledge he has appropriated. In the teaching of computing the author recognizes several types of problematic situations, given by the characteristics of this science, as realizations of these general ones. A. No correspondence between the concept, model or procedure and the requirements of the task. This situation is in line with the first one given by Marta Martinez: the student does not know the concept and/or the procedure that allows solving the task.
To solve problems associated with determining a value and comparing it with previous values, it is necessary to introduce the concept of an array. The solution of the problem is given by the array concept as a variable type and by establishing operations on it in a computing expression. B. Contradiction between the concept, model and/or procedure and its expression in a computer system. The student knows various concepts or models but not how they are handled by a computer system or language. Here we find concepts such as loop, variable, column and row. The first of these is taught through the construction of pseudocode, and the contradiction appears when coding the algorithm obtained. C. No correspondence between the expression of a computing concept in another system of the same family and the system in use. This situation is found in students who were trained in Pascal and move to another programming language such as C++ or Java. In this case students rely mainly on analogy, and the system itself helps to solve the problem. It becomes a problem for those students whose prior preparation lets them know what is being sought. Computing experience plays a key role. D. Contradiction between the potential of the system and the task at hand. This is typical when teaching systems that are updated versions of others the student already owns or knows. It appears when teaching the new language instructions needed for the task. E. Contradiction between the algorithm, the pseudocode and its implementation in a system. The contradiction lies in being able to describe the process algorithmically while not being able to encode it in a computer system. An example is seen in the process of copying texts when the students already know the procedure for copying files: the contradiction is how to select the texts, and it can range from encoding a fragment to encoding any step. F. The teaching problem. The student has assimilated the contradiction contained in the problem situation and is oriented toward what to look for. The problem must be well formulated so that it gives the student elements for the search for the unknown, but it provides no solution pathways. Computing education has gone through several approaches, systematized by Dr. Carlos Expósito (2009). The project approach adopted by the author proposes solving a task in which knowledge is integrated around a core concept; this is called by the author a project with minimum requirements. For a student, the solution of the project with minimum requirements involves a set of basic knowledge that must be included in the project to be evaluated. On the basis of these minimum requirements, problematic situations are structured which the author has termed problematic nodes associated with the project. The concatenation of problematic nodes and the relationships established between them to solve the project is one of the fundamental aspects of the systemic approach to teaching computer science (54). The teaching problem is characterized by the formation of the contradiction between the known and the unknown, structured by the teacher. In the problem-based (problémico) approach integrated with the project approach, the author believes that the contradiction can be structured in two substantially different ways: • stated by the teacher; • emerging from the solution of the project submitted by the student.
The second way places a higher level of demand on the student, since he faces several problematic situations (throughout the theme) whose context differs from his own, and the outcome (knowledge) must be reformulated in terms of the problem he has to solve, an important element in structuring its scope. Another edge of the situation is the posing, by the student, of problems that may lead to the solution of his own project; this encourages questioning and problem-posing, which contributes to the development of intuition as part of creativity in computer science.
III. CONCLUSIONS
Intuition is one of the least studied parts of the creative process, although there is a resurgence of its analysis in the scientific literature. The various definitions of intuition in the literature are cognitive in nature, which is not the case from the cultural-historical approach, the main reference of this investigation. Therefore, defining intuition on the basis of historical-cultural presuppositions is a major theoretical result. To be consistent with this, intuition is then defined psychologically within an area of knowledge, together with how to develop it in the professionals of this area, this being the fundamental theoretical result of the research.
5,563.6
2013-06-26T00:00:00.000
[ "Computer Science", "Education" ]
Bounding Quantum Dark Forces Dark sectors lying beyond the Standard Model and containing sub-GeV particles which are bilinearly coupled to nucleons would induce quantum forces of the Casimir-Polder type in ordinary matter. Such new forces can be tested by a variety of experiments over many orders of magnitude. We provide a generic interpretation of these experimental searches and apply it to a sample of forces from dark scalars behaving as $1/r^3$, $1/r^5$, $1/r^7$ at short range. The landscape of constraints on such quantum forces differs from the one of modified gravity with Yukawa interactions, and features in particular strong short-distance bounds from molecular spectroscopy and neutron scattering. Introduction When going beyond the Standard Model (SM) of particle physics, it is natural to imagine the existence of other light particles, which would have been so far elusive because of their weak or vanishing interactions with the SM particles. Such speculations on dark sectors could be simply driven by theoretical curiosity although there are more concrete motivations coming from two striking observational facts: Dark Matter and Dark Energy. In both cases, theoretical constructions elaborated to explain one or both of these fundamental aspects of the Universe tend to assume the existence of dark sectors of various complexity. Among the many possibilities for the content of the dark sector, our interest in this work lies in dark particles with masses below the GeV scale, where Quantum ChromoDynamics (QCD) reduces to an effective theory of nucleons. Would a light scalar couple to nucleons, it would induce a fifth force of the form V = αe −r/λ /r, with λ = /mc being the Compton wavelength of the scalar and m its mass. The presence of such Yukawa-like force is sometimes dubbed "modified gravity". Experimental searches for such fifth forces between nucleons extend from nuclear to astronomic scales and lead to a landscape of exclusion regions, see summary plots in [1][2][3][4][5]. As noted in [6], even in the absence of a light boson linearly coupled to nucleons, other fifth forces can still arise from the dark sector whenever a sub-GeV particle of any spin is bilinearly coupled to nucleons. Such forces would arise from the double exchange of a particle and are thus fundamentally quantum. Moreover, in order to take into account retardation effects, such forces have to be computed within relativistic quantum field theory. This kind of computation has been first done by Casimir and Polder for polarizable particles [7], and by Feinberg and Sucher for neutrinos [8]. We will refer to such quantum forces as Casimir-Polder forces. There is a variety of motivations for having a particle of the dark sector coupling bilinearly to nucleons. The dark particle can be for instance charged under a symmetry of the dark sector, can be a symmetron from a dark energy model, or simply a dark fermion sharing a contact interaction with nucleons. Such Z 2 symmetry can be also needed in order to explain the stability of Dark Matter. In the presence of forces which do not have a Yukawa-like behaviour, as is the case of the Casimir-Polder forces we focus on, the landscape of fifth force searches is expected to change drastically. A thorough investigation of the experimental fifth force searches becomes then mandatory in order to put bounds on such extra forces in a consistent manner, and thus on the underlying dark particles. 
This requires revisiting each of the experimental results, a task that will be performed in this paper. In Sec. 2, we consider Casimir-Polder forces focussing on the case of a scalar with various effective interactions with nucleons. General features of Casimir-Polder forces are then derived in Sec. 3. A generic interpretation of the most recent and stringent fifth force searches, valid for arbitrary potentials, is given in Sec. 4. The exclusion regions will be displayed and discussed in Sec. 5. We emphasize that our approach to constrain dark sectors relies only on virtual dark particles, and is thus independent on whether or not the dark particle is stable. The case where the dark particle is stable and identified as Dark Matter has been treated in a dedicated companion paper, Ref. [6]. Searches for dark sectors via loops of virtual dark particles include Refs. [6,9,10], and are yet under-represented in the literature. Casimir-Polder forces from a dark scalar There are many reasons for which the dark sector could feature a scalar with a Z 2 symmetry with respect to the Standard Model sector. If such a scalar is charged under a new symmetry such as a U (1) X charge while the SM fields are not, the scalar should interact with the SM via bilinear operators. The scalar can also be the pseudo-Nambu-Goldstone boson (pNGB) of an approximate global symmetry, in which case it couples mostly with derivative couplings to the nucleons. Theories of modified gravity can also feature light scalars with a bilinear coupling to the stress-energy tensor [11]. While the properties of these scalars are often considered to be modified by some screening mechanism, it is certainly relevant to consider scenarios where screening is negligible or absent. This is the most minimal possibility, and can also serve as a reference for comparison with the screened models. Moreover for models like the symmetrons, screening does not happen in vacuum. It is convenient to use an effective field theory (EFT) approach to describe the interactions of the dark particle. All the measurements we consider occur well below the quantum chromodynamics (QCD) confinement scale, hence we can readily write down effective interactions with nucleons. The operators we consider have the form O nuc O DS , where O nuc is bilinear in the nucleon fields and O DS is bilinear in the dark sector field. O nuc has in principle aN Γ A N structure, where Γ A can have any kind of Lorentz structure. In the limit of unpolarized non-relativistic nucleons, only the interactions involving O nuc =N N,N γ 0 N are relevant, the other being either canceled by averaging over nucleon spins or suppressed by powers of m −1 N . In this paper we focus on the exchange of a dark scalar. The exchange of dark fermions and dark vectors, either self-conjugate or complex, have been treated in [6], and details of the calculations for all these cases are given in App. A. Here we focus on three types of effective interactions, We assume that only one of these operators is turned on at a time. In the O 0 a,c cases, we assume a real scalar, while for O 0 b we assume a complex scalar. The O 0 a interaction corresponds to the case of a symmetron, the O 0 b interaction is typically the one generated from a heavy Z exchange, and the O 0 c would occur if the scalar is the pNGB of a hidden global symmetry. 
In the last case, as the pNGB mass explicitly breaks the shift symmetry, an interaction of the form m 2 Λ 2 O 0 a could also be present, however its effect would be negligible at short distance hence we do not take it into account. Similar calculations have been performed for disformal couplings in [12,13]. Higher dimensional operators are in principle present in the effective Lagrangian, and are suppressed by higher powers of either Λ or Λ QCD . The EFT is valid for momenta below min (Λ, Λ QCD ) when coupling constants are O(1) in the UV theory. We will assume a universal coupling to protons and neutrons-all our results are easily generalized for non-universal couplings. Also, for simplicity, we do not consider the dark particle coupling to electrons. Including the coupling to electrons would lead typically to stronger forces and thus to enhanced limits. As a result of the O a,b,c interactions, nucleons can exchange two scalars as shown in the Feynman diagram of Fig. 1. This Feynman diagram induces a Casimir-Polder force (i.e. a relativistic van der Waals force) between the nucleons. The forces induced by the O a,b,c operators have been computed in [6] and are given by the potentials where K i is the i-th modified Bessel function of the second kind. The V a force is consistent with a previous calculation of [14] after matching to our conventions. The main steps of the general calculation are as follows. One first calculates the amplitude corresponding to the diagram in Fig. 1. In order to calculate loop amplitudes in the EFT, dimensional regularization has to be used in order not to spoil the EFT expansion. The one-loop amplitudes can be decomposed over the basis Λ is the scale at which the effective theory is matched on to the UV theory, and is also the scale at which the EFT breaks down. Then one takes the non-relativistic limit of the amplitude and identify the scattering potential where s 1,2 (s 1,2 ) corresponds to the spin polarization of each ingoing (outgoing) nucleons. The spatial potential is given by the 3d Fourier transform ofṼ (|q|), where r = |r| and the momentum has been extended to the complex plane in the last equality, ρ ≡ |q|. Using standard complex integration one obtains where [V ] is the discontinuity from right to left across the positive imaginary axis, [V ] = V right −V left , and one has defined ρ = iλ. Notice that λ can also be understood as √ t, the square root of the t Mandelstam variable extended to the complex plane. The discontinuities [f n ] needed to compute the Casimir-Polder force via Eq. (2.6) are given in Appendix A. In the case of the scalar dark particle exchanged via the O a , O b or O c operators, the amplitudes are given in App. B. The discontinuities needed to calculate the V a,b,c potentials are The discontinuity of the nonrelativistic scattering potentials for the three diagrams considered above are At short distance mr 1 the forces behave as while at long distance mr 1 the forces go as (2.10) As sketched in [6], the broad features of these forces can be understood from general principles. The arguments are given in detail in the next section. General features of Casimir-Polder forces Let us first comment on the effective theory giving rise to the Casimir-Polder forces. The fournucleon loop diagrams we consider come from higher-dimensional operators and are thus more divergent than the four-nucleon diagrams from the UV theory lying above Λ. This implies that four-nucleon local operators (i.e. 
counter-terms) of the form (N N ) 2 , (∂ µ (N N )) 2 , . . . are also present in the effective Lagrangian to cancel the divergences which are not present in the UV theory. The finite contribution from these local operators is fixed by the UV theory at the matching scale, and is expected to be of same order as the coefficient of the log Λ term in the amplitude by naive dimensional analysis (this situation is analog to renormalisation of the non-linear sigma model, see Ref. [15]). The loop amplitudes have the form where F (q 2 ) is complex, with F (q 2 = 0) = 0, and G(q 2 ) is a real polynomial in q 2 (both depend also on m, Λ). The log term is a consequence of the divergence. The log term is real and contributes to the running of local four-nucleon operators. The Casimir-Polder force arises from the branch cut of F (q 2 ), and is thus independent of the log term. An experiment measuring only the Casimir-Polder force will have the advantage of being unsensitive to these four-nucleon operators -which are set by the UV completion and thus introduce theoretical uncertainty. This happens either when the experiment is nonlocal by design (e.g. measuring the force between nucleons at a non-zero distance), or by construction of the observables as we will see in the case of neutron scattering. All the measurements considered in this paper are either fully or approximately unsensitive to local four-nucleon interactions. The main features of Casimir-Polder forces between two non-relativistic sources can be understood using dimensional analysis and the optical theorem. We focus on the double exchange of a particle having local interactions with the sources, the operators used in Sec. 2 being examples of such scenario. We further assume that the sources are identical-a similar approach applies similary to different sources. We denote by X the dark particle exchanged,X its conjugate, m its mass. We use nucleons as source for concreteness. X can take any spin. The generic operator we consider has the form where Γ A can be any Lorentz structure. When averaging over the nucleon spins, the first nonvanishing Lorentz structures areN N ("scalar channel"),N γ µ N ("vector channel"), and we will focus on those ones. Within the above assumptions we obtain the following properties: 1. Sign. Operators of the form O(X)N N give rise to attractive forces. Operators of the form O µ (X)N γ µ N give rise to repulsive forces. 2. Short distance. An operator of dimension n + 4 gives rise to a potential behaving at short distance as 3. Long distance. When the square amplitude |M(NN ↔ XX)| 2 taken at √ s ∼ 2m is suppressed by a power (s − 4m 2 ) p (i.e. velocity-suppressed by v 2p ), the long range behaviour of the force is given by Let us prove the above properties. Property 2 is simply a consequence of dimensional analysis. When r 1/m, the potential can be expanded with respect to rm and at first order, V (r) = V (r)| m=0 (1 + O(mr)). In this limit the potential depends only on r and on the effective coupling 1/Λ n squared. The potential having dimension 1, it must have a dependence in 1/r 2n+1 so that dimensions match. Notice that this argument applies similarly for the exchange of a single particle (giving then a 1/r potential) or for the exchange of an arbitrary number of particles. For Properties 1 and 3, let us denote the amplitude of interest ( Fig. 1) by iM t , and introduce the amplitude iM s = iM(NN → X * X * → NN ), which is the s ↔ t crossing of iM t . 
In order to get some insight on iM t , we can study iM s use crossing symmetry. The optical theorem applies to iM s , with where in the last line we use the fact that the amplitude arising from local interactions (Eq. (2.6)) depend only on the center-of-mass energy √ s. The optical theorem is of interest because Im(M t ) is directly related to the discontinuity of M t over its branch cut, which is precisely the quantity needed to calculate non-relativistic potential. In the formalism of Sec. 2, we have It turns out that Im(M t ) > 0 (< 0) corresponds to an attractive (repulsive) force. Let us prove Property 1. For the scalar channel, the crossing of Im(M s ) stays positive, hence Im(M t ) > 0 and the force is attractive. For the vector channel, we have M(NN → XX) ∝ J µ,N J µ X where the J µ are vector currents. The square matrix elements takes the form (J µ,N J ν,N )(J µ,X J ν,X ). All the J µ are conserved currents, J µ q µ = 0. The J µ,N can be pulled outside of the integral in Eq. (3.5). Conservation of the J µ,N currents implies that they project out the components proportional to q µ of the quantity they are contracted with. It follows that where we have introduced s = (q 1 + q 2 ) 2 and A(s) is a positive function. In the non-relativistic limit, one keeps only the µ = ν = 0 components of the nucleon currents, and the projector reduces to q µ q ν − sg µν ∼ q 2 -hence A(s) has to be positive to ensure Im(M s ) > 0. The crossing of whereJ µ,N denotes the crossed nucleon currents. In the non-relativistic limit we haveJ µ,NJν,N ∼ However, when taking the Fourier transform ofṼ (q) (see Eq. (2.6)), |q| is extended to the complex plane. The non-relativistic potential is then given by an integral of Im(M t ) over positive values of the real variable λ, which is related to t by λ ≡ √ t. Hence the t variable in Eq. (3.8) is positive when computing the non-relativistic potential. This implies that Im(M t ) is always negative, and thus the Casimir-Polder force between nucleons induced by a vector channel is always repulsive. Let us finally prove Property 3. We first remark that the long distance behaviour of the V (r) potential amounts to having a steep exponential in ∞ 2m dλλ[Ṽ ]e −λr , see Eq. (2.6). When this is true we are allowed to expand [Ṽ ] as a power series at small values of λ, hence at the point λ = 2m. In order to understand what form this power series takes, let us consider the square amplitude |M(NN ↔ XX)| 2 , which corresponds to pair production or annihilation of X. This amplitude arises from the local operators of Eq. (2.6) hence it depends only on the center-of-mass energy √ s. We extend s to the complex plane. We can always perform a power series expansion near s = 4m 2 , 1 where the 4m 2 N factor is introduced for further convenience and the a, b, c are dimensionful constants. Using the optical theorem, we obtain that and crossing then gives = q m ≡ v taken in the center-of-mass frame is the usual velocity of the X particle. It is common to say that the squared matrix-element is "velocity-suppressed" when e.g. a = 0. The nucleons being by assumption heavier than X, neither production nor annihilation of X can physically happen at this threshold. However, formally, nothing forbids us to perform the expansion. 2 The general case is obtained similarly using the identity 12) We can see that an extra factor of 1/r in V (r) is associated to each factor of s−4m 2 in the expansion of |M(NN ↔ XX)| 2 . 
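As a numerical illustration of Properties 2 and 3, the spectral representation of the potential can be evaluated directly. The sketch below is illustrative only: the overall normalization and the toy discontinuity $[\tilde V](\lambda) \propto \sqrt{\lambda^2-4m^2}/\lambda$ are assumptions made for the example, not the paper's exact expressions. It integrates the λ-representation of V(r) and checks that the short-distance behaviour is a pure power law, while the 2m threshold produces an exponential suppression at long range.

# Numerical illustration of Properties 2 and 3 (a sketch; normalization and the
# toy discontinuity are assumptions).  We take [V](lambda) = sqrt(lambda^2-4m^2)/lambda
# and evaluate V(r) ~ (1/r) * Integral_{2m}^inf dlam lam [V](lam) exp(-lam r).
import numpy as np
from scipy.integrate import quad
from scipy.special import kn

m = 1.0  # dark-particle mass, arbitrary units

def V(r):
    # lam * [V](lam) reduces to sqrt(lam^2 - 4 m^2) for this toy choice
    integrand = lambda lam: np.sqrt(lam**2 - 4.0*m**2) * np.exp(-lam*r)
    val, _ = quad(integrand, 2.0*m, 2.0*m + 200.0/r, limit=400)
    return val / r

def log_slope(r, eps=1e-3):
    """Effective power law d ln V / d ln r."""
    return (np.log(V(r*(1+eps))) - np.log(V(r*(1-eps)))) / (np.log(1+eps) - np.log(1-eps))

for r in [0.01, 0.1, 1.0, 3.0]:
    analytic = 2.0*m/r**2 * kn(1, 2.0*m*r)   # closed form for this toy choice
    print(f"r={r:5.2f}  numeric={V(r):.6e}  analytic={analytic:.6e}  slope={log_slope(r):6.2f}")
# At m*r << 1 the slope tends to -3, a pure power law fixed by the operator dimension;
# at m*r >> 1 the 2m threshold gives an exponential suppression, as argued above.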
Fifth force searches
This section describes how to interpret the results of a number of experiments as bounds on an arbitrary fifth force.
Neutron scattering
Progress in measuring the scattering of cold neutrons off nuclei has recently been made and has been used to put bounds on short-distance modified gravity [16-23]. The cold neutron scattering cross-section can be measured at zero angle by "optical" methods, at non-zero angles using Bragg diffraction, or over all angles by the "transmission" method, which gives the total cross-section [24]. In the following we adapt the analyses of [22] to the Casimir-Polder forces of Eq. (2.2). At low energies the standard neutron-nucleus interaction is a contact interaction in the sense that it can be described by a four-fermion operator $O_{4N}=\bar N N\,\bar N N$. New physics can in general induce both contact and non-contact contributions to the neutron-nucleus interaction. A non-contact contribution vanishes at zero momentum, while a contact contribution remains non-zero and can be described by $O_{4N}$. It is convenient to decompose the scattering length as $l(q) = l^{C}_{\rm std} + l^{C}_{\rm NP} + l^{NC}_{\rm NP}(q)$, where the local terms $l^{C}_{\rm std}$, $l^{C}_{\rm NP}$ are independent of the momentum transfer $q$ and $l^{NC}_{\rm NP}(q)$, which satisfies $l^{NC}_{\rm NP}(q=0)=0$, is the non-contact contribution. The $l^{NC}_{\rm NP}(q)$ term contains the Casimir-Polder force (see Sec. 3) and log terms of the form $|q|^{2n}\log(m/\Lambda)$. The new physics contribution $l_{\rm NP}(q)$ is related to the scattering potential $\tilde V$ by $l_{\rm NP}(q) = 2 m_N \tilde V(q)$, which is just the Born approximation. For the forces described in Eq. (2.2), the new physics contributions are expressed in terms of the loop functions $f_n$ defined in Eq. (2.3). A convenient way to look for an anomalous interaction is to search for $l^{NC}_{\rm NP}(q)$ by comparing the scattering lengths obtained by different methods, using for instance $l_{\rm Bragg}-l_{\rm opt}$ or $l_{\rm tot}-l_{\rm opt}$. This approach eliminates the contact contributions $l^{C}_{\rm std}$ and $l^{C}_{\rm NP}$, and is therefore only sensitive to $l^{NC}_{\rm NP}(q)$.
• Optical + Bragg. One approach is to compare the forward and backward scattering lengths measured respectively by optical and Bragg methods. Using the analysis from [22], one has a 95% CL bound.
• Optical + Total cross-section. The total cross-section measured by the transmission method provides the average scattering length $\bar l_i(k) = \tfrac{1}{2}\int_0^{\pi} d\theta\,\sin(\theta)\, l_i\!\big(4k^2\sin^2(\theta/2)\big)$ (4.6). Using information from the optical-method measurement, we obtain the 95% CL bound with $k_{\rm ex} = 40$ keV.
For both methods, a dependence on the $|q|^{2n}\log(m/\Lambda)$ terms remains, which turns out to be mild in practice. Hence our results are still approximately independent of the local four-nucleon operators, which are fixed by the unspecified UV completion (see Sec. 3).
Molecular spectroscopy
Impressive progress on both the experimental [25-32] and the theoretical [33-44] sides of precision molecular spectroscopy has been accomplished in the past decade, opening the possibility of searching for extra forces below the Å scale using transition frequencies of well-understood simple molecular systems. Some of these results have recently been used to bound short-distance modifications of gravity, see Refs. [5, 45-47]. The most relevant systems for which both precise measurements and predictions are available are the hydrogen molecule H2, the molecular hydrogen-deuterium ion HD+, antiprotonic helium p̄4He+ and the muonic molecular deuterium ion ddµ+, where d is the deuteron.
These last two systems are exotic in the sense that a heavy particle (namely p̄ and µ⁻, respectively) has been substituted for an electron. As a result the internuclear distances are reduced, providing sensitivity to forces of shorter range, and thus to heavier dark particles. The presence of an extra force shifts the energy levels by $\Delta E = \langle \psi | V(r) | \psi\rangle$ at first order in perturbation theory. We have computed these energy shifts for the transitions between the (ν = 1, J = 0) − (ν = 0, J = 0) states of H2, the (ν = 4, J = 3) − (ν = 0, J = 2) states of HD+, the (m = 33, l = 32) − (m = 31, l = 30) states of p̄4He+, and for the binding energy of the (ν = 1, J = 0) state of ddµ+, using the wave functions given in [5, 46]. For the quantum states considered here, the typical internuclear distances are ∼ 1 Å for H2 and HD+, ∼ 0.2 Å for p̄4He and ∼ 0.005−0.08 Å for ddµ+. The bounds on the extra forces are then obtained by requiring that ∆E be smaller than the combined (theoretical + experimental) uncertainty δE. These uncertainties are given in Tab. 1 (see references for details).
Experiments with effective planar geometry
A variety of experiments searching for new forces at sub-millimeter scales measure the attraction between two dense objects with typically planar or spherical geometries. Whenever the distance between the objects is small with respect to their size, these objects can be effectively approximated as infinite plates, and the force becomes proportional to the potential energy between the plates. This is the Proximity Force (or Derjaguin's) Approximation [48]. An important subtlety is that most of the experiments use objects coated with various layers of dense materials, which should be taken into account in the computation of the force. We thus end up calculating the potential between two plates, each with various layers of density. The effective plane-on-plane geometries are summarized in Tab. 2, and the densities of the materials used in these experiments are listed in Tab. 3. It is convenient to describe all these configurations at once using a piecewise mass density function describing n layers over a bulk with density ρ (Eq. (4.9)). In this notation, the layer labelled n is the closest to the other plate. The potential between an infinite plate of density structure $\gamma_a(z)$ and a plate with area A and density structure $\gamma_b(z)$ at a distance s is then given by Eq. (4.10). In practice, most of these sub-millimeter experiments have released their results as bounds on a Yukawa-like force. In order to obtain consistent bounds on the strength Λ of the Casimir-Polder forces as a function of the scalar mass m, we have to compare the plane-on-plane potentials from the Casimir-Polder forces to the plane-on-plane potential from the Yukawa force. Bounds on the (α, m) parameters of the Yukawa force can then be translated into bounds on the (Λ, m) parameters of the Casimir-Polder forces, using the limit-setting procedure provided by each experiment. The plane-on-plane potential for the Yukawa force is straightforward to compute analytically (with ρ_0 = ρ). In the case of the Casimir-Polder forces shown in Eq. (2.2), the triple integrals of Eq. (4.10) are much less trivial to carry out analytically. A numerical integration is however easily done. It is worth noticing that the z-integrals of the Casimir-Polder potentials can be performed using a different representation of the potentials, which naturally occurs when calculating the diagram of Fig. 1
in a mixed position-momentum space formalism, which we will use extensively in future work [56].
Bouncing neutrons
New forces can also be probed using bouncing ultracold neutrons, i.e. neutrons with velocities of a few m/s [57-61]. The vertical motion of a neutron bouncing above a mirror nicely realizes the situation of a quantum point particle confined in a potential well, the gravitational potential $m_N g z$ pulling the neutron down and the mirror pushing the neutron up. The properties of the discrete stationary quantum states of the bouncing neutron can be calculated exactly. The wavefunction of the k-th state is proportional to the Airy function $\mathrm{Ai}(z/z_0-\epsilon_k)$, where $\epsilon_k$ is the sequence of the negative zeros of Ai and $z_0 = (2 m_N^2 g/\hbar^2)^{-1/3} \approx 6\,\mu$m. The theoretical energies of the quantum states are $E_k = m_N g z_0\, \epsilon_k = \{1.41,\ 2.46,\ 3.32,\ 4.08,\ \cdots\}$ peV (4.13). Recently, a measurement of the energy difference $E_3 - E_1$ was performed at the Institut Laue-Langevin in Grenoble using a resonance technique [62]. The result is in agreement with the theoretical predictions. From this experiment a bound can be set on any new force which would modify the energy levels, given the experimental precision quoted in Eq. (4.14). Let us calculate the energy shift due to the new Casimir-Polder dark force. The additional potential of a neutron at a height z above a semi-infinite glass mirror is obtained by integrating the two-nucleon potential over the mirror, where $\rho_{\rm glass}/m_N = 10^{10}\,{\rm eV}^3$ is the number density of nucleons in the glass and $V_i(r)$ is the potential between the neutron and one nucleon at a distance $r = \sqrt{\rho^2 + z^2}$. The double integral in the expression of the potential can be simplified to a single integral. In the case of the potentials $V_a$ and $V_b$, the integrals cannot be calculated analytically. However, we found suitable analytical approximations, Eqs. (4.17) and (4.18), having the correct asymptotic behaviour at zero and infinite height. The approximate expressions have a relative precision of better than 50% for $V_{a,z}$ and better than 3% for $V_{b,z}$, for all values of z. The case of $V_{c,z}$ remains to be done. Using the approximate expressions, we have computed the shift in the energy levels of the neutron quantum bouncer using first-order perturbation theory. The bounds on the extra forces $V_a$ and $V_b$ as a function of the mediator mass m are obtained from the experimental constraint (4.14). They are reported in Figs. 2 and 3.
Moon perihelion precession
The existence of a fifth force at astrophysical scales would imply a slight modification of planetary motions. Any such fifth force can be treated perturbatively whenever it is small with respect to gravity at the distance between the two bodies. The modification of the equation of motion implies, among other effects, an anomalous precession of the perihelion of the orbit. In the case of the Moon, this precession is measured to high precision by lunar laser ranging experiments [63]. In the equation of motion, $L \equiv m\, r^2\, d\theta/dt$ is the conserved angular momentum and the first term in the parenthesis is the gravitational force. The solution of the unperturbed equation reads $u(\theta) = \bar u\,[1 + \epsilon\cos(\theta - \theta_0)]$, where $\epsilon$ is the eccentricity ($\epsilon = 0.0549$ for the Moon), $\theta_0$ indicates the perihelion of the ellipse, and the major semiaxis a is given by $a^{-1} = \bar u\,(1-\epsilon^2)$. At first order in perturbation theory, the extra force enters simply as a constant, $F_i(1/a)$, which only modifies $\bar u$, the overall size of the orbit. At second order in perturbation theory, the expansion of the extra force generates a term linear in u, which modifies the frequency of the orbit on the left-hand side of the equation of motion.
The motion is now of the form $u(\theta) \simeq \bar u\,[1 + \epsilon\cos(\omega(\theta - \theta_0))] + \dots$, where the ellipsis denotes irrelevant corrections to the overall magnitude of the orbit. Having $\omega \neq 1$ implies a precession of the perihelion, which can be seen using $\cos\!\big(\omega(\theta - \theta_0)\big) = \cos\!\big(\omega(\theta - \theta_0 + \tfrac{2\pi n}{\omega})\big)$. The precession angle accumulated between two rotations then follows. The Moon precession angle is constrained by lunar laser ranging experiments. Other well-understood perturbations induce a precession of the Moon's orbit: the quadrupole field of the Earth, the other bodies of the solar system, and general relativity. Once all these effects are taken into account, one obtains a bound on an extra, anomalous precession angle. Following Ref. [2], an experimental limit from lunar laser ranging is adopted for this anomalous precession angle.
The behaviour in λ^{-5} is expected to be overwhelmed by the increase of the force in r^{-7}, implying that bounds from the experiments at the smallest scales (from neutron scattering and molecular spectroscopy) dominate over all the bounds from larger distances. The exclusion regions for the $V_a$, $V_b$, $V_c$ Casimir-Polder potentials are presented in Figs. 2, 3 and 4, respectively. For the $V_a$ potential, we find that the Eöt-Wash bound is the dominant one for λ > 10^{-3} m. For both the $V_b$ and $V_c$ potentials, we indeed obtain an inversion in the hierarchy of bounds: the two leading bounds turn out to be the one from the ddµ+ molecular ion and the neutron scattering bound combining optical and total cross-sections. This fact can be taken as an incentive to pursue and develop such small-scale experiments. Interestingly, for $V_a$ the bound from antiprotonic helium p̄He+ is stronger than the bound from the ddµ+ ion, while this is not the case for the $V_b$ and $V_c$ potentials. This feature comes from the fact that the wave function of the ddµ+ ground state has a large tail towards short distances. This tail enhances the contribution of the potentials which grow faster at small distance, hence the ddµ+ bound gets favored with respect to the p̄He+ bound for $V_b$, and even more for $V_c$. The leading bound being either ddµ+ or p̄He+ depending on the potential, further studies (both theoretical and experimental) of both systems should definitely be encouraged. Using the calculation given in Sec. 4.5, we find that the limits from lunar laser ranging are indeed subleading. At zero mass, the bounds on Λ for the $V_a$, $V_b$, $V_c$ potentials are found to be respectively Λ > 2 GeV, 6 · 10^{-5} eV and 2 · 10^{-8} eV. All these bounds are overwhelmed by stronger ones from shorter-distance experiments.
Conclusions
There are many motivations, including Dark Matter and Dark Energy, for speculating on the existence of a dark sector containing particles with a bilinear coupling to the Standard Model particles. Whenever one of the dark particles is light enough and couples to nucleons in a spin-independent way, it induces forces of the Casimir-Polder type that are potentially accessible to fifth force experiments across many scales. The short- and long-range behaviours of these forces, as well as their sign, can all be understood and predicted using dimensional analysis and the optical theorem. We provide a comprehensive (re)interpretation of bounds from neutron scattering to the Moon perihelion precession, applicable to any kind of potential. We then focus on the case of a scalar with a variety of couplings to nucleons, generating forces with 1/r^3, 1/r^5, 1/r^7 short-distance behaviours.
It turns out that forces behaving as 1/r^5 and 1/r^7 are best constrained by neutron scattering and molecular spectroscopy, which provides extra motivation to pursue these kinds of low-scale experiments. Implications for Dark Matter searches have been discussed in Ref. [6].
A Calculation of the potentials
This appendix contains details of the computation of the potentials in Eq. (2.2) and of those given in Ref. [6]. The full set of operators considered involves a dark particle of spin 0, 1/2 or 1, denoted by φ, χ and X respectively; π and c, c̄ are respectively the Goldstone bosons and ghosts accompanying X. The dark particle can be self-conjugate (real scalar or vector, Majorana fermion) or not (complex scalar or vector, Dirac fermion). When X is complex, so are π, c and c̄. We give the results for all cases. We calculate the loop diagram of Fig. 1 induced by each of these operators using dimensional regularisation. The matching of the effective theory onto the UV theory being done at the scale Λ, we can readily identify the divergent integrals (see [64, 65]). From these amplitudes, the discontinuities of the non-relativistic scattering potential $\tilde V$ are given by Eq. (2.4), with the discontinuities of $f_{0,1,2}$ given in Eq. (A.10). The $[f_n]$ discontinuities can be obtained by noticing that $\ln\Delta$ has a branch cut between $x_-$ and $x_+$ with a discontinuity of $2\pi i$. Finally, the spatial potential is given by Eq. (2.6). The integrals over λ needed in the last step of the calculation are
$\int_{2m}^{\infty} d\lambda\, \sqrt{\lambda^2 - 4m^2}\; e^{-\lambda r} = \frac{2m}{r}\, K_1(2mr)$ , (A.13)
$\int_{2m}^{\infty} d\lambda\, \lambda^2 \sqrt{\lambda^2 - 4m^2}\; e^{-\lambda r} = \frac{8m^3}{r}\, K_1(2mr) + \frac{12 m^2}{r^2}\, K_2(2mr)$ , (A.14)
$\int_{2m}^{\infty} d\lambda\, \lambda^4 \sqrt{\lambda^2 - 4m^2}\; e^{-\lambda r} = \frac{32 m^4}{r^2}\, K_2(2mr) + \frac{120 m^3}{r^3}\,\cdots$ , (A.15)
with $q = p_1 - p_2$. These integrals can be reduced to the basis shown in Eqs. (A.3), (A.4), (A.5) using textbook techniques (see [64], including the Feynman trick).
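The closed forms (A.13) and (A.14) are easy to check numerically. The short sketch below (an illustrative check, not part of the original paper) compares a direct quadrature of the left-hand sides against the Bessel-function expressions for arbitrary test values of m and r.

# Quick numerical cross-check (a sketch) of the closed forms (A.13) and (A.14).
import numpy as np
from scipy.integrate import quad
from scipy.special import kn

m, r = 0.7, 1.3  # arbitrary test values

def lhs(power):
    """Integral_{2m}^inf dlam  lam^power * sqrt(lam^2 - 4 m^2) * exp(-lam r)."""
    f = lambda lam: lam**power * np.sqrt(lam**2 - 4.0*m**2) * np.exp(-lam*r)
    return quad(f, 2.0*m, np.inf)[0]

rhs_A13 = 2.0*m/r * kn(1, 2.0*m*r)
rhs_A14 = 8.0*m**3/r * kn(1, 2.0*m*r) + 12.0*m**2/r**2 * kn(2, 2.0*m*r)

print(lhs(0), rhs_A13)  # (A.13): the two numbers should agree
print(lhs(2), rhs_A14)  # (A.14): the two numbers should agree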
8,222.2
2017-10-02T00:00:00.000
[ "Physics" ]
The Mansurov effect: Seasonal and solar wind sector structure dependence – We investigate the connection between the interplanetary magnetic field (IMF) B_y-component and polar surface pressure, also known as the Mansurov effect. The aim of the investigation is to unravel potential dependencies on specific seasons and/or solar wind sector structures, and it serves as a sequel to Edvartsen et al. (2022) [J Space Weather Space Clim 12: 11]. The proposed mechanism for the effect involves the ability of the IMF to modulate the global electric circuit (GEC), which is theorized to impact and modulate cloud generation processes. Using daily ERA5 reanalysis data for geopotential height since 1968, we find no significant response confirming the current Mansurov hypothesis. However, we do find statistically significant correlations on decadal timescales in the period March-May (MAM) in the northern hemisphere, but with an unusual timing. Similarly phased anomalies are also found in the southern hemisphere for MAM, but not at a significant level. In an attempt to explain the unusual timing, heliospheric current sheet crossing events, which are highly correlated with the B_y-index, are used. These events result in higher statistical significance in the NH for the MAM period, but cannot fully explain the timing of the response. In general, these statistically significant correlations differ from previously reported evidence on the Mansurov effect, and suggest a revision of the Mansurov hypothesis. Our results also highlight a general feature of time-lagged cross-correlation with autocorrelated variables, where the correlation value itself is shown to be a fragile indicator of the robustness of a signal. For future studies, we suggest that the p-values obtained by modern statistical methods be considered, and not the correlation values alone.
Introduction
The hypothesis of the Mansurov effect, which assumes a relation between daily polar surface pressure and the B_y-component of the interplanetary magnetic field (IMF), was first proposed by Mansurov et al. (1974). Multiple studies have found a correlation supporting this hypothesis in more recent times (Burns et al., 2008; Lam et al., 2013; Lam & Tinsley, 2016; Zhou et al., 2018; Tinsley et al., 2018, 2021). However, Edvartsen et al. (2022) found the previous correlations to be below the 95% statistical significance limit. Moreover, the 27-day cyclic response, which has previously been used as evidence for the effect (Burns et al., 2008; Lam et al., 2018; Tinsley et al., 2021), was shown to occur as a statistical artifact due to the periodic B_y-forcing and the high autocorrelation in the surface pressure. This work aims at investigating the open ends not addressed by Edvartsen et al. (2022), mainly the potential seasonal and/or solar wind sector structure dependence of the link between the B_y-forcing and the polar surface pressure response. The Mansurov hypothesis assumes a positive (negative) correlation between B_y and surface pressure anomalies in the Southern (Northern) Hemisphere. The effect is thought to arise in connection with the Global Electric Circuit (GEC). The GEC links the electric fields and currents flowing in the lower atmosphere, ionosphere and magnetosphere to form a global spherical conductor (Siingh et al., 2007). Global thunderstorms act as batteries charging the GEC by generating upward-driven currents J_z.
In addition to electrified clouds, this maintains an average potential difference (V_i) between the ionosphere and the Earth's surface of about 250 kV (Tinsley, 2000; Williams, 2005). In fair-weather regions, a return current (J_z) flows from the ionosphere to the surface, thereby completing the GEC. As the solar wind flows radially outwards from the Sun with velocity V and its frozen-in magnetic field B, an observer stationed at Earth sees a V × B motional electric field. Through the conducting magnetic field lines, the potential of this electric field is superimposed on the global ionospheric potential V_i (Tinsley, 2008). In Geocentric Solar Magnetospheric (GSM) coordinates, a magnetic field in the y-direction (B_y) gives rise to a potential difference between the northern and southern polar cap ionospheres of typically a few tens of kV (Tinsley & Heelis, 1993). As the ionospheric potential V_i changes, so does the fair-weather current J_z. Figure 1 shows an illustration of the components involved in the mechanism, mainly the perturbation of the ionospheric potential and the effect on J_z. Tinsley (2008) also discusses other sources able to modulate J_z and their links to atmospheric changes. Studies have found a relation between Galactic Cosmic Rays (GCR) and cloud cover over decadal timescales (Tinsley, 2008; Veretenenko & Ogurtsov, 2012; Veretenenko et al., 2018). Correlations have also been found between internally driven modulations of J_z (the thunderstorm generator) and atmospheric pressure changes, together known as the Burns effect (Burns et al., 2007, 2008; Zhou et al., 2018). It is suggested that all these mechanisms, through the modulation of the ionospheric fair-weather current J_z, affect microphysical processes in clouds (Tinsley, 2022). As the currents flow through high gradients of conductivity across cloud boundaries, they add to the separation of positive and negative ions. These ions can attach to aerosols and droplets, where they influence microphysical processes through the Coulomb interaction (Tinsley & Deen, 1991; Tinsley, 2000, 2008). The influence on the microphysics of clouds should occur nearly instantaneously. However, this effect is relatively small, and it is predicted that the microscale changes take days before materializing as macro-physical changes in cloud radiative properties. Furthermore, after manifesting, these radiative changes might lead to pressure responses observed at the surface level (Frederick et al., 2019; Tinsley et al., 2021). Both the atmosphere and the solar wind are highly variable in nature, potentially leading to different surface responses under different conditions. Tinsley et al. (2021) show an intensified relation between B_y and the surface pressure anomaly during local northern winter; however, no significance estimate is provided. Zhou et al. (2018) found that during the four years from 1998 to 2001, the correlation between the vertical electric field and surface pressure is larger during local winter in both hemispheres. Regarding the variability of the solar wind, Tinsley et al. (2021) found that cloud irradiance over Alert, Canada was larger when the solar wind structure was two-sector (IMF B_y oscillating at a 27-day period) compared to four-sector (IMF B_y oscillating at a 13.5-day period).
It is also highlighted that the most cited period in favor of the Mansurov effect, 1999–2002, is dominated by two-sector structures. In our work, we will focus exclusively on the Mansurov effect. We seek to determine whether there is statistically significant evidence in favor of its existence, and therefore confine attention in this study to the correlation between IMF By and surface pressure. We do not attempt to comment on the viability of any particular mechanism. In contrast to our previous work (Edvartsen et al., 2022), where we questioned the statistical significance of the effect on the basis of analysis of continuous decadal timescales, we now address the potential seasonal and solar wind sector structure dependence.

2 Data and method

2.1 Solar wind (By) data

We use daily averaged IMF By (Geocentric Solar Magnetospheric, GSM, coordinates) values obtained from the National Space Science Data Center (NSSDC) OMNIWeb database (http://omniweb.gsfc.nasa.gov) for the interval 1968–2020. In this coordinate system, X points along the Sun–Earth line, Z points along Earth's magnetic dipole axis, and Y is perpendicular to both X and Z.

Pressure/geopotential height data

For the atmospheric data, we use the European Centre for Medium-Range Weather Forecasts Reanalysis (ERA5) (https://cds.climate.copernicus.eu). These data are constructed by interpolating observations with numerical simulations and models, effectively constructing a high-resolution atmospheric database. It is noted that reanalysis data do not have the same accuracy as purely observational data at every grid point. Nevertheless, they still allow for a physically justified approximation in the grids where observations are not accessible. Multiple studies have used reanalysis data in the examination of the Mansurov effect (Lam et al., 2013, 2018; Zhou et al., 2018; Freeman & Lam).

Fig. 1 (caption). IMF By(+) leads to a decrease in V and J in the NH and an increase in V and J in the SH. IMF By(−) leads to an increase in V and J in the NH, and a decrease in V and J in the SH. Relating this to the Mansurov-associated pressure changes means that an increase in the ionospheric potential and fair weather current leads to an increase in pressure.

We focus on the geopotential height of the 700 hPa level in both hemispheres. In the SH this represents the surface, while in the NH this represents a few kilometers above surface level. The geomagnetic perturbations of IMF By in the ionosphere are centered around the geomagnetic pole. Therefore, the geopotential height is averaged to one value for each hemisphere from 70° poleward in geomagnetic coordinates (mlat). The full data period covers 1968–2020. To account for seasonal variability, a perturbation value is obtained for each hemisphere (Zg(NH) and Zg(SH)). These are obtained by subtracting a running mean of ±15 days from the daily value of the geopotential height data series.

Modern statistical methods

Analogous to Edvartsen et al. (2022), we use Monte Carlo (MC) simulation together with the false discovery rate (FDR) method to estimate the statistical significance. The MC approach handles the uncertainty introduced by temporal autocorrelation. The FDR method accounts for testing multiple null hypotheses simultaneously, namely the expected increase in falsely rejected null hypotheses at the 5% level as the number of hypotheses itself increases. The following sections provide details on how the MC and FDR methods are implemented in this study.
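A minimal sketch of the perturbation series Zg defined above (daily value minus a centred ±15-day running mean) is given below. The variable names and the synthetic input are hypothetical; in practice z700 would be the polar-cap average (70°–90° mlat) of the ERA5 700 hPa geopotential height.

```python
import numpy as np
import pandas as pd

def polar_cap_anomaly(z700: pd.Series, half_window: int = 15) -> pd.Series:
    """Daily perturbation Zg: the value minus a centred +/-15-day running mean."""
    running_mean = z700.rolling(window=2 * half_window + 1, center=True,
                                min_periods=1).mean()
    return z700 - running_mean

# Synthetic stand-in for the polar-cap-averaged ERA5 Z700 series (illustration only)
dates = pd.date_range("1968-01-01", "2020-12-31", freq="D")
z700 = pd.Series(np.random.default_rng(0).standard_normal(len(dates)).cumsum(), index=dates)
zg = polar_cap_anomaly(z700)
```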
Monte Carlo approach

The main goal of the MC approach is to construct a repeated analysis with similar statistical conditions as found in the original data series, however with an introduced element of randomness for each iteration. The process results in a distribution of simulated results for which the null hypothesis is assumed to hold. As such, original findings can be compared to the fraction of equally or more extreme simulated results to obtain the p-value, which then becomes the likelihood of obtaining a similar result by chance. The main investigative tool used in our study is the time-lagged cross-correlation method (Pearson linear correlation coefficient). It correlates two different data series (forcing and response), with an introduced shift with respect to each other in the temporal direction. The method can therefore identify directionality (forcing → response) between the data series, as well as the associated time lag. The MC significance test can be implemented by replacing the response data series with surrogate data while keeping the forcing data identical. The surrogate data have to be equivalent in terms of statistical features (e.g., autocorrelation, standard deviation, mean, etc.). Lancaster et al. (2018) provide a technical overview of different ways to create simulated data. We use the Fourier transform (FT) method, which is computationally cheap and easy to implement, and proceeds as follows: First, the FT (ftx) of the original response data series, the geopotential height, is calculated. Then, a random phase vector (φr) is generated. As the FT is symmetrical, the new phase-randomized vector (ftr) can be obtained by multiplying the first half of ftx by exp(iφr) (this corresponds to the positive frequencies). The second half of ftr is then computed by horizontally flipping the complex conjugate of the first half. Finally, the inverse Fourier transform of ftr gives the FT surrogate data. This method was initially introduced to test for non-linearity in data. It has, however, been shown by e.g. Theiler & Prichard (1996) that the FT-based method provides a good surrogate technique when the statistics of interest are not pivotal, meaning that the distribution of the targeted values (the correlation value in our case) under the null hypothesis is unknown. Figure 2 displays the results of the FT method performed on the geopotential height data series at the 700 hPa level averaged over 70°–90°S for the period 1968–2020. The top left panel shows the raw geopotential height data plotted against time, while the bottom left panel shows the surrogate data after the FT procedure. In the middle panels, the autocorrelation functions for the raw (top) and surrogate data (bottom) are shown, while the right panel shows the power spectra of the raw (blue) and surrogate data (red). As expected, the FT method produces a physically unrelated surrogate data series; however, it retains the necessary statistical conditions like autocorrelation (which implies the same number of independent data points), power spectrum, and other features such as standard deviation, variance, and mean (not shown). This procedure requires continuous data. However, investigating the seasonal dependence of the response in December, January, and February (DJF) requires that these portions of the full continuous data period be extracted.
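A minimal sketch of the phase-randomization recipe described above is shown below. It uses numpy's real FFT, which enforces the conjugate symmetry that the text builds by hand; instead of adding a random phase to the original phases, it replaces them with uniformly random phases, which is a distributionally equivalent way of obtaining an FT surrogate.

```python
import numpy as np

def ft_surrogate(x: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Phase-randomised (Fourier transform) surrogate of a real-valued series.

    The amplitude spectrum (hence power spectrum, variance and autocorrelation)
    is preserved while the Fourier phases are randomised."""
    x = np.asarray(x, dtype=float)
    ft = np.fft.rfft(x - x.mean())
    phases = rng.uniform(0.0, 2.0 * np.pi, size=ft.size)
    phases[0] = 0.0                       # keep the zero-frequency term real
    if x.size % 2 == 0:
        phases[-1] = 0.0                  # keep the Nyquist term real for even lengths
    randomized = np.abs(ft) * np.exp(1j * phases)
    return np.fft.irfft(randomized, n=x.size) + x.mean()

rng = np.random.default_rng(0)
surrogate = ft_surrogate(np.random.default_rng(1).standard_normal(1000), rng)
```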
To produce the surrogate data representing the geopotential height for every DJF, we perform the FT procedure on every individual DJF period before finally stitching the surrogate data together to form a single data series (|DJF|DJF|DJF|...). This is computationally expensive, but necessary to avoid introducing artificial frequencies not found in the original data.

False discovery rate

The FDR is an appropriate tool when testing multiple null hypotheses simultaneously. When testing a null hypothesis in isolation, the p-value obtained by our MC approach defines the probability of obtaining a result at least as extreme as the observed result, under the assumption that the null hypothesis holds. For example, the common p = 0.05 threshold implies that there is a 5% probability of obtaining a given result under the assumption that the null hypothesis is correct. When N null hypotheses are tested (e.g., a map plot with multiple grid points or a time-lagged cross-correlation with multiple lead-lags), the probability of falsely rejecting at least one null hypothesis increases as N increases. The FDR method, developed by Benjamini & Hochberg (1995) and later applied to atmospheric sciences by Wilks (2016), aims to account for the increase in the expected rate of falsely rejected null hypotheses as N increases. In its simplest form, the FDR method assumes statistically independent null hypotheses and an identical distribution of observations (i.e., the data characteristics, such as the mean, median, and standard deviation, are the same for every group or sample being compared). When dealing with atmospheric data such as geopotential height, high autocorrelation exists, both temporally (Fig. 2, middle panels) and spatially. In a time-lagged cross-correlation plot or a map plot with multiple grid points, each data point will therefore not be statistically independent. To address this issue, Wilks (2016) improved on the approach developed by Benjamini & Hochberg (1995) by introducing a factor that accounts for the autocorrelation in the data. The full process involves the computation of p-values for each data point (MC approach), before sorting them in ascending order. The sorting forms the set i = 1, ..., N, where N represents the total number of individual null hypotheses to be tested. A new global p-value, pFDR, is then calculated by iterating through the individual p-values starting at the lowest, and looking for the last p-value fulfilling the equation

p(i) ≤ (i/N) · αFDR.    (1)

In the case of independent individual null hypotheses, αFDR = 0.05 ensures that the global p-value (pFDR) is correctly interpreted at the 95% confidence level. The statistical significance of individual tests is determined by comparing their original p-values against the threshold value pFDR. Only those tests with p-values equal to or lower than pFDR are considered statistically significant. However, as discussed, individual data points in a time-lagged cross-correlation plot or map grid plot are not independent. Wilks (2016) demonstrates that, for the spatial autocorrelation commonly found in atmospheric data (an e-folding distance of 1.54 × 10^3 km), setting αFDR = 0.10 corrects for dependence between data points and ensures that the pFDR threshold is exceeded in only 5% of the cases globally. We can calculate which αFDR value is appropriate for our specific data. This analysis will also give insight into how the FDR approach (Wilks, 2016) works at a global scale.
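A minimal sketch of the FDR threshold computation described above (Eq. (1)) is given below; the example p-values are hypothetical placeholders for the 41 lead-lag p-values produced by the MC simulation.

```python
import numpy as np

def fdr_threshold(p_values: np.ndarray, alpha_fdr: float = 0.09) -> float:
    """Benjamini-Hochberg style FDR control (cf. Wilks, 2016).

    Returns p_FDR: the largest sorted p-value p_(i) satisfying
        p_(i) <= (i / N) * alpha_FDR.
    Tests with p <= p_FDR are declared significant; 0.0 is returned if no
    p-value passes, i.e. nothing is significant."""
    p = np.sort(np.asarray(p_values, dtype=float))
    n = p.size
    thresholds = alpha_fdr * np.arange(1, n + 1) / n
    passing = p[p <= thresholds]
    return float(passing.max()) if passing.size else 0.0

# Example with hypothetical p-values for 41 lead-lags
rng = np.random.default_rng(1)
p_vals = rng.uniform(size=41)
p_fdr = fdr_threshold(p_vals, alpha_fdr=0.09)
significant = p_vals <= p_fdr
```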
The left panel of Figure 3 shows the distribution of correlation values constructed from an MC simulation with 20,000 iterations. The distribution of correlation values is made by cross-correlating the real IMF By for the period 1968–2020 with surrogate data made from the geopotential height (700 hPa level averaged over 70°–90°S for the same time period) with the FT method, for lead-lags −20 to 20. Then, we perform another 20,000 iterations of the same setup. In the right panel, the results from the new iterations are compared against the distribution to the left to calculate the p-value at each specific lead-lag for each individual iteration. Simultaneously, the FDR approach is applied to the p-values for each iteration, where five different values for αFDR are tested. When αFDR = 0.09, only 5% of the iterations obtain p-values passing the global FDR threshold. In other words, for our specific data, when αFDR is set to 0.09, 5% of global responses will pass the FDR test when the null hypothesis is assumed correct. Therefore, αFDR = 0.09 will be used in the analyses conducted in this study. The requirement of an identical distribution of the observations is also met in our case. The left panel of Figure 3 shows that all lead-lags have a near identical distribution of correlation values after the MC simulation. The statistical features of the geopotential height data series, such as standard deviation, mean, and median, are also similar for two consecutive days, and only diverge slightly with increasing intervals between the days being compared (e.g., winter months tend to exhibit greater variance than summer months). However, since the lead-lag plots in our study are limited to a maximum of 41 days, the distribution of observations can be considered approximately identical.

False discovery rate in combination with the Monte Carlo approach

The FDR method (Eq. (1)) requires a minimum p-value in order to reject the global null test. For example, if 50 data points are analyzed with αFDR = 0.05, the first sorted p-value must be lower than or equal to (1/50) · 0.05 = 0.001.

Fig. 2 (caption fragment). Right panel: power spectrum of the raw data (blue) and the FT surrogate (red). As can be seen, there is an identical match for the autocorrelation and the power spectrum.

Assuming the null hypothesis holds, the distribution of p-values will be uniform. Therefore, obtaining a p-value of 0.001 is a 1/1000 event. If one has exactly 1000 tries, the probability of obtaining the 1/1000 event can be calculated as follows:

P(one MC iteration is not the 0.001 event) = 1 − 0.001 = 0.999,
P(1000 MC iterations contain no 0.001 event) = 0.999^1000 ≈ 0.368,
P(1000 MC iterations give at least one 0.001 event) = 1 − 0.999^1000 ≈ 0.632.

For 1000 MC iterations, the probability that at least one 1/1000 event occurs is therefore only 63.23%. Hence, applying 1000 iterations will not give an accurate estimate of the underlying statistics of the distribution at a 0.001 resolution. We therefore propose a formula that gives the lowest possible number of iterations to perform for an accurate representation of the underlying statistics at the required resolution when combining the MC approach with the FDR. It is analogous to probability with replacement, with the FDR criterion for the first sorted p-value substituted for the 1/1000 chance event (the FDR criterion gives the desired resolution level for accurate statistics).
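The probability quoted above can be verified with a two-line calculation; a minimal check, using only the numbers stated in the text:

```python
# Probability of seeing at least one p <= 0.001 result in 1000 MC iterations,
# assuming the null hypothesis holds (p-values uniform on [0, 1]).
p_event = 0.001
n_iter = 1000
p_at_least_one = 1.0 - (1.0 - p_event) ** n_iter
print(f"{p_at_least_one:.4f}")   # -> 0.6323, i.e. 63.23%
```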
The new equation is also set equal to 1 to indicate that the statistics at the desired level of resolution should be achieved 100% of the time when the number of iterations is optimized:

1 − (1 − αFDR/N)^n = 1.

Evidently, the equation above can only be approximately fulfilled, as its left-hand side converges to 1 only as the number of iterations n goes to infinity:

lim_{n→∞} (1 − αFDR/N)^n = 0.

The right-hand side can be replaced by EA, symbolizing an error in accuracy:

(1 − αFDR/N)^n = EA.

Then, applying the natural logarithm on both sides, the equation gives the number of iterations required to achieve the desired error in accuracy when representing the underlying statistics at a given resolution:

n = ln(EA) / ln(1 − αFDR/N).

Fig. 3 (caption). To accurately estimate the appropriate αFDR in a way that takes into account the autocorrelation present in our data, all p-values generated from each iteration of the simulation are processed through the FDR method, where five different values for αFDR are tested. By doing so, we can obtain the specific αFDR value which ensures that any signal determined to be statistically significant occurs globally in only 5% of cases when the null hypothesis is assumed to be true for our data. As can be seen, when αFDR is set to 0.09, only 5% of the 20,000 iterations produce a response that passes the global FDR limit.

For our cases, where most time-lagged cross-correlation plots consist of 41 lead-lags and αFDR = 0.09, and by setting EA = 10^−9 (EA = 10^−9 indicates that there is a 1 in a billion chance of not obtaining an accurate representation of the underlying statistics at the desired resolution), the equation yields:

n = ln(10^−9) / ln(1 − 0.09/41) ≈ 9430.

This implies that >9430 iterations will ensure that our specified resolution of (1/41) · 0.09 = 0.0022 is fulfilled with 99.9999999% accuracy. The following analyses apply significance assessments based on 10,000 MC iterations.

3.1 Full data period 1968–2020

The time-lagged cross-correlation between the IMF By and the geopotential height perturbations Zg(NH) and Zg(SH) is calculated from 1968 to 2020. As seen in Figure 4, no significance is obtained in either hemisphere by applying the MC simulation and FDR significance tests for the interval −13 to +13. However, the SH exhibits a peak in the correlation values from lead-lag −8 to −2.

3.2 Seasonality analysis

The next step is to look for a potential seasonal dependency. The atmosphere exhibits large variability depending on the seasons, which in turn could lead to different pathways and strengths of the coupling between IMF By and the polar surface pressure. The full data period is therefore sorted into the seasons December, January, February (DJF); March, April, May (MAM); June, July, August (JJA); and September, October, November (SON). The time-lagged cross-correlation analysis is then performed for each season individually, with the results shown in Figure 5. In the NH, a significant positive anomaly occurs around lead-lag −4 for the MAM period (which remains significant after the FDR method). The same positive anomaly shown in Figure 4 still occurs in the SH for both MAM and JJA, but is not significant with the FDR interval −13 to +13 lead-lags. The Mansurov effect should impose opposite responses in the ionospheric polar cap for the two hemispheres: positive anomalies are expected in the SH and negative anomalies in the NH at lead-lag 0 and beyond. Overall, the responses appear more in phase than out of phase in our results.
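A minimal sketch of the seasonal sorting and per-season lagged correlation described above is given below. The variable names (by, zg) are hypothetical daily pandas Series; the seasonal masking is a simplification of the stitched per-season surrogate handling used in the paper.

```python
import pandas as pd

SEASONS = {"DJF": (12, 1, 2), "MAM": (3, 4, 5), "JJA": (6, 7, 8), "SON": (9, 10, 11)}

def seasonal_lagged_correlation(by: pd.Series, zg: pd.Series, season: str, lag: int) -> float:
    """Pearson correlation between By and the pressure perturbation shifted by
    `lag` days, restricted to days in the given meteorological season.
    lag > 0 means the response lags the forcing."""
    shifted = zg.shift(-lag)                        # align zg(t + lag) with by(t)
    mask = by.index.month.isin(SEASONS[season])
    paired = pd.concat([by[mask], shifted[mask]], axis=1).dropna()
    return paired.corr().iloc[0, 1]

# Example (hypothetical series): the MAM correlation at lead-lag -4
# r = seasonal_lagged_correlation(by, zg, "MAM", lag=-4)
```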
In line with previous studies, the seasonal analysis in this section, as well as the sector structure analysis (Sect. 3.5) and the combined seasons and sector structure analysis (Sect. 3.6), are also performed for the most cited period of 1999–2002. In summary, there is no season or sector structure rendering statistical significance for the 1999–2002 period after the FDR method is applied. Plots for this sub-period can be found in the Appendix.

3.3 Sector structures

Structures in the solar wind originate from two mechanisms: they are either imposed from the Sun directly, or they form as the solar wind propagates outwards and fills the heliosphere (Viall et al., 2021). The global solar magnetic field itself is composed of a superposition of the dipole, quadrupole and octupole harmonics, which can lead to imposed 2-, 4- and higher-order harmonic sector structures. Tinsley (2022) specifically highlights the importance of the 2-sector solar wind structures in regard to the Mansurov effect, and hypothesises that this sector structure favors the Mansurov effect compared to 4-sector or irregular sector structures. The distinction between the two solar wind sector structures is illustrated in Figure 6. In the left panel, the dipole harmonic of the global solar magnetic field dominates, and the away and toward sectors experienced at Earth oscillate with a periodicity of approximately 27 days. Tinsley (2022) hypothesises that longer durations of 2-sector structures nudge uncorrelated pressure oscillations into partial synchronization with the solar wind, while, due to their more irregular nature, this is not accomplished for the 4- or irregular sector structures illustrated in the right panel. The frequently cited period 1999–2002 has a 68% occurrence rate of the 2-sector structure pattern (Tinsley, 2022).

Fig. 5 (caption fragment). FDR interval is set between lead and lag −13 to +13. Statistical significance after the FDR method is observed in MAM at lead-lag −4 in the NH and in JJA at lead-lag 7 in the SH. Right panels: same procedure, only for the SH; no significance is observed after the FDR method.

These occurrence numbers were manually identified by Tinsley (2022), who recommends a wavelet analysis for more accurate and objective identification. The middle panel of Figure 7 shows the scalogram obtained by wavelet analysis of the By-index, while the top panel shows the raw By-index with red lines indicating 2-sector structures and blue lines indicating 4- or irregular sector structures. The analysis itself is done by binning all days for which the largest intensity in the scalogram occurs at a period in the interval between 22 and 32 days as 2-sector structure days, while the remaining days are binned as 4- or irregular sector structure days. From the wavelet analysis, we find a 73% occurrence rate of 2-sector structures and a 27% occurrence rate of 4- or irregular sector structures in the 1999–2002 period. Tinsley (2022) states that the period 2007–2010 yields a less impactful Mansurov effect, as the occurrence rate of 2-sector structures is only 40%. However, the wavelet analysis suggests a 65% occurrence rate of the 2-sector structure for this period. Figure 7 also provides the yearly occurrence rate of 2-sector structures for the period 1968–2020, obtained from the scalogram (bottom panel).

3.4 Time-lagged cross-correlation and the dependence on the autocorrelation function of both the forcing and responding variable

Before dividing the IMF By data into the two different sector structures, a clear understanding of the inner workings of the time-lagged cross-correlation method is needed.
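A rough sketch of the sector-structure classification described above is given below. It approximates the wavelet-based binning with a sliding-window periodogram (dominant period in a centred window, flagged as 2-sector if it falls in the 22–32 day band); the window length and the use of an FFT instead of a wavelet transform are simplifying assumptions for illustration.

```python
import numpy as np

def classify_sector_structure(by: np.ndarray, window: int = 81) -> np.ndarray:
    """Flag each day as 2-sector if the dominant period of By in a centred
    window falls between 22 and 32 days; remaining days are treated as
    4- or irregular sector structure days."""
    n = by.size
    half = window // 2
    is_two_sector = np.zeros(n, dtype=bool)
    freqs = np.fft.rfftfreq(window, d=1.0)          # cycles per day
    for i in range(half, n - half):
        segment = by[i - half:i + half + 1]
        power = np.abs(np.fft.rfft(segment - segment.mean())) ** 2
        dominant = freqs[np.argmax(power[1:]) + 1]  # skip the zero frequency
        period = 1.0 / dominant
        is_two_sector[i] = 22.0 <= period <= 32.0
    return is_two_sector

# Usage (hypothetical daily By index for 1968-2020):
# two_sector_days = classify_sector_structure(by_daily)
```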
Figure 8 shows a power spectrum analysis (left panels) of the IMF By, the autocorrelation function for Zg(NH) (middle panels) and the autocorrelation function for Zg(SH) (right panels). The analysis is also divided into 2-sector structures (top panels) and 4- or irregular sector structures (bottom panels). The top left panel shows a clear peak in power around 27 days/cycle, which is expected as the 2-sector structures exhibit a 27-day periodicity on average. In the bottom left panel, a clear peak in power is seen around 13.5 days/cycle, which is also expected as the 13.5-day periodicity is the second most dominant sector structure. For the autocorrelation functions of the geopotential height, little difference is seen between the sector divisions, and the NH and SH exhibit similar autocorrelation functions. Edvartsen et al. (2022) show how a periodic forcing variable together with an autocorrelated response variable is susceptible to producing artificial periodic responses when a time-lagged cross-correlation method is used. Here, we demonstrate further implications of this artificial anomaly, which is particularly relevant to the investigation of the Mansurov effect, but also more generally to any other phenomenon with a periodic forcing and an autocorrelated response variable. The left column of Figure 9 shows 1000 iterations where the IMF By is first divided into 2- and 4- or irregular sector structures before it is cross-correlated with the geopotential height data series Zg(NH). (Due to the roughly similar autocorrelation functions for the hemispheres, it is only necessary to show this experiment for one hemisphere, where the choice of NH is arbitrary.) For every iteration, the geopotential height data series are phase-randomized. In essence, this is the same process that defines the significance limits in the figures above (Figs. 4 and 5). In the middle column, the largest positive peaks occurring between day −13 and +13 for every iteration are shifted and placed at day 0. Finally, the right column shows the averaged response of the shifted peaks shown in the middle panels. It is evident that the 2-sector structure in both hemispheres (rows 1 and 3) produces a larger artificial periodicity than the 4- or irregular sector structure (rows 2 and 4). Simply put, this means that any time-lagged cross-correlation between the IMF By and the geopotential height in 2-sector structure periods will have higher values in general, as compared to the 4- or irregular sector structures. These higher values are then only a result of the autocorrelation functions of the forcing and responding variables. The same experiment is performed on the raw geopotential height data in both hemispheres for the period 1968–2020, with similar results. In addition to being dependent on the autocorrelation functions of the forcing and response variables, the value of the correlation coefficient will also depend on the number of data points used. This highlights the need for modern statistical methods such as MC simulation. MC simulations applied on suitable statistical material (phase-randomization of the original response data series) will take all the relevant information affecting the correlation analysis (autocorrelation of forcing and response variables, and number of data points) into account. This will result in a realistic p-value that is of higher relevance than the correlation values themselves.
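A minimal sketch of the two building blocks of this experiment, the time-lagged Pearson cross-correlation and the "shift the largest peak to day 0 and average" step, is given below; function and variable names are illustrative, and np.roll wraps values around at the edges, which is adequate for a sketch but not a faithful reproduction of the figure.

```python
import numpy as np

def lagged_correlation(forcing: np.ndarray, response: np.ndarray, max_lag: int = 20) -> np.ndarray:
    """Pearson correlation for lead-lags -max_lag..+max_lag (positive lag:
    the response lags the forcing)."""
    lags = np.arange(-max_lag, max_lag + 1)
    out = np.empty(lags.size)
    for k, lag in enumerate(lags):
        if lag >= 0:
            a, b = forcing[:forcing.size - lag or None], response[lag:]
        else:
            a, b = forcing[-lag:], response[:response.size + lag]
        out[k] = np.corrcoef(a, b)[0, 1]
    return out

def shifted_peak_average(forcing, surrogates, max_lag=20, centre_window=13):
    """Average lead-lag curves after centring each iteration's largest positive
    peak (searched within +/-centre_window) at lag 0."""
    curves = []
    for surrogate in surrogates:
        c = lagged_correlation(forcing, surrogate, max_lag)
        window = c[max_lag - centre_window:max_lag + centre_window + 1]
        shift = centre_window - int(np.argmax(window))
        curves.append(np.roll(c, shift))
    return np.mean(curves, axis=0)
```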
3.5 Sector structures analysis

The time-lagged correlation analysis sorted by solar wind sector structures is performed for the full time period 1968–2020. Figure 10 shows the time-lagged cross-correlation between the IMF By and the perturbation values Zg(NH) and Zg(SH) for periods of 2-sector structures (top panels) and periods of 4- or irregular sector structures (bottom panels). No clear response is seen in the NH for any of the sector structures over the whole data period. However, for the SH, the 2-sector structures seem to enhance the peak in pressure around day −6 compared to Figure 4, where sector structures are not taken into account. It is noted that the positive anomaly is still not statistically significant after applying the MC simulation together with the FDR method for the interval −13 to +13. The timing of the positive response on day −6 is not in line with the current Mansurov hypothesis, where the pressure anomaly should lag the IMF By driver by a few days (Frederick et al., 2019; Tinsley et al., 2021). We also note that different magnitudes are seen for the significance intervals (green and red shaded areas) between the results from the 2-sector and 4- or irregular structure analyses, even though the two sector-structure groups have approximately the same number of data points. This is a consequence of the effect described in Section 3.4, demonstrating the importance of MC simulation when assessing the statistical significance.

3.6 Seasons and sector structure analysis

The final step investigates the combination of both seasonal and sector structure dependence. The results are shown in Figure 11. In the first column, the time-lagged cross-correlation for the four different seasons in the NH for 2-sector structures is shown, and 4- or irregular sector structures are shown in the second column. The third and fourth columns follow the same logic for the SH. In all plots, the FDR interval is set from lead −13 to lag +13. The highest obtained significant data point is also marked with its corresponding p-value. As a general overview, there exists no combination of sector structure and season obtaining significant data points in line with the current Mansurov theory (a positive significant anomaly around day 0 in the SH, and a negative significant anomaly around day 0 in the NH). In DJF, the responses in both hemispheres do follow this pattern. For the NH, this fits the correlations found by Zhou et al. (2018) and Tinsley et al. (2021) predicting a local winter effect, but it does not fit with the theory in the SH. It is noted that these correlations are still not statistically significant. However, the same recurring pattern of a positive pressure anomaly in both NH and SH around lead-lag −5 appears in March, April and May (MAM) for the 2-sector structure periods, and is statistically significant in the SH. It also appears in JJA (mostly in the SH).

3.7 Day −5 anomaly

From all the analyses, no indication of the Mansurov effect is found. However, the work has unraveled a rather strange occurrence. In Figure 11, where atmospheric seasons and solar sector structures are combined, a positive anomaly around day −5 is seen concentrated around MAM in both hemispheres for the 2-sector structure periods. This same anomaly is also present in the SH in Figure 10 when divided according to the 2-sector structures for all months.
In Figure 5, the anomaly is even statistically significant in the NH for MAM after the FDR method is applied, and it is present in Figure 4 in the SH for the full data period with no sorting requirements. In summary, our analyses have unveiled a recurring positive pressure anomaly occurring on average 5 days before the peak By anomaly in both hemispheres. The anomaly obtains the highest statistical significance in MAM but is also visible in JJA in both hemispheres. For the division into sector structures, the anomaly favors the 2-sector structure in both hemispheres. As the anomaly is most persistent in both hemispheres in the 2-sector structure in MAM, the latitudinal extension at this specific lead-lag is explored. Figure 12 shows the zonal mean pressure differences (Zg(SH) and Zg(NH)) at lead −5, obtained by averaging over days with By > 3 nT and subtracting the average over days with By < −3 nT (note that correlation is not used here, but rather a double superposed epoch analysis; this is done for consistency with earlier analyses of the Mansurov effect, see Zhou et al., 2018, Figs. 1–3). As an extra safeguard for the significance assessment, we have run the MC simulation for 1,000,000 iterations, including the FDR method over all latitudes, giving a total of 72 data points. We note that the FDR over all latitudes may not be physically justified, as the Mansurov effect is only expected to occur at high latitudes. However, since the day −5 anomaly is rather unknown, its latitude-wise extension is also unknown. With all latitudes included, one will therefore expect less significance at the 95% level than if the FDR method only covered the poles. Nevertheless, the figure still demonstrates a remarkable statistical result. The pressure response is significant from 85 to 90°S and 70 to 90°N, where the latitudes 75 to 80°N have a positive response outside of both tails of the probability distribution. In reality, this means that these data points have a p-value less than 0.000001.

3.8 Heliospheric current sheet crossings

To further investigate the reality of the day −5 anomaly, a final analysis focusing on heliospheric current sheet crossing events (HCSC) is performed. An HCSC marks the transition between the toward sector (T; Bx < 0, By > 0) and the away sector (A; Bx > 0, By < 0) of the IMF. As the magnetic field flips, there is an increase in proton density, proton dynamic pressure, and magnetic field intensity, and a decrease in solar wind speed (Kan & Wu, 2021). Crossing events occur in between the maximum By events, which could mean that the observed day −5 anomaly coincides with the time of the crossing. A list of crossing events derived by Prof. Leif Svalgaard, spanning the data interval 1968–2020, is applied. Since MAM without the sector structure sorting is the result with the highest significance for the lead-lag plots, we will focus on this period. Sector structure sorting for the HCSC will be considered in the discussion. Figure 13 shows superposed epoch analyses of the HCSC for MAM over the whole data period 1968–2020. A similar significance assessment as for the other lead-lag plots applies. The top row shows the results when the pressure on days with A→T sector crossings is averaged and the average pressure on days with T→A sector crossings is subtracted. The middle row shows the superposed epoch for only the days with an A→T crossing, while the bottom row shows the superposed epoch for only the days with a T→A crossing.
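A minimal sketch of the double superposed epoch difference used for Figure 12 is given below. Variable names, the fixed lead-lag and the ±3 nT threshold convention are illustrative; by and zg are assumed to be daily pandas Series with a DatetimeIndex.

```python
import pandas as pd

def by_composite_difference(by: pd.Series, zg: pd.Series, lag: int = -5,
                            threshold: float = 3.0) -> float:
    """Mean pressure perturbation on days offset by `lag` from strong positive
    By days (By > +3 nT), minus the mean on days offset from strong negative
    By days (By < -3 nT)."""
    pos_days = by.index[by > threshold]
    neg_days = by.index[by < -threshold]
    offset = pd.Timedelta(days=lag)            # lag = -5: sample 5 days earlier
    pos_mean = zg.reindex(pos_days + offset).mean()
    neg_mean = zg.reindex(neg_days + offset).mean()
    return pos_mean - neg_mean

# Usage (hypothetical series): difference at lead -5 for the MAM / 2-sector subset
# delta = by_composite_difference(by_mam_2sector, zg_mam_2sector, lag=-5)
```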
As seen in Figure 13, when the two different crossings are combined (top row), we obtain a statistically significant positive anomaly in the NH peaking at day −2. Comparing this to Figure 5, the significance is increased for the crossings compared to the correlation with By. For the SH, no significance is obtained. In the middle and bottom rows, where the different crossings are treated separately, significance is obtained in the NH for A→T crossings. Not shown is the same seasonal analysis for HCSC, also including the separation of sector structures, equivalent to Figure 11. The most significant response for the NH is seen in the 4- or irregular sector structures, and not in the 2-sector structures, which show the most significant response when By is correlated with the pressure. In the correlation analysis (e.g., Fig. 11), emphasis is put on the highs and lows of By. In the superposed epoch analysis of crossings, every event is treated equally, and the average response of all events is shown. As any mechanism for the HCSC affecting the polar surface pressure is yet to be determined, we cannot know whether the most impactful HCSC are related to times with the largest variations of By. If we assume that the significant correlations seen for By and pressure (Figs. 5 and 11) are in reality anomalies resulting from HCSC (Fig. 13), a non-linear relationship between the strength of By and the surface effect of an HCSC could result in differing signal strengths between the two modes of analysis. We also note that analyzing the zonal mean differences of the HCSC (equivalent to Fig. 12) results in the 4-sector structure showing anomalies outside of the probability distribution in the NH at day −2. In general, the HCSC superposed epoch analysis has shortcomings in terms of the timing of the response. The peak anomaly in the NH occurs 2 days before the actual event happens, and no significant response is seen in the SH. However, the responses seen are statistically significant, and one can argue that the pressure response is closer to a physically justified response.

Discussion and possible hypotheses

In previous work, the Mansurov effect was shown not to be statistically significant on the decadal timescale (Edvartsen et al., 2022). The same study also shows how previous evidence for the Mansurov effect (the 27-day cyclic pressure response) is due to a statistical bias created by a periodic forcing and a temporally autocorrelated response variable, and cannot be treated as evidence for a physical link.

Fig. 12 (caption). The significance level for the superposed epoch analyses of zonal means after 1,000,000 MC iterations for 2-sector structures in MAM for the period 1968–2020. Latitudes 75–80°N render a p-value less than 0.000001, as the positive anomaly has an absolute value outside both tails of the distribution.

This evidence weakens the overall case for the Mansurov effect, as the hypothesis itself is built on pure correlation analyses (Mansurov et al., 1974; Burns et al., 2008; Lam et al., 2013, 2018; Zhou et al., 2018; Tinsley et al., 2021). However, Edvartsen et al. (2022) did not consider in depth the possibility of seasonal and solar wind structure dependence, which is the aim of the current study. This study includes a seasonal analysis on decadal timescales. Analyzing seasonal variations takes into account both the atmospheric state and the Earth's dipole tilt relative to the IMF, which affects geomagnetic activity.
Local winter months lead to stronger polar vortices and higher atmospheric variability, while less variability and weaker vortices are seen in local summer. In the solar wind, different seasons mean different geometric conditions impacting the connection between the IMF and Earth's magnetic field. This manifests itself as the Russell–McPherron effect, which is the increased probability of a negative Bz-component leading to increased geomagnetic activity around early April and early October (Russell & McPherron, 1973). By dividing the time period 1968–2020 into DJF, MAM, JJA, and SON, we show (Fig. 5) that none of the specific seasons produce a significant response in line with the Mansurov hypothesis. However, a significant positive pressure anomaly occurs around lead-lag −5 in the NH for MAM, with 3 data points rendered statistically significant after the MC simulation and FDR method over the interval −13 to +13 lead-lags. Though not significant according to the FDR method, the SH does show phase coherence with the NH for the MAM period. From the perspective of the Mansurov hypothesis, the peak pressure perturbation is expected to occur days after the forcing. This is due to the microphysical changes being small and requiring time to materialize as macro-physical changes in cloud radiative properties (Frederick et al., 2019; Tinsley et al., 2021). In conclusion, an effect occurring 5 days before the forcing is unphysical given the Mansurov hypothesis. Considering the significant period of MAM, this might be linked to the Russell–McPherron effect, which states that in these months the connectivity between the IMF and Earth's geomagnetic field increases. This could be hypothesized to lead to an enhanced surface impact of any mechanism propagating from the solar wind to the surface. However, it would then also be reasonable to expect a pressure response around late September/early October, which is not observed in our results. Another way the atmospheric variability could play a role is if the effect is very small and risks being disguised by background noise. Figure 14 shows the standard deviation of Zg(NH) and Zg(SH) as bars, for the specific seasons analyzed in this study. The local summer in both hemispheres has the least variability. However, the lowest p-values are obtained in MAM, which in the NH is the season with the second-largest pressure variability. On this basis, it is not likely that the atmospheric variability acts as an obscuring factor for an effect that is always present. The geometrical positioning of Earth's dipole, or specific recurring seasonal atmospheric conditions increasing the coupling between the IMF and the polar atmosphere, might rather be at play. There have been studies showing that winters following volcanic eruptions, which inject large quantities of sulfate aerosols into the stratosphere, increase the correlation between solar wind parameters and atmospheric effects (Tinsley et al., 2012; Zhou et al., 2014). This effect is not taken into account in this study and remains an open pathway for further research. Moreover, our study includes the division of the IMF into either 2-sector or 4- or irregular sector structures. These sectors are defined according to the periodicity of the fluctuating By, with 2-sector structures defined as a 27-day cycle, and 4- or irregular sectors defined as 13-day and all other cycles (the power spectra of the two distinct sector classes of the IMF By can be seen in Fig. 8).
Previous research has highlighted the 2-sector structure as important for the manifestation of the Mansurov effect, with the argument mainly based on its occurrence rate in the regularly studied period of 1999–2002 (Tinsley, 2022). It is hypothesized that a continuous period of 2-sector structure oscillations with large amplitude jumps in By (>6 nT) is needed to nudge the internal atmospheric waves into partial phase coherence (from now on called the By nudge hypothesis). However, Tinsley (2022) also states that the period 2007–2010 does not yield a correct Mansurov manifestation due to the low occurrence of 2-sector structures (40%). As we see in Figure 7, our results show that the period 2007–2010 has as high as a 65% occurrence rate of the 2-sector structure. Nevertheless, as the 2-sector structures in the 2007–2010 period do not reach as high peak amplitudes as in the 1999–2002 period, this might still be taken in favor of the By nudge hypothesis. Figure 7 reveals that, if this is the condition necessary for the manifestation of the Mansurov effect, there exists no other sub-period in the interval 1968–2020 having as high amplitudes and as long a duration of 2-sector structures as the period 1999–2002. Hence, in case the hypothesized pathway does exist, it is likely to exert a negligible role with respect to climate variability on decadal scales. This is supported by our analysis of the correlation between By and pressure after the sector division, as seen in Figure 10. As our results show, the sector division does not enhance any response associated with the current Mansurov theory. It does, however, enhance the day −5 anomaly in the SH. Compared to Figure 4, which covers the full time period, the sector structure division lowers the p-value for the day −5 anomaly substantially, in support of favorable 2-sector structures in the IMF. The final step in this study includes both seasonal and sector structure divisions. The results are shown in Figure 11. None of the combinations show a response in either hemisphere that is in line with the current Mansurov theory. However, for the day −5 anomaly pattern, both MAM and JJA show phase coherence in both hemispheres for the 2-sector structures. In the NH, MAM and 4- or irregular sector structures also show phase coherence, but with less significance than the 2-sector structures. As stated earlier, none of the combinations show a signal that is significant at the 95% level after considering the MC simulation and FDR method for the interval −13 to +13 lead-lags.

Fig. 14 (caption). Variability in Zg(NH) and Zg(SH) measured by the standard deviation for the different seasons. The largest variability is seen in the local winter.

This study aims to unravel any seasonal or solar wind sector structure dependence of the Mansurov effect, as this was an open end in our earlier study (Edvartsen et al., 2022). For the main known arguments about specific dependencies, mainly the seasonal and IMF structural effects, no results obtained show the existence of a significant correlation between the pressure in either hemisphere and the IMF By acting according to the hypothesized mechanism. The study is not able to conclude that such a pathway does not exist, only that the data used in our study do not support it.
However, in this process, the analyses have unraveled a rather strange but persistent occurrence, namely the day −5 positive peak anomaly occurring in both hemispheres, mostly in the MAM period and in the 2-sector solar wind structure. The anomaly is persistent and also renders significant values for some of the analyses after the FDR. Figure 12 shows the zonal mean pressure difference between days for which By > 3 nT and days with By < −3 nT, for all latitudes, at day −5 in the combined MAM and 2-sector structure period. As the behavior of the anomaly does not fit any existing theory, the MC simulations were run 1 million times, just as a test of robustness. As the figure shows, for latitudes 75–80°N, the response obtained is outside the probability distribution. This is equivalent to p < 0.000001. We conclude that the signal is extremely robust and very unlikely to be produced by chance. To clarify this rather strange anomaly, Figure 13 shows the MAM period for HCSC, where these events are treated to produce differently signed anomalies depending on whether it is an A→T or a T→A event. Since the peak By values usually occur some days after a crossing of the zero line, the HCSC were considered a potential driver. The crossing events do produce statistically significant positive anomalies in the NH. However, the anomaly still occurs 2 days before the key date, which now represents the day of the HCSC. Another problem with the HCSC is the fact that the 4-sector structures in MAM produce more significance than the 2-sector structures, while the opposite is true for the correlation between By and the pressure. As explained earlier, this could be due to a non-linear relationship between the strength of By and the surface impact, which could influence how the response in the correlation analysis appears. Edvartsen et al. (2022) showed how periodic forcing and an autocorrelated response variable will induce an artificial periodicity in the response obtained, even if completely random numbers are used (Fig. 9 in Edvartsen et al., 2022). The results obtained in this study further build on this by showing how two structurally different forcing data series (from 2- and 4-sector structures) exhibit very different degrees of this artificial bias (Fig. 9). This highlights the importance of MC simulation, which is able to take the full autocorrelation functions of both forcing and response variables into account. As seen in Figure 10, this leads to the significance distributions being adjusted for the specific periods, rather than a one-size-fits-all interval. To our knowledge, there exists no method taking this into account as efficiently as MC simulations. Finally, we will discuss possible hypotheses potentially explaining the lack of significant support for the Mansurov hypothesis and the potential −2 day lag for an HCSC driver.

1. External forcing on the GEC can lead to effects on the internally driven thunderstorm generator. Changes in the GEC could also manifest themselves as changes in the rate of lightning, again leading to atmospheric changes. Changes in the lightning rate at low latitudes can lead to atmospheric disturbances propagating to higher latitudes. Owens et al. (2014) find a statistically significant result for correlations between different IMF polarities and the local distribution of lightning. For the toward sector, the lightning rate above the UK is enhanced with respect to the away sector.
It is suggested that rather than the annual lightning rate being modulated, a redistribution of the lightning activity with respect to location occurs. However, no definite mechanism is established. Owens et al. (2015) also find results of HCSC correlating at a significant level with thunderstorm activity in the UK over the time period 2000–2007. A→T crossings are cited to be associated with a strong rise in lightning flash rates immediately following the HCSC. In contrast, T→A crossings are cited to be associated with a decrease in flash rates. Both results are confirmed as statistically significant by MC simulation. These results are compelling, as the pressure response also shows asymmetric behavior at the two sector boundary crossings, consistent with this study. However, a physical explanation for the −2 day lag in our results is not found. A recommended pathway is a further investigation with improved and prolonged data correlating the IMF By and HCSC with the global or local distribution of lightning.

2. The relation known as the Mansurov effect is misunderstood. The data do not support the peak By as the maximum forcing, and the asymmetry between the hemispheres is also not supported by our analyses. The relation could be non-linear, depending on the rate of change of By, or on both the rate of change of By and the maximum By in an intricate manner. The pressure response could also have a threshold value before switching sign, as mentioned by Burns et al. (2008). However, for the MAM period (Figs. 5, 11, and 12) the hemispheres have opposite seasons. It can therefore be argued that, due to the different atmospheric conditions, this could manifest itself as same-signed responses, even though the forcing itself is asymmetric between the hemispheres.

3. The Mansurov effect exists as it is hypothesized, but the actual effect in the atmosphere is too small to stand out from the noisy background. This is supported by Zhou et al. (2018), showing how the internal thunderstorm generator produces anomalies in accord with the Mansurov effect for the period 1998–2001. However, problems with this specific analysis are the small time period of data and the limited assessment of significance. A better way of detecting the Mansurov effect would be through correlation analyses of the internally generated ionospheric vertical electric field (Ez) and polar surface pressure. The externally generated changes are suggested to contribute <10% of the total change in the ionosphere–Earth current flow (Tinsley, 2022). If analyses over longer timescales can show the internally generated ionosphere–Earth current flow (>90% of the total) significantly correlating with surface pressure according to the Mansurov hypothesis, one can also assume that external effects will play a role. The external effect might be too small to be detected in a noisy background with the data periods available today, but the existence of statistically significant internal effects would strengthen the hypothesis tremendously. In addition, as the internal changes are larger, it should also be easier to detect significant changes. These kinds of analyses are outside the scope of this article but are a highly recommended pathway for further research on this and related phenomena.

4. The HCSC, not the By amplitudes, are responsible for the low-altitude pressure correlations. As our results show, the HCSC show up as a statistically significant anomaly in the NH for MAM.
However, the significant peak anomaly occurs 2 days before the actual sector boundary crossing. Wilcox et al. (1973) found correlations between the atmospheric vorticity poleward of 20°N and HCSC during the winter months of 1963–1970. These results showed no preference for an A→T or T→A crossing and were confined to 500–300 hPa. Figure 13, however, demonstrates a sector boundary preference. Nevertheless, no mechanism is established for the HCSC correlations, termed the Wilcox effect, and to our knowledge there is no recent research on the Wilcox effect. Recommended further research for this pathway would be to look at the correlation between HCSC and pressure at higher atmospheric levels. Before dismissing the physically unjustifiable −2 day lag of the response, it is also recommended to look for solar structures or other phenomena related to HCSC.

5. There exists no physical link between external effects originating from the IMF By on the global electric circuit and surface polar pressure. Our analyses show that the sorting of common non-stationary features dependent on the seasons and IMF sector structure gives no statistical evidence in favor of the Mansurov effect, and the anomaly seen at day −5 could be purely coincidental. However, the extremely low p-values obtained in the NH in MAM are hard to discredit on a statistical basis, especially as the same levels of low p-values are also found for the HCSC. Nevertheless, the responses are also hard to justify on a physical basis, with the current knowledge of possible mechanisms rendering a day −2 or −5 lag physically unlikely. An explanation for the discrepancy could therefore also be an aliasing phenomenon. Evidence of the solar rotational UV cycle influencing the Madden–Julian Oscillation (MJO) has been obtained at significant levels after MC simulations (Hood, 2018). The MJO itself is a tropical weather phenomenon, but it has still been shown to impact the Arctic. Incorporating the MJO in studies of the relation between the IMF and atmospheric pressure is beyond the scope of this paper, but remains a pathway for future research.

Conclusion

This study has extended the analyses of the Mansurov effect to possible seasonal and solar wind sector structure-dependent responses on decadal timescales, compared to Edvartsen et al. (2022). By correlating the IMF By and surface polar pressure, no statistical evidence for dependent behavior is found. However, a new statistically significant anomaly has appeared in multiple sub-periods in both hemispheres. The anomaly occurs approximately 5 days before the maximum By value, implying that the effect precedes the forcing, which is not physically justified. We therefore provide five different hypotheses in an attempt to explain the phenomenon and open pathways for further investigation.

Appendix

A similar analysis as done in Section 3.2 (seasonality) is performed for the most cited period of 1999–2002 and shown in Figure A.1. It is noted that, due to the small time period (implying cheaper computations for the MC simulation), we perform 20,000 MC iterations for increased accuracy. Tinsley (2022) discusses how the mixing of the seasons might affect the significance assessment, due to favorable conditions for the Mansurov effect in the local wintertime. However, as the figure shows, no specific season has a statistically significant response when the FDR is applied over the interval −13 to +13 lead-lags.
A similar analysis as done in Section 3.5 (sector structure) is performed for the most cited period of 1999–2002 and shown in Figure A.2. No specific sector structure shows a statistically significant response when the FDR is applied over the interval −13 to +13 lead-lags. A similar analysis as done in Section 3.6 (seasons and sector structure) is also performed for the most cited period of 1999–2002 and shown in Figure A.3. No specific combination of season and sector structure shows a statistically significant response when the FDR is applied over the interval −13 to +13 lead-lags. The most notable anomaly occurs in the Arctic for the JJA period in 4- or irregular sector structures. Here, the negative anomaly on day 1 obtains a p-value equal to 0.0061. It is noted that if the FDR method is only performed over the interval −2 to +2 lead-lags, the anomaly at day 1 would be rendered statistically significant. However, this result is not in line with the hypothesized mechanism being favored in local winter and 2-sector structures (Tinsley, 2022), as the result would be significant in the opposite combination (local summer and 4- or irregular sector structures). A reasonable explanation for this result might very well be attributed to chance. For the FDR interval set to −2 to +2 lead-lags, this particular response is by definition statistically significant, as only 5% of rendered responses will have equally low p-values within this interval. However, as the figure displays a total of 16 subplots, this means that the expected number of signals that pass the FDR limit by chance is 16/20 = 0.8. Based on the premise that this particular period does not fit the hypothesized mechanism, it is therefore reasonable to assume that this might occur by chance. Nevertheless, as discussed in Section 3.8 (Heliospheric current sheet crossings), the most significant responses are seen under the 4-sector structures. A mechanism including crossing events might then give an explanation for this occurrence.

(Appendix figure caption fragment.) FDR interval is set between lead and lag −13 to +13. Right panels: same procedure, only for the SH. No significance is obtained in either hemisphere.
13,713
2023-05-09T00:00:00.000
[ "Physics" ]
THE VOLUNTARY DISCLOSURE DILEMMA: UNRAVELING THE COMPLIANCE-EVASION CAUSALITY IN TAX ADMINISTRATION This research investigates the causality between taxpayer compliance and tax evasion behaviors, specifically within the context of participants in the Voluntary Disclosure Program (PPS) registered at the Small Tax Office of West Pontianak. The study delineates its population as taxpayers who, prior to their engagement in the PPS, had outstanding tax liabilities on income derived from business or employment activities. Utilizing the documentation method, secondary data were solicited from pertinent governmental bodies to facilitate the research. A linear regression model was employed to analyze the relationship between the variables under consideration. The findings underscore the impact of pre-PPS tax evasion activities on subsequent enhancements in taxpayer compliance, as evidenced by ransom payments. The study contributes to governmental authorities by offering valuable information regarding the patterns of tax evasion behavior among PPS participants, thereby informing policy and enforcement strategies.

INTRODUCTION

Tax avoidance constitutes a legal strategy within the ambit of tax planning, characterized by the lawful structuring of fiscal affairs to minimize income tax liabilities. This strategy exploits extant legal loopholes, enabling taxpayers to circumvent adverse legal repercussions, such as penalties or sanctions, arising from tax avoidance maneuvers (Barli, 2018; Oktavia et al., 2021). Although tax avoidance and tax evasion are both aimed at diminishing tax liabilities, they diverge fundamentally in legality. Tax evasion involves the illicit reduction or negation of tax obligations through unauthorized means, distinguishing it markedly from tax avoidance (Barli, 2018; Purba et al., 2022). Saputri and Kamil (2021) delineate various tax evasion tactics, including the failure to report accurate assets and income, the misalignment of tax payments with statutory requirements, and the omission of periodic or annual tax returns. Additionally, Purba et al. (2022) observe that tax evasion can extend to the strategic placement of assets in jurisdictions with favorable tax regimes-often referred to as tax havens-or countries that offer reduced tax rates or tax exemptions. These evasion practices compromise taxpayer compliance, potentially precipitating significant revenue losses for the state (Anam et al., 2018; Monica & Andi, 2019; Riyadi et al., 2021), thereby underscoring the critical distinction between legal tax avoidance measures and illicit tax evasion actions.
Analyzing the behavior and characteristics of individual taxpayers reveals multiple determinants influencing their propensity towards tax evasion (Ekaputra et al., 2022; Nathalie & Setiawan, 2024). These factors encompass perceptions of fairness, experiences of discrimination, and attitudes towards the tax system (Sasmita & Kimsen, 2023), a predilection for material wealth or the belief among taxpayers that tax payments are futile and financially detrimental (Umaimah, 2021; Zainuddin et al., 2021), as well as the taxpayers' income levels, which reflect their economic capabilities (Randiansyah et al., 2021). Furthermore, the inclination to evade taxes is also shaped by the manner in which tax regulations are applied and executed by governmental authorities, including the quality of public services, the efficacy of the implemented tax system, and the enforcement of penalties for non-compliance (Kamil, 2021). In response to these challenges and in a bid to enhance compliance with tax reporting obligations, the Indonesian government has instituted a tax amnesty policy (Inasius et al., 2020). This policy aims to encourage the declaration of previously unreported net assets, serving as a proxy for an increase in the taxpayers' economic status or income (Ispriyarso, 2019). The amnesty provides relief from administrative and criminal penalties for undeclared income, contingent upon the payment of a defined amount (referred to as "ransom") based on the taxes applicable to the newly disclosed net assets (Kusuma & Dewi, 2018; Nugraha & Setiawan, 2018). This approach seeks not only to rectify past non-compliance but also to foster a more transparent and cooperative relationship between taxpayers and the tax authorities (Hadistiyah & Putra, 2022; Wulan et al., 2023). Kurniawan et al. (2019) articulate that the tax amnesty initiative is designed to achieve both immediate and protracted objectives. In the near term, it is anticipated to bolster the fiscal year's tax revenue through the collection of "ransom" payments on newly disclosed net assets (Darma et al., 2022; Mardi, 2019). Over a more extended period, the initiative seeks to cultivate a culture of enhanced compliance among taxpayers with regard to their reporting duties. This strategic shift is aimed at diminishing the prevalence of tax evasion, broadening the tax base, and fostering economic growth via the reallocation of assets (Murweni, 2018; Pravasanti, 2018). The underlying rationale for the tax amnesty, as aligned with the overarching objective of securing increased tax revenue, hinges on the principle that elevated levels of taxpayer compliance will directly contribute to higher tax revenue collections, predicated on lawful taxpayer behaviors and the avoidance of tax evasion (Riyadi et al., 2021). Nonetheless, research conducted by Purba et al.
(2022) casts doubt on the efficacy of the tax amnesty program in mitigating tax evasion within Indonesia, revealing a persistent inclination towards such practices even amidst the policy's enactment. This inclination is exemplified by a notable surge in the allocation of funds to offshore banking institutions, exceeding 137 million USD, which suggests that the amnesty's implementation has not necessarily translated into improved taxpayer compliance (Hermawan et al., 2020; Permana, 2020). The persistence of tax evasion behaviors subsequent to the tax amnesty policy underscores the complexity of ensuring compliance through policy measures alone (Kurniawan et al., 2019), highlighting the necessity for comprehensive strategies that address the underlying factors contributing to evasion (Sayidah & Assagaf, 2019).

Extant literature on tax evasion predominantly explores the determinants prompting taxpayers to engage in such practices, with notable contributions from Kamil (2021), Randiansyah et al. (2021), Sasmita and Kimsen (2023), and Umaimah (2021). In parallel, scholarly inquiry into the tax amnesty policy's implementation has largely focused on evaluating its impact on taxpayer compliance levels (Mardi, 2019). Beyond compliance metrics, other investigations have assessed the tax amnesty policy's influence on the efficacy of tax revenue collection (Suratno et al., 2020). Regarding the objective of mitigating tax evasion through tax amnesty, limited research, such as the study by Purba et al. (2022), has examined the correlation between tax amnesty initiatives and shifts in taxpayer conduct, particularly in terms of increased overseas fund allocations.

Diverging from the aforementioned scholarly endeavors, this study aims to investigate the nexus between tax evasion behaviors and enhanced taxpayer compliance, specifically through the mechanism of ransom payments under a voluntary disclosure program. This research seeks to contribute to the academic discourse by elucidating the potential for ransom payments to not only signal but also catalyze a transformation in taxpayer compliance, thereby offering new insights into the dynamics between tax evasion practices and compliance-enhancing strategies. The novelty of this study lies in the variables selected for analysis and the methodology employed to assess the interrelations among these variables. Specifically, the research endeavors to ascertain the influence of tax evasion behaviors on the magnitude of ransom payments made by participants in the Voluntary Disclosure Program, utilizing these payments as a proxy for heightened taxpayer compliance. The metric for evaluating an increase in compliance is operationalized through the ransom amounts levied on previously undisclosed assets (Riyadi et al., 2021), whereas the gauge for tax evasion intensity is based on the undeclared tax liabilities associated with business or employment income that taxpayers have failed to remit (Saputri & Kamil, 2021).
This exploration suggests a complex, possibly non-linear, relationship between the original intents underpinning the tax amnesty policy and the subsequent shifts in taxpayer attitudes and behaviors post-implementation. The study seeks to offer a novel perspective on the dynamics between the behavioral predispositions towards tax evasion of taxpayers enrolled in the Voluntary Disclosure Program and the extent to which ransom payments reflect and potentially alter these tendencies. The critical inquiry revolves around whether the tax amnesty policy's execution effectively mirrors and modifies the propensities of taxpayers inclined towards evasion. Addressing this query necessitates an empirical examination of the correlation between enhanced compliance among Voluntary Disclosure Program participants and their evasion activities. The findings are anticipated to serve as a valuable reference for governmental bodies, specifically the Directorate General of Taxes, in formulating targeted oversight strategies for taxpayers predisposed to evasion, thereby informing policy adjustments and enforcement frameworks.

LITERATURE REVIEW
Hagger (2019) explains that in the Theory of Reasoned Action (TRA), a person's intention is a motivational foundation that has a major influence in determining that person's behavior. These intentions build individual attitudes based on the results of evaluating the negative and positive impacts of individual attitudes and on subjective norms in society that prescribe how individuals should behave towards their environment. The theory is used to predict how individuals will behave towards a problem or condition based on interests that are influenced by beliefs about the results of past events and by the views of other individuals on the same problem or condition. In general, taxpayers have a tendency to pay the lowest tax possible and, if possible, will try to avoid it altogether (Margaretha et al., 2023). Umaimah (2021) explains that two factors can affect taxpayer compliance in disclosing assets owned: internal factors related to the taxpayer's lack of understanding of the benefits or usefulness of fulfilling tax obligations, and external factors in the form of information from outside parties with negative connotations related to the management and implementation of tax policies. Another study, conducted by Mujiyati et al. (2022), concluded that taxpayers who participate in the tax amnesty program have a higher tendency to commit tax evasion and are more prone to it than taxpayers who do not participate in the program: the higher the level of disclosure of net assets and payment of ransom, the higher the level of tax evasion committed by taxpayers. The description above indicates that there is a relationship between taxpayer attitudes and behavior and tax evasion or avoidance.
The tax amnesty program has been implemented several times in Indonesia: the Tax Amnesty Period I policy in 1964, the Tax Amnesty Period II policy in 1984, the Tax Amnesty Period III policy in 2007, the Tax Amnesty Period IV policy in 2009, the Tax Amnesty Period V policy in 2015, and the Tax Amnesty Period VI policy in 2016. In 2021, the government, through Law Number 7 of 2021 concerning Harmonization of Tax Regulations, provided an opportunity for Individual and Corporate Taxpayers who had participated in tax amnesty to re-disclose assets that were not reported at the time of participating in the tax amnesty program, through Policy I of the Voluntary Disclosure Program, and for Individual Taxpayers other than tax amnesty participants to disclose net assets still owned on December 31, 2020, obtained from January 1, 2016 to December 31, 2020 and not reported in the Individual Annual Tax Return for the 2020 Fiscal Year, through Policy II of the Voluntary Disclosure Program. The purpose of enacting this policy is to increase taxpayers' voluntary compliance (Mahmud & Mooduto, 2023; Ningtyas & Aisyaturrahmi, 2022). To encourage the successful implementation of the program, the government offers compensation or benefits for Voluntary Disclosure Program participants. By participating in the Voluntary Disclosure Program, taxpayers under PPS Policy I are not subject to the administrative sanction of a 200% increase on undisclosed assets, and taxpayers under PPS Policy II receive benefits including no tax audit on tax obligations for Fiscal Years 2016 to 2020; in addition, data and information from the disclosure of net assets cannot be used as the basis for preliminary inquiry, investigation, and/or criminal prosecution.

According to Irawan and Raras (2021), the Voluntary Disclosure Program can be referred to as tax amnesty volume II because it has the same substance, namely the provision of tax amnesty for net assets that have not been disclosed in the Tax Return. After the implementation of the tax amnesty program and before the enactment of the Voluntary Disclosure Program, a similar program had been implemented in the form of the Voluntary Asset Disclosure program at a Final Rate (PAS-Final), through Minister of Finance Regulation Number 165/PMK.03/2017 on the Second Amendment to Minister of Finance Regulation Number 118/PMK.03/2016 on the Implementation of Law Number 11 of 2016 on Tax Amnesty. This PAS-Final policy aims to provide an opportunity for taxpayers who participated in tax amnesty to correct their asset disclosure reporting if assets were not fully disclosed when participating in the tax amnesty program, and for taxpayers who did not participate in tax amnesty to disclose assets that have not been reported in the Annual Tax Return (SPT) (Farhan & Rosdiana, 2023). By participating in the PAS-Final program, taxpayers can avoid the imposition of an increase sanction of 200% of the value of assets that were not, or were insufficiently, disclosed, in the case of tax amnesty participants, and of 2% per month for a maximum of 24 months, counted from the discovery of data and/or information on additional income until the issuance of an Underpaid Tax Assessment Letter (SKPKB), in the case of taxpayers who did not participate in tax amnesty. A rough comparison of these two sanction regimes is sketched below.
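As an illustration of the financial incentive these provisions create, the following minimal sketch compares the 200% uplift sanction with the 2%-per-month sanction capped at 24 months. The asset and tax figures are hypothetical and the computation deliberately simplifies the statutory rules; it is not an official sanction calculator.

```python
def uplift_sanction(undisclosed_assets, rate=2.0):
    """200% administrative uplift on the value of undisclosed assets (tax amnesty participant case)."""
    return undisclosed_assets * rate

def monthly_sanction(underpaid_tax, months, monthly_rate=0.02, cap_months=24):
    """2% per month sanction, capped at 24 months (non tax-amnesty participant case)."""
    return underpaid_tax * monthly_rate * min(months, cap_months)

# Hypothetical figures: IDR 500 million of undisclosed assets, IDR 50 million of underpaid tax
print(uplift_sanction(500_000_000))      # 1,000,000,000 (200% of the asset value)
print(monthly_sanction(50_000_000, 30))  # 24,000,000 (the 2% sanction is capped at 24 months)
```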
From the description above, a question arises: can the provision of opportunities for taxpayers to correct or re-report assets that were not disclosed at the first opportunity, namely when participating in the tax amnesty program, through the implementation of the PAS-Final program in 2017 and the Voluntary Disclosure Program in 2022, fully encourage voluntary compliance from taxpayers, or does it even have the opposite effect? According to the research of Ispriyarso (2019), legal uncertainty in the application of tax amnesty sanctions creates a tendency for taxpayers not to pay taxes in advance and to prefer to wait for other tax amnesty policies in the future, because the ransom payment under the tax amnesty program is considered cheaper. That study shows that, on the contrary, the existence of tax amnesty can even trigger non-compliant attitudes among taxpayers.

Based on the explanation above, taxpayers determine their attitudes and behavior in carrying out tax obligations based on individual reasons or intentions to minimize tax payments, which is indicated by the behavioral tendency to commit tax evasion (Sasmita & Kimsen, 2023; Umaimah, 2021; Zainuddin et al., 2021; Ispriyarso, 2019). This tendency can be seen in the behavior of taxpayers who choose not to pay taxes in advance and prefer to wait for future tax amnesty policies in order to pay taxes that are considered cheaper through ransom payments (Ispriyarso, 2019). This delay in payment affects the amount of ransom paid when participating in the tax amnesty program: a large deferred tax liability leads to a larger disclosure of net assets when participating in the program, so that the ransom, treated as an indicator of increased taxpayer compliance, will be high. Based on this description, the first hypothesis (H1) can be formulated as an alternative hypothesis as follows: the increase in taxpayer compliance through the payment of ransom is influenced by the level of tax evasion.

METHOD
This research is anchored in the positivist paradigm, which serves as the conceptual foundation for elucidating the phenomena and realities inherent to the topic under investigation. Within the framework of positivism, it is posited that empirical facts form the exclusive basis for all scientific assertions, with social reality perceived as objective (Wekke, 2019). Guided by this paradigmatic stance, the researcher employs a quantitative methodology to assess and interpret the correlation between taxpayer compliance and tax evasion (Ambarwati et al., 2021; Qadri et al., 2023; Qadri & Darmawan, 2021). The investigative process is operationalized through a case study approach, leveraging quantitative methods to examine the behaviors and responses of taxpayers enrolled in the Voluntary Disclosure Program at KPP Pratama Pontianak Barat.
The analytical focus of this investigation encompasses individuals and entities enrolled in two distinct cohorts of the Voluntary Disclosure Program. Policy I targets Individual and Corporate Taxpayers who participated in the Tax Amnesty program yet failed to fully declare their net assets up to December 31, 2015. Policy II pertains to those who acquired undisclosed net assets between January 1, 2016, and December 31, 2020, and omitted these from their 2020 Tax Return (SPT), thereby potentially engaging in tax evasion through the non-remittance of taxes due on earnings from business or employment activities. Data for this study were procured via the secondary data documentation technique, involving formal requests for relevant data from designated agencies, specifically KPP Pratama Pontianak Barat, in accordance with the data requisition protocols established by the Directorate General of Taxes. This process entailed the submission of a comprehensive data request alongside an application for research authorization through the online platform www.eriset.pajak.go.id. The application required several documents, including an endorsement or introductory letter from the affiliated academic institution, a detailed research proposal, and a formally stamped declaration committing to the dissemination of the research findings to the Directorate General of Taxes.

The total population of taxpayers enrolled in Policies I and II of the Voluntary Disclosure Program at KPP Pratama Pontianak Barat numbers 773, as delineated in Table 1. For sample selection, this study employs the purposive sampling technique, a methodological approach whereby specific individuals or instances are deliberately chosen to yield critical insights unattainable from alternative sources. This selection process, as articulated by Firmansyah and Dede (2022), incorporates cases or participants into the research sample with the intent of generating findings that align with the study's initial aims and accurately reflect the characteristics of the broader population under investigation. One of the practices of tax evasion is not paying the tax burden in accordance with the provisions of the law (Saputri & Kamil, 2021). The sample criteria therefore specify that taxpayers eligible for inclusion in the study are those engaged in the Voluntary Disclosure Program, possessing income from business or employment activities, who failed to report or remit tax obligations on that income for the tax years 2018 to 2020, or for any tax year prior to the commencement of the Voluntary Disclosure Program. Of the 773 taxpayers who participated in the Voluntary Disclosure Program, 731 showed no indication of having committed tax evasion (no unpaid tax data). Thus, 42 taxpayers meet the criteria for inclusion in the research sample; a sketch of this filtering step is given below.
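As a rough illustration of how the purposive-sampling criteria translate into a data-processing step, the sketch below filters a hypothetical extract of PPS registrations with pandas. The file name and column names are assumptions, since the layout of the data provided by KPP Pratama Pontianak Barat is not described in the paper.

```python
import pandas as pd

# Hypothetical extract of the 773 Policy I and Policy II participants
pps = pd.read_csv("pps_participants.csv")

# Keep only taxpayers with business/employment income and unpaid tax liabilities
# for tax years 2018-2020 (or earlier, before joining the PPS).
sample = pps[
    (pps["has_business_or_employment_income"] == 1)
    & (pps["unpaid_tax_2018_2020"] > 0)
]

print(len(pps), "participants;", len(sample), "meet the criteria")  # expected 773 -> 42
```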
This research delineates two primary variables: taxpayer compliance, serving as the dependent variable, and tax evasion, positioned as the independent variable. The operationalization and measurement of these variables are comprehensively detailed in Table 2. The taxpayer compliance variable (TPC) measures the increase in tax revenue through ransom payments (Riyadi et al., 2021), while the tax evasion variable (TEV) measures unpaid tax obligations (Saputri & Kamil, 2021). The study applies a single linear regression model to evaluate the impact of the independent variable (tax evasion) on the dependent variable (taxpayer compliance), with the model's formulation grounded in the operationalization of the variables under examination:

lnTPC = β0 + β1·lnTEV + ε

In conducting the quantitative data analysis, the researcher begins by compiling a dataset that includes the total tax liabilities left unpaid by taxpayers during the tax years 2018 to 2020, alongside data on the ransom payments made in relation to the disclosure of net assets within the Voluntary Disclosure Program. Once the dataset is prepared, the subsequent step involves selecting an appropriate regression technique, for which a single linear regression method is employed. Following the selection of this method, the researcher undertakes a comprehensive regression analysis. This analysis encompasses diagnostic tests to assess measurement errors, verify the regression model's specification, and test the classical assumptions, including examinations of normality and heteroscedasticity. These procedural steps are critical to ascertain that the chosen model adheres to the classical assumptions, thereby ensuring that the resulting coefficients are BLUE (Best Linear Unbiased Estimates), indicative of the most reliable and unbiased estimates achievable within the linear regression framework.

The measurement error test is conducted with the objective of verifying the precision with which the variables have been quantified. To assess the accuracy of these measurements, a descriptive analysis is performed for each variable to examine the distribution characteristics, utilizing the skewness and kurtosis coefficients as indicators (Cain et al., 2017). Moreover, an analysis of outlier data is undertaken through the predicted Cook's distance value for variables exhibiting signs of non-normal distribution, specifically identified by a skewness coefficient divergent from zero and a kurtosis coefficient exceeding three (Smiti, 2020). Observations with a Cook's distance value surpassing one are subsequently excluded from the research sample to maintain the integrity of the analysis. Upon ensuring the appropriate measurement of the sample and variables, the analysis proceeds to the regression specification test and the classical assumption tests. The regression specification test employs the scatter plot method to ascertain the linearity of the regression model and verify that the relationship between the variables aligns with the formulated hypothesis. This step is crucial in validating that the regression model is correctly specified and that it faithfully reflects the theoretical relationship posited between the dependent and independent variables. A sketch of the estimation, outlier-screening, and classical-assumption steps follows.
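The estimation and screening steps described above could be reproduced along the following lines. This is a minimal sketch only: the paper uses STATA, so the Python/statsmodels code, the file name, and the column names are assumptions, and scipy's Shapiro-Wilk p-values are computed differently from STATA's swilk Prob(z) and may differ slightly.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.diagnostic import het_breuschpagan

# Hypothetical extract: one row per sampled taxpayer, amounts in rupiah
df = pd.read_csv("pps_sample.csv")            # assumed columns: ransom_idr, unpaid_tax_idr

# Natural-log transformation, as in the paper
df["lnTPC"] = np.log(df["ransom_idr"])
df["lnTEV"] = np.log(df["unpaid_tax_idr"])

# Measurement-error screening: skewness and (Pearson) kurtosis of each variable
for col in ("lnTPC", "lnTEV"):
    print(col, stats.skew(df[col]), stats.kurtosis(df[col], fisher=False))

# Single linear regression: lnTPC = b0 + b1*lnTEV + e
fit = sm.OLS(df["lnTPC"], sm.add_constant(df["lnTEV"])).fit()

# Outlier screening: drop observations with Cook's distance > 1, then re-estimate
cooks_d = fit.get_influence().cooks_distance[0]
clean = df[cooks_d <= 1].copy()               # the paper retains 40 of 42 observations
final = sm.OLS(clean["lnTPC"], sm.add_constant(clean["lnTEV"])).fit()
print(final.summary())

# Classical assumption checks on the cleaned sample
print("Shapiro-Wilk lnTPC:", stats.shapiro(clean["lnTPC"]))
print("Shapiro-Wilk lnTEV:", stats.shapiro(clean["lnTEV"]))
bp_stat, bp_pvalue, _, _ = het_breuschpagan(final.resid, final.model.exog)
print("Breusch-Pagan p-value:", bp_pvalue)    # > 0.05 indicates homoscedastic errors
```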
The subsequent phase of the analysis comprises the regression specification test, executed via the scatter plot technique to assess the linearity of the regression model and confirm its conformity with the predefined hypothesis, ensuring the model's appropriateness for the data and the theoretical expectations (Nguyen et al., 2020), and a series of classical assumption tests that validate the underlying assumptions of the regression model. These include a normality test (Ruxton et al., 2015), implemented through the Shapiro-Wilk method, to verify the normal distribution of the dataset, and a heteroscedasticity test (Romeo et al., 2023), utilizing the Breusch-Pagan method, to ascertain the constancy of the error variance, or predictive error, across the dataset, thereby establishing homoscedasticity.

Given the employment of a single linear regression model in this investigation, a test for multicollinearity was deemed unnecessary (Kim, 2019). Multicollinearity testing is typically relevant in models involving multiple independent variables, where high correlations among predictors may distort the reliability of the regression coefficients. Furthermore, the analysis did not include an autocorrelation test, on the rationale that the dataset is cross-sectional (Pötscher & Preinerstorfer, 2018). Cross-sectional data, representing observations at a single point in time, inherently minimize the autocorrelation concerns typically associated with time-series data, where the independence of observations across time intervals is a critical assumption.

RESULT AND DISCUSSION
Table 3 presents a summary of the descriptive statistics of each variable used in the study. The research sample comprised 42 taxpayers participating in the Voluntary Disclosure Program who were indicated to have committed tax evasion, as shown by the existence of unpaid tax data on income received or earned during the 2018 to 2020 tax years. However, after analyzing the accuracy of data measurement through the measurement error test, using the skewness and kurtosis coefficient indicators in the initial descriptive analysis, the skewness coefficient of both research variables was greater than 0 (zero) and the kurtosis coefficient of the taxpayer compliance variable was greater than 3 (three), indicating the presence of outliers and that the selected sample data were not normally distributed.
To address the non-normal data distribution, the author changed the unit of measure for each variable, originally the rupiah amount of the ransom value for the taxpayer compliance variable and the unpaid tax value for the tax evasion variable, to the natural logarithm (Ln). Furthermore, to determine the presence of outliers, the predicted Cook's distance value for each observation was computed using the STATA statistical software. From this prediction, 2 (two) observations were identified as outliers (Cook's distance value greater than 1 (one)), so these data were removed from the research sample, leaving 40 samples to be used as research data for statistical analysis in STATA. The exclusion of the outlier data is based on the consideration that, after the outliers are removed, the kurtosis coefficient of the taxpayer compliance variable, previously greater than 3 (three), falls below 3 (three), and the skewness coefficients of both variables are close to 0 (zero), indicating that the data are normally distributed.

Based on Table 3, among the 40 sampled taxpayers, the taxpayer compliance variable, proxied by the amount of ransom paid when participating in the Voluntary Disclosure Program (lnTPC), has a smallest (minimum) natural logarithm value of 14.853, or IDR 2,823,365, and a largest (maximum) value of 21.326, or IDR 1,827,141,886. The average (mean) ransom paid by the sampled taxpayers has a natural logarithm (Ln) value of 18.055, or IDR 69,411,978, and a median value of 17.830, or IDR 55,433,796. The standard deviation of 1.425 on the natural-log scale, which is smaller than the mean, indicates that the taxpayer compliance variable has low data variation.

For the tax evasion variable (lnTEV), proxied by the total amount of tax not paid by taxpayers on income received or earned in the 2018 to 2020 tax years, the smallest (minimum) natural logarithm (Ln) value among the 40 sampled taxpayers is 14.200, or IDR 1,469,643, and the largest (maximum) value is 20.172, or IDR 576,738,779. The average (mean) of the total amount of tax evaded by the sampled taxpayers has a natural logarithm (Ln) value of 16.582, or IDR 15,907,480, and a median value of 16.712, or IDR 18,110,456. The standard deviation of 1.567 on the natural-log scale, which is smaller than the mean, indicates that the tax evasion variable also has low data variation. The rupiah equivalents quoted here can be recovered from the log values as illustrated below.
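Because Table 3 is reported on the natural-log scale, the rupiah figures quoted above follow by exponentiation. A minimal check of the reported values (note that exponentiating a mean of logs yields a geometric rather than an arithmetic mean):

```python
import numpy as np

# Ln values as reported in the text for Table 3
ln_values = {"lnTPC min": 14.853, "lnTPC max": 21.326, "lnTPC mean": 18.055,
             "lnTEV min": 14.200, "lnTEV max": 20.172, "lnTEV mean": 16.582}

for name, ln_v in ln_values.items():
    print(f"{name}: exp({ln_v}) = IDR {np.exp(ln_v):,.0f}")
# e.g. exp(18.055) is roughly IDR 69.4 million, matching the reported mean ransom.
```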
Regression Specification Test
Figure 1 presents the result of the regression specification test using the scatter plot method, which illustrates the linear and positive relationship between the taxpayer compliance variable, as the dependent variable, and the tax evasion variable, as the independent variable. The linear and positive relationship is depicted by a straight line on the graph with a positive slope, indicating that the relationship between the independent variable and the dependent variable is directly proportional, or linear. The relationship shown in the graph accords with the formulated hypothesis: it illustrates a directly proportional, linear relationship between tax evasion and increased taxpayer compliance, whereby taxpayers participating in the PPS with a high level of tax evasion pay a high ransom, indicating an increase in taxpayer compliance in making tax payments when participating in the Voluntary Disclosure Program.

Based on the results of the normality test using the Shapiro-Wilk method in Table 4, the probability value (Prob(z)) of the taxpayer compliance variable is 0.469 and that of the tax evasion variable is 0.227. The Prob(z) value of each variable is greater than 0.050, so the data used in the study are normally distributed and the normality assumption is fulfilled. With the assumption of normality fulfilled, further statistical analysis can be carried out on the sample data.

In addition, the same table shows the results of the heteroscedasticity test using the Breusch-Pagan method, where the probability value (Prob(Chi-sq)) is 0.145. Because the Prob(Chi-sq) value is greater than 0.050, the research data are homoscedastic (not heteroscedastic): the error variance, or prediction error, of the research data is constant throughout the data range. Based on the regression test results in Table 4 and Table 5, the following single linear regression equation model is obtained:

lnTPC = 6.977 + 0.668·lnTEV + ε

From this single linear regression equation, the constant of 6.977 is positive, indicating that if the independent variable of tax evasion is assumed to be 0 (zero), then the dependent variable of taxpayer compliance, with ransom payment as its indicator, takes a natural logarithm (Ln) value of 6.977. The tax evasion variable (lnTEV) has a coefficient of 0.668, indicating that a 1.000% increase in the amount of tax evasion (lnTEV) will increase the amount of ransom payments, as an indicator of increased taxpayer compliance (lnTPC) when participating in the Voluntary Disclosure Program, by 0.668%. Table 5 shows a coefficient of determination (adjusted R²) of 0.527, which indicates that the independent variable of tax evasion explains 52.700% of the variation in the dependent variable of taxpayer compliance, while the remaining 47.300% is explained by other variables not included in the study. H1 states that the increase in taxpayer compliance through ransom payments is influenced by the level of tax evasion. Given the significance value of the independent variable tax evasion of 0.000, which is less than α = 0.050, it can be concluded that the tax evasion variable has a significant positive effect on the taxpayer compliance variable. From the
significance and nature of the relationship, it can be concluded that H1 is accepted (H0 is rejected): the higher the tax evasion committed by taxpayers participating in the Voluntary Disclosure Program, the greater the increase in taxpayer compliance when participating in the program, as indicated by the larger amount of ransom paid.

In general, taxpayers have a tendency to pay the lowest possible tax or, if possible, to avoid it altogether (Margaretha et al., 2023). To counter the tendency towards tax avoidance that leads to tax evasion behavior and can reduce potential tax revenue for the state (Anam et al., 2018; Monica & Andi, 2019; Riyadi et al., 2021), the government, through the Directorate General of Taxes, has implemented a tax amnesty policy with the hope of increasing the level of taxpayer compliance through increased tax revenue from ransom deposits on net assets disclosed when participating in the tax amnesty program. On the other hand, however, the increase in taxpayer compliance through ransom payments may indicate the extent of the tax evasion behavior of the taxpayers who participate in the tax amnesty program.

The results showed a significant effect: every 1.000% increase in the amount of tax evasion is associated with a 0.668% increase in the amount of ransom. From these results, taxpayers with a higher amount of ransom payments, or who experience a greater increase in compliance when participating in the PPS program, tend to have a high level or amount of evasion. This is in line with the results of research conducted by Mujiyati et al. (2022), which state that taxpayers who participate in the tax amnesty program have a higher tendency to commit tax evasion and are more prone to it than taxpayers who do not participate in the program: the higher the level of net asset disclosure and ransom payments, the higher the level of tax avoidance or evasion committed by taxpayers.

Based on the literature review, several factors serve as reasons or causes that can lead to tax evasion behavior (reasoned action). These factors include perceptions of fairness, discrimination, and the tax system (Sasmita & Kimsen, 2023), a love of money or the assumption among taxpayers that tax payment is not useful and causes losses (Umaimah, 2021; Zainuddin et al., 2021), and the existence of legal uncertainty in the application of tax amnesty sanctions (Ispriyarso, 2019). These factors shape a tendency for taxpayers not to pay taxes first and to prefer to wait for another tax amnesty policy in the future, because the ransom payment under the tax amnesty program is considered cheaper (Ispriyarso, 2019). From the explanation above, it can be concluded that the tax amnesty program implemented by the government can actually trigger tax evasion behavior, as indicated by taxpayers who do not report and pay the taxes that are their obligation directly, or when the taxes fall due, and instead choose to report them when participating in the tax amnesty program. The tendency to report income or increases in net assets only when participating in the tax amnesty program is certainly not in line with the original purpose of implementing the program, namely to reduce the level of tax evasion (Kurniawan et al., 2019). This condition is also supported by other research conducted by Purba et al.
(2022), which states that tax evasion practices actually tend to continue during the implementation period of the tax amnesty program.

The tendency of tax evasion committed by taxpayers participating in the tax amnesty program is in line with the theory of reasoned action. At the time of participating in the Voluntary Disclosure Program, taxpayers with a high level of tax evasion will pay a larger ransom than taxpayers who did not commit tax evasion or taxpayers with a lower level or amount of tax evasion. A larger ransom payment indicates that the taxpayer's income, measured by the addition of net assets that were partially or wholly unreported before participating in the Voluntary Disclosure Program, was not reported correctly or in accordance with actual conditions, so that participation in the tax amnesty program results in a large ransom. The decision to report additional net assets when participating in a Voluntary Disclosure Program is a reasoned action (Hagger, 2019). The main reason for this preference is the advantage of a tax burden that is considered lower through the ransom payment (Ispriyarso, 2019).

CONCLUSION
The findings of this research elucidate that tax evasion exerts a positive impact on taxpayer compliance within the context of the Voluntary Disclosure Program. This positive correlation is manifested in the observation that the higher the level of tax evasion engaged in by taxpayers prior to their participation in the Voluntary Disclosure Program, the more substantial the ransom payments derived from the disclosure of net assets. Such payments serve as indicators of enhanced taxpayer compliance consequent to the implementation of the Voluntary Disclosure Program. Nonetheless, this correlation also suggests that taxpayers who make significant ransom contributions likely exhibited a pronounced propensity towards tax evasion before their engagement with the Voluntary Disclosure Program. Previous studies have investigated the nexus between tax amnesty policies and shifts in taxpayer behaviors and attitudes, particularly using increased offshore fund allocations by taxpayers as a proxy for the propensity towards tax evasion. The current study extends this inquiry by examining the relationship between taxpayers' predispositions towards tax evasion, quantified by unpaid tax liabilities, and their compliance levels within the Voluntary Disclosure Program. Specifically, the aim is to ascertain the extent to which prior tax evasion activities influence subsequent compliance improvements, as evidenced by the volume of ransom payments associated with the disclosure of net assets. Through this lens, the research contributes to a nuanced understanding of the dynamics between pre-program tax evasion behaviors and the compliance enhancements facilitated by participation in the Voluntary Disclosure Program.
This research furnishes the government, specifically the Directorate General of Taxes, with insights into the tax evasion tendencies among taxpayers enrolled in the Voluntary Disclosure Program. The findings underscore the need for enhanced oversight of taxpayers within this program who exhibit propensities towards tax evasion, with the ultimate goal of maximizing state revenue. Despite its contributions, this study acknowledges limitations in the scope of its research variables and their operationalization. The analysis reveals that the independent variable considered herein accounts for only 52.700% of the variance in the dependent variable. Moreover, the operational measure of tax evasion, based on the aggregate amount of undeclared taxes owed by participants of the Voluntary Disclosure Program, is confined to the tax years 2018 to 2020. Future research is encouraged to expand upon the present study by incorporating additional relevant variables and extending the timeframe of the variable measurement indicators. Such extensions would potentially offer a more comprehensive understanding of taxpayer behavior over a longer period, thereby enhancing the predictive power and applicability of the research findings to policy formulation and enforcement strategies aimed at curbing tax evasion and improving taxpayer compliance.

Table 1. Purposive Sampling Method
Criteria                                                                                    Sample
Taxpayers participating in the Voluntary Disclosure Program                                   773
Taxpayers participating in the Voluntary Disclosure Program not indicated to have
committed tax evasion (no unpaid tax data)                                                   (731)
Taxpayers meeting the sample criteria                                                          42
8,436.4
2024-03-05T00:00:00.000
[ "Law", "Business", "Economics" ]
Creation of Philadelphia chromosome by CRISPR/Cas9-mediated double cleavages on BCR and ABL1 genes as a model for initial event in leukemogenesis

The Philadelphia (Ph) chromosome was the first translocation identified in leukemia. It is supposed to be generated by aberrant ligation between two DNA double-strand breaks (DSBs) at the BCR gene located on chromosome 22q11 and the ABL1 gene located on chromosome 9q34. Thus, mimicking the initiation process of translocation, we induced CRISPR/Cas9-mediated DSBs simultaneously at the breakpoints of the BCR and ABL1 genes in a granulocyte-macrophage colony-stimulating factor (GM-CSF)-dependent human leukemia cell line. After transfection of two single guide RNAs (sgRNAs) targeting intron 13 of the BCR gene and intron 1 of the ABL1 gene, a factor-independent subline was obtained. In the subline, p210 BCR::ABL1 and its reciprocal ABL1::BCR fusions were generated as a result of a balanced translocation corresponding to the Ph chromosome. Another set of sgRNAs targeting intron 1 of the BCR gene and intron 1 of the ABL1 gene induced a factor-independent subline expressing p190 BCR::ABL1. Both p210 and p190 BCR::ABL1 induced factor-independent growth by constitutively activating intracellular signaling pathways for transcriptional regulation of cell cycle progression and cell survival that are usually regulated by GM-CSF. These observations suggest that simultaneous DSBs at the BCR and ABL1 gene breakpoints are initiation events for oncogenesis in Ph+ leukemia.

INTRODUCTION
Chromosomal translocation is among the most common chromosomal abnormalities observed in leukemia, and is highly involved in leukemogenesis. It is supposed to be generated as a result of aberrant repair of two simultaneous DNA double-strand breaks (DSBs) at different portions of the chromosomes. The Philadelphia (Ph) chromosome was the first chromosomal translocation identified in cancer. It was discovered in 1960 by Nowell PC and Hungerford DA as an abnormal minute chromosome in chronic myeloid leukemia (CML) patients [1]. In 1973, using chromosome banding techniques, Rowley JD demonstrated that it is a balanced translocation between chromosomes 22 and 9 [2]. It was also identified in acute lymphoblastic leukemia (ALL) [3]. Later studies identified the BCR gene and the ABL1 gene at each breakpoint [4,5]. There are two major types of Ph chromosome. In most CML patients and approximately one-quarter of Ph chromosome-positive (Ph+) ALL patients, exon 13 or 14 of the BCR gene is fused to exon 2 of the ABL1 gene, which encodes the p210 BCR::ABL1 fusion protein [6]. In the other Ph+ ALL patients, exon 1 of the BCR gene is fused to exon 2 of the ABL1 gene, which encodes the p190 BCR::ABL1 fusion protein [7]. In the BCR::ABL1 oncoprotein, the tyrosine kinase domain of ABL1 is constitutively activated due to acquisition of a dimerization domain of BCR and a loss of the SH3 domain of ABL1, which negatively regulates ABL1 kinase activity [8]. BCR::ABL1 potently activates diverse signaling pathways involved in leukemic transformation by promoting cell cycle progression and cell survival [9-12]. Of clinical importance, the tyrosine kinase activity of the BCR::ABL1 protein has proven to be an effective therapeutic target [13]. Tyrosine kinase inhibitors (TKIs) have dramatically improved the prognoses of CML and Ph+ ALL patients [14-19].
For functional evaluation of the tyrosine kinase activity of BCR::ABL1 and pharmacogenetic evaluation of the effect of BCR::ABL1 gene mutations on TKI sensitivities, a murine IL-3-dependent Baf3 cell line transduced with human BCR::ABL1 cDNA by retrovirus vector has generally been used [20,21]. A bona fide model of CML was initially developed in lethally irradiated mice after syngeneic transplantation of bone marrow in which BCR::ABL1 cDNA had been retrovirally transduced [22]. Although leukemia progression was not achieved by simple transplantation of human CD34+ cord blood cells retrovirally transduced with p210 BCR::ABL1 cDNA into NOD-SCID mice [23], simultaneous transduction of BMI1 cDNA induced ALL progression [24]. In transgenic mice expressing p210 or p190 BCR::ABL1 under diverse promoters, leukemia progression has been widely confirmed [25-27]. The leukemogenic potential of BCR::ABL1 was also evaluated at the endogenous locus of the mouse bcr gene promoter. Notably, B-cell leukemia developed in knock-in mice carrying p190 BCR::ABL1 cDNA [28], while leukemia progression was not confirmed in those carrying p210 BCR::ABL1 [29]. One possible explanation for this discrepancy between the transgenic models and the knock-in models is that the promoter activity of the bcr gene in the knock-in mice might not be sufficiently high for leukemic transformation by p210 BCR::ABL1. In this context, the 3' untranslated region (UTR) is also involved in transcriptional and post-transcriptional regulation [30,31]. In the BCR::ABL1 gene, the involvement of microRNA in post-transcriptional regulation through the 3' UTR has been reported [32]. However, the 3' UTR of the BCR::ABL1 cDNA was largely deleted in the above mouse models. Moreover, cDNA lacks introns. Although its significance in leukemogenesis remains to be elucidated, alternative splicing of the BCR::ABL1 gene has been reported to be involved in TKI resistance [33]. Another difference between human Ph+ leukemia and the above mouse models is the reciprocal ABL1::BCR fusion derived from balanced translocation. Gene transfer of the reciprocal ABL1::BCR fusion into murine hematopoietic stem cells enhanced the proliferation and stem cell capacity of early progenitors [34], suggesting the involvement of reciprocal ABL1::BCR in leukemogenesis. Under these circumstances, the development of a novel platform that permits testing of the leukemogenic activities of balanced translocation under intrinsic transcriptional and post-transcriptional regulation is indispensable. In the present study, we sought to investigate the hypothesis that the Ph chromosome is generated by aberrant repair of two simultaneous DSBs at the BCR and ABL1 gene breakpoints as an initiation event for leukemogenesis. Thus, we induced DSBs at specific breakpoints of the BCR and ABL1 genes, using the CRISPR/Cas9 system, in a human factor-dependent leukemia cell line. The obtained factor-independent sublines acquired p210 or p190 BCR::ABL1 and their reciprocal fusion genes as a result of balanced translocation, which is cytogenetically identical to the Ph chromosome. Using these sublines, we evaluated the significance of p210 and p190 BCR::ABL1 in signal transduction and transcriptional profile.

Creation of BCR::ABL1 fusion by CRISPR/Cas9
Synthesized self-complementary oligomers designed with Benchling software (https://www.benchling.com) (Supplement Table 1) and ligation adaptors were purchased from IDT (https://sg.idtdna.com).
Each single guide RNA (sgRNA) and pSpCas9(BB)-2A-GFP (Addgene, Watertown, MA, #48138) was amplified by polymerase chain reaction (PCR) and ligated using a NEBuilder HiFi DNA Assembly kit (New England BioLabs, Ipswich, MA, USA). A 293T cell line was maintained in Dulbecco's modified Eagle's medium (DMEM) containing 10% fetal calf serum (FCS). The plasmid was transfected using Lipofectamine 3000 (Thermo Fisher Scientific, Waltham, MA, USA). A TF-1 cell line was purchased from ATCC (#CRL-2003, Manassas, VA, USA) and expanded in RPMI1640 medium containing 10% FCS with 2 ng/ml of human recombinant granulocyte-macrophage colony-stimulating factor (GM-CSF) (PeproTech, Cranbury, NJ, USA). The plasmid was transfected using the Neon electroporation system (Thermo Fisher Scientific) with a single pulse at 1,300 V for 20 ms. The cells were cultured in the presence of GM-CSF for seven days. Subsequently, 1 × 10⁴ cells were placed in a 24-well plate in the absence of GM-CSF. The number of living cells was counted every seven days after trypan blue staining.

Polymerase chain reaction (PCR) analyses
Genomic DNA was extracted using a PureLink Genomic DNA Mini Kit (Thermo Fisher Scientific). The sequence of each primer is listed in Supplement Table 2. PCR products were subcloned using a TA Cloning Kit (Thermo Fisher Scientific) and directly sequenced using each forward primer. For PCR analysis of the BCR::ABL1 transcript, total RNAs were extracted using an RNeasy Plus Mini Kit (QIAGEN, Hilden, Germany), and complementary DNAs (cDNAs) were generated with SuperScript IV Reverse Transcriptase (Thermo Fisher Scientific). Amplification was performed using the primers listed in Supplement Table 2.

Short tandem repeat (STR) analysis
Genomic DNA was extracted using a QIAamp DNA Blood Mini Kit (QIAGEN). PCR was performed using the fluorescent primers listed in Supplement Table 3. The PCR products were analyzed using an ABI 3500 Genetic Analyzer system (Thermo Fisher Scientific) and quantified using GeneMapper software, v4.1 (Thermo Fisher Scientific).

G-band karyotyping and fluorescence in situ hybridization (FISH)
After 2 h of treatment with 0.1 µg/ml of KaryoMAX COLCEMID Solution (Thermo Fisher Scientific), the cells were exposed to 0.075 mol/l KCl at 37°C for 15 min and fixed on slide glasses three times with a 3:1 methanol/glacial acetic acid solution. After trypsin-Giemsa staining of the air-dried slide samples, 20 metaphases were analyzed for each sample. For FISH analysis, the air-dried slide samples were denatured at 75°C for 1 min and hybridized with LSI BCR Dual Fusion and LSI ASS-ABL probes (Vysis/Abbott, Abbott Park, IL, USA) at 37°C for 50 h. Karyotypic and FISH analyses were performed using a CytoVision system (Applied Imaging, Santa Clara, CA, USA).

AlamarBlue assay
Cells (0.1 × 10⁴) were incubated with six concentrations of imatinib, dasatinib, nilotinib, or ponatinib, in triplicate, in a 96-well plate. After 66 h of incubation, the cells were additionally incubated with alamarBlue (Bio-Rad Laboratories, Hercules, CA, USA) for 6 h [35]. Absorbance at 570 nm was monitored by a spectrophotometer, using 600 nm as the reference wavelength. Cell viability was calculated as the ratio of the optical density of the treated wells to that of the untreated wells, expressed as a percentage.
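The viability readout described above reduces to a ratio of treated to untreated optical densities; the sketch below shows one way it could be computed, together with a four-parameter logistic fit from which an IC50 might be estimated over the six drug concentrations. All readings and concentrations are hypothetical, the subtraction of the 600 nm reference reading is an assumption about how the reference wavelength was applied, and the paper does not state that curve fitting was performed, so this is only an illustrative sketch.

```python
import numpy as np
from scipy.optimize import curve_fit

def viability_percent(od570_treated, od600_treated, od570_untreated, od600_untreated):
    """Viability (%) as the ratio of reference-corrected absorbance (570 nm minus the
    600 nm reference reading) in treated wells to the mean of untreated wells."""
    treated = np.asarray(od570_treated) - np.asarray(od600_treated)
    untreated = np.mean(np.asarray(od570_untreated) - np.asarray(od600_untreated))
    return 100.0 * treated / untreated

def four_pl(conc, top, bottom, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Hypothetical triplicate readings for one imatinib concentration
print(viability_percent([0.42, 0.40, 0.43], [0.11, 0.10, 0.11],
                        [0.95, 0.97, 0.93], [0.12, 0.11, 0.12]))

# Hypothetical mean viabilities over six imatinib concentrations (uM) for a BCR::ABL1 subline
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0])
viability = np.array([98.0, 92.0, 70.0, 35.0, 12.0, 5.0])
params, _ = curve_fit(four_pl, conc, viability, p0=[100.0, 0.0, 0.2, 1.0])
print("estimated IC50 (uM):", params[2])
```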
Western blot analyses
Cells were treated with 1 mM of 4-(2-aminoethyl)benzenesulfonyl fluoride HCl (Calbiochem, Darmstadt, Germany) on ice for 10 min, then solubilized in lysis buffer (50 mM Tris-HCl, pH 7.5, 150 mM NaCl, 1% Nonidet P-40, 5 mM EDTA, 0.05% NaN3, 1 mM phenylmethylsulfonyl fluoride, 100 μM sodium vanadate). The lysates were separated on an SDS-polyacrylamide gel under reducing conditions and transferred to a nitrocellulose membrane. The membrane was incubated overnight at 4°C with the primary antibodies listed in Supplement Table 4, and subsequently with a horseradish peroxidase-labeled secondary antibody (MBL) at room temperature for 1 h, and developed using an ECL Prime Western Blotting Detection Kit (GE Healthcare, Little Chalfont, UK).

Quantitative real-time (RT) PCR
Triplicated samples with SYBR Green PCR Master Mix (Thermo Fisher Scientific) were amplified through 40 cycles (95°C for 15 s and 60°C for 1 min), using the primers listed in Supplement Table 5. Quantitation was performed using an ABI Prism 7500 Sequence Detection System (Thermo Fisher Scientific). The relative gene expression level was determined using INHβB as an internal control.

RESULTS
Creation of p210 BCR::ABL1 fusion gene by double cleavages of BCR and ABL1 genes, using CRISPR/Cas9
In CML, a p210 BCR::ABL1 fusion gene is generated between exon e13 of the BCR gene and exon a2 of the ABL1 gene. To generate the e13a2 type fusion, we transfected two sgRNAs targeting intron 13 of the BCR gene and intron 1 of the ABL1 gene (Fig. 1a), together with Cas9 cDNA, into HEK293T, a human embryonic kidney cell line. Genomic PCR revealed formation of the e13a2 type fusion when both sgRNAs were transfected (Fig. 1b). We next tried to create the e13a2 type fusion in a TF-1 cell line, a human GM-CSF-dependent erythroleukemia cell line, since transfection of BCR::ABL1 fusion cDNA was reported to induce factor-independent cell growth [42]. After transfection of the two sgRNAs with Cas9 cDNA, TF-1 cells were first expanded in the presence of GM-CSF for seven days and subsequently cultured in the absence of GM-CSF. Seven days after GM-CSF depletion, factor-independent cells started to expand. The obtained subline proliferated without GM-CSF (Fig. 1c), while parental cells were unable to grow without GM-CSF. Genomic PCR revealed generation of e13a2 and reciprocal a2e13 fusions in the subline (Fig. 1d). Sanger sequencing after TA-cloning confirmed direct ligations of the two target sites of the sgRNAs in the majority of both PCR products (Fig. 1e, Supplement Fig. 1a, b). RT-PCR analysis confirmed expression of the e13a2 fusion transcript in the subline but not in the parental cells (Fig. 1f). Consistently, Western blot analysis using anti-ABL1 antibody confirmed generation of an aberrant protein in the subline, which showed an identical migration pattern and similar intensity to that of the p210 BCR::ABL1 fusion protein in a Nalm1 cell line (Fig. 1g). Finally, STR analysis showed an identical pattern between the parental cells and the subline (Supplement Fig. 2), thus excluding contamination by other cells. These results indicate the generation of the p210 BCR::ABL1 fusion gene as a result of balanced translocation by direct ligation of two cleavage sites in a human leukemia cell line.
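The quantitative real-time PCR section above states only that relative expression was determined with INHβB as the internal control; a standard way to compute such values is the 2^-ΔΔCt method, sketched below under that assumption. The Ct values are hypothetical, and the exact quantification formula used by the authors is not stated in the paper.

```python
def relative_expression(ct_target, ct_reference, ct_target_calib, ct_reference_calib):
    """Relative expression by the 2^-ddCt method: the target gene Ct is normalized to the
    internal control (here INHbB) in both the sample and the calibrator condition."""
    d_ct_sample = ct_target - ct_reference
    d_ct_calibrator = ct_target_calib - ct_reference_calib
    return 2 ** -(d_ct_sample - d_ct_calibrator)

# Hypothetical Ct values: CD93 in the p210 subline after imatinib (sample)
# versus the untreated subline (calibrator), each normalized to INHbB.
fold_change = relative_expression(ct_target=26.8, ct_reference=22.1,
                                  ct_target_calib=24.9, ct_reference_calib=22.0)
print(f"CD93 relative expression: {fold_change:.2f}-fold of untreated")
```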
Generation of balanced translocation corresponding to Ph chromosome
In order to validate the artificial generation of a p210 BCR::ABL1 fusion gene at the chromosomal level, we first performed a FISH analysis (Supplement Fig. 3). In parental cells, two red signals corresponding to the ABL1 gene and two green signals corresponding to the BCR gene were detectable in all nuclei. Notably, in the subline, single, double, and triple yellow fusion signals were detectable in 50%, 46%, and 4% of the nuclei, respectively (Fig. 2a, Supplement Fig. 4). In a G-banding analysis, parental cells showed highly rearranged hyperdiploidy with diverse variations (Fig. 2b). The subline showed similar structural and numerical abnormalities (Fig. 2c). Notably, the subline acquired a balanced translocation resembling the Ph chromosome. To confirm the structural abnormality, we performed a SKY analysis. In the parental cells, chromosome 22 was translocated to chromosome 20. In the subline, the telomeric end of chromosome 22q was translocated to the centromeric end of chromosome 9q, and vice versa (Fig. 2d). These results indicate that double cleavages at the breakpoints of the p210 BCR::ABL1 fusion gene by CRISPR/Cas9 artificially created a balanced translocation corresponding to the Ph chromosome.

Creation of p190 BCR::ABL1 fusion gene by double cleavages of BCR and ABL1 genes, using CRISPR/Cas9
In most cases of Ph+ ALL, a p190 BCR::ABL1 fusion gene is generated between exon e1 of the BCR gene and exon a2 of the ABL1 gene. In order to create the e1a2 type fusion, we transfected another set of sgRNAs targeting intron 1 of the BCR gene and intron 1 of the ABL1 gene into a TF-1 cell line with Cas9 cDNA (Fig. 3a). After selection in the absence of GM-CSF, we obtained a GM-CSF-independent subline (Fig. 3b). Genomic PCR analysis revealed generation of e1a2 and reciprocal a2e1 fusions in the subline (Fig. 3c). Sanger sequencing after TA-cloning revealed direct ligation at the two breakpoints, with some minor variations in both genomic PCR products (Fig. 3d and Supplement Fig. 5a, b). RT-PCR analysis confirmed expression of the e1a2 fusion transcript in the subline but not in the parental cells (Fig. 3e). Western blot analysis of the subline using anti-ABL1 antibody revealed an aberrant protein with an identical migration pattern and similar intensity to that of the p190 BCR::ABL1 fusion protein in a Kasumi8 cell line (Fig. 3f). These results indicate the generation of a p190 BCR::ABL1 fusion gene as a result of balanced translocation.

Constitutive activation of artificially generated p210 and p190 BCR::ABL1 tyrosine kinases in TF-1 cells
Since the two sublines showed GM-CSF-independent cell growth, we next evaluated the functional significance of the BCR::ABL1 fusion proteins. In a cell cycle analysis, almost half of the parental cells cultured in the absence of GM-CSF (GM−) accumulated in the sub-G0/G1 phase, while the two GM− sublines showed distributions similar to those of the parental cells cultured in the presence of GM-CSF (GM+) (Fig. 4a). In an apoptosis analysis (Fig. 4b), nearly half of the GM− parental cells underwent apoptosis, while most cells of the two GM− sublines survived. We next evaluated the phosphorylation status of intracellular signaling molecules by Western blot analysis. In the GM− parental cells, STAT5, MAPK, and P70/S6K were dephosphorylated (Fig. 4c). In contrast, in the two GM− sublines, STAT5, MAPK, and P70/S6K were constitutively phosphorylated (Fig. 4c). These observations indicate that the GM-CSF-independent proliferation and cell survival of the p210 BCR::ABL1 and p190 BCR::ABL1 sublines were sustained by constitutive phosphorylation of intracellular signaling molecules.
Notably, the two GM− sublines were sensitive to all four TKIs (imatinib, dasatinib, nilotinib, and ponatinib), while the GM+ parental cells were highly resistant (Fig. 4d). These observations indicate that artificially generated p210 BCR::ABL1 and p190 BCR::ABL1 were constitutively active in TF-1 cells cultured in the absence of GM-CSF.

Fig. 1 Creation of p210 BCR::ABL1 fusion gene using the CRISPR/Cas9 system. a Schematic representation of sgRNA target sites. The targeted protospacer adjacent motif (PAM) site is highlighted in orange. Arrows and arrowheads indicate sequences of sgRNAs and Cas9 cleavage sites, respectively. b Genomic PCR of the BCR::ABL1 junctional region in 293T cell lines transfected with either or both of the two sgRNAs for the BCR and ABL1 genes. c Growth curves of parental TF-1 cells and the subline cultured in the absence of GM-CSF, with error bars of triplicated samples. d Genomic PCR of BCR::ABL1 and ABL1::BCR junctional regions in parental cells and the subline. e Representative genomic sequences of BCR::ABL1 (top panel) and reciprocal ABL1::BCR (bottom panel) fusion sites. f RT-PCR analysis of the ABL1 and p210 BCR::ABL1 genes in parental cells and the subline. Genes with exon numbering of forward and reverse primers are indicated (top of panel). g Western blot analysis of parental cells and the subline with anti-ABL1 and anti-α-tubulin antibodies, using the Nalm1 cell line as a positive control.

Distinctive transcriptional profile between p210 BCR::ABL1 and p190 BCR::ABL1 in TF-1 cells
Since artificially generated p210 BCR::ABL1 and p190 BCR::ABL1 are functionally active in TF-1 cells, we investigated the significance of the transcriptional profile. RNA sequencing was performed in GM+ and GM− parental cells and in the two GM− sublines. In a principal component analysis, each sample clustered distinctly (Fig. 5a), and the gene expression profiles of the two GM− sublines were distinctly situated from each other. When compared with the GM− parental cells, the expression levels of 250 and 116 genes were commonly upregulated and downregulated, respectively, in the GM+ parental cells and the two GM− sublines (Fig. 5b and Supplement Table 6). GO analysis indicated activation of STAT5 signaling, inflammatory response, and KRAS signaling, and inactivation of heme metabolism, in the GM+ parental cells and the two GM− sublines (Fig. 5c). In a GSEA, enrichment of STAT5 signaling, KRAS signaling, TNFα signaling, and an apoptotic pathway was commonly observed in the GM+ parental cells and the two GM− BCR::ABL1 sublines, compared to the GM− parental cells (Fig. 5d). Consistently, expression levels of genes involved in cell cycle progression (CCND2, CCND3, and MAPKAPK3) and cell survival (LITAF and BCL2L11) were commonly upregulated in the GM+ parental cells and the two GM− sublines (Fig. 5e and Supplement Fig. 6). These observations indicate that p210 and p190 BCR::ABL1 induced factor-independent cell growth by upregulating genes involved in cell cycle progression and cell survival, which are normally regulated by growth factor stimulation. Considering the distinctive pattern in the principal component analysis, we concentrated on differences in gene expression profiles between the two GM− BCR::ABL1 sublines. Compared with the p190 BCR::ABL1 subline, 399 and 181 genes were upregulated and downregulated, respectively, in the p210 BCR::ABL1 subline (Fig. 6a); the thresholds used for these comparisons are illustrated in the sketch below.
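The gene-list comparisons above rest on simple thresholding of a differential-expression table (FDR < 0.01 and |log2 fold change| > 1 for the common up-/downregulated lists; p < 10⁻⁹ and |log2 fold change| > 1 for the volcano-plot highlights in Fig. 6a). A minimal sketch of that filtering is given below; the file name and column names are assumptions, since the paper does not describe its RNA-seq processing scripts.

```python
import pandas as pd

# Hypothetical differential-expression table (one row per gene), e.g. exported from
# DESeq2/edgeR; the file and column names are assumptions for illustration only.
de = pd.read_csv("p210_vs_p190_de.csv")   # columns: gene, log2FoldChange, pvalue, FDR

# Thresholds used for the common up-/downregulated gene lists (FDR < 0.01, |log2FC| > 1)
up = de[(de["FDR"] < 0.01) & (de["log2FoldChange"] > 1)]
down = de[(de["FDR"] < 0.01) & (de["log2FoldChange"] < -1)]

# Volcano-plot highlighting criteria reported for Fig. 6a (p < 1e-9 and |log2FC| > 1)
de["highlight"] = (de["pvalue"] < 1e-9) & (de["log2FoldChange"].abs() > 1)

print(len(up), "upregulated;", len(down), "downregulated;",
      int(de["highlight"].sum()), "highlighted")
```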
GO analysis revealed upregulation of extracellular matrix receptor interaction and hematopoietic lineage genes, and downregulation of systemic lupus erythematosus genes, in the p210 BCR::ABL1 subline compared to the p190 BCR::ABL1 subline (Supplement Fig. 7). In a volcano plot analysis, myeloid lineage-related genes (CD93, MPL, RARB, and MECOM) were upregulated in the p210 BCR::ABL1 subline (Fig. 6b). We then evaluated changes in the expression levels of these myeloid lineage-related genes by real-time RT-PCR analysis in the two GM− sublines after treatment with 1 μM of imatinib for 24 h. Notably, the gene expression levels of CD93, MPL, RARB, and MECOM were significantly downregulated by imatinib treatment in the p210 BCR::ABL1 subline, but were unchanged or upregulated in the p190 BCR::ABL1 subline (Fig. 6c). These observations indicate that p210 BCR::ABL1 specifically promotes myeloid features, compared to p190 BCR::ABL1.

DISCUSSION
Chromosomal translocation is generated by aberrant ligation of two simultaneous DSBs [43]. Interestingly, simultaneous DSBs on different chromosomes are reported to be sufficient to promote reciprocal translocations in a mouse embryonic stem cell system [44]. When genome editing systems became available to induce a DSB in the human genome at any locus of interest, EWSR::FLI1 and NPM1::ALK, which are oncogenic fusions in Ewing sarcoma and anaplastic large cell lymphoma, respectively, were artificially generated in human mesenchymal precursors using zinc finger and transcription activator-like effector nucleases [45]. Subsequently, EWSR::FLI1 and NPM1::ALK were generated by simultaneous cleavages at the target sites with the CRISPR/Cas9 system [46-49]. Similarly, leukemogenic MLL fusion genes, including MLL::AF4 [50-52], MLL::AF9 [50, 53-55], and MLL::ENL [56], were generated by double cleavages using CRISPR/Cas9 in human hematopoietic cells and in the murine 32D myeloid progenitor cell line. For incorrect ligation to take place between two DSBs, the two DSBs are supposed to be located in proximity in the nucleus. Here, we note that the intergenic distance between the BCR and ABL1 genes in hematopoietic cells was reported to be less than expected [57,58]. We hypothesized that simultaneous DSBs at two specific breakpoints of the BCR and ABL1 genes induced by the CRISPR/Cas9 system may artificially generate a BCR::ABL1 fusion gene in human cells as a result of balanced translocation. To test this hypothesis, we selected the human GM-CSF-dependent erythroleukemic cell line TF-1, since generation of the BCR::ABL1 fusion gene may induce factor-independent cell growth. In the leukemogenesis of the Ph chromosome, the significance of the reciprocal ABL1::BCR fusion gene, generated as a result of balanced translocation, is controversial. In approximately one-third of CML cases, reciprocal ABL1::BCR fusion mRNA was undetectable [59], suggesting that the reciprocal ABL1::BCR fusion gene may not be indispensable, at least for the development of CML. Meanwhile, gene transfer of the reciprocal ABL1::BCR fusion cDNA into murine hematopoietic stem cells enhanced the proliferation and stem cell capacity of early progenitors [34]. Moreover, ABL1::BCR induced the B-cell commitment of murine hematopoietic stem cells and human umbilical cord blood cells. These in vitro models suggest that reciprocal ABL1::BCR might play a role in leukemogenesis by influencing lineage commitment [34].
However, previous Ph+ leukemia models were unable to generate the reciprocal ABL1::BCR fusion in combination with the BCR::ABL1 fusion. Notably, in the present study, genomic PCR analysis confirmed the generation of both the BCR::ABL1 fusion gene and the reciprocal ABL1::BCR fusion gene. In the FISH analysis of the GM-CSF-independent p210 BCR::ABL1 subline, single and double fusion signals were observed in 50% and 46% of the nuclei, respectively. Accordingly, cells with a single fusion signal are presumed to carry the BCR::ABL1 fusion gene only, whereas those with two fusion signals may carry both the BCR::ABL1 and the reciprocal ABL1::BCR fusion genes. Thus, at least half of the subline population either did not acquire the reciprocal ABL1::BCR fusion gene or lost it during clonal evolution. These observations suggest that the reciprocal ABL1::BCR fusion gene is not indispensable, at least for factor-independent growth of TF-1 cells.

The genomic sequences at the BCR::ABL1 and ABL1::BCR fusion sites of the p210 and p190 BCR::ABL1 sublines essentially showed direct ligation of the two cleavage sites, with minor variations. These direct ligations, without large insertions or deletions (indels), differ substantially from the diverse large indels observed at the cleavage sites of genome editing with CRISPR/Cas9. The difference may be attributed to re-cleavage of the repair site by CRISPR/Cas9: in the usual repair process by non-homologous end joining, a repair site may be repeatedly cleaved until a large indel is acquired [60]. In contrast, at the fusion site of two cleavage ends, further cleavage cannot be induced, because the target sequences of the sgRNAs are completely disrupted as a result of fusion [61]. Consistent with previous studies [62-65], we confirmed that both GM-CSF and BCR::ABL1 induced phosphorylation of STAT5.

Fig. 5 Transcriptional profiles in parental cells and BCR::ABL1 sublines. a 3D PCA of transcriptional profiles in parental cells cultured in the presence (green) or absence (gray) of GM-CSF, and in the p210 BCR::ABL1 (red) and p190 BCR::ABL1 (blue) sublines cultured in the absence of GM-CSF. b Venn diagrams of commonly upregulated (left panel) and downregulated (right panel) genes (FDR < 0.01, log2 fold change > 1 or < −1) in parental cells cultured in the presence of GM-CSF and the two sublines cultured in the absence of GM-CSF, compared to parental cells cultured in the absence of GM-CSF. c Gene ontology analysis of commonly upregulated (left panel) and downregulated (right panel) genes in the parental cells cultured in the presence of GM-CSF and the two sublines cultured in the absence of GM-CSF, compared to the parental cells cultured in the absence of GM-CSF. d GSEA of the common profile in the parental cells cultured in the presence of GM-CSF and the two sublines cultured in the absence of GM-CSF, compared to the parental cells cultured in the absence of GM-CSF. e Gene expression levels of the CCND2, CCND3, MAPKAPK3, LITAF, and BCL2L11 genes in triplicate samples of the parental cells cultured in the presence (green) or absence (gray) of GM-CSF and the p210 BCR::ABL1 (red) and p190 BCR::ABL1 (blue) sublines cultured in the absence of GM-CSF.

Fig. 6 Distinct gene expression profile between the p210 BCR::ABL1 and p190 BCR::ABL1 sublines. a Volcano plot of differentially expressed genes between the p210 BCR::ABL1 and p190 BCR::ABL1 sublines. Red plots indicate genes with p-value < 10−9 and absolute log2 fold change > 1. b Gene expression levels of the myeloid lineage-related genes (CD93, MPL, RARB, and MECOM) in triplicate samples of the parental cells cultured in the presence (green) or absence (gray) of GM-CSF, and the p210 BCR::ABL1 (red) and p190 BCR::ABL1 (blue) sublines cultured in the absence of GM-CSF. c Effect of imatinib treatment on myeloid lineage-related gene (CD93, MPL, RARB, and MEIS1) expression levels in triplicate samples of the p210 BCR::ABL1 and p190 BCR::ABL1 sublines incubated with or without 1 μM imatinib for 24 h in the absence of GM-CSF. RT-PCR analyses were performed using INHβB as an internal control. Data are shown as mean ± standard deviation (SD). The p-values from Student's t-test are indicated.

To further characterize the factor-independent growth of the BCR::ABL1 sublines, we compared the gene expression profile of the BCR::ABL1 sublines and parental cells with those of 59 myeloid leukemia cell lines, including 13 Ph-positive cell lines (https://sites.broadinstitute.org/ccle/), and of 10 CML patient samples (five chronic phase and five blastic crisis samples [66]), by performing two PCAs. However, in the whole transcriptome, the gene expression profiles of the Ph-positive myeloid leukemia cell lines (Supplemental Fig. 8a) and of the CML patient samples (Supplemental Fig. 8b) differed substantially from those of the p210 and p190 BCR::ABL1 sublines and parental cells, regardless of whether they were cultured with GM-CSF. We also performed a principal component analysis using the 16 genes upregulated by retroviral gene transfer of the p210 BCR::ABL1 fusion into HL-60 [67], an acute myeloid leukemia cell line. These genes include the PIM1 oncogene, a signaling kinase [68]; RAPGEF2, a guanine nucleotide exchange factor for a Ras homolog of the RAS subfamily of GTPases that function in signal transduction [69]; the transcription factors HOXB2, SOX5, and KLF1; and the GAGE antigens, a family of cell surface antigens originally identified in melanoma cells. Of note, for these 16 genes, the gene expression profiles of the Ph-positive myeloid leukemia cell lines (Supplemental Fig. 8c) and of the CML patient samples (Supplemental Fig. 8d) were more similar to those of the p210 and p190 BCR::ABL1 sublines than to that of parental TF-1 cells. These observations suggest that generation or gene transfer of the BCR::ABL1 fusion in human myeloid leukemia cell lines may affect, at least in part, the expression of genes involved in signal transduction and transcriptional regulation that are upregulated in Ph-positive myeloid leukemia cell lines and CML patient samples. Interestingly, although p210 BCR::ABL1 and p190 BCR::ABL1 induced factor-independent cell growth through similar signaling pathways, the gene expression profiles of the two sublines were distinct. In particular, myeloid lineage-related genes, including CD93 [70], MPL [71], RARB [72], and MECOM [73], were upregulated in the p210 BCR::ABL1 subline compared to the p190 BCR::ABL1 subline. Moreover, most of these genes were downregulated by imatinib treatment in the p210 BCR::ABL1 subline but not in the p190 BCR::ABL1 subline. These observations suggest that our cell system might aid further understanding of the differences in oncogenic activity between p210 BCR::ABL1 and p190 BCR::ABL1. Based on our success in TF-1 cells, we also attempted to generate the Bcr::Abl1 and BCR::ABL1 fusions in the murine Ba/F3 line and in human hematopoietic stem cells purified from cord blood, respectively, using the same strategy.
Because we have had no success in these two cellular systems so far, we speculate that TF-1 cells may have particular advantages for generating the BCR::ABL1 fusion with this strategy. In summary, we demonstrated that double cleavage at the breakpoints of the BCR and ABL1 genes by the CRISPR/Cas9 system generates a balanced translocation that mimics the Ph chromosome in a human factor-dependent leukemia cell line, indicating that simultaneous DSBs at the BCR and ABL1 breakpoints could be initiating events in Ph+ leukemia oncogenesis. Although the utility of simultaneously introducing DSBs with the CRISPR/Cas9 system for generating the BCR::ABL1 fusion is limited to TF-1 cells thus far, our strategy may provide a novel platform for the functional evaluation of the oncogenic activities of BCR::ABL1 in the near future.

DATA AVAILABILITY

The sequence reads are available at the DDBJ Sequence Read Archive (DRA014097).
6,703.2
2022-08-23T00:00:00.000
[ "Biology", "Chemistry" ]
The Impact of Corporate Social Responsibility on Corporate Financial Performance from Multiple Literature Perspectives

Corporate Social Responsibility's (CSR) impact extends beyond beneficiary communities to the practising companies themselves. That impact on business-related aspects is most evident in the financial performance of these companies relative to their non-financial performance, and it has therefore become a prominent subject of investigation in the literature over the past 15 years. However, this literature has ended in an ongoing debate over the type of impact that CSR has on corporate financial performance (CFP). This paper therefore aims to reorganize the findings of previous studies that tested the impact of CSR on CFP, so that upcoming researchers can accurately understand the nature of the mixed or conflicting findings reached by earlier researchers and thereby make more effective research contributions on this front. Accordingly, the paper classifies the relevant literature into six categories: first, studies that reported a direct or complete positive impact of CSR dimensions on CFP in general; second, studies that reported a specific impact of all or some CSR elements on certain dimensions of CFP; third, studies that reported an occasional or conditional impact of CSR dimensions on CFP; fourth, studies that reported a negative impact of CSR on CFP; fifth, studies that reported a role for moderating and mediating factors in the impact of CSR on CFP; and finally, studies that offered explanations for the mixed results on the relationship between CSR and CFP in particular and other aspects of corporate performance in general.

Introduction

Companies' acknowledgement of their responsibility to give back to the community (Grover, 2014) and to actively complement the governmental role in a form of mutual partnership (Chang & Sam, 2015) is known as Corporate Social Responsibility (CSR). The term evolved as a moral-based approach that was regarded for decades as the original motive for companies to practise CSR. Thereafter, CSR came to be seen as having a potentially positive influence on companies' business performance, which has driven many companies worldwide to practise CSR mainly to realize those business-related benefits. These companies therefore usually employ CSR activities as an instrument to enhance their performance in terms of profitability and business growth, alongside managing stakeholders' interests (Madueno et al., 2016). According to Cho and Lee (2017), corporate financial performance represents the company's value in terms of the joint effect of monetary (tangible) and non-monetary (intangible) value drivers. However, the monetary part of financial performance has received most of the attention from practitioners and researchers, with accounting measures applied intensively. This is also reflected in many of the studies that investigated the impact of CSR activities on corporate financial performance. For example, many studies defined corporate financial performance as composed of accounting-based and market-based indicators (Wang et al., 2016; Karaye, Ishak & Che-Adam, 2014; Nollet, Filis & Mitrokostas, 2016); the monetary value drivers were examined while the non-monetary ones were largely neglected.
Regardless of how corporate financial performance has been measured, the majority of quantitative studies examining the impact of corporate social responsibility on corporate financial performance found empirical evidence of an impact of CSR practices on corporate financial performance, or on what some studies call "corporate performance". However, the nature and significance of that impact remain a debated topic, since the reported impact between the two constructs ranges from clearly negative to strongly positive (Wang & Sarkis, 2017). This paper therefore critically reviews how the impact of CSR on corporate financial performance has been addressed in previous literature and then classifies that literature based on its findings. The classification is expected to facilitate a better understanding of the foundation of CSR's impact on companies' financial performance and of the multiple perspectives shaping the nature and magnitude of that impact.

Classifying Literature Review on the Impact of CSR on Corporate Financial Performance

Scholars who empirically examined the impact of CSR on corporate financial performance can be grouped into two main categories according to whether the reported impact was positive or negative. The reported positive impact can be further classified as direct, indirect (moderated and/or mediated), conditioned, or limited. This sub-classification constitutes an important basis for further grouping of the literature to address the range of that positive impact precisely. In contrast, other studies reported a negative impact of CSR on corporate financial performance. Moreover, many studies offered explanations for the mixed results on the impact of CSR on corporate financial performance in particular and on other aspects of corporate performance in general. This section highlights the main classes of previous studies that investigated the impact of CSR on corporate financial performance.

A. Studies that reported a direct or complete positive impact of CSR dimensions on corporate financial performance in general:

Literature reporting a direct impact of CSR inputs on corporate financial performance can be grouped into two categories according to how far the impact was generalized across all the variables measured. The first category, based on this review, comprises studies finding a full direct impact of all measured CSR factors on all factors of corporate financial performance as the dependent variable. The second group comprises studies that found a specific or selective positive impact between certain factors in the two sets of variables. An example of a study reporting a direct and absolute positive impact of CSR on financial performance is Reverte, Gomez-Melero and Cegarra-Navarro (2016), which documented a positive and significant direct effect of CSR on organizational performance regardless of the industry type, the company's size, or the company's proactivity in undertaking voluntary CSR. In that study, corporate performance referred basically to corporate financial performance, and the authors used monetary elements (accounting- and market-based) as key indicators of that performance.
However, much of the significance of that study stems from its inclusion of non-monetary or qualitative indicators, obtained from certain management practices in the sampled companies, alongside the monetary measures. The results clearly highlighted the impact of CSR on the non-monetary part of corporate financial performance, classified as internal and external benefits. The recorded internal benefits were mainly intangible organizational assets and capabilities, such as the development of competitive know-how technologies and a responsible corporate culture, whereas the external benefits were an enhanced corporate reputation in the market, enabling companies to build good relations with external stakeholders and to appear as attractive employers. Furthermore, the study found companies' investments in CSR initiatives effective in increasing employees' motivation, commitment, and loyalty. These findings are similar to those of Sila and Cek (2017) and Wang et al. (2014). Moreover, in another study on the impact of CSR on the performance of multinational companies, Zhao, Teng and Wu (2018) acknowledged the role of CSR as an important factor in the competitiveness of these companies, enabling them to build long-term employee and consumer trust as a basis for sustainable business models. That well-established trust, in turn, helps business leaders create optimal environments for business growth and innovation (ibid). These results on the impact of CSR practices on non-monetary corporate performance can be matched with those of Vong and Wong (2013), which also showed a direct and significant impact of CSR practices on company financial performance. That study also found a non-monetary positive impact of the social activities conducted by companies in the gaming industry, in terms of creating employment opportunities and contributing to community development efforts, as perceived by community stakeholders (ibid). Furthermore, Xiong et al. (2016) found that all the social dimensions they investigated (corporate investment in stakeholders' well-being, investments in environmental protection projects, and even monetary donations made by companies) led to higher financial performance. The study reported important managerial implications, challenging decision makers to change their perception of CSR as a cost centre by providing solid evidence of CSR's positive impact on overall corporate financial performance. According to the authors, these findings should motivate companies to behave as responsible social citizens and undertake CSR as a key business strategy (ibid). Overall, many positive effects of CSR on monetary and non-monetary indicators of corporate financial performance have been pointed out, such as enhancing corporate image and reputation, developing service quality, contributing to sustainable competitive advantage, improving risk management, achieving high loyalty and retention rates among employees and customers, saving costs, and improving profit margins (Radhakrishnan, Chitrao & Nagendra, 2014; Gras-Gil, Manzano & Fernández, 2016; Huang et al., 2014). For instance, the CSR impact model developed by Weber (2008), reflected in figure 1 below, provides a summary of the direct and comprehensive positive impact that CSR has on corporate financial performance in general.
The model classifies CSR business benefits by type, as monetary and non-monetary, and by the nature of the possible indicators, as quantitative or qualitative, in relation to organizational functions.

Figure 1. Business benefits from CSR.

B. Studies that reported a specific impact of all or some CSR elements on certain dimensions of corporate financial performance:

Blasi, Caporin and Fontini (2018) analysed the relationship between CSR activities and economic performance, measured using both market- and accounting-based performance indicators, for companies disaggregated by sector of activity. They found that companies' engagement in CSR had a positive impact limited to two market-based performance indicators: total stock return and the financial risk associated with business investments. For example, the results revealed that CSR activities increased companies' total stock returns and reduced their financial risk in almost all dimensions of CSR across all the sectors examined. However, the accounting-based financial performance indicators showed an unstandardized response to CSR dimensions, as the interaction between the various aspects of CSR and this component of the dependent variable was inconsistent across sectors. The authors attributed that inconsistency to sectoral effects on companies' motives to practise CSR and, accordingly, on its potential impact on accounting-based economic performance (ibid). In contrast to the above findings, Wang et al. (2016) revealed a very different interaction between CSR and the accounting-based indicators of corporate financial performance. That study identified a positive and linear impact of CSR on return on assets (ROA) and earnings per share (EPS), while it could not find reliable evidence of any impact of CSR on market-based financial performance measures such as the price-to-earnings ratio (P/E) or stock return. The authors linked these results to the findings of an earlier study by Karaye, Ishak and Che-Adam (2014), which suggested that the CSR-financial performance relationship appears more highly correlated when measured using accounting-based indicators of corporate financial performance than when using market-based indicators. However, that explanation was contradicted by the findings of Nollet, Filis and Mitrokostas (2016), who concluded that CSR has a similar impact on both accounting-based and market-based (excess stock returns) performance indicators. The authors used return on assets (ROA) and earnings per share (EPS) as specific measures of accounting-based corporate financial performance; ROA in particular has become the measure of corporate financial performance most used by researchers (Karaye, Ishak & Che-Adam, 2014). Vong and Wong (2013), on the other hand, reported very specific results concerning the impact of the CSR dimensions they used to measure that construct on different indicators of corporate financial performance. The authors placed particular attention on assessing the importance of CSR to the company or to a certain stakeholder group from different perspectives, according to the value created for each beneficiary party.
The study's findings showed that some of the CSR dimensions used, such as business and employment, community development, and environmental protection, were positively related to three of the financial indicators applied, namely revenue, market share, and overall organizational performance. The social dimension labelled management social practice was related to earnings per share and overall organizational performance, whereas the CSR dimension labelled responsible gambling was associated with revenue and market share (ibid). The researchers, however, did not explain this selective impact of CSR on financial performance. In a related context, corporate environmental responsibility is considered an essential dimension of CSR when described from a sustainability or responsible-corporate-behaviour perspective. In this regard, Benavides-Velasco, Quintana-García and Marchante-Lara (2014) asserted that positive corporate environmental responsibility significantly affects both return on assets (ROA) and return on equity (ROE) for companies that maintain competitive environmental performance. An individual impact of certain social dimension measures on certain financial performance indicators was also reported by Chen, Feldmann and Tang (2015), who found a significant positive correlation between three of the measured social dimensions (human rights, society, and product responsibility) and return on equity, whereas other financial indicators, such as sales growth and the cash flow/sales ratio, were not influenced by any of the social dimension measures.

C. Studies that reported an occasional or conditional impact of CSR dimensions on corporate financial performance:

Some studies identified a positive impact of CSR on corporate financial performance but expected that impact to be only occasional unless companies take further measures to sustain it. An example is Maqbool and Zameer (2018), who investigated the impact of CSR initiatives delivered by banks in India. The findings revealed that CSR has a positive impact on the banks' financial performance; however, the authors insisted on the importance of strategically integrating CSR objectives into the business strategy and developing a socially oriented corporate culture to sustain the impact of CSR on corporate financial performance (ibid). This example sheds light on the significance of CSR-business integration and the influence of a socially oriented corporate culture. Before presenting literature on the indirect impact of CSR on corporate financial performance, studies reporting a direct but negative impact of CSR on financial performance are presented first, because part of that negative impact is better explained after understanding the role of the moderating and mediating factors that some quantitative studies suggested intervene in the relationship between CSR and financial performance.

D. Studies that reported a negative impact of CSR on corporate financial performance:

Despite the multiple benefits of CSR for corporate financial performance discussed above, other studies have reached the opposite findings.
Although the literature on the impact of CSR on corporate financial or economic performance is skewed towards a positive impact, with differences among studies due to methodological differences or interpretation bias, as asserted by Chauhana and Amita (2014), studies reporting contrary findings cannot be neglected. For example, the results of Bhandari and Javakhadze (2017) showed that CSR reduces both accounting-based and stock-based future corporate performance. Their findings revealed that companies with active CSR contributions tend to adopt heavily the social preference view, which prioritizes stakeholders' interests over the interests of shareholders, a phenomenon that the agency theory of CSR strives to control. In such cases, the executive managers, or agents, of these companies may strategically exclude investment opportunities that are not anticipated to be of much value to other stakeholders, even if such opportunities are of high potential value to shareholders. In another study, Han, Zhuangxiong and Jie (2017) analysed data on Chinese companies listed on the Shenzhen and Shanghai stock exchanges from 2008 to 2014 using corporate document analysis. That study examined the impact of CSR on a non-accounting financial performance indicator, product market performance, to determine the actual operating conditions in the sampled companies according to recorded growth in product sales. The findings showed that CSR activities significantly decrease the product market performance of non-state-owned companies in noncompetitive industries, where those companies' ability to finance their debts is usually very limited. The authors attributed that negative impact to corporate governance as an internal organizational element: corporate governance tends to be weak in privately owned companies compared with state-owned ones in noncompetitive markets, which gives rise to management's engagement in more self-serving practices, including CSR.

E. Studies that reported a role for moderating and mediating factors in the impact of CSR on corporate financial performance:

As discussed in the preceding sub-sections, the mixed findings on the impact of CSR on corporate financial performance imply the need for a sufficient understanding of the indirect impact that CSR may have on corporate financial performance in particular, as well as on other aspects of corporate performance. The characteristics of these moderating factors, and the extent and nature of their interaction, either individually with CSR and the multiple components of corporate performance or collectively in the relationship between CSR and corporate performance, require advanced investigation by CSR researchers to better define the relationships between these variables in the CSR context. Among the studies highlighting the role of multiple moderating factors in the relationship between CSR and corporate financial performance is Javed, Rashid and Hussain (2016), which identified a significant moderating role for dynamic business environments, largely a factor external to the organizational context, in the positive relationship the authors found between CSR and the financial performance of companies working in such environments. However, the study revealed that the manipulation of CSR activities may negatively influence that moderated relationship (ibid).
The role of munificent business environments was also noted by Goll and Rasheed (2004), who reported that profitable companies can grow fast in munificent environments, which are perceived as effective in enhancing these companies' engagement in CSR practices, compared with environments characterized by scarce resources, where companies' social performance tends to be very limited. The authors therefore suggested that a dynamic and munificent environment plays a significant moderating role in the relationship between corporate social responsibility practices and corporate financial performance (ibid). Within the internal organizational context, Xie et al. (2017) presented other moderating factors, finding a strong moderating effect of the institutional environment on the relationship between CSR practices and financial performance, as well as on the relationship between CSR practices and customer satisfaction, which may also indirectly enhance corporate financial performance. In this respect, the presence of a well-established institutional environment strengthens the positive relationship between CSR practices and these two variables (financial performance and customer satisfaction). The study attributed the significance of that moderating role to the ability of a well-established institutional environment to provide long-term supporting policies and a healthy organizational culture that can easily be aligned with the corporate vision and the surrounding legal environment. In addition to its moderating effect on the relationship between CSR and corporate financial performance, the institutional environment has a direct positive impact on the level of companies' engagement in CSR functions and strengthens stakeholders' perceptions of their social responsibility (ibid). These findings were also supported by Cavazotte and Chang (2016), who regarded the institutional environment as an important factor whose influence on the effect of CSR investments on corporate financial performance companies should understand, because CSR investments are associated with financial and non-financial costs. Furthermore, it is argued that CSR's impact on corporate performance ranges from negative to significantly positive and is also influenced by many internal and external factors (ibid). Mehralian et al. (2016), on the other hand, identified a mediating role for total quality management (TQM), an internal management system that usually takes into consideration the interests of various stakeholder groups, in the relationship between CSR and corporate financial performance. Corporate financial performance in that study consisted of monetary and non-monetary items and was measured using Balanced Scorecard techniques. The mediating effect reported for TQM in that relationship was attributed to managers' use of this managerial tool to improve the quality of their business processes and products in order to satisfy the interests of multiple stakeholder groups; as a result, they can strengthen relationships with those stakeholders and ultimately improve corporate financial performance. The importance of TQM has also been highlighted by Wang et al. (2016) as a business strategy that enables companies to acquire a superior competitive advantage through continuous process improvement.
Therefore, socially responsible companies may apply the TQM techniques they have developed to improve the quality of their operations and the way they implement their corporate strategies, which in turn improves corporate financial performance (Benavides-Velasco, Quintana-García & Marchante-Lara, 2014). Moreover, CSR is considered a driver of a sustainable competitive advantage, as well as a sustainable quality advantage, by motivating companies to implement competitive management practices such as TQM. Benavides-Velasco, Quintana-García and Marchante-Lara (2014) thus regarded TQM in conjunction with CSR as a highly potential source of the sustainable competitive advantage targeted by almost all companies. The authors found that adopting these two approaches improves the capacity of the hotels they surveyed to create shared value with different stakeholder groups, because both concepts share largely the same orientation towards paying close attention to the needs and expectations of the company's stakeholders. Furthermore, the study asserted that the level of development of corporate social responsibility is positively influenced by TQM implementation (ibid), reflecting that TQM can also have a direct positive impact on CSR and on corporate performance individually. In general, strategic organizational tools used by companies to enhance the competitiveness of their practices, such as TQM, require careful alignment between these tools and the key corporate strategies so that the tools embody common values within the company's business model. In this respect, Bocquet et al. (2013) asserted that corporate financial performance (as a dependent variable of CSR) is also affected by the degree of consistency that managers establish across strategic organizational and environmental elements when setting CSR implementation strategies. From the perspective that views customers as the company's most important external stakeholder group, Xie et al. (2017) suggested a full mediating effect of customer satisfaction on the positive relationship between CSR and corporate financial performance. To enhance customer satisfaction in an individual socially responsible company, the study highlighted the role of good institutional environments in strengthening the impact of CSR practices on customer satisfaction; in other words, the institutional environment played a moderating role between CSR and customer satisfaction within the mediated relationship between CSR and corporate financial performance. Furthermore, customer satisfaction was acknowledged by García-Madariaga and Rodríguez-Rivera (2017) and Jha and Cox (2015) as a moderating variable in the relationships between CSR and corporate market value and financial performance, respectively. These studies implied greater corporate focus on meeting customers' needs and enhancing their loyalty by employing CSR practices as key tools to improve customer satisfaction when making strategic decisions, thereby improving corporate financial performance (Jha & Cox, 2015). However, according to the stakeholder theory of CSR, companies are expected to seek the satisfaction of all concerned stakeholders, given the significant role of stakeholder engagement in the success of the social and sustainability strategies adopted by socially responsible companies. In fact, the impact of stakeholder engagement on the relationship between CSR and overall corporate performance has not yet been sufficiently investigated.
In this regard, Javed, Rashid and Hussain (2016) expected a moderating role for stakeholders in the relationship between CSR and corporate financial performance in particular. In this author's assessment, however, it would also make sense to identify other factors that may control the extent of stakeholder engagement, alongside investigating a moderating or mediating role for such engagement in the CSR-corporate performance relationship. Furthermore, some studies have changed the position of CSR from an independent variable to a moderating or mediating one in the relationship with corporate financial performance, reporting significant moderating and mediating effects for CSR either individually or in combination with other co-variables. For example, Wang et al. (2015) suggested that CSR and brand equity together can enhance companies' market value and, accordingly, their financial and economic position. García-Madariaga and Rodríguez-Rivera (2017), on the other hand, identified a strong mediating role for CSR in the relationship between customer satisfaction and corporate reputation in both directions. These CSR-moderated and CSR-mediated relationships lead over time to significant improvements in the financial aspect of corporate performance. More interestingly, CSR was also found to be a motivator of companies' ethical practices. For example, Laguir, Stagliano and Elbaz (2015) pointed out a significant influence of CSR on the level of companies' tax aggressiveness, defined in that study as the process encompassing all tax planning activities, whether legal or illegal, which represents irresponsible corporate behaviour. The study found that the higher the level of a company's CSR social dimension, the lower its level of tax aggressiveness, or the higher its ethical obligation concerning financial aspects; in contrast, the higher the level of the CSR economic dimension, the higher the level of that company's engagement in tax aggressiveness practices. The study highlighted that the level of a company's commitment to a certain CSR dimension is a good determinant of the level of its ethical conduct towards certain stakeholder groups, since the impact tends to vary across different dimensions of CSR (ibid). Within a more limited scope of CSR, corporate environmental responsibility, one of the key dimensions of CSR or sustainability, was empirically found by Li et al. (2017) to have a significant positive influence on corporate financial performance. However, the study pointed out that this relationship was significantly moderated by the stringency of government regulation: the more stringent government regulation is, the more significantly positive the impact of corporate environmental responsibility on corporate financial performance. In contrast, organizational slack was found to play a negative moderating role in the relationship between these constructs. Organizational slack refers to the resources (usually financial) available to an individual company in excess of those necessary to meet immediate business and operational requirements. In other words, the positive environmental performance of companies with abundant organizational slack does not allow these companies to enhance their financial performance.
However, the findings revealed that the negative moderating effect of organizational slack on this relationship can be significantly weakened by stringent government regulation. The study demonstrated that this dual moderating effect of stringent government regulation represents a key motivational tool for companies to improve their environmental performance, especially companies with little or moderate organizational slack operating in environments controlled by stringent government regulation. However, these findings may not be generalizable to other dimensions of CSR, especially the social dimension, which requires a working environment with limited regulatory stringency to exert a more positive influence on the three aspects of corporate sustainability performance. In this respect, the research supports the literature arguing for the importance of measuring each of the main CSR dimensions individually, instead of aggregating them into a single measure, in order to accurately identify the impact of each dimension and avoid losing important information (Laguir, Stagliano & Elbaz, 2015).

F. Studies that offered explanations for the mixed results on the relationship between CSR and corporate financial performance in particular and other aspects of corporate performance in general:

Given the previously discussed findings of Wang et al. (2015), that study provided a remarkable contribution to the CSR literature on the relationship between CSR and corporate financial performance, despite its failure to address any impact of CSR on market-based indicators of corporate financial performance. The authors confirmed, in their study of international construction companies over a seven-year period, that there is a curvilinear relationship between the two constructs, which had been anticipated by previous researchers but not tested before. The authors described this U-shaped relationship as follows: at low levels of social practice, companies cannot achieve any remarkable financial benefits, since the cost of conducting such activities exceeds the foreseeable gains. This stage can provide a logical explanation for the negative impact that appears when measuring CSR's effect on financial performance, as reported by some of the research discussed above. The U-shaped impact, however, suggests that as companies keep improving their CSR activities, they approach a break-even point and start realizing increasing financial benefits once they pass the point at which CSR benefits offset the associated costs. The authors suggested that the curvilinear relationship between CSR and financial performance in the international construction industry is also applicable to other industries in which the relationship between the costs and benefits of CSR applies similarly. The findings implied that companies should be sufficiently motivated to improve their CSR strategies further and to allocate enough financial and non-financial resources to run their CSR programmes, since the impact of CSR creates mutual value for companies and society, even though it may take some time to pay back. Moreover, the reported curvilinear relationship between these two constructs necessitates setting a regulated starting point, by policy makers, from which companies' commitment to CSR should commence.
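One common way to make the described U-shape concrete is a quadratic specification of CFP in CSR; the functional form, symbols, and coefficient signs below are purely illustrative assumptions and are not estimated in Wang et al. (2015) or any other cited study:

```latex
% Illustrative quadratic (U-shaped) CSR-CFP specification; all coefficients are assumed.
\[
  \mathrm{CFP}_{i} \;=\; \beta_{0} \;-\; \beta_{1}\,\mathrm{CSR}_{i} \;+\; \beta_{2}\,\mathrm{CSR}_{i}^{2} \;+\; \varepsilon_{i},
  \qquad \beta_{1},\ \beta_{2} > 0,
\]
\[
  \mathrm{CSR}^{*} \;=\; \frac{\beta_{1}}{2\,\beta_{2}} .
\]
```

Under this sketch, below the turning point CSR* the marginal cost of CSR spending exceeds its marginal benefit, so measured CFP falls as CSR rises; beyond CSR*, benefits offset the associated costs and the relationship turns positive, which matches the break-even reading offered above.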
In this author's opinion, in addition to using the indicated threshold in this curvilinear relationship as a time-related break-even point, there is also a logical case for setting another, time-free point to differentiate between ad hoc CSR activities and the strategic CSR activities described above. Below that point, an individual company should not be able to generate valuable benefits from casual or standalone CSR programmes, no matter how much time passes, because CSR spending in such situations would most likely remain cost-ineffective. Alternatively, Reverte et al. (2016) provided a more extended explanation of the source of the mixed findings on the impact of CSR on corporate financial performance, attributing it to two main reasons. The first reason relates to differences in the methods researchers use to measure corporate financial performance: some studies focused on accounting-based indicators only, others applied only market-based measures, and a third group applied both. Furthermore, there has been a strong tendency to adopt monetary measures (accounting- or market-based) over the non-monetary measures that many studies identified as an original component of corporate financial performance (ibid). Moreover, while reviewing those studies, this author noticed inconsistency in the way some accounting-based indicators, such as profitability, were measured. This was evident in the method adopted by Chauhana and Amita (2014) to measure profitability when testing the relationship between CSR and corporate profitability: the researchers used indicators better suited to per-unit analysis of profitability and capital budgeting, instead of more accurate profitability ratios that capture the full economic value added by the company, such as return on assets, return on equity, return on invested capital, and return on capital employed (Segal, 2019). The second reason, according to Reverte, Gomez-Melero and Cegarra-Navarro (2016), was the low attention paid to the potential mediating and/or moderating effects of other internal and external factors that may direct or influence the relationship between CSR and corporate financial performance (ibid). Few studies have measured the influence of mediating and moderating variables on this relationship, which calls for extended investigations incorporating more internal and external factors with potential indirect influence on the direction and significance of the assumed relationship between CSR and companies' financial performance. Furthermore, this author noticed that some studies treated CSR as a lump-sum factor without considering the wide differences between philanthropic and strategic CSR (Maqbool & Zameer, 2018), or between proactive and reactive CSR. The measured impact of CSR therefore yields conflicting outcomes, because the perspective from which CSR should be defined had not been sufficiently considered, despite the broad agreement in the CSR literature that CSR is a multidimensional term that can be defined and measured from many different perspectives.
Therefore, understanding the ins and outs of the reasons mentioned above may put a logical end to the debate over the nature of CSR's impact on corporate financial performance.

Conclusion

CSR has an impact on corporate financial performance. However, the literature shows mixed results on the type and characteristics of that impact, owing to the many organizational and external factors intervening in it and to the ways the constructs have been measured. It is therefore important to understand how researchers have handled these factors when examining the impact of CSR on corporate financial performance. That understanding allows upcoming researchers to identify accurately the position from which the impact was measured or evaluated, and thus to obtain the right explanations for the diverse findings on the same issue and make useful contributions to knowledge.

The Study Contribution

This study contributes theoretically to the knowledge base in the CSR field by extending the understanding of the impact of CSR on corporate financial performance, classified as positive, negative, direct, indirect, conditioned, or specific. That extended classification implies adopting the most accurate measurement tools and identifying clearly defined variables, and thus reaching reliable findings that are sufficiently logical to explain the nature or extent of CSR's impact on corporate financial performance under each of the classified relationships between the two variables. Furthermore, the study offers a conceptual mapping between CSR and corporate financial performance based on multi-dimensional factors, which future researchers can use to enhance the quality of their research methodologies and the reliability of their findings.
7,989.8
2021-09-26T00:00:00.000
[ "Business", "Economics" ]
Benzothiazole heterogeneous photodegradation in nano α-Fe2O3/oxalate system under UV light irradiation

The photodegradation of benzothiazole (BTH) in wastewater with the coexistence of iron oxides and oxalic acid under UV light irradiation was investigated. The results revealed that an effective heterogeneous photo-Fenton-like system could be set up for BTH abatement in wastewater under UV irradiation without additional H2O2: 88.1% of BTH was removed with the addition of 2.0 mmol l⁻¹ oxalic acid and 0.2 g l⁻¹ α-Fe2O3 using a 500 W high-pressure mercury lamp (365 nm). The degradation of BTH in the photo-Fenton-like system followed a first-order kinetic model. The photoproduction of hydroxyl radicals (·OH) in different systems was determined by high-performance liquid chromatography. Identification of transformation products by liquid chromatography coupled with high-resolution tandem mass spectrometry provided information about six transformation products formed during the photodegradation of BTH. Further insight was obtained by monitoring the concentrations of sulfate (SO42−) and nitrate (NO3−), which demonstrated that the intermediate products of BTH can ultimately be decomposed. Based on these results, a potential photodegradation pathway of BTH was also proposed.

Benzothiazoles (BTHs) are used as slimicides in the paper and pulp industry [1], as fungicides in lumber and leather production [2], as vulcanization accelerators in the manufacture of rubber products and tyres [3], and as stabilizers in the photo industry [4]. Owing to their widespread use and poor elimination by conventional wastewater treatment processes [5], sewage is considered their main pathway to the aquatic environment [6]. An additional source of BTHs in water is street runoff containing abrasion residues of tyres [7]. The average concentration of BTHs in the effluent of a Greek wastewater treatment plant was 254 ng l⁻¹ [6], and a survey in China revealed that BTHs occur in river water in the range of 158-473 ng l⁻¹ [8]. Most BTHs were found not only to inhibit the activity of microorganisms [9] but also to show toxic effects in mammals.
Advanced oxidation processes (AOPs), such as H2O2/UV, photo-Fenton and ozone, have been used to oxidize benzothiazole compounds [10-14], suggesting that AOPs can efficiently eliminate BTHs. Here, we aim to investigate the photodegradation behaviour of BTH and define the best conditions for improving BTH degradation in a heterogeneous system composed of iron oxides and oxalic acid. Meanwhile, based on data obtained from high-performance liquid chromatography coupled with quadrupole time-of-flight mass spectrometry (HPLC-QTOF-MS) analysis and on the calculation of the frontier electron density of BTH, the initial steps of BTH degradation and the resulting transformation products were proposed.

Reagents

BTH (technical grade, 96%) and oxalic acid (AR, 98%) were purchased from Shanghai Aladdin Biochemical Technology Co., Ltd, China. α-Fe2O3 (99.5%, 30 nm) was obtained from Shanghai Ziyi Reagent Co., Ltd, China. Other analytical-grade chemicals were purchased from Sinopharm Chemical Reagent Co., Ltd, China. Methyl alcohol (HPLC grade) was used for HPLC analysis; chromatographic-grade methyl alcohol was purchased from Tedia Company, USA. All chemicals were used without further purification, and all solutions were prepared using double-distilled water.

Experiments of benzothiazole photodegradation

The photodegradation experiments of BTH were carried out in an XPA-7 photochemical reactor (Xujiang Electromechanical Plant, Nanjing, China). Throughout the experiments, the solution temperature was maintained at 20 ± 1°C by circulating cooling water. The irradiation source was a 500 W high-pressure mercury lamp with a maximum light intensity output at 365 nm. The lamp was placed in a hollow quartz trap located at the centre of the reactor. The light intensity at the quartz tube positions was measured to be 8.96 × 10² mW cm⁻² by a UV irradiation meter (UV-A, Beijing Normal University, China), and the illumination to be 7.9 × 10⁴ lx by a lux meter (AS-813, Smart Sensor, China). Before irradiation, the suspension was sealed and agitated for 30 min to reach adsorption equilibrium. The initial pH of the reaction solutions was adjusted with sulfuric acid solution (with hydrochloric acid when acid ions were measured) and sodium hydroxide solution, and the final solution volume was adjusted to 50 ml with double-distilled water. The solution was then placed in the photochemical reactor and stirred with magnetic stirrers. At fixed time points, analytical samples were withdrawn from the suspension with a pipette, immediately centrifuged at 10 000 r.p.m., and then filtered through a syringe equipped with a 0.45 µm membrane filter for further analysis.

Analysis methods

The concentrations of BTH during the experiments were quantified by a PerkinElmer HPLC equipped with a SPHERI-5RP-18 column (4.6 × 150 mm, 5 µm) at a wavelength of 254 nm; the retention time of BTH was 6.4 min. The mobile phase was methanol-water (90 : 10, v/v), and the flow rate was 0.6 ml min⁻¹. Identification of transformation products (TPs) in solution was performed on a Waters Acquity G2 Q-TOF LC-MS instrument, composed of a Waters Acquity ultra-performance liquid chromatography (UPLC) system coupled to a QTOF mass spectrometer. Analytes were eluted with a gradient programme using MeOH (A) and water (B), both acidified with 0.1% formic acid.
The gradient programme was: 15% A held for 0-2 min; 2.0-16.0 min, linear increase from 15% to 95% A; 16.0-21.0 min, held at 95% A; 21.0-21.1 min, returned to 15% A to re-equilibrate the column [11]. All samples were kept refrigerated at 10°C in the UPLC autosampler, and a 1.0 µl injection volume was used with a total flow rate of 0.2 ml min⁻¹ over a total run time of 12 min. Mass spectrometry was performed on a Waters Synapt G2S Q-TOF (Micromass MS Technologies, Manchester, UK) equipped with an electrospray ionization source operating in both positive and negative modes. High-purity nitrogen as the nebulization gas was set at 800 l h⁻¹ at a temperature of 500°C, and the cone gas was set at 50 l h⁻¹. The capillary voltages in positive and negative modes were set at 5.0 kV and −4.5 kV, respectively. Argon was used as the collision gas. The cone voltages were both set at 35 V, while the collision-induced dissociation energies in positive and negative ion modes were set at 5.0 eV and 7.0 eV, respectively, to obtain fragmentation information.

Excess benzene was introduced into the different reaction systems as an ·OH scavenger to determine the ·OH quantum yield under irradiation of the 500 W Hg lamp. Phenol produced from the reaction of benzene with ·OH was detected at 254 nm by HPLC, in which 25% (v/v) acetonitrile was used as the mobile phase at a flow rate of 1.0 ml min⁻¹ under isocratic conditions at 25°C. Samples of 10 µl were injected into the column through the sample loop for analysis [25]. Analyses of the sulfate ion and nitrate ion were performed according to the standard methods proposed by the PRC State Environmental Protection Administration [27].

Kinetic study

The BTH degradation processes were described by a pseudo-first-order approach, and the first-order rate constants of phototransformation, k [s⁻¹], of the investigated compound were obtained by linear regression of the natural logarithm of the relative residual concentration over irradiation time t [s], as described by the following equation:

ln(Ct/C0) = −kt,

where Ct is the concentration of BTH at a given time, C0 is the initial concentration, and k is the rate constant.

Calculation of the frontier electron density of benzothiazole

By calculations on BTH at the B3LYP/6-311G** level with the density functional theory method, the frontier electron densities (FEDs) of the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) were obtained. To predict the reaction sites for hydroxyl addition, values of FED²HOMO + FED²LUMO were also calculated.

When α-Fe2O3 and oxalic acid coexisted, the removal of BTH significantly increased, up to 88.1%. Therefore, with only α-Fe2O3 or only oxalic acid, the reaction system shows low photocatalytic activity for BTH degradation, whereas BTH can be efficiently degraded through the synergistic effect of iron oxides and oxalate under UV light irradiation, because a heterogeneous photochemical Fenton-like system is set up. It has been reported that fenuron [17] and mesotrione [25] can be efficiently photodegraded in such a system.

Production of hydroxyl radicals in different reaction systems

The generation of hydroxyl radicals (·OH), which have a high oxidation potential, in photochemical reactions is critical to the degradation of organic pollutants. In particular, the yield of ·OH can serve as an indicator of photochemical degradation in the α-Fe2O3/oxalate system.
Production of hydroxyl radicals in different reaction systems The generation of hydroxyl radicals (·OH), which have a high oxidation potential, in photochemical reactions is critical to the degradation of organic pollutants. In particular, the yield of ·OH could be an indicator for photochemical degradation in the α-Fe 2 O 3 /oxalate system. Therefore, the concentration of ·OH was detected during the photochemical reaction process in the present study. The concentration of ·OH in the reaction system depends on the rates of generation and consumption. As shown in figure 1b, the yield of ·OH produced in the system of α-Fe 2 O 3 or oxalate alone is much lower than that in the system where α-Fe 2 O 3 and oxalate coexist. The ·OH was generated quickly in the initial 10 min, and the maximum ·OH concentration detected was about 6 μmol l −1 after 10 min. To understand the photochemical reaction process of BTH degradation in such an α-Fe 2 O 3 /oxalate complex system, the interaction of α-Fe 2 O 3 and oxalate under UV light irradiation has been discussed in detail elsewhere [26]: photoactive Fe(III)-oxalate complexes such as [≡Fe(C 2 O 4 ) n ] 3−2n are formed and are more readily photolysed to yield ·OH than other Fe(III) species. Therefore, the BTH photodegradation was improved significantly in the system of oxalic acid and α-Fe 2 O 3 . Radical quenching experiments are very useful methods for proving the effect of the hydroxyl radical. Chen et al. [25] selected benzene as the hydroxyl radical scavenger to show that ·OH produced from the photocatalysis was the key factor leading to the degradation of organics. Effect of the dosage of α-Fe 2 O 3 on benzothiazole photodegradation As shown in figure 3a, the effect of α-Fe 2 O 3 dosage on BTH photodegradation was investigated in the presence of oxalic acid with an initial concentration of 2.0 mmol l −1 under irradiation of a 500 W high-pressure mercury lamp. In the absence of α-Fe 2 O 3 , the degradation of BTH was very slow, and the degradation rate was only 12.3% (curve 0.0 g l −1 ). Nevertheless, the degradation of BTH was obviously accelerated after adding α-Fe 2 O 3 to the reaction system, indicating that α-Fe 2 O 3 was an effective photocatalyst for BTH degradation with the assistance of oxalic acid. The removal percentage of BTH rose to 92.88% when the concentration of α-Fe 2 O 3 was increased to 0.2 g l −1 . However, the removal percentage declined slightly when the dosage of α-Fe 2 O 3 increased from 0.2 to 0.6 g l −1 , because an excessive amount of α-Fe 2 O 3 might restrain the penetration of UV light in the reaction suspension through increased scattering and thus reduce the generation of ·OH. The kinetics of the reaction process was also studied. The photodegradation of BTH in the α-Fe 2 O 3 /oxalate system under UV irradiation was in accordance with first-order kinetics. The first-order kinetic constants (k) were calculated to be 0.5 × 10 −2 , 6.8 × 10 −2 , 5.9 × 10 −2 and 5.3 × 10 −2 min −1 with 0.0, 0.2, 0.4 and 0.6 g l −1 α-Fe 2 O 3 , respectively. The changes of k versus α-Fe 2 O 3 dosage (figure 3a inset) reveal that the optimum concentration of α-Fe 2 O 3 was 0.2 g l −1 in the proposed α-Fe 2 O 3 /oxalate system for the best BTH photodegradation performance. As a heterogeneous photocatalyst, α-Fe 2 O 3 could remarkably accelerate the generation of [≡Fe(C 2 O 4 ) n ] 3−2n . Under UV irradiation, more ·OH could be produced as more [≡Fe(C 2 O 4 ) n ] 3−2n was generated during the photochemical reaction. However, an excessive dosage of α-Fe 2 O 3 might restrict the penetration of UV light in the solution and decrease the UV light intensity, which is confirmed by Wu et al. [29]. Effect of the initial oxalate concentration on benzothiazole photodegradation The rate of BTH photodegradation was improved markedly as the oxalate concentration in the α-Fe 2 O 3 /oxalate suspension was increased. However, the degradation rate of BTH does not always increase with the initial oxalate concentration, which means that excessive oxalate could inhibit the degradation of BTH.
Excessive oxalate would lead to the occupation of the adsorption sites on the iron oxide surface. In addition, excessive oxalate can also result in a lower pH at the beginning of the reaction, so a large amount of Fe 3+ would form [26,30]. The photodegradation of BTH in the α-Fe 2 O 3 /oxalate system was fitted with first-order kinetics, and the first-order kinetic constants (k) versus C 0 ox are shown in figure 3b (inset). When the initial concentrations of oxalic acid were 0.0, 1.0, 2.0, 3.0 and 4.0 mmol l −1 , the k values of BTH degradation were calculated to be 0.5 × 10 −2 , 1.9 × 10 −2 , 6.7 × 10 −2 , 2.7 × 10 −2 and 2.6 × 10 −2 min −1 , respectively. The results revealed that the BTH photodegradation rate first increased with increasing initial oxalic acid concentration but reached its maximum value when the initial concentration of oxalic acid was increased to 2.0 mmol l −1 . Therefore, it is necessary to control the concentrations of α-Fe 2 O 3 and oxalate for BTH photodegradation, because excessive oxalic acid would overwhelmingly occupy the active sites on the surface of α-Fe 2 O 3 and facilitate the competitive reaction with the generated ·OH, whereas too little oxalic acid would lead to incomplete reaction. Effect of the initial pH value on benzothiazole photodegradation To study the effect of the initial pH value on BTH photodegradation, a series of experiments was carried out. The initial pH of the solution was adjusted with NaOH or H 2 SO 4 before reaction, and the initial concentration of BTH was 100 mg l −1 in the presence of 0.2 g l −1 α-Fe 2 O 3 and 2.0 mmol l −1 oxalic acid under UV irradiation (500 W Hg lamp). At pH 7.0, the degradation efficiency of BTH changed little. As the pH value was decreased, the degradation efficiency gradually improved; in particular, when the pH value reached 2.0, the degradation efficiency increased to a maximum value of 90.57% (figure 3c). The first-order kinetic constants (k) were 6.7 × 10 −2 , 2.1 × 10 −2 , 0.4 × 10 −2 and 0.2 × 10 −2 min −1 when the initial pH values were 2.0, 3.0, 5.0 and 7.0, respectively. In the α-Fe 2 O 3 /oxalate/UV system, a high concentration of [≡Fe(C 2 O 4 ) n ] 3−2n with high photocatalytic activity might appear at a lower pH value. Identification of the photodegradation intermediates and products Various TPs are often produced in advanced oxidation processes, because the reaction between ·OH and organic pollutants is non-selective. Degradation intermediates were determined by UPLC and QTOF analysis, and the chromatographic retention times, relative molecular weights and ion information of the intermediates were comprehensively analysed from the extracted mass spectra. Based on the comparison of the mass spectra of the photodegradation solution at 0 min and 50 min during the reaction process, a host of new peaks appeared (figure 4). The major TPs included such hydroxylation products as mono-hydroxylated BTH with a mass-to-charge ratio (m/z) of 150.02, di-hydroxylated BTH at m/z 166.01 and tri-hydroxylated BTH at m/z 182.00, among which the peaks at m/z 150.02 might also correspond to benzothiazol-2(3H)-one. To correctly characterize the positions of hydroxylation in the mono-hydroxylated compounds, the FEDs of BTH were calculated to predict the reaction sites for ·OH attack. The results are summarized in table 1. According to frontier orbital theory, ·OH addition preferentially occurs on the atom with the highest FED 2 HOMO + FED 2 LUMO value [31], which has been testified to be reasonable by published work [32].
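The site-selection rule just described is a simple ranking of atoms by FED 2 HOMO + FED 2 LUMO. The sketch below only illustrates that bookkeeping step; the per-atom FED values and atom labels are invented placeholders, not the values reported in table 1, which come from the B3LYP/6-311G** calculation described in the methods.

```python
# Hypothetical frontier electron densities per atom of BTH (placeholders,
# not the values from table 1 of the paper).
fed = {
    "2C": (0.30, 0.35),
    "4C": (0.18, 0.12),
    "5C": (0.15, 0.20),
    "6C": (0.28, 0.25),
    "7C": (0.17, 0.14),
    "8C": (0.27, 0.30),
    "9C": (0.29, 0.22),
    "S1": (0.22, 0.10),
    "N3": (0.20, 0.15),
}

# Rank candidate sites for *OH addition by FED^2(HOMO) + FED^2(LUMO).
scores = {atom: h ** 2 + l ** 2 for atom, (h, l) in fed.items()}
for atom, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{atom}: FED^2(HOMO) + FED^2(LUMO) = {score:.3f}")
```

The atoms at the top of such a ranking are the positions where mono-hydroxylation is expected first.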
As shown in table 1, the 6C, 8C and 9C sites in the phenyl ring and 2C in the thiazole ring had the highest FED 2 HOMO + FED 2 LUMO values, suggesting that these positions were likely to be attacked by ·OH, thus resulting in the generation of mono-hydroxylation products. However, it should be noted that the possibility of ·OH addition to the thiazole moiety is much higher than that of addition to the phenyl moiety. (Figure 6 caption: possible photodegradation pathway of BTH in the UV-irradiated α-Fe 2 O 3 /oxalate system; TPs marked with a dashed frame were not detected in any of the samples, but Borowska et al. [11] had detected 1 and 2.) The concentration change of benzothiazol-2(3H)-one, one of the intermediates, was determined by liquid chromatography (LC), as shown in figure 5a. As seen, the concentration of benzothiazol-2(3H)-one increased with time during 20-70 min and was followed by a gradual decay, indicating that the formation and transformation of benzothiazol-2(3H)-one accompanied the degradation of BTH. The concentration change of inorganic ions during the BTH photocatalysis process is depicted in figure 5b. As clearly seen, the sulfur atom and nitrogen atom in the thiazole structure could be converted to sulfate (SO 4 2− ) and nitrate (NO 3 − ) ions, respectively. This illustrates that the intermediates can ultimately be decomposed. The data obtained above were used to propose a schematic pathway of BTH degradation by α-Fe 2 O 3 /oxalate (figure 6). The degradation of BTH starts with hydroxylation, producing mono-, di- or tri-hydroxylated BTH. Hydroxylation of the aromatic ring, however, makes it more unstable and prone to ring opening. The oxidation products of tri-hydroxylated BTH may be found at m/z 200.00 or 171.97 (neither of them has been detected in the samples, but both are reported in the literature [11]), and the latter corresponds to the loss of one atom of carbon and the gain of four atoms of oxygen, which suggests benzene ring opening and subsequent decarboxylation [33]. Conclusion The photocatalytic degradation of BTH has been investigated in a UV-irradiated α-Fe 2 O 3 /oxalate system in this study, as a photo-Fenton-like system without additional H 2 O 2 . The optimum degradation conditions identified in this work were 0.2 g l −1 α-Fe 2 O 3 , 2.0 mmol l −1 oxalic acid and an initial pH of 2.0.
4,485.6
2018-06-01T00:00:00.000
[ "Environmental Science", "Chemistry" ]
Therapeutic Efficacy of a Subunit Vaccine in Controlling Chronic Trypanosoma cruzi Infection and Chagas Disease Is Enhanced by Glutathione Peroxidase Over-Expression Trypanosoma cruzi-induced oxidative and inflammatory responses are implicated in chagasic cardiomyopathy. In this study, we examined the therapeutic utility of a subunit vaccine against T. cruzi and determined if glutathione peroxidase (GPx1, antioxidant) protects the heart from chagasic pathogenesis. C57BL/6 mice (wild-type (WT) and GPx1 transgenic (GPxtg)) were infected with T. cruzi and at 45 days post-infection (dpi), immunized with TcG2/TcG4 vaccine delivered by a DNA-prime/Protein-boost (D/P) approach. The plasma and tissue-sections were analyzed on 150 dpi for parasite burden, inflammatory and oxidative stress markers, inflammatory infiltrate and fibrosis. WT mice infected with T. cruzi had significantly more blood and tissue parasite burden compared with infected/GPxtg mice (n = 5-8, p<0.01). Therapeutic vaccination provided >15-fold reduction in blood and tissue parasites in both WT and GPxtg mice. The increase in plasma levels of myeloperoxidase (MPO, 24.7%) and nitrite (iNOS activity, 45%) was associated with myocardial increase in oxidant levels (3-4-fold) and non-responsive antioxidant status in chagasic/WT mice; and these responses were not controlled after vaccination (n = 5-7). The GPxtg mice were better equipped than the WT mice in controlling T. cruzi-induced inflammatory and oxidative stress markers. Extensive myocardial and skeletal tissue inflammation noted in chagasic/WT mice was significantly more compared with chagasic/GPxtg mice (n = 4-6, p<0.05). Vaccination was equally effective in reducing the chronic inflammatory infiltrate in the heart and skeletal tissue of infected WT and GPxtg mice (n = 6, p<0.05). Hypertrophy (increased BNP and ANP mRNA) and fibrosis (increased collagen) of the heart were extensively present in chronically-infected WT and GPxtg mice and notably decreased after therapeutic vaccination. We conclude the therapeutic delivery of D/P vaccine was effective in arresting the chronic parasite persistence and chagasic pathology; and GPx1 over-expression provided additive benefits in reducing the parasite burden, inflammatory/oxidative stress and cardiac remodeling in Chagas disease.
Introduction Chagas disease caused by Trypanosoma cruzi is endemic in Latin America and an emerging disease in the US and other developed countries. The overall prevalence of human T. cruzi infection is at ~16-18 million cases, and ~120 million people are at risk of infection [1]. Several years after infection, 30-40% of the infected individuals develop chronic cardiomyopathy with progressive irreversible tissue destruction, arrhythmia, thromboembolic events, and congestive heart failure [2], suggested to be associated with the pathologic outcomes of parasite persistence, inflammatory infiltrate, and oxidative stress in the heart [3,4]. The current knowledge on protective immunity and vaccine development efforts against T. cruzi [5] and the pathologic role of oxidative stress in Chagas disease [4,6] have recently been reviewed. Briefly, an effective immune response for the control of T. cruzi infection requires activation of CD4 + and CD8 + T cells secreting Th1 cytokines, phagocytic activity of macrophages, a lytic antibody response, and cytotoxic activity of T lymphocytes [6]. Subsequently, several antigens, antigen-delivery vehicles, and adjuvants have been tested to elicit immune protection to T. cruzi in experimental animals (reviewed in [5,7-9]). We employed an unbiased computational approach for the identification of potential vaccine candidates [10] and, through rigorous analysis over a period of several years, demonstrated that three candidate antigens (TcG1, TcG2, TcG4) were maximally relevant for vaccine development [10,11]. Co-delivery of these antigens as a prophylactic vaccine elicited greater immunity and protection from T. cruzi infection than was noted with individual candidate antigens [11-15]. Mice immunized with the DNA-prime/protein-boost vaccine constituted of TcG1, TcG2 and TcG4 were capable of controlling challenge infection, as evidenced by a 90-97% decline in acute parasitemia and tissue parasite burden; subsequently, inflammatory infiltrate and tissue fibrosis were particularly absent in the heart and skeletal muscle of vaccinated mice [13]. We and others have reported that a pro-oxidant milieu is present in the myocardium in chronic Chagas disease [16,17]. Treatment with ROS scavengers or enhancement of the antioxidant capacity was beneficial in preventing myocardial oxidative adducts and hypertrophic responses and preserved the left ventricular function that otherwise was compromised in the chronic disease phase in experimental models [18,19]. A decline in oxidative stress in human Chagas disease patients given antioxidant supplement has also been shown [20,21].
These observations have supported the idea that the sustained oxidative stress contributes to pathologic outcome in Chagas disease. In this study, we have sought to determine if therapeutic delivery of the vaccine along with enhancement of antioxidant status would be an effective treatment of chronic Chagas disease. C57BL/6 (wild-type (WT) and GPx1 transgenic (GPx tg ) mice were infected with T. cruzi, and at the end of acute parasitemia, mice were given two-dose therapeutic vaccine, delivered by DNA-prime protein-boost (D/P) approach. We included in the therapeutic vaccine TcG2 and TcG4 antigens shown to elicit potent anti-parasite antibodies and CD8 + T cell immunity [13,15]. GPx1 is a key enzyme for the cellular and mitochondrial defense against oxidative stress and it uses glutathione to reduce H 2 O 2 and lipid peroxide [22]. Mice were harvested during chronic disease phase, and we examined whether the D/P therapeutic vaccine and/or enhanced antioxidant status were beneficial in controlling parasite persistence, chronic inflammation, and oxidative stress in chagasic disease. Materials and Methods Parasites and mice T. cruzi trypomastigotes (SylvioX10/4 strain) were maintained and propagated by continuous in vitro passage in C2C12 cells. T. cruzi isolate and C2C12 cells were purchased from American Type Culture Collection (ATCC, Manassas VA). Mice over-expressing human glutathione peroxidase 1 (GPx-transgenic [GPx tg ] have previously been described [23,24], and were kindly provided by Dr. R. Ann Sheldon, University of California San Francisco. The GPx tg mice (CD1 background) were back-crossed with C57BL/6 WT mice for more than ten generations to generate GPx tg mice on C57BL/6 genetic background. All animal experiments were performed according to the National Institutes of Health Guide for Care and Use of Experimental Animals. The protocol was approved by the Institutional Animal Care and Use Committee (IACUC) at the University of Texas Medical Branch, Galveston (Permit number: 805029). The cDNAs for TcG2 and TcG4 were cloned in-frame with a C-terminal His-tag into a pET-22b plasmid (Novagen, Gibbstown, NJ). Plasmids were transformed in BL21 (DE3) pLysScompetent cells, and recombinant proteins purified by using the poly-histidine fusion, peptidemetal chelation chromatography system [13]. After purification, proteins were exchanged out of elution buffer by dialysis, and we validated that LPS contamination in the proteins was <1.0 EU/ml determined by toxin sensor limulus amebocyte lysate (LAL) assay kit (GenScript Inc. Piscataway, NJ). All cloned sequences were confirmed by restriction digestion and sequencing at the Biomolecular Core Facility at UTMB. Infection and immunization Mice (GPx tg and WT littermates, 6-8-weeks old) were infected with T. cruzi (10,000 trypomastigotes per mouse, intraperitoneal). Forty-five days later, mice were immunized with the 1 st vaccine dose consisting of the TcG2-and TcG4-encoding plasmids with IL-12-and GM-CSF-expression plasmids (25-μg each plasmid DNA/mouse, intramuscularly). Twenty one days after the primary immunization, mice were given 2 nd vaccine dose constituted of recombinant proteins (TcG2 and TcG4, 25 μg of each protein emulsified in 5 μg saponin/ 100 μl PBS/mouse, intradermally). Mice were harvested at 150 dpi corresponding to chronic phase of disease development, and blood, sera/plasma, and tissue samples stored at 4°C and -20°C. 
Quantitative PCR and Real-time RT-PCR For the measurement of parasite burden, blood DNA was isolated with a QiAamp Blood DNA mini kit (Qiagen, Chatsworth, CA). Skeletal muscle and heart tissue (50 mg) were subjected to proteinase K lysis, and total DNA was purified by phenol/chloroform extraction and ethanol precipitation. Total DNA (50 ng) was used as a template, and real-time PCR performed on an iCycler thermal cycler with SYBR Green Supermix (Bio-Rad, Hercules, CA) and Tc18SrDNAspecific oligonucleotides. Data were normalized to murine GAPDH and fold change in parasite burden (i.e. Tc18SrDNA level) calculated as 2 −ΔCt , where ΔC t represents the C t (infected)-C t (control) [19,25]. Total RNA from tissue samples (50 mg) was extracted using Trizol reagent (Invitrogen, Carlsbad, CA) and reverse transcribed using an iScript cDNA Synthesis Kit (Bio-Rad). Firststrand cDNA was used as a template in a real-time PCR on an iCycler thermal cycler with SYBR Green Supermix (as above), and specific oligonucleotides were used for amplification of atrial natriuretic peptide (ANP) and brain natriuretic peptide (BNP) hypertrophy markers [26]. The threshold cycle (Ct) values for the target mRNAs were normalized to GAPDH mRNA, and the relative expression level of each target gene was calculated as above. Myeloperoxidase (MPO) MPO activity was determined by a dianisidine-H 2 O 2 method, modified for 96-well plates. Briefly, plasma samples (10-μg protein) were added in triplicate to 0.53 mM o-dianisidine dihydrochloride (Sigma, St. Louis, MO) and 0.15 mM H 2 O 2 in 50 mM potassium phosphate buffer (pH 6.0). After incubation for 5 min at room temperature, the reaction was stopped with 30% sodium azide, and the change in absorbance was measured at 460 nm (ε = 11,300 M −1 Ácm −1 ) [28]. Results were expressed as units of MPO/mg protein, whereby 1 unit of MPO was defined as the amount of enzyme degrading 1 n mol H 2 O 2 per min at 25°C. Antioxidant and oxidant levels Heart and skeletal muscle tissue sections (50 mg) were washed with ice-cold Tris-buffered saline. Tissues were suspended in lysis buffer consisting of 50 mM Tris-HCl (pH, 7.5), 150 mM NaCl, 1 mM EDTA, 1 mM EGTA, 1% Nonidet P-40, 2.5 mM KH 2 PO 4 , and protease inhibitor cocktail (tissue: buffer ratio, 1:10, w/v), and homogenized on ice using a Omni tissue homogenizer. Homogenates were centrifuged at 3000 g at 4°C for 10 min to remove cell debris and the homogenates were stored at −80°C [17]. Protein concentration was determined by Bio-Rad Protein Assay. Total antioxidant capacity in tissue lysates was examined by using the Cayman Chemical Antioxidant Assay Kit (Ann Arbor, MI). Briefly, tissue homogenates (10 μg protein) were mixed in triplicate with 2,2'-azino-di-[3-ethylbenzthiazoline sulphonate] (ABTS, 150 μM) / metmyoglobin (2.5 μM) reagent, and the reaction was started with H 2 0 2 (75 μM). The ability of antioxidants in the sample to inhibit the formation of ABTS• + radical was monitored at 405 nm [29]. The capacity of the antioxidants in the sample to prevent ABTS oxidation was compared with that of Trolox, a water-soluble tocopherol analogue, and quantified as molar Trolox equivalents. We measured advanced oxidation protein products (AOPPs) in tissue lysates by spectrophotometry [30]. Briefly, in 96-well plates, plasma samples (1:10 dilution in phosphate-buffered saline [PBS]; 200-μl/well) were mixed in triplicate with 10 μl of 1.16 M potassium iodide and 20 μl of 100% acetic acid. 
The formation of chloramine-T, which absorbs at 340 nm in the presence of potassium iodide, was immediately read using an M2 SpectraMax microplate reader (Molecular Devices, Sunnyvale, CA). A standard curve was prepared using chloramine-T (linear range, 1 to 100 μmol, Sigma), and the AOPP concentration was expressed as μmol chloramine-T equivalents. Statistical analysis All experiments were conducted with triplicate observations per sample (n = 4-8 mice/group), and data are expressed as mean ± standard deviation (SD). All data were analyzed using InStat version 3 (GraphPad, La Jolla, CA), SPSS version 14.0 (SPSS Inc., Chicago, IL), or SigmaPlot version 13.0 (Systat Software, San Jose, CA). Normally distributed data were analyzed by the Student's t test (for comparison of 2 groups) and one-way analysis of variance (ANOVA) with Tukey's post hoc test (for comparison of multiple groups). Data sets that were found not to be normally distributed were analyzed with the Kruskal-Wallis test followed by the Mann-Whitney test to assess the differences between pair-wise comparisons. Significance is presented as #, * p < 0.05 and ##, ** p < 0.01 (*, ** wild-type-versus-GPx tg ; #, ## normal-versus-infected, or infected-versus-infected/vaccinated). Results C57BL/6 wild-type mice infected with 10,000 parasites exhibit peak parasitemia during 14-45 dpi, and develop chronic disease by ~120 dpi [15,25]. We employed this well-established experimental model to examine the therapeutic efficacy of a 2-component D/P vaccine in controlling parasite persistence and chronic disease. Further, we included GPx1 tg mice in the study to determine if enhancing the cellular antioxidant status would alter the host's resistance against chronic Chagas disease. All mice, irrespective of the antioxidant status, were infected by T. cruzi, as evidenced by an increase in blood and tissue parasite burden (Fig 1). Therapeutic delivery of the vaccine resulted in 14.7-fold and 29.5-fold declines in blood and skeletal muscle Tc18SrDNA levels, respectively, in chronically-infected/vaccinated WT mice when compared to that noted in chagasic/non-vaccinated WT mice (Fig 1A & 1B, all # p<0.05-0.01). Interestingly, chronically-infected GPx tg mice exhibited 2.4-2.7-fold and 5.6-8-fold lower levels of blood and skeletal tissue parasite burden, respectively, than was noted in infected/WT mice before as well as after therapeutic vaccine delivery (Fig 1A & 1B, ** p<0.01). A similar decline in heart tissue parasite burden (4.2-fold) was observed in response to the therapeutic vaccine in chronically-infected WT and GPx tg mice (Fig 1C, ## p<0.01). Together, these data suggested that a) the therapeutic D/P vaccine was efficacious in arresting parasite persistence in chagasic mice, and b) GPx tg mice were better equipped than the WT litter-mates in controlling T. cruzi, which resulted in enhanced efficacy of the therapeutic vaccine. Oxidative and inflammatory stresses are known to be of pathologic significance in Chagas disease [6,31]. We evaluated plasma levels of MPO activity and nitrite content and the myocardial antioxidant status to gain a quantitative measure of vaccine efficacy in controlling chronic stress in chagasic mice. The chronically-infected WT mice showed a 24.7% increase in plasma levels of MPO activity that was not significantly controlled after therapeutic vaccine delivery (Fig 2A, p<0.05-0.01). (Fragment of the Fig 1 caption: ... and heart (C) of the infected/vaccinated and infected/non-vaccinated mice were subjected to quantitative real-time PCR amplification for the Tc18SrDNA sequence.
Bar graphs show the Tc18SrDNA level normalized to the murine GAPDH gene. In all figures, data are expressed as mean ± SD, and significance is presented as #, * p < 0.05 and ##, ** p < 0.01 (* wild-type versus GPx tg ; # wild-type versus infected, or infected versus infected/vaccinated).) The GPx tg mice exhibited a lower basal plasma level of MPO activity than was noted in WT mice. The MPO activity was only marginally increased post-infection (12% increase) and normalized to the control level after vaccination in GPx tg mice (Fig 2A). The plasma nitrite levels were increased by 45% and 39% in chronically-infected WT and GPx tg mice, respectively, and remained high after therapeutic vaccination (Fig 2B). The basal levels of oxidants (data not shown) and antioxidants (Fig 2C) were not different in WT and GPx tg mice. In response to chronic infection, WT mice exhibited a 3-4-fold increase in myocardial oxidants, yet no increase in antioxidant levels was observed after infection or after vaccination-dependent control of chronic parasite persistence (Fig 2C). In comparison, GPx tg mice exhibited a significantly lower level of oxidative stress, associated with a >2-fold increase in antioxidant status during chronic infection, when compared to that noted in chagasic WT mice (Fig 2C, ** p<0.01). Therapeutic vaccination of chronically-infected GPx tg mice was particularly effective and resulted in normal basal levels of myocardial oxidant (data not shown) and antioxidant status (Fig 2C). Together, the results presented in Fig 2 suggested that macrophage (NOS2/ • NO) and neutrophil (MPO) activation contribute to the chronic inflammatory state in chagasic mice, and these responses were not subdued by therapeutic vaccine delivery. MPO, produced by activated neutrophils that use H 2 O 2 and chloride to produce reactive hypochlorous acid, was not activated in chronically-infected GPx1 tg mice. Further, therapeutic vaccine-mediated control of persisting T. cruzi did not prevent the antioxidant/oxidant imbalance in chagasic WT mice. However, the enhanced cellular antioxidant capacity in GPx tg mice was beneficial in preventing the myocardial oxidative stress caused by chronic T. cruzi infection. Next, we determined the effects of the therapeutic vaccine (± GPx over-expression) on T. cruzi-induced myocarditis. Histological studies showed pronounced inflammatory infiltrate with diffused inflammatory foci in the heart (score: 1-4, average: 2.3, Fig 3A.c) and skeletal muscle (score: 1-3, average: 2, Fig 4A.c) of chronically-infected WT mice. Myocardial degeneration with enlarged myocytes was particularly evident in chagasic WT mice. Therapeutic vaccination reduced this inflammatory infiltrate, and a lower level of tissue inflammation was noted in chagasic GPx tg mice (Fig 4A.d) that was further controlled after therapeutic vaccination (Fig 3A.f & Fig 4A.f). Together, the data presented in Figs 3 & 4 suggested that GPx tg mice were better equipped than the WT mice in preventing the T. cruzi-induced inflammation of the heart, and therapeutic vaccination was effective in arresting the chronic accumulation of inflammatory infiltrate in the heart in chagasic mice. ROS and inflammatory mediators have been suggested to promote the development of interstitial and perivascular fibrosis, as well as myocardial hypertrophy. Histological staining with Masson's Trichrome of the tissue sections detected a high degree of myocardial remodeling associated with a significant disruption of cardiomyocytes in chronically-infected WT and GPx tg mice (Fig 5). Semi-quantitative measurements of collagen area (blue, 3 regions per heart, n ≥ 4)
suggested that the fibrotic area, specifically around the vasculature, was significantly increased in chagasic WT (1.28-13.06%, average 6.65%) and GPx tg (3.09-16.52%, average 7.42%) mice when compared to that noted in normal mice (compare Fig 5A.a&b with Fig 5A.c&d, p<0.01). The increase in collagen deposition was associated with an up to 50% increase in mRNA levels for the ANP and/or BNP hypertrophy markers in the myocardium of chronically infected WT and GPx tg mice (* p < 0.01). Therapeutic vaccination resulted in a notable decline in the myocardial collagen area in WT (0.29-4.79%, average 1.39%) and GPx tg (0.486-3.658%, average 1.87%) chagasic mice (compare Fig 5A.c&d with Fig 5A.e&f). These data suggested that therapeutic vaccination was beneficial in controlling the chronic evolution of hypertrophic and fibrotic responses in chagasic mice, and GPx1 tg mice may be marginally better at preventing chagasic cardiac remodeling. Discussion Before setting the goal of new therapy development, an important question is whether it will fill gaps in achieving health benefits and whether it will be an economically viable approach. Several studies, including our published reports (reviewed in [5]), have shown that the prophylactic subunit vaccine-mediated control of infection and disease in experimental models generally resembles that noted in the 60-70% of chagasic patients who remain seropositive and maintain residual parasites for their entire lives, but do not develop a clinically symptomatic form of the disease [3]. In terms of treatment, acutely-infected patients, irrespective of their ages, are shown to respond to treatment with the anti-parasite drug benznidazole and be cured, defined by the control of acute parasitemia and myocarditis [32]. However, benznidazole and nifurtimox (anti-trypanosomal drugs) have not shown efficacy in the treatment of the indeterminate status or chagasic cardiomyopathy [33]. The intolerance and unacceptable side effects in 30-50% of the treated individuals [34], mutagenic properties, and contra-indication in pregnancy have restricted the use of these drugs as a standard therapy in chagasic patients [35]. An effective therapeutic vaccine for human Chagas disease could prevent cardiac complications among the estimated 40,000 new cases of Chagas disease that occur in Latin America annually, avert over 600,000 DALYs annually that result from cardiomyopathy and gastrointestinal disease, and prevent 10,000 deaths or more annually [36]. Computer simulation modeling studies suggest that even when the risk of infection is only 1% and the protective efficacy is only 25%, a vaccine would be economically viable and beneficial as long as the cost is US$20 or lower per vaccine dose [37]. The potential savings in terms of the cost of treatment per patient-year are estimated to be US$1028, with lifetime costs averaging US$11,619 per patient [38]. Thus, we believe that vaccination to reduce the frequency and severity of clinical disease by decreasing the extent of the persistent parasite burden is urgently needed to improve the health outcomes in chagasic patients; and continuing efforts towards developing a prophylactic and therapeutic vaccine against T. cruzi infection and Chagas disease are economically justifiable.
Based upon several studies that we have conducted, we believe TcG2 and TcG4 candidate antigens are an excellent choice for therapeutic vaccine development, and a heterologous prime/boost approach for vaccine delivery is highly efficacious against T. cruzi and Chagas disease. The selected candidates TcG2 and TcG4 tested in this study are highly conserved in clinically relevant T. cruzi strains, and expressed (mRNA/protein) in infective trypomastigote and intracellular amastigote stages of T. cruzi [10]. We have shown the delivery of candidate antigens as a DNA-prime/protein-boost or DNA-prime/MVA-boost preventative vaccine was highly effective in generating protective immunity consisting of parasite-and antigen-specific lytic antibodies and type 1 CD8 + cytotoxic T lymphocytes against challenge infection and chronic disease in mice [14,15]. The enhanced efficacy of a heterologous prime/boost approach for vaccine delivery could be because delivery of antigens as DNA vaccine elicits robust T-cell responses, which are critical for the development of T-cell-dependent antibody responses [39,40]. Delivery of vaccine candidates as recombinant proteins is generally more effective at eliciting antibody responses and may directly stimulate antigen-specific memory B cells to differentiate into antibody-secreting cells, resulting in production of high-titer, antigen-specific antibodies [41,42]. Therefore, DNA-prime plus protein-boost is a complementary approach that overcomes each of their respective shortcomings. This, to the best of our knowledge, is the first study demonstrating the therapeutic efficacy of a subunit vaccine delivered by a DNAprime/protein-boost approach in arresting T. cruzi persistence and chagasic disease in a murine model. We vaccinated mice during the indeterminate state when, similar to humans, the acute parasitemia was controlled and clinical disease symptoms have not yet developed. A majority of human patients are generally identified during this phase to be seropositive and carrying T. cruzi infection. Importantly, therapeutic delivery of TcG2/TcG4-encoding DNAprime/protein-boost vaccine arrested the progression to chronic disease phase, evidenced by up to 10-fold control of peripheral and tissue levels of parasite burden as determined by a highly sensitive qPCR approach (Fig 1), and a significant decline in tissue infiltration of inflammatory infiltrate (Figs 3 & 4), and cardiac remodeling (Fig 5) that were otherwise pronounced in WT infected/non-vaccinated chagasic mice. Others have shown that immunotherapy with a Tc24-or TSA1-encoding DNA vaccine immediately after lethal infection led to survival of >70% of mice [43] and when given after acute parasitemia led to reduced cardiac inflammation [43,44] in infected mice. We surmise that therapeutic vaccines targeting T. cruzi parasites provide a relevant approach for reducing the cardiac tissue damage in Chagas disease. We included mice overexpressing GPx in this study to test the idea that these mice will be better equipped in controlling chagasic pathology. Our observation of a low grade inflammation and a decline in the expression of hypertrophic markers and collagen deposition in chronically-infected/GPx tg mice suggests that ROS signals inflammatory responses and hypertrophic remodeling in chagasic myocardium. Indeed, ROS is known to signal inflammatory responses in diverse disease models and both ROS and inflammatory cytokines, e.g., TNF-α, IL-1β, and MCP-1, can promote fibrosis and tissue remodeling. 
The treatment with an antioxidant or enhanced mitochondrial antioxidant capacity is shown to result in a significant decline in myocardial oxidative adducts concurrent with preservation of a cardiac hemodynamic state in chagasic rodents [19,45]. A decline in oxidative stress in human Chagas disease patients given Vitamin A is also shown [20]. These observations, thus, support the idea that sustained oxidative stress is of pathological importance in chagasic cardiomyopathy, and antioxidants should be considered as adjunct therapies along with anti-parasite treatments in arresting chronic chagasic pathology. Surprisingly, we observed a decline in peripheral and tissue parasite burden in GPx tg mice with or without therapeutic vaccine delivery, suggesting that oxidative state of the host determines the quality and efficacy of immune responses in arresting parasite survival and persistence. A recent study has shown a slower increase in blood parasitemia in mice given antioxidant (vitamin C) therapy, though authors noted no significant differences in tissue levels of T. cruzi and inflammatory infiltrate in chronic disease phase [46]. GPx deficiency has been linked to mutation of coxsackie virus B3 viral genome, with replacement of G base (most vulnerable to oxidation) at three of the seven sites [47], and enhanced myocarditis in Gpx1 -/mice, though authors did not conclude enhanced viral burden. Likewise, GPX1 -/mice developed higher levels of influenza A virus induced lung inflammation but it was not associated with increased viral load [48]. Thus, our observation of a better control of parasite burden in GPx tg chagasic mice provide the first indication of a role of oxidative state of the host in eliciting appropriate immune responses. How ROS may alter host immunity in the context of T. cruzi infection and Chagas disease is not known. Besides signaling of NF-κB pathway of cytokine gene expression, ROS is suggested to enhance proliferation and activation of proinflammatory immune cells via activation of glycolytic metabolic pathways. However, over-production of ROS can alter the function of immune cells by direct oxidative damage or oxidative inhibition of transcription factors and cell-surface receptors involved in activation of adaptive immunity. Future studies delineating the kinetics of ROS signaling of metabolic pathways for altering the pro-inflammatory function of immune cells will help design novel strategies for achieving pathogen control [49,50] versus controlling chronic inflammatory state as is suitable for diverse heart diseases. In summary, we have shown the therapeutic utility of a subunit vaccine against T. cruzi demonstrated in WT and GPx tg mice. Our data showed that therapeutic vaccination provided >15-fold reduction in blood and tissue parasites, and subsequently, chronic myocarditis was controlled in mice receiving therapeutic vaccine. The Gpx tg mice were better equipped than the WT mice in arresting the chronic parasite persistence, inflammatory/oxidative stress and cardiac remodeling in Chagas disease.
6,361.6
2015-06-15T00:00:00.000
[ "Medicine", "Biology" ]
On the modelling of self-gravitation for full 3-D global seismic wave propagation SUMMARY We present a new approach to the solution of the Poisson equation present in the coupled gravito-elastic equations of motion for global seismic wave propagation in time domain aiming at the inclusion of the full gravitational response into spectral element solvers. We leverage the Salvus meshing software to include the external domain using adaptive mesh refinement and high order shape mapping. Together with Neumann boundary conditions based on a multipole expansion of the right-hand side this minimizes the number of additional elements needed. Initial conditions for the iterative solution of the Poisson equation based on temporal extrapolation from previous time steps together with a polynomial multigrid method reduce the number of iterations needed for convergence. In summary, this approach reduces the extra cost for simulating full gravity to a similar order as the elastic forces. We demonstrate the efficacy of the proposed method using the displacement from an elastic global wave propagation simulation (decoupled from the Poisson equation) at 200 s dominant period to compute a realistic right-hand side for the Poisson equation. I N T RO D U C T I O N The last decade has witnessed an unprecedented increase in highquality long-period seismic data. This is because of the occurrence of several very large Earthquakes that were recorded on an exponentially growing number of broad-band seismic stations that are installed in very dense networks such as the USArray. These new data have led to an ever increasing resolution in tomography for seismic velocities. Density however, even though it is a key parameter in models of the geodynamical evolution of Earth's mantle as density differences drive mantle flow, remains poorly constrained. While tomography based on full 3-D simulations of the seismic wavefield has become a standard tool in local to continental scale seismology (e.g. Fichtner et al. 2009;Tape et al. 2009;Virieux & Operto 2009;Zhu et al. 2012;Warner et al. 2013;Virieux et al. 2017), this is not yet the case for whole planet models. Early attempts (Lekić & Romanowicz 2011;French & Romanowicz 2014) used approximations for the gradients and only recently the first global model fully based on the adjoint method became available Lei et al. 2020). To the best of our knowledge, no such study exists for the normal mode frequency range. However, it is the long-period range in which density is more likely to be accessible than for other seismic observables (Ishii & Tromp 1999;Koelemeijer et al. 2017). The main reason for this discrepancy is the fact that there is no established method to model the full physics of long-period seismology in 3-D and compute gradients with respect to material properties at reasonable computational cost. Because density information is contained in the normal mode spectra at relatively small amplitudes compared to the seismic velocity structure, an accurate implementation of the underlying physics is required. The widely used splitting functions are based on the self-or group-coupling approximation, which introduces an error on the same order of magnitude as the effect of density perturbations itself (Akbarashrafi et al. 2018). Another difficulty in normal mode coupling theory arises from the linearization of the effects of boundary perturbations such as topography, ellipticity and crustal thickness, though recent work by Al-Attar et al. 
(2018) addresses this issue by means of a particle relabelling transform. In a related work, Maitra & Al-Attar (2019) apply the same spatial mapping to the Poisson equation to derive an efficient and numerically exact solution in aspherical planets. In contrast, time domain methods, such as the spectral element method, usually avoid the full implementation of gravity and use the Cowling approximation instead (e.g. Komatitsch & Tromp 2002a). This approximation assumes that the gravitational potential is constant in time and ignores gravity perturbations caused by the seismic displacement itself. The theory describing the complete gravity physics is, however, well established and demonstrated to work with the spectral element method (Chaljub & Valette 2004). Fig. 1 shows the relative error caused by the Cowling approximation in eigenfrequency or, equivalently, phase velocity for all spheroidal modes up to 15 mHz in PREM (Dziewonski & Anderson 1981) in a dispersion diagram (a) and as a function of frequency (b). As expected, a clear tendency of a decreased error with increased frequency is apparent, and the error is largest for the lowest frequency modes. However, especially at higher frequencies the error also depends on the mode type: the lowest error is seen for the ScS-equivalent modes with very low group velocity at low angular degree. A slightly higher error level is exposed by the fundamental mode Rayleigh waves and their overtones. Most sensitive are the CMB and ICB Stoneley modes as well as the core modes. This pattern can be explained by the displacement eigenfunctions: while vertical displacement across density contrasts directly perturbs the gravitational potential (e.g. Rayleigh and Stoneley modes), the perturbation is smaller for horizontal displacements where the density change is only due to the normal strain (e.g. ScS modes). That said, the ICB Stoneley and core modes are of lower importance in many applications focused on the mantle, as they cannot be easily excited directly by an earthquake in the crust or upper mantle. The Cowling approximation can be considered sufficiently accurate for frequencies above 5-10 mHz in most applications. While not yet common, it is in principle possible to compute long-period spectra using time domain methods (e.g. Nissen-Meyer et al. 2014). Fig. 2 shows amplitude spectra for the vertical component seismogram recorded at the Black Forest Observatory in Germany for the Tohoku-Oki event. The spectra are computed from 32 hr of elastic 3-D time domain spectral element wave propagation and ignore the effects of gravity, which we are addressing in the following. The incremental change from the addition of surface and Moho topography, 3-D velocities and density from S20RTS (Ritsema et al. 1999) is computed as the absolute value of the complex difference of the spectra. Large phase shifts thus cause differences to exceed the 1-D reference spectrum in amplitude. It is apparent that the strongest effect is from surface and Moho topography (including ellipticity), and the effect of lateral velocity variations is only marginally smaller. The effect of density, however, is an order of magnitude smaller, confirming the necessity for accurate modelling. As the dispersion error in time domain simulations accumulates over long propagation distances, correction for the dispersion error caused by the time stepping scheme (e.g. Koene et al. 2018) is an attractive alternative to more accurate time integration (e.g. Nissen-Meyer et al. 2008).
Two main challenges arise in including full self-gravitation in the 3-D spectral element method: while the purely elastic simulations can be formulated in a fully explicit way and only require the computation of matrix-vector products, self-gravitation couples a Poisson equation into the system that needs to be solved in each time step. The cost of solving this system exceeds the cost of the elastic terms by far (Chaljub et al. 2007), rendering this impractical, in particular for inverse problems. Secondly, the spatial domain of the equation extends to the full space, with a boundary condition at infinity that cannot be treated directly by a standard space-discretized method. We address both these issues in an effort towards the inclusion of the full physics for long-period seismology in an efficient spectral element method. In Section 2, we introduce the problem in quantitative terms and detail the steps towards an efficient solution. In Section 3, we then apply these methods to the Poisson problem using the wavefield from a purely elastic simulation as a test case. Problem statement The Poisson equation defining the perturbed gravitational potential ψ is given by (e.g. Chaljub & Valette 2004) ∇²ψ = −4πG ∇ · (ρu) inside the Earth and ∇²ψ = 0 outside (1), where G is the gravitational constant, ρ is the mass density, u is the seismic displacement and the Earth is denoted by ⊕ e . ψ vanishes at infinite distance from the Earth and is continuous everywhere including at Earth's discontinuities and surface. The normal derivative of ψ is discontinuous at these interfaces and the jump is controlled by the normal component of ρu: [∂ψ/∂n] = −4πG [ρu · n̂] (2), where [·] denotes the jump across the interface, or equivalently, the flux ∇ψ · n̂ + 4πG ρu · n̂ is continuous. In the fluid parts, the seismic displacement can be computed from the displacement potential(s). The right-hand side (RHS) of eq. (1) has two contributions: (1) the change of density due to compression or dilation of the medium and (2) the displacement of material with heterogeneous density, including motion perpendicular to internal discontinuities and the free surface. Boundary conditions One difficulty for the numerical solution of Poisson's equation is that the domain is the full R³ with a homogeneous Dirichlet boundary condition that needs to be applied at infinity. Chaljub & Valette (2004) approach this problem using a Dirichlet-to-Neumann operator to derive a Robin-type boundary condition at a spherical boundary large enough to compensate for Earth's non-spherical shape. This method requires computation of the spherical harmonic transform of the potential at this boundary in each iteration to couple it to the analytical solution in the outer domain. As an alternative, Gharti & Tromp (2017) and Gharti et al. (2018) propose to use infinite elements to virtually extend the domain by mapping one face of the finite elements to infinity. The efficacy of this method is based on the fact that the potential decays with distance and this can be accommodated in the space of test functions. To maintain the convergence order, a Gauss-Radau quadrature rule is then used in the radial direction, which avoids the evaluation of the basis polynomials on the outer boundary. In contrast, we use the same quadrature in all elements both in the interior and exterior domain. Here we argue for a different approach: it is the low-order terms of the solution that are most sensitive to the outer boundary condition, as they decay with low powers of 1/r. Higher-order terms decay quickly with distance from Earth's surface. We thus use the analytical solution given by the multipole expansion to derive the Neumann boundary condition at a finite radius.
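Before this boundary treatment is stated formally in the next paragraphs, a minimal numerical sketch may help to fix ideas. The snippet below is not the implementation used in this work (which is built on Salvus and its SEM quadrature); it simply evaluates truncated multipole moments of the source term in eq. (1) at a set of quadrature points, and the corresponding radial derivative of the exterior series on a spherical boundary, assuming the standard 4π/(2l + 1)-normalized complex spherical-harmonic expansion. The arrays xyz, w and div_rho_u are assumed inputs (quadrature points, weights and values of ∇ · (ρu)), and no point is assumed to lie exactly at the origin.

```python
import numpy as np
from scipy.special import sph_harm

G = 6.674e-11  # gravitational constant (SI units)

def multipole_moments(xyz, w, div_rho_u, l_max):
    """Spherical multipole moments q_lm of the source term G * div(rho u).

    xyz       : (N, 3) coordinates of quadrature points inside the Earth
    w         : (N,) quadrature weights (volume elements)
    div_rho_u : (N,) values of div(rho u) at the quadrature points
    Note: for high degrees, non-dimensionalize r by the Earth radius to
    avoid overflow of r**l.
    """
    r = np.linalg.norm(xyz, axis=1)
    theta = np.arctan2(xyz[:, 1], xyz[:, 0])             # azimuth
    phi = np.arccos(np.clip(xyz[:, 2] / r, -1.0, 1.0))   # colatitude
    q = {}
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            ylm = sph_harm(m, l, theta, phi)  # scipy order: (m, l, azimuth, colatitude)
            q[(l, m)] = G * np.sum(w * div_rho_u * np.conj(ylm) * r ** l)
    return q

def neumann_data(q, R, theta_b, phi_b, l_max):
    """Radial derivative of the truncated exterior multipole series at
    radius R, evaluated at boundary points (theta_b: azimuth, phi_b: colatitude)."""
    g = np.zeros_like(theta_b, dtype=complex)
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            ylm = sph_harm(m, l, theta_b, phi_b)
            g -= (l + 1) * (4.0 * np.pi / (2 * l + 1)) * q[(l, m)] * ylm / R ** (l + 2)
    return g.real  # the potential is real; imaginary parts cancel for a real source
```

The returned values form the Neumann datum applied on the outer boundary; only the moments up to the truncation degree l max discussed below need to be retained, and they need to be recomputed only once per time step.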
Our approach is similar to the approach by Chaljub & Valette (2004), but instead of computing the spherical harmonic expansion over the surface of the domain in each iteration of the conjugate gradient solver, it requires computation of the expansion of the RHS of eq. (1) over the volume of the Earth only once per time step. The Neumann boundary condition only determines the potential up to an additive constant in contrast to a Dirichlet condition at infinity; however, the resulting forces in the wave equation only require the gradient of the potential and are insensitive to such a constant. While in theory a Dirichlet condition may be preferred, it requires the splitting of the solution into a homogeneous and an inhomogeneous solution, elimination of the degrees of freedom on the surface from the linear system or introduction of a penalty term on the boundary. As we found no convergence issues with the Neumann condition, it is the preferred solution due to its simplicity. The Neumann boundary condition can be written as ∂ψ/∂n = g on ∂⊕ (3), where ⊕ with ⊕ e ⊂ ⊕ ⊂ R³ denotes the finite computational domain. As the RHS of eq. (1) is compactly supported within the Earth, the exterior field can be expanded into the multipole series in powers of 1/r, ψ(r, θ, φ) = Σ_l Σ_m [4π/(2l + 1)] q_lm Y_lm (θ, φ) / r^(l+1) (4), where Y_lm are the complex valued spherical harmonic functions and q_lm are the spherical multipole moments, q_lm = G ∫_⊕e Y*_lm (θ′, φ′) r′^l ∇′ · (ρu) dV′ (5). The radial derivative needed to obtain the Neumann boundary condition according to eq. (3), if we assume ∂⊕ to be spherical with radius R, can then be calculated directly: g = ∂ψ/∂r |_(r=R) = −Σ_l Σ_m (l + 1) [4π/(2l + 1)] q_lm Y_lm (θ, φ) / R^(l+2) (6). For the problem to be well posed, the Neumann boundary condition needs to fulfil the compatibility condition, that is we need to demonstrate that ∮_∂⊕ g dS = ∫_⊕ ∇²ψ dV = −4πG ∫_⊕e ∇ · (ρu) dV (7). As the surface integral over spherical harmonics vanishes for all terms other than the monopole term with l = 0, it can be shown with the help of eqs (5) and (6) that g indeed satisfies the compatibility condition and ∮_∂⊕ g dS = −4πG ∫_⊕e ∇ · (ρu) dV = 0 (8), where the volume integral vanishes by the divergence theorem because ρu is compactly supported within ⊕ e . In practice, the multipole expansion needs to be truncated to a maximum degree l max . Fig. 3 shows the maximum angular degree of the exterior solution as a function of the radius, based on a threshold ε applied to the terms of the multipole expansion (4): terms whose relative contribution falls below ε are truncated. This relation can be used to determine both the l max for the boundary condition given the domain size and the lateral element size in the exterior domain. Similar to the truncation of the series we employ here, infinite elements also incorporate terms up to a maximum power of 1/r only and the exponent is equal to the polynomial order of the Lagrange basis in the radial direction in the layer of infinite elements (eq. 25-27, Gharti et al. 2018), and this order can be chosen independently of the polynomial order in the volume elements. With a polynomial order of 2, however, as used in the numerical examples by Gharti & Tromp (2017) and Gharti et al. (2018), a significant buffer layer of normal spectral elements between the Earth's surface and the infinite elements is hence needed to achieve the required accuracy for the perturbed gravitational potential at all angular degrees present in the solution. Unfortunately, the authors are not aware of a quantitative test of infinite elements for high angular degrees, but assume that the required size of the domain and hence the total number of elements needed in the mesh will be similar to the one needed with the truncation of the Neumann boundary condition at the same angular degree. Discretization In order to apply the spectral-element method (SEM, Patera 1984; Chaljub et al. 2007) to the Poisson equation, it needs to be written in the weak form. This is achieved by multiplication of eq.
(1) with a test function ϕ and integration by parts: The surface integral over ∂ on the RHS and surface integrals over internal interfaces vanish due to the jump condition eq. (2). The jump condition is hence readily and implicitly included in the weak form, which is equivalent to the strong form of the equation provided that it holds for all test functions ϕ from an appropriately chosen set. The last term can be identified with the Neumann boundary condition: We then subdivide the domain into hexahedral elements and use the standard SEM with Gauss quadrature on the Gauss-Lobatto-Legendre (GLL) points with Lagrangian interpolation functions to write the eq. (10) in matrix form: Here, K is the stiffness matrix, ψ is the vector of degrees of freedom and the RHS f contains both the density perturbation and the Neumann boundary condition. For this study, we explicitly compute K and assemble it as a sparse matrix. While most codes that aim at optimal performance avoid this assembly and use matrix-free implementations, we chose the matrix-based approach for its simplicity, allowing for an implementation in Python and relying on available libraries to achieve acceptable performance in the solver for the purpose of this paper. For a production code that would couple directly to the seismic simulation, a large part of the implementation is equivalent to acoustic wave propagation that can hence be reused. Here we only discuss performance in terms of number of iterations and not in terms of actual runtime, so that this less efficient implementation can be ignored. Meshing Cubed sphere meshing (Ronchi et al. 1996) in combination with deformed regularly gridded cube and lateral refinements by doubling or tripling layers has been established as the standard method in numerical global seismic wave propagation (e.g. Komatitsch & Tromp 2002b;Chaljub & Valette 2004). This approach can naturally be extended to also include the outer domain as shown in Fig. 4. While doubling layers in theory allow to better approximate the desired element size based on the S-wave length, we find tripling layers to be easier to locate. This is because fewer refinements are need to achieve the same change in element size and tripling layers only span one element in radial direction compared to three elements for the doubling on the full sphere due to the continuity conditions at the boundaries of the cubed sphere chunks (see fig. 4 of Komatitsch & Tromp 2002b). The resulting meshes have very similar numbers of elements for long-period meshes. Furthermore, the smallest elements that determine the time step due to the Courant-Friedrichs-Lewy (CFL) condition are always located in the crust in this application, as the radial element size is constrained by the crustal thickness to have an element boundary conforming with the Moho. Also note that we employ elements that approximate the spherical shape with polynomials at the same order as the test functions so that much fewer elements are required to achieve acceptable accuracy (van Driel et al. 2020, figs 8 and 9). While the element size within Earth is constrained by the local S-wave length and we assume this to be a conservative choice for the perturbed potential, there is no such constraint on the element size in the exterior domain. As can be seen from Fig. 3, the lateral complexity of the perturbed potential decays rapidly with increasing distance from Earth's surface and this can be accounted for by coarsening the mesh using a tripling layer in the same way as in the mantle. 
Turning back to the exterior mesh: the potential decreases monotonically as a function of the radius in the exterior domain. While the decay can occur over short distances just above the free surface, the complexity is extremely low at larger radii. This suggests that the radial element size needs to be smaller close to Earth's surface and can then increase rapidly with distance. To accommodate these constraints, we extend the mesh with a first layer of approximately isotropic elements, where the size is given by the S-wave length in the crust. This buffer layer is followed by a coarsening layer that increases the lateral element size by a factor of three. Finally, we add a configurable number of elements that increase in size with distance, where the radial nodes are computed as r_i = r_0 + h_0 · dr^i. Here r_0 and h_0 are the radius and lateral element size of the preceding layer of elements and dr is a parameter to tune the element growth rate to the complexity of the solution. For the range of numbers of elements in radial direction (up to about 10) and values of dr between 1.3 and 3, this keeps the aspect ratio of the elements in an acceptable range to avoid potential ill-conditioning of the system. We will determine the necessary domain size as well as an appropriate element shape empirically in Section 3.1. With this meshing approach the number of elements in the outer domain is only a fraction of the total number of elements (21 per cent for the mesh shown in Fig. 4, where the exterior domain has a radius 14.5 times the Earth radius). One third of these are located in the first layer above Earth's surface that is needed in any case to include topography in the approach by Chaljub & Valette (2004) or the infinite elements in the approach by Gharti & Tromp (2017). This shows that the potential gains from using the infinite element method instead of adaptive mesh refinement and the Neumann boundary condition as discussed here are relatively small, even if a higher polynomial order in the radial direction was used in the infinite elements to avoid the additional buffer layers discussed above. Initial solution The number of iterations needed to solve the Poisson equation can be significantly reduced if a suitable initial solution is known. In the case of self-gravitation, the RHS is computed from the time-dependent displacement, which only changes marginally between time steps. For global simulations at long period, this is even more pronounced because the crust is much thinner than the wavelength. The explicit time-stepping used in the spectral element method is then limited by the CFL condition to values smaller than approximately 0.5 s, with the exact value depending on the crustal velocity and thickness model as well as the surface topography resolution. On the other hand, solutions from preceding time steps are only known to finite numerical accuracy and this may render extrapolation unstable for high order schemes. Here, we compare three extrapolation methods: first, constant extrapolation uses the solution of the previous time step as the initial solution, ψ_0 = ψ(t − Δt); second, linear extrapolation can be written as ψ_0 = 2ψ(t − Δt) − ψ(t − 2Δt); and finally, a quadratic extrapolation is given by ψ_0 = 3ψ(t − Δt) − 3ψ(t − 2Δt) + ψ(t − 3Δt), where Δt is the time step. We evaluate these in a numerical experiment in Section 3.1. Multigrid solver The convergence rate of iterative solvers depends on the scale length of the solution, with a higher convergence rate typically associated with the shorter wavelength component of the solution (e.g. Wesseling 1992).
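Before turning to the multigrid solver, the exterior-mesh growth rule given above can be made concrete with a short sketch. It assumes the reading r_i = r_0 + h_0 · dr^i for the radial nodes and a lateral element size that simply scales with radius; the buffer and coarsening layers are ignored, so the numbers are purely illustrative and not the mesh parameters of Fig. 4.

    import numpy as np

    def exterior_radial_nodes(r0, h0, dr, n_layers):
        """Radial element boundaries in the exterior domain following
        r_i = r0 + h0 * dr**i, so the element thickness grows roughly
        geometrically with the growth parameter dr."""
        i = np.arange(1, n_layers + 1)
        return np.concatenate(([r0], r0 + h0 * dr ** i))

    def radial_aspect_ratios(radii, h0, r0):
        """Approximate radial/lateral aspect ratio of each layer, assuming
        the lateral element size scales with radius as h(r) = h0 * r / r0."""
        thickness = np.diff(radii)
        mid = 0.5 * (radii[:-1] + radii[1:])
        lateral = h0 * mid / r0
        return thickness / lateral

    # example: Earth radius, ~1600 km lateral elements at the surface, dr = 1.4
    r = exterior_radial_nodes(r0=6371e3, h0=1600e3, dr=1.4, n_layers=8)
    print(r / 6371e3)                        # outer radii in Earth radii
    print(radial_aspect_ratios(r, 1600e3, 6371e3))

Such a check makes it easy to verify that a chosen dr keeps the aspect ratios within the acceptable range mentioned above before the mesh is actually built.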
By combining multiple discretizations that vary in their spatial resolution, this behaviour can be exploited to speed up the overall convergence. In the case discussed here, the element size is dictated by the S-wave velocity and the solution to the Poisson equation is dominated by longer scale components, which suggests that a multigrid approach may improve the convergence significantly. As we work with fully unstructured meshes, no straightforward coarsening of the mesh exists, in contrast to the hierarchical meshes often used (e.g. Bank et al. 1988; May & Knepley 2011). However, the polynomial basis we employ typically has a polynomial degree p = 4. Thus, bases with lower polynomial orders on the same mesh can then be used to create a coarser spatial discretization (e.g. Craig & Zienkiewicz 1985; Foresti et al. 1989; Helenbrook et al. 2003; Bello-Maldonado & Fischer 2019). For two polynomial spaces P_m and P_n with different orders m and n, the projection of an element ϕ_m from P_m into P_n is defined as the solution to the minimization problem ϕ_n = argmin over ϕ ∈ P_n of ‖ϕ − ϕ_m‖. This is a strictly convex problem with a unique solution satisfying (ϕ_n, ϕ) = (ϕ_m, ϕ) for all ϕ ∈ P_n, where (·, ·) denotes the L2 inner product on the domain. Because the lower-order polynomial space is a subset of the higher-order space, the mapping to higher orders is exact and we simply obtain ϕ_n = ϕ_m for n ≥ m. However, in case n < m, the projection requires the solution of the linear system defined by eq. (17) with the Gram matrix for basis vectors ϕ_i^n, ϕ_j^n ∈ P_n and an RHS b defined by the inner products of the order-n basis with ϕ_m. Here, it is important to use exact integrals and not the Gauss-Lobatto quadrature rule as used by the SEM because the latter is only exact up to order 2n − 1 and thus not suitable for computing eq. (18). In the following we require the projection to lower orders only for the RHS of eq. (1). The RHS is allowed to be discontinuous at element boundaries, so we can use the locally optimal projection in each element. Furthermore, because the 3-D SEM basis is formed by the tensor product of the 1-D basis, the 3-D mapping matrices are obtained by solving the linear system mentioned above in 1-D and applying the resulting projection to the three dimensions subsequently. In the following, we refer to the mapping of the 3-D SEM basis from order m to n as P_mn. In the application discussed here, a good initial solution is available from the extrapolation from previous time steps, see the previous section. Hence, the residual only needs to be reduced by a small factor and this allows us to apply a simplified multigrid approach with N stages, going through each stage exactly once. We denote the full resolution as stage 0 and indicate the stage with an upper index on all variables. The Poisson equation to be solved can then be written in the form K^0 ψ^0 = f^0. Assuming that a good initial solution ψ_0^0 is available, the residual is defined as r^0 = f^0 − K^0 ψ_0^0. Due to the linearity of the equation, the residual can be used as an RHS to solve for the correction to the initial solution. To improve the convergence rate, we first solve this equation at lower polynomial order and use the result as an initial solution in the next higher order stage. This process is iterated until reaching the highest resolution stage.
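The element-local projection used for this restriction can be sketched in 1-D, following the definition of the Gram matrix and the exactly integrated right-hand side (eqs 17 and 18). The function names and the way the GLL nodes are obtained are illustrative choices, not the authors' code; the 3-D operator P_mn would then be applied dimension by dimension as described above.

    import numpy as np
    from numpy.polynomial import legendre as leg

    def gll_nodes(p):
        """GLL points on [-1, 1] for polynomial order p (p + 1 nodes):
        the endpoints plus the roots of the derivative of the Legendre
        polynomial of degree p."""
        inner = leg.Legendre.basis(p).deriv().roots().real
        return np.concatenate(([-1.0], np.sort(inner), [1.0]))

    def lagrange_eval(nodes, j, x):
        """Evaluate the j-th Lagrange cardinal polynomial on `nodes` at x."""
        vals = np.ones_like(x)
        for k, xk in enumerate(nodes):
            if k != j:
                vals = vals * (x - xk) / (nodes[j] - xk)
        return vals

    def projection_matrix(m, n):
        """1-D restriction operator mapping nodal values on the order-m GLL
        basis to the L2-optimal representation on the order-n basis (n < m).
        The integrals use Gauss-Legendre quadrature that is exact for the
        polynomial products involved, unlike the GLL rule of the SEM."""
        xm, xn = gll_nodes(m), gll_nodes(n)
        xq, wq = leg.leggauss(m + 1)          # exact up to degree 2m + 1
        phi_n = np.array([lagrange_eval(xn, j, xq) for j in range(n + 1)])
        phi_m = np.array([lagrange_eval(xm, j, xq) for j in range(m + 1)])
        G = (phi_n * wq) @ phi_n.T            # Gram matrix of the order-n basis
        B = (phi_n * wq) @ phi_m.T            # mixed inner products (the RHS b)
        return np.linalg.solve(G, B)

    P41 = projection_matrix(4, 1)             # restriction from p = 4 to p = 1
    # the corresponding prolongation is exact interpolation of the low-order
    # polynomial at the order-4 GLL nodes, since the spaces are nested

Because the RHS may be discontinuous across element boundaries, this operator can be applied element by element exactly as described in the text.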
In each stage n of the multigrid starting at the coarsest discretization we hence first compute the RHS by restricting the residual to the polynomial order of this stage: Note that the RHS does not need to be continuous across element boundaries so the restriction to the lower order is local to each element. Then, the solution is smoothed using several conjugate gradient iterations to reduce the error on where the subscript on ψ indicates on which stage and corresponding RHS it was computed and the superscript indicates the discretization stage. While on the coarsest stage we use a zero initial solution, the initial solution for the other stages is computed by interpolation: Ultimately, the final solution at the full resolution is obtained by correcting the initial solution ψ 0 0 accordingly: Solver verification To verify the correct implementation of our numerical solver, we compute the gravitational potential for the 1-D PREM density and compare it to the semi-analytic solution obtained by numerical integration over the radius of the planet (e.g. Dahlen & Tromp 1998). While the homogeneous Dirichlet condition at infinity was assumed to require a large computational domain in previous work (Gharti & Tromp 2017), we apply the Neumann boundary condition for the monopole term (the only non-zero term in the multipole expansion in this 1-D example) directly at Earth's surface. While this offsets the potential by a constant relative to a solution with homogeneous Dirichlet condition at infinity, the absolute value of the potential has no physical significance. The resulting gravitational force is given by the gradient of the potential, rendering the force invariant under the addition of a constant to the potential. To measure the quality of the numerical solution, we hence subtract the mean value from the difference to the analytical solution, see Fig. 5. For a finite element based method, the more difficult challenge in this test case is the accurate representation of the spherical shape. While analytical mappings between the reference coordinates in each element and the physical coordinates could be used in this exactly spherical case (compare e.g. Chaljub & Valette 2004;Nissen-Meyer et al. 2007), we prefer a more generic approach using polynomial approximations to be able to include topography at a later stage. The accuracy of approximating the sphere by polynomials on the GLL points with relatively few elements is demonstrated by van Driel et al. (2020). Here, we use the same polynomial order for the shape representation as for the test functions, that is we use isoparametric elements. This choice is the reason that for polynomial orders n ≥ 2 the convergence rate seen in Fig. 5(d) is approximately n + 2: not only is the solution represented more accurately, but additionally the accuracy in representing the spherical shape improves with increased order and decreased element size. In the first-order shape representation n = 1 that is commonly used in seismology, when placing the nodes of the elements on the planet's surface, the volume of the sphere is systematically underestimated. This explains the particularly slow convergence. On the other hand, the meshes in the Poisson problem are designed for seismic wave propagation primarily, and assuming a crustal Swave velocity of 3.2 km s −1 , the lateral element size at the surface is h = 1600 km when using two elements per wavelength at 1 mHz. 
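Returning to the simplified multigrid scheme of Section 2.6, the single pass through the stages can be organized as in the following sketch. The stage operators (stiffness matrices, element-local restriction of the residual and prolongation by interpolation) are passed in as placeholders, and the smoother is a preconditioned CG such as the jacobi_pcg sketch above; none of these names come from the paper.

    import numpy as np

    def pmultigrid_single_pass(stages, f, psi_init, smooth):
        """One pass of the simplified p-multigrid from the coarsest stage to
        the finest. `stages` is ordered fine -> coarse: stages[0] is the full
        order-p discretization, stages[-1] the coarsest. Each stage is a dict
        with keys
            'K'        : stiffness operator (sparse matrix) of that stage,
            'restrict' : callable mapping the finest-stage residual to this
                         stage (identity on stage 0),
            'prolong'  : callable interpolating this stage's solution to the
                         next finer stage (exact, since the spaces are nested),
            'rtol'     : relative residual at which smoothing stops.
        `smooth(K, b, x0, rtol)` returns (solution, iterations)."""
        # residual of the extrapolated initial solution on the finest stage
        r0 = f - stages[0]['K'] @ psi_init
        correction = None
        for s in range(len(stages) - 1, -1, -1):      # coarse -> fine
            b = stages[s]['restrict'](r0)             # element-local restriction
            if correction is None:                    # coarsest stage: zero start
                x0 = np.zeros_like(b)
            else:                                     # interpolate previous stage
                x0 = stages[s + 1]['prolong'](correction)
            correction, _ = smooth(stages[s]['K'], b, x0, stages[s]['rtol'])
        # correct the initial solution with the finest-stage result
        return psi_init + correction

With higher initial residuals the same routine could simply be called repeatedly (cycling through the stages several times), which is the extension hinted at in the results discussion below.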
Given this relatively coarse lateral element size, using n = 4 for the shape approximation is a conservative choice to avoid errors in the potential, and n = 2 is likely sufficient for meshes designed for shorter periods. APPLICATION TO SEISMIC WAVES To verify our approach, we consider the Poisson problem where the RHS is computed from a purely elastic seismic wave propagation ignoring the coupling between the gravitational potential and the seismic displacement. Although self-gravitation has a significant effect on the longest period modes, we consider this to be a realistic test case to evaluate the performance of the Poisson solver separately. A snapshot from such a simulation is shown in Fig. 6. The seismic simulation is based on the mesh shown in Fig. 4, designed to resolve S waves at 200 s with two elements per wavelength in the PREM velocity model, and uses the Salvus wave-propagation software package (Afanasiev et al. 2019). This results in a total of 105K elements, 21 per cent of which are in the exterior domain. The source is the centroid solution of the Tohoku Oki earthquake with a half-duration of 100 s and the snapshot is taken 900 s after the quake. With a time step of 0.34 s governed by the stability condition for elements in the crust, the computational time was 16 s on two Nvidia Titan X GPUs. In the fluid part of the core, the displacement shown in Fig. 6(A) is computed as the gradient of the displacement potential times the density. From the displacement, we compute the RHS according to eq. (1) and then solve the discrete system (eq. 12) using the conjugate gradient method with a diagonal Jacobi pre-conditioner and homogeneous Neumann conditions for simplicity. We use the same mesh (within the volume of the Earth) and polynomial order as in the elastic simulation, resulting in a total of approximately 800 iterations to achieve a residual of 10^-5, which was determined by Chaljub & Valette (2004) as sufficiently accurate. Fig. 6 shows both the fields as well as their time derivatives. While the potential is dominated by the static displacement close to the source, the time derivative also shows significant contributions from Rayleigh and P waves. Love and S waves have a very small contribution to the RHS of eq. (1) as they have no associated divergence and only contribute by translating material with a density gradient. However, at the free surface and the core-mantle boundary, P to S converted phases are clearly visible both in the RHS and the resulting potential. These observations confirm the expectations discussed in Fig. 1. In the following subsections, we use several time steps around this snapshot to evaluate the efficiency of the numerical approach described in Section 2. Boundary condition verification To verify the Neumann boundary conditions introduced in Section 2.2 and choose appropriate values for the exterior domain radius and maximum degree l_max in the multipole expansion of the RHS, we perform a convergence test, see Fig. 7. The reference solution was calculated with l_max = 16 and a radius of r = 38.1 · 6371 km, so that the error due to the finite domain size can be neglected. The element growth parameter is constant, dr = 1.4, for all cases (see the following subsection and Fig. 8). This test shows empirically that the L2 error within the volume of the Earth, computed relative to the reference solution, converges with the radius of the computational domain r as r^-(2 l_max + 3).
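As a concrete illustration of the boundary condition of Section 2.2 that this test verifies, the multipole moments of the RHS and the resulting Neumann data on a spherical outer boundary could be computed along the following lines. The quadrature points and weights are assumed to be given, and any constant prefactors from eq. (1) (e.g. a 4πG/(2l+1)-type normalization) are omitted, so signs and scalings here are assumptions rather than the paper's exact convention.

    import numpy as np
    from scipy.special import sph_harm

    def multipole_moments(rho, r, theta, phi, dV, l_max):
        """Spherical multipole moments of a density perturbation given at
        quadrature points (r, theta=colatitude, phi=azimuth) with weights dV:
            q_lm = sum_k rho_k * r_k**l * conj(Y_lm(theta_k, phi_k)) * dV_k
        Normalization factors from eq. (1) are omitted / assumed absorbed."""
        q = {}
        for l in range(l_max + 1):
            for m in range(-l, l + 1):
                # scipy's sph_harm takes (m, l, azimuth, polar angle)
                ylm = sph_harm(m, l, phi, theta)
                q[(l, m)] = np.sum(rho * r**l * np.conj(ylm) * dV)
        return q

    def neumann_data(q, r_b, theta_b, phi_b, l_max):
        """Radial derivative of the truncated exterior multipole expansion
            phi_ext = sum_lm q_lm * Y_lm / r**(l + 1)
        evaluated on a spherical boundary of radius r_b (the Neumann data g)."""
        g = np.zeros_like(theta_b, dtype=complex)
        for l in range(l_max + 1):
            for m in range(-l, l + 1):
                ylm = sph_harm(m, l, phi_b, theta_b)
                g += -(l + 1) * q[(l, m)] * ylm / r_b**(l + 2)
        return g.real

The cost of this expansion scales with the square of l_max, which is the scaling argument used in the trade-off between domain radius and expansion degree discussed next.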
Due to this fast convergence of the error with the domain radius, relatively low values for both r and l_max lead to sufficient accuracy of the solution. A final choice depends on the trade-off between using more elements or more expansion coefficients, but r ≈ 3 r_Earth and l_max = 4 appear to be reasonable values. Higher values of l_max were suggested by Chaljub & Valette (2004), presumably because in their numerical tests they assumed a spherical Earth and applied the boundary condition directly on the free surface. As can be seen from Fig. 7, it is likely more efficient to extend the domain to some degree rather than just increasing l_max: the cost of the multipole expansion scales with l_max^2 and the number of elements scales subproportionally with r due to the increasing radial element size even without a coarsening layer. Fig. 7 also gives an indication of the required lateral resolution as a function of the radius, confirming our assumption that lateral coarsening can be applied relatively close to Earth's surface. Exterior mesh verification The remaining question particularly concerns the radial element size in the exterior domain. Fig. 8 shows the L2 error computed within Earth relative to a reference solution computed with dr = 1.2. In all cases, the exterior domain had 8 elements in radial direction and l_max = 16, to ensure that the boundary condition does not contribute to the error. The result suggests that dr should be chosen slightly below a value of 2 to achieve an accuracy of 10^-5, that is, a bit less aggressive than what was used to generate the mesh in Fig. 4. Accuracy of initial solutions To evaluate the accuracy of the three different extrapolation schemes introduced in the section on the initial solution, we compute the perturbed potential corresponding to four consecutive time steps to a residual of 10^-5 and then compare the extrapolation from the preceding one, two or three steps, respectively, to the numerical solution of the last one. The time between two steps in this case is 0.34 s using a single crustal layer with a thickness of 25 km. For applications with crustal thickness variations the CFL condition will dictate a smaller time step, which will improve the extrapolation relative to the results discussed here. The error level of the extrapolated potential is about three orders of magnitude below the field itself for the constant extrapolation and one order of magnitude lower for linear and quadratic extrapolation. Quadratic extrapolation is most accurate for body waves at depth and surface waves, but less so for body waves at the free surface. The near-source region is likely dominated by numerical noise introduced by the point source approximation at this accuracy level and hence does not behave physically in the extrapolation. To quantify this visual impression and evaluate the quality of the extrapolated field as initial solution, we compute the residual for 20 consecutive time steps preceding the one discussed above. The cumulative distribution of these residuals is shown in Fig. 10 for the linear and quadratic schemes. The residuals for the constant extrapolation are beyond the scale of the figure and take values of approximately 1.2 × 10^-3, almost two orders of magnitude above the other two schemes. The figure shows the result both for a single discretization at fourth order (p = 4) and the multigrid scheme (mg).
The quadratic extrapolation appears to perform slightly better than the linear scheme in both cases, however probably less than expected from the visual impression in Fig. 9. Importantly, the efficacy of the extrapolation also depends on the spatial scheme: the performance of the linear scheme is much more predictable for the multigrid approach, with significantly less variance of the residual over the iterations. Efficiency of MG The final crucial component of our approach is the polynomial multigrid method introduced in Section 2.6. Fig. 11 shows how the four different stages contribute to the final solution starting from a linearly extrapolated initial solution. In the stages using polynomial orders p = 1 to p = 3, we empirically chose the convergence criterion to be a relative residual of 0.01, 0.05 and 0.05, solving for the update of the initial solution using the residual as the RHS. In the last step, we converge to a residual of 10^-5 in terms of the full potential. While all the long-scale structure of the solution, in particular in the exterior domain, is readily present in the first stage, the higher orders are needed for a detailed representation of the body and surface waves. Also, the amplitude of the solution in each stage is significantly reduced for the higher-order stages by up to an order of magnitude. To quantify the performance gained by using multiple stages, we compare the number of iterations required using different time extrapolation schemes as well as the multigrid and the fourth order system. For the multigrid scheme, we consider both the number of iterations at the highest resolution, as it dominates the computational cost, and a weighted sum of the iterations in all stages. The weighting is estimated from the leading order scaling in the number of FLOPs in a matrix-free implementation of the stiffness terms as would be used in a production code, that is p^4. Fig. 12 shows cumulative distributions of these numbers of iterations; in all cases, the linear extrapolation performs better than both the quadratic and the constant one. Additionally, as already seen in the residual, the quadratic extrapolation again exhibits the largest variance, making the linear extrapolation the best choice. The median value for the multigrid method with linear extrapolation is 5 iterations in the highest order and 8.2 iterations in the weighted sum. This confirms about a factor 5 improvement in the performance by using multigrid in comparison to the same extrapolation used with fourth order only, which is comparable to previous results (e.g. Barker & Kolev 2021). The linear extrapolation itself leads to a factor 3 reduction in cost in comparison to the constant extrapolation for the fourth order approach and to a factor 10 for the multigrid approach. This, however, mostly suggests that the simplified multigrid approach we use here with a single cycle through the different orders is not a good choice if the initial solution has a higher residual. With higher initial residuals, we expect cycling through the stages multiple times to be more efficient. As a reference, for a zero initial solution and using fourth order, the number of iterations is approximately 700 to 900. Chaljub et al. (2007) report a range of 50-100 iterations, though using wavefields with significantly lower frequency (dominant frequency of 1 mHz versus 5 mHz used here, corresponding to 443K versus 6.8M degrees of freedom for the potential).
They also use a spatially variable polynomial order from 2 to 10 to improve the condition number of the matrix and do not specify the initial solution, so that a comparison to their numbers requires careful interpretation. (Fig. 12 caption: cumulative distribution of the number of iterations needed to reach a residual of 10^-5 for constant, linear and quadratic time extrapolation, using the four-stage multigrid method as well as just the highest order (4); for the multigrid method, both the number of iterations at the highest order and a weighted sum over all stages based on a FLOP count estimate are shown.) CONCLUSIONS AND OUTLOOK In summary, linear extrapolation together with the simplified multigrid approach reduces the cost of solving the Poisson equation for the perturbed gravitational potential significantly. As the elastic stiffness terms in the wave propagation are about an order of magnitude more expensive than the stiffness term in the Poisson equation, the reduction to a cost equivalent to less than 10 iterations means that the cost for the Poisson solver is on the same order of magnitude as the elastic terms and no longer dominates the numerical cost. All computations in this paper, both for the wave propagation and for the Poisson equation, were run on a workstation. Future work includes the implementation of the presented method into production software with direct coupling between the gravitational and elastic forces as well as its verification against established solutions. In order to apply this method to full-waveform-based tomography at normal mode frequencies, classical methods of extracting information from the seismograms (Laske & Widmer-Schnidrig 2015) need to be revised and adapted for this framework and the corresponding adjoint sources need to be derived. The checkpointing approach used for the computation of gradients in the adjoint method needs to be verified for simulations with very high numbers of time steps and potentially extended to use multiple levels (Walther & Griewank 2004). In any case, the work presented here constitutes an important step towards the inclusion of full self-gravitation in routine calculations of long-period seismograms. ACKNOWLEDGEMENTS We would like to thank editor Andrew Valentine, reviewer David Al-Attar and one anonymous reviewer for their thoughtful and constructive comments. This work was supported by grants from the Swiss National Supercomputing Centre (CSCS) under project ID s922, the European Research Council (ERC) under the EU's Horizon 2020 Framework Programme (grant No. 714069) and the Swiss National Science Foundation (SNF projects 172508 'Mapping the internal structure of Mars' and 197369 'Towards a self-consistent Earth model from multi-scale joint inversion: Revealing Earth's mantle elasticity and density with seismic full-waveform inversion, tidal tomography and homogenization'). DATA AVAILABILITY There are no new data associated with this paper.
The recent development of soft x-ray interference lithography in SSRF This paper introduces the recent progress in methodologies and their related applications based on the soft x-ray interference lithography beamline in the Shanghai synchrotron radiation facility. Dual-beam, multibeam interference lithography and Talbot lithography have been adopted as basic methods in the beamline. To improve the experimental performance, a precise real-time vibration evaluation system has been established; and the lithography stability has been greatly improved. In order to meet the demands for higher resolution and practical application, novel experimental methods have been developed, such as high-order diffraction interference exposure, high-aspect-ratio and large-area stitching exposure, and parallel direct writing achromatic Talbot lithography. As of now, a 25 nm half-pitch pattern has been obtained; and a cm2 exposure area has been achieved in practical samples. The above methods have been applied to extreme ultraviolet photoresist evaluation, photonic crystal and surface plasmonic effect research, and so on. Introduction Soft x-ray interference lithography (XIL) is a novel micro/ nanoprocessing technique that utilizes the interference fringes of two or more coherent x-ray beams to expose the photoresist and obtain a periodic nanopattern [1][2][3]. The technique is based primarily on the third-generation synchrotron sources, of which high throughput and good coherence provide the basis for high performance XIL techniques. In the field of micro/nanomanufacturing, XIL is a unique parallel fabrication technique independent of mainstream extreme manufacturing techniques. It focuses on the manufacture of strictly periodic patterns. Moreover, large areas of high-resolution nanostructures can be achieved efficiently. Compared with traditional high resolution fabrication techniques [4][5][6][7], such as electron beam lithography (EBL), focused ion beam (FIB), and nanoimprint lithography (NIL), XIL has the characteristics of strict periodicity, large-area fabrication, large depth of focus, and no need for substrate conduction [8,9]. In terms of resolution, the XIL technique's theoretical limit of resolution can reach less than 4 nm [10]. Compared with EBL and FIB, at the same resolution level, the throughput of XIL is much higher. Using XIL, the fabrication ofcm 2 nanostructures can be achieved within several hours. The service life of the mask and the defect control effect of the exposed sample are better than NIL because of the noncontact exposure method. In extreme ultraviolet lithography (EUVL) technology, the XIL technique is used to evaluate the performance of novel extreme ultraviolet (EUV) photoresists. Soft XIL with an EUV synchrotron radiation source (EUV-IL) is a powerful tool for EUV photoresist evaluation under working conditions. Other extreme manufacturing techniques that do not use EUV light sources cannot replace its role. Moreover, on a commercial EUV lithography machine worth hundreds of millions of dollars, no manufacturer can tolerate the risk of potential contamination of the equipment by the novel photoresist. Therefore, the EUV-IL technique is currently the only feasible EUV photoresist evaluation tool. 
In the Shanghai synchrotron radiation facility (SSRF), the XIL beamline (BL08U1B) applies a high-brilliance undulator source and an achromatic diffraction scheme to obtain high quality interference fringes and practical throughput for a variety of scientific research and industrial applications [11]. Similarly, in the XIL-II beamline of the Swiss Light Source and the BL-9 beamline of the SUBARU light source, XIL techniques have been employed in research of EUV photoresists, nanomagnetics, block copolymers, and silicon nanodevices [12][13][14][15][16][17][18][19][20]. The researchers in this field have ongoing efforts to develop XIL by obtaining higher exposure pattern resolution, efficiently fabricating large-area exposure patterns, and developing exposure methods that facilitate subsequent pattern transfer [21][22][23][24]. In this article, we introduce our contributions to the above three objectives, including exposure tool optimization, new fabrication techniques, and the related applications based on these techniques. The vibration condition control of the exposure system A large number of experiments have proven that suppressing the vibration of the exposure system is the key to the success of the soft XIL experiment [25,26]. At the XIL beamline, a laser interferometer (attocube systems AG, FPS3010) was installed on the exposure system to monitor the displacement between the mask stage and the wafer stage in the horizontal and vertical directions, thus realizing a precise real-time vibration evaluation system, as shown in figure 1. Based on this evaluation system, we determined that we could monitor and further improve the stability of the exposure system and the surrounding environment and select appropriate experimental conditions. In the current experiment, the relative position fluctuation of the mask and the sample can be controlled below 2.5 nm root mean square during a ten-minute exposure, which provides a good experimental guarantee for the subsequent XIL experiment. Interference lithography with high-order diffraction beams The stable exposure of high-resolution patterns is the most important technical capability of the XIL beamline [27]. The XIL mask plays a key role in achieving high-quality pattern exposure. Figure 2 illustrates the basic structure of the mask grating and the principle of the dual-grating interference. The incident light is diffracted by the gratings in the mask, and the diffraction angle θ satisfies sin θ = Nλ / P_g, where λ is the wavelength of the incident light, P_g is the grating pitch and N is the diffraction order; the pitch of the interference fringe formed by the two diffracted beams is then P = λ / (2 sin θ) = P_g / (2N). As shown by equation (3), the fringe pitch is independent of the wavelength and is only determined by the grating pitch and the diffraction order. A smaller fringe pitch can be obtained by interference with higher order diffraction beams. At present, interference lithography based on first-order diffraction has been widely used in EUV photoresist evaluation, while interference lithography based on high-order diffraction is not widely used due to the low efficiency of the mask grating, as the height and duty cycle of the grating are difficult to precisely control in the grating fabrication process [28]. Traditionally, XIL masks are first defined by EBL and then fabricated by corresponding post-processing.
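A small numerical illustration of these relations is given below; the pitch and order are chosen for illustration only and are not actual beamline or mask parameters.

    import numpy as np

    PLANCK_EV_NM = 1239.842  # h*c in eV*nm

    def fringe_half_pitch(grating_pitch_nm, order, photon_energy_ev=92.5):
        """Interference fringe half-pitch for a dual-grating XIL mask.
        Uses sin(theta) = N * lambda / P_g and fringe pitch P = P_g / (2 N),
        which is independent of the wavelength, as noted in the text."""
        wavelength_nm = PLANCK_EV_NM / photon_energy_ev   # ~13.4 nm at 92.5 eV
        sin_theta = order * wavelength_nm / grating_pitch_nm
        if sin_theta >= 1.0:
            raise ValueError("diffraction order is evanescent for this pitch")
        pitch = grating_pitch_nm / (2.0 * order)
        return pitch / 2.0, np.degrees(np.arcsin(sin_theta))

    # e.g. a 100 nm-pitch mask grating used in second order gives a 12.5 nm
    # half-pitch pattern (illustrative values only)
    print(fringe_half_pitch(100.0, 2))

The same relation also makes explicit why higher-order diffraction halves (or better) the achievable half-pitch for a given mask grating, provided the grating diffracts efficiently into that order.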
In the XIL-II beamline at the Swiss Light Source, a negative photoresist hydrogen silsesquioxane was adopted to form gratings directly through the EBL and obtain high-resolution exposed patterns [29], eliminating the need for cumbersome postprocessing. A set of traditional processes has been developed for the XIL masks in the SSRF XIL beamline. Furthermore, in order to meet the small pitch requirement of EUV photoresist evaluation, we have developed novel fabrication techniques that do not rely on post-processing to obtain a good quality XIL mask of smaller pitches with less line edge roughness. At present, two methods of fabricating high-order diffraction gratings have been developed. One is to fabricate gratings on the EBL-defined photoresist patterns by means of atomic layer deposition (ALD) of titanium dioxide (TiO 2 ) to improve the grating's diffraction efficiency, and the other is to directly fabricate the grating by the metal-oriented deposition technique without relying on EBL [30]. As shown in figure 2, nanostructures with a half-pitch of about 25 nm have been obtained by the above gratings; and these masks can work stably for a long service time under high irradiation conditions. Large-area stitching exposure method capable of deep exposure To satisfy the large-area sample requirements for practical micro/nanodevices and for some scientific tests [31][32][33], it is necessary to develop a novel XIL technique which can efficiently expose the large-area patterns with high aspect ratios. The patterns obtained by the usual XIL are determined by the pitch and arrangement of the mask gratings, and a single exposure area is determined by those of the grating. In order to accurately stitch the single-exposure fields and break the limitation of the mask grating area, we propose observing the position of the interference zone and zero-order spots on-line by a small amount of high-order harmonic x-rays before exposure and then accurately blocking the zero-order beam by an order-sorting aperture. Thus a large-area stitching technique has been developed in the SSRF XIL beamline [34]. The principle and experimental setup are shown in figure 3. Using this technique, the zero-order spots around the interference pattern were eliminated; and thus a large-area pattern was obtained with micron precision stitching. In order to achieve high aspect ratio patterns, an incident light was selected with a photon energy 140 eV, higher than the usual 92.5 eV. Thus, a set of masks with permalloy blocking layers was developed to deal with the higher photon energy [35]. Patterns with an aspect ratio of up to 3 at 100 nm half-pitch were achieved using this fabrication process. Combining the above two methods, we further obtained a large-area (1.44 cm 2 ), nanoperiodic structure with an aspect ratio of 3 [36] and achieved pattern transfer using different processes, such as etching and lift-off. In order to meet the demands of different applications, we have successfully fabricated nanostructures on different substrates, such as silicon (Si), yttrium aluminum garnet (YAG), and silicon dioxide (SiO 2 ). Parallel direct writing achromatic Talbot lithography (ATL) Periodic structures with complex cells can be widely applied to various research fields, such as the polarization adjustment of light, large-area magnetic recording elements, and full absorption modulation of broadband light [37][38][39]. However, the normal XIL technique can only fabricate the nanoarrays with simple lines or dots. 
For this reason, a parallel direct writing achromatic Talbot lithography (DW-ATL) was developed for the complex structures in the above applications [40,41]. The principle and preliminary experimental results of DW-ATL are shown in figure 4. The light spot arrays obtained by ATL are employed as the basic exposure units. Periodic patterns with complex cells can be achieved by scanning the wafer stage with nanometer precision. The scanning accuracy is controlled through the laser interferometer feedback during the exposure. Periodic nanostructures with a resolution 60 nm were fabricated, with shortline, L-shaped, and tri-shaped cells. EUV photoresist evaluation EUVL is a candidate for large-scale integrated circuits moving toward 7 nm and below process nodes, and EUV photoresist is considered to be one of the most important key techniques for EUVL [42,43]. Due to the high cost of EUVL tools and the high risk of damage to EUV tools' internal environment, it is unrealistic to use EUVL tools for EUV photoresist evaluation during development. EUV interference lithography based on a 92.5 eV synchrotron radiation source is recognized as the most effective EUV photoresist evaluation tool [44][45][46][47], which can greatly accelerate the development of EUV photoresists. The XIL beamline at SSRF has established a complete evaluation platform for EUV photoresist sensitivity, resolution, line edge roughness, and outgassing analysis [48]. A large amount of EUV photoresist research work has been conducted based on the platform, and figure 5 shows the EUV photoresist test results [49]. Scintillator extraction efficiency enhancement A scintillator plays an important role in radiation detection systems and has various applications in high-energy physics experiments and nuclear medicine imaging [50][51][52]. The efficiency of a scintillator-based detector is highly dependent on luminescence conversion efficiency and light extraction efficiency. At present, a large-area stitching exposure XIL technique is applied to fabricate nanoscale periodic structures on scintillators to improve light extraction efficiency. For example, the photonic crystal structures fabricated on the surface of a BGO scintillator using a combination of XIL with ALD [53], as shown in figure 6, can achieve significantly enhanced light extraction based on the outcoupling of the evanescent field with the photonic crystal structures. A 95.1% enhancement was obtained in the present study. A high refractive index due to the conformal TiO 2 layer enables the efficient coupling and thus the enhanced extraction efficiency with a relatively low height of the structured layer. This method is very promising for light extraction of devices in which a large-area surface is required for practical applications. Optical filtering Surface plasmons (SP) have become the focus of various fields due to their extraordinary ability to manipulate light beyond the optical diffraction limits [54][55][56]. As shown in figures 7 and 8, the large area stitching exposure XIL technique has played an important role in SP-based plasma color filter research [57,58]. Conclusions In the field of micro/nanomanufacturing, XIL is a unique parallel fabrication technique independent of mainstream extreme manufacturing techniques. Large areas of highresolution, strictly periodic nanostructures can be achieved efficiently by this technique. As the only feasible tool for novel EUV photoresist evaluation, the EUV-IL technique plays an important role in EUVL technology. 
A precise realtime vibration evaluation system has been established in the SSRF XIL beamline to suppress the vibration sources and ensure the stability of the exposure process. On this basis, novel XIL techniques have been developed, such as highorder diffraction XIL exposure, deep XIL exposure, largearea stitching exposure, and parallel DW-ATL. At present, photoresist patterns of 25 nm half-pitch, cm 2 exposure area, and an aspect ratio of 3 have been achieved. Based on these new methods, many applications have been carried out, such as EUV photoresist performance testing, photonic crystal preparation, and SP effect devices.
Mood Sensitive Stocks and Sustainable Cross-Sectional Returns During the COVID-19 Pandemic: An Analysis of Day of the Week Effect in the Chinese A-Share Market This study examines two stock market anomalies and provides strong evidence of the day-of-the-week effect in the Chinese A-share market during the COVID-19 pandemic. Specifically, we examined the Quality minus Junk (QMJ) strategy return on Monday and Friday. (Quality stocks here mean portfolio deciles that earn higher excess returns, as historical evidence suggests that less distressed/safer stocks earn higher excess returns; Dichev, 1998.) The QMJ factor is similar to the division of speculative and non-speculative stocks described by Birru (2018). Our findings provide evidence that the QMJ strategy gains negative returns on Fridays for both anomalies because the junk side is sensitive to an elevated mood and, thus, performs better than the quality side of portfolios on Friday. Our findings are also consistent with the theory of investor sentiment, which asserts that investors are more optimistic when their mood is elevated, and generally an individual's mood is better on Friday than on other days of the week. Therefore, speculative stocks earned higher sustainable stock returns during the period of higher volatility in the Chinese market due to COVID-19. In essence, new evidence emerges for a strategy inclined to invest in speculative stocks on Fridays during the COVID-19 pandemic to gain sustainable excess returns in the Chinese A-share market. INTRODUCTION A significant boost in economic uncertainty was observed after the outbreak of the coronavirus, and the consequences of the virus led the world toward a turbulent economy (Baker et al., 2020). To control the spread of the pandemic, the Chinese government has enacted strict measures, such as lockdown, that may have had some negative impact on the economy (Fareed et al., 2020; Shehzad et al., 2020). A substantial impact was observed on the economic activities of the country and an economic slowdown was expected. Capital markets are an important part of economic development and, therefore, the Chinese stock markets were also badly influenced by this pandemic. US stock markets have also observed their highest ever levels of volatility (Baker et al., 2020). Though many studies have observed the economic policies (Huang et al., 2020) and economic consequences (Chen et al., 2020) of the pandemic in China, observing the impact of COVID-19 on speculative stocks and its effect on particular days of the week is still an unexplored area of research. Therefore, this study presents an examination of the effect of specific days of the week on young and distress anomalies during the time of COVID-19 in light of the sentiment hypothesis. Birru (2018) provided a striking pattern of anomaly returns in the U.S. stock market. Specifically, Birru's (2018) findings explained that the speculative leg of portfolios gains the highest (lowest) returns on Mondays when they fall on the long (short) leg of the anomalies. Speculative stocks are young or distressed stocks or those that are not easy to value, which are suited for speculation and highly affected by investor sentiment. Psychology research predicts that individuals' moods comparatively improve on Fridays and worsen on Mondays. Hence, the speculative side of stocks tends to outperform (underperform) due to increases (decreases) in moods on Fridays (Mondays). This scenario leads to the day-of-the-week effect in cross-sectional stock returns.
It is difficult for academicians and practitioners to understand all factors that should be considered in relation to the theory of asset pricing (Elton et al., 1998). This phenomenon has been addressed by considering individual and market rationality in the Efficient Market Hypothesis and Capital Assets Pricing Model existing finance theories (Rasheed et al., 2016). An ambiguous, uncertain, volatile, and complex investment environment triggers investors to speculate outcomes and challenge the existing market efficiency assumptions of rational and well-informed investors. In such situations, investors are constrained from cognitive resources and timing effects. The limited capacity to process information results in poor judgment and decisionmaking. This situation is addressed in behavioral finance research under the three distinct pillars of sentiment, biases, and heuristics (Hirshleifer, 2001). Behavioral finance researchers critique traditional finance theories and argue in favor of the psychological aspect of investors as a core determinant of asset pricing. Sentiments, which are part of human psychology, affect investors' decision-making both individually and collectively (Peterson, 2016). Therefore, researchers have explored the role of individual and market sentiments in the mispricing of stocks. Research on investor sentiment and its role in explaining cross-sectional variations led researchers to develop divergent viewpoints (Bormann, 2013). Various scholarly research outcomes discuss the empirical explanations of asset pricing bubbles as external factors of capital markets, like macroeconomic factors (Ying et al., 2019(Ying et al., , 2020. These mainly originate from regulatory reforms made to correct the flaws in investment regulation norms. However, factors affecting the asset pricing bubbles cannot solely be observed through the endogenous factors of investor behaviors that lead to excitement or losing confidence over financial markets (Öztürk et al., 2020). This study contributes to the literature: it presents a specific explanation of stocks as being sensitive to sentiment, and it provides evidence of cross-sectional variations of returns on particular days by linking a speculative leg of anomalies with mood theory from psychology literature. This study also provides different investment strategies to earn excess returns across the days of the week by investing in speculative stocks. Economists have always argued that the decision-making process cannot be properly analyzed in the absence of knowledge about the psychological aspect of an individual's thought process, as an individual's thinking is shaped by the coexistence of both internal and external sentimental factors (Hume and Hendel, 1955). Psychology research goes one step further than statistics and traditional economics in its treatment of the process of decision-making under conditions of uncertainty by centering on the nature of the stimulus rather than simply focusing on the outcomes resulting from that stimulus. In order to objectively examine a stimulus, every event that has happened should be considered equally likely to occur, and extreme variations in statistics should be considered outliers to the stimulus. However, this assumption puts finance scholars on the blind path of behavioral responses under varied stimuli by avoiding the psychological facets of information processing and learning (Estes and Burke, 1953). 
Sentiments facilitate the process of decision-making by influencing information processing and the learning phase (Davidson et al., 2000). The process of learning can be a gradual phase or sometimes can be a sudden occurrence that is dependent upon information processing from a variety of perspectives (Kahneman, 2011). Therefore, using the QMJ factor developed by Asness et al. (2019), this study explored the cross-sectional variations in speculative anomalies on different days of the week in the Chinese A-share market during the COVID-19 pandemic. The following are our motivations for conducting this study. First, the speculative characteristics of stocks exist at the same time as the peculiarity between junk and quality stocks, according to Asness et al. (2019). The findings of Asness et al. (2019) revealed that data from multiple countries indicated that quality stocks outperformed junk stocks. Quality stocks are those that are easy to value and safe to invest in, whereas junk stocks are those that are not easy to value, are accompanied by investment risks, and are suitable for speculation or have speculative characteristics. Meanwhile, the process of identifying the day-of-the-week effect in QMJ stocks resembles the analysis conducted by Birru (2018). Second, Birru's (2018) analysis is appropriate for the U.S. market but is an unresolved question for other markets, like China's, that have unique structures. With the economic development of China, rapid growth was observed in the Chinese stock market, raising it to the distinction of being the second largest stock market in the world. However, regardless of its size, trading patterns in the Chinese stock market are most chaotic amongst emerging markets, with higher volatility and highest and lowest cycles determined by individual investors and huge interference from the government. Chinese firms hold maximum individual shareholding patterns and less institutional shareholding; therefore, it is important to verify this relationship in the Chinese market because individuals are more sensitive to investor sentiment than institutions are. Our third motivation for conducting this investigation is that our findings will contribute an additional explanation for cross-sectional variations to the literature. Asness et al. (2019) provided the explanation that cross-sectional variations occur due to mispricing; our results elaborate on this explanation by connecting the QMJ factor with investor sentiment and making this explanation more specific than those elicited through prior research. Lastly, it is very important to determine that how speculative stocks behave during the pandemic and post-pandemic time period because speculative stocks are prone to sentiment. Therefore, it is expected that speculative stocks should perform better on Fridays due to higher mood and investor sentiment should also be high because stock markets contain higher volatility due to the effect of the pandemic. MATERIALS AND METHODS In this section, we discuss the data sources and methodology used to calculate portfolios based on anomalies for which the speculative leg exists in the junk factor. Data were obtained from the Wind Database, which is the largest financial database for Chinese data. The time period covered in our analysis is from February 2020 to September 2020, and we targeted the A-share market of China and collected data from both the Shanghai and Shenzhen stock exchanges. We chose the post-January timeframe for data collection for the following reasons. 
First, we needed to consider the pandemic time period. On January 23, just a day before the Chinese New Year, Wuhan was sealed and the government suspended all Chinese New Year festival activities. Additionally, the government suspended all public gatherings and all schools were closed. The pandemic also affected economic activities and an economic slowdown was observed in the Chinese economy. On February 3rd, the Chinese stock markets reopened and faced a decline in the index on the first day. Therefore, the pandemic lockdown due to COVID-19 is the main event window of the stock market and it is important for researchers to test the impact of COVID-19 on the speculative stocks on specific days of the week. Although different trading principles were introduced years ago, most companies did not know how to implement them, which created several discrepancies at the time. Another reason we selected the data during COVID-19 is that we imposed the condition to take a minimum number of observations for each portfolio cut point. The analysis of the QMJ effect on different days of the week is similar to the analysis of the anomalies that are sensitive to investor sentiment referred to in Birru's (2018) work. In this paper, the QMJ factor is used to analyze the distress anomaly and stocks that are young. The distress anomaly was measured following Ohlson's (1980) O-Score method, and the age anomaly was measured using the method from Baker and Wurgler, 2006) [Sections "Anomaly 1: Distress (O-Score)" and "Anomaly 2: Age" elaborates on both anomalies]. By following these methods for each anomaly, 10 equal deciles were generated for the purpose of analyzing portfolios. Further, we considered only the first and tenth deciles of each anomaly because quality stocks are those that fall in the tenth decile and junk stocks fall in the first decile of each anomaly. Portfolios based on both anomalies are rebalanced annually. Therefore, the first decile of all anomalies was predicted to perform better, as the speculative leg exists in the junk side of the anomaly. Moreover, a new strategy emerged with the prediction of higher QMJ returns on Mondays compared to Fridays because of the existence of the speculative leg in the junk side of the anomalies. Details of the anomalies and predicted returns are provided in Table 1. Anomaly 1: Distress (O-Score) Financially distressed stocks are sensitive to sentiment and highly affected by sentiment because higher distressed stocks are more risky and riskier stocks are prone to sentiment. The variation in sentiment will have a contemporary effect on returns and highly affect the prices of stocks that are not easy to value or very subjective to value or are difficult to arbitrage (Baker and Wurgler, 2006). According to Dichev (1998), more highly distressed firms outperform stocks that are not distressed. Therefore, speculative characteristics fall in the junk side of the anomaly, so the predicted QMJ returns should be greater on Mondays than on Fridays. The speculative leg should perform better on Fridays and have a reversal effect on the QMJ factor. Ohlson's (1980) O-Score model is measured as Here in Eq. 1, TA denotes Total Assets, while TLTA represents the leverage ratio and comprises the book value of Total Debt to Total Assets. WCTA is the ratio of Total Working Capital divided by Total Assets. CLCA refers to the inverse of the liquidity ratio and is measured by Current Liabilities over Current Assets. 
If the value of Total Debt is greater than the value of Total Assets, then ENEG will be 1, and if it is less than Total Assets, then it will be 0. NITA is the ratio of Net Income to Total Assets and is measured as Net Income divided by Total Assets. FUTL is the ratio of funds received through operations divided by Total Liabilities. INTWO will be 0 if the Net Income is positive in either of the 2 previous years, and it will be 1 if the Net Income is negative for the last 2 consecutive years. CHIN is estimated by (NI_t − NI_t−1) / (|NI_t| + |NI_t−1|), where NI denotes Net Income. Anomaly 2: Age Stocks that are comparatively young are sensitive to sentiment. Moreover, young stocks will be most affected by sentiment. Historic evidence suggests that old stocks earn higher returns than young stocks. For example, in the long run, Initial Public Offerings tend to underperform (Ritter, 1991). Therefore, older stocks are classified as quality stocks and young stocks are classified as junk stocks according to the QMJ strategy. (Table 1 describes the division of samples for anomalies and speculative strategies: it indicates the division of anomalies into quality and junk stocks, indicates the expected speculative leg for each anomaly with a brief explanation of the speculative rationale, and reports the expected returns for the speculative leg on a particular day.) For the age anomaly, the speculative leg is in the junk side of the strategy and predicts that QMJ strategy returns will be higher for Mondays than for Fridays. Age is calculated using the Baker and Wurgler (2006) method, where age is the number of months since the firm appeared in the Chinese stock market. For the age anomaly, portfolios are rebalanced annually at the end of December. RESULTS For both anomalies, the speculative leg falls in the junk side of the anomaly. Fridays alone account for 109 basis points of excess returns for young stocks relative to older stocks, and distressed stocks likewise earn 66 basis points of excess returns relative to non-distressed stocks on Fridays. Therefore, findings are consistent for both anomalies with the theory that Fridays should see higher returns for the speculative leg of the anomaly than for the non-speculative leg. Our results are also consistent when we compare return patterns across days. Fridays earn a higher magnitude of returns for both anomalies than Mondays. Furthermore, our findings are consistent with the theory of investor sentiment and the psychology literature, which finds that junk stocks perform better on Fridays because of happier moods and worse on Mondays due to lower moods. Additionally, the findings are also consistent with the explanation that speculative stocks perform better on Fridays during a risky time period, and Chinese stock markets faced higher risk during the COVID-19 period. Table 3 again verified our results that Mondays see higher QMJ returns than Fridays because speculative characteristics fall in the junk side of the anomalies. Therefore, the Friday minus Monday strategy returns contain negative alpha values, providing a robust explanation for investor sentiment on a particular day. The results for the age and O-Score anomalies are consistent with the mood theory that Mondays gain higher QMJ strategy returns in comparison to Fridays because moods are higher on Fridays than on Mondays and speculative characteristics fall in the junk side of the anomalies.
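For illustration, the O-Score and the decile construction described above could be computed as in the following sketch. The coefficients are those of Ohlson's (1980) original model; whether this study uses these exact values or re-estimated ones is an assumption, and the column names and data layout are placeholders rather than the authors' dataset.

    import pandas as pd

    def o_score(df):
        """O-Score per firm-year. Expected columns: log_ta (log of Total
        Assets), tlta, wcta, clca, eneg, nita, futl, intwo, chin, as defined
        in the text. Coefficients are Ohlson's (1980) originals (assumed)."""
        return (-1.32
                - 0.407 * df["log_ta"]
                + 6.03 * df["tlta"]
                - 1.43 * df["wcta"]
                + 0.0757 * df["clca"]
                - 1.72 * df["eneg"]
                - 2.37 * df["nita"]
                - 1.83 * df["futl"]
                + 0.285 * df["intwo"]
                - 0.521 * df["chin"])

    def qmj_deciles(quality_signal, n=10):
        """Assign decile ranks from a 'quality' signal (larger = higher
        quality), so that decile 10 holds quality stocks and decile 1 the
        junk/speculative ones, matching the construction in the text.
        For the distress anomaly the signal would be -O-Score; for the age
        anomaly, the listing age in months."""
        ranked = quality_signal.rank(method="first")
        return pd.qcut(ranked, n, labels=range(1, n + 1))

    def qmj_day_of_week(returns, deciles):
        """Daily QMJ return (decile 10 minus decile 1) averaged by weekday.
        `returns` is a DataFrame of daily stock returns (DatetimeIndex x firms)
        and `deciles` is indexed by the same firm identifiers as the columns."""
        quality = returns.loc[:, deciles == 10].mean(axis=1)
        junk = returns.loc[:, deciles == 1].mean(axis=1)
        qmj = quality - junk
        return qmj.groupby(qmj.index.dayofweek).mean()  # 0 = Monday, 4 = Friday

A negative Friday value and a higher Monday value of this day-of-week average would correspond to the pattern reported for the QMJ strategy in the tables discussed below.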
Friday Minus Monday Here, the results also confirm the explanation of the day-of-the-week effect for young and distress anomalies that asserts that junk stocks earn higher returns on Fridays due to the existence of the speculative leg in the junk side of the anomaly, and the speculative leg earns higher returns with higher moods during the COVID-19 period. Asymmetric Returns of Junk Side Panels A, B, and C of Table 4 present the results of the difference in returns in the junk side of Fridays and Mondays. The theory of mispricing based on investor sentiment must provide asymmetric results when returns of the speculative leg are compared for both days. The sentiment-based explanation must be endorsable with the returns trend of the speculative leg. Hence, Panels A, B, and C display only the junk side of the anomalies based on young stocks and distress stocks. The strategy returns of the junk side portfolios for both days are separately given in Panels A and B, while Friday junk minus Monday junk weekly portfolio excess returns are presented in Panel C of Table 4. The results are robust with the explanation that returns should be asymmetric when the speculative leg on Fridays is tested against the speculative leg on Mondays, and our results reconfirm that young and distress stocks are sensitive to investor sentiment. Focusing on the alpha values of both anomalies, the results suggest that the explanation for the difference in returns in the junk side for young stocks and distress stocks (Panel C) is consistent with the sentiment hypothesis, based on investor mood that the day-of-the-week effect prevails in cross-sectional returns for the speculative leg. The literature predicts that the junk side of the anomalies contain the speculative leg; therefore, we only focus on it to verify that returns are asymmetric within the speculative leg across days. Macroeconomic News Impact It is rarely possible for a systematic pattern to be found in the announcement of better or worse news on a specific day of the week, but it is likely that, due to these announcements, a systematic pattern of cross-section returns will be generated against these macro announcements. It is also possible that some anomalies are more sensitive to these announcements than others. Hence, we collected the data of the Producer Price Index and Consumer Price Index on a weekly basis following the methodology of Savor and Wilson (2013). We focused on the dates when these announcements were publicly declared. Panels A, B, and C of Supplementary Table S1 provide results of the strategy returns for all anomalies. The specific return dates of these announcements are omitted from the portfolios. Our results are again robust with the existing explanation for both days that the prevailing variations are due to mood fluctuations, and macroeconomic news does not significantly impact the anomaly returns. Firm Specific News Impact There is a possibility that existing cross-sectional variations in the anomalies are due to non-random announcements of organization-specific news. Hence, it is necessary for the validation of this argument that non-speculative and speculative stocks are clearly differentiated with regard to good news and bad news. To verify this explanation, we collected the firm-specific data of dividend announcements and earnings announcements and followed the methodology of Dellavigna and Pollet (2009), which recommends eliminating the earning announcement dates. 
We used a conservative approach for this analysis and omitted 2 days before and 2 days after the declaration. This approach is useful because ignoring these 5 days does not disadvantage any particular day of the week, since eliminating a full 5-day window keeps the days of the week in balance. Panels A, B, and C of Supplementary Table S2 present the anomaly returns after excluding the data from the announcement dates. Our results remain robust with respect to the existing variations, indicating that organization-specific news did not significantly change the magnitude or nature of the existing relationship. A more direct reading of our results is that the cross-sectional variations were not observed as an impact of organization-specific news. DISCUSSION The psychological explanation of greater mood elevation on Fridays than on other days of the week predicts that the portfolios' returns should be higher on Fridays than on Mondays. The sentiment hypothesis explains that investor sentiment is higher during times of elevated moods for the speculative leg of the anomalies, and psychological research provides several significant findings indicating that higher moods are experienced on Fridays, while lower moods are experienced on Mondays. Therefore, portfolio returns based on both anomalies for the speculative leg should be greater for Fridays than for Mondays. The speculative leg exists in the junk side of both anomalies. The findings are robust for all the asset pricing models, and the other models also provide a striking magnitude of excess returns similar to those realized for the Capital Asset Pricing Model. Moreover, compared to the distress anomaly, young stocks earn a greater magnitude of returns for weekly portfolios. The outcomes of the direct analysis of Fridays and Mondays indicate that the QMJ strategy earns higher returns on Mondays: when Monday's QMJ strategy returns are deducted from Friday's QMJ strategy returns, the difference is negative. The Friday minus Monday strategy returns thus contain negative alpha values, which provides a robust explanation for investor sentiment on a particular day. The results are also robust with respect to the explanation that returns should be asymmetric when the speculative leg on Fridays is tested against the speculative leg on Mondays. Moreover, the same findings were obtained for both days: the prevailing variations are due to mood fluctuations, and neither macroeconomic news nor firm-specific news significantly impacts the anomaly returns. CONCLUSION This study revealed a substantial anticipated relationship between anomaly returns and different days of the week. Chinese A-share market data from both the Shanghai and Shenzhen stock exchanges were used to evaluate the QMJ factor strategy, which jointly measures quality (non-speculative/long leg) stocks and junk (speculative/short leg) stocks. The QMJ strategy provided negative anomaly returns on Fridays, which confirmed our finding that junk stocks, which contain the speculative leg, perform better than quality stocks due to elevated moods during the COVID-19 period. Our findings are consistent with the work of Birru (2018), who found that short-leg/speculative-leg stocks are sensitive to investor sentiment, and we have extended his findings by testing a new investment strategy based on quality and junk stocks in place of the long minus short strategy.
The findings are also consistent with the newer explanation that higher risk prevailed in Chinese stock markets due to the COVID-19 pandemic and that the speculative leg of the anomalies performed better on Fridays than on Mondays, as the literature suggests that speculative stocks perform better on Fridays during higher-risk periods. Therefore, we also observed that the QMJ strategy performed in the same way as Birru's (2018) long minus short strategy. The findings also reveal that the QMJ strategy premium is probably an indicator of behavioral mispricing. Limitations and Future Directions There are several limitations to this study. Firstly, a limited time period has been used for the current study because the post-pandemic time period is short; this type of study would therefore yield more generalizable findings over a larger time span. Secondly, this study has taken data from China, but the pandemic has affected almost the entire world, so further studies can take data from several countries on a larger scale. Furthermore, future studies can test several anomalies to verify the effect of the pandemic on a larger scale. DATA AVAILABILITY STATEMENT The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. AUTHOR CONTRIBUTIONS TY performed the formal analysis, developed the methodology of the manuscript, and applied the techniques through software. TA analyzed the data. QA conceptualized the idea and reviewed the manuscript. MZ reviewed and edited the draft and also helped in the data collection. YA reviewed the write-up of the manuscript. All authors contributed to the article and approved the submitted version.
Conjugates of cytochrome c and antennapedia peptide activate apoptosis and inhibit proliferation of HeLa cancer cells Polycationic cell-penetrating peptides (CPPs) deliver macromolecules into cells without losing the functional properties of the cargoed macromolecule. The aim of this study was to determine whether exogenous cytochrome c is delivered to HeLa cervical carcinoma cells by the CPP antennapedia (Antp) and activates apoptosis. HeLa cervical carcinoma cells were treated with conjugated Antp-SMCC-cytochrome c (cytochrome c chemically conjugated to Antp) or with non-conjugated Antp and cytochrome c. Sensitivity to the treatments was determined by the clonogenic assay (proliferation) and by immunoblot analysis (apoptosis activation). We report that conjugated Antp-SMCC-cytochrome c activated apoptosis in HeLa cells as demonstrated by poly (ADP-ribose) polymerase 1 (PARP-1) cleavage and inhibited their proliferation. The Antp-SMCC-cytochrome c-induced apoptosis was inhibited by z-VAD-fmk, a pan-caspase inhibitor peptide. Unconjugated Antp or cytochrome c demonstrated no inhibitory effect on survival and proliferation. Our results suggest that chemical coupling of cytochrome c to CPPs may present a possible strategy for delivering cytochrome c into cells and for activating apoptosis. Introduction Cytochrome c is a highly conserved, water-soluble protein of 12.3 kDa with a net positive charge at neutral pH, residing loosely attached in the mitochondrial intermembrane space. It has a dual function; it is involved in energy production in mitochondria by interaction with redox partners and it also has a critical function in the induction of intrinsic (cytochrome c/mitochondria-mediated) apoptosis. Intrinsic apoptosis is activated by cellular stress originating from inside the cell (e.g. DNA damage or the presence of reactive oxygen species) and is strictly dependent on the release of cytochrome c from the mitochondria into the cytoplasm upon an intrinsic (i.e. of intracellular origin) stimulus. Cytochrome c is then, together with other cytosolic factors, including apoptotic protease activating factor 1 (Apaf-1) and pro-caspase-9, assembled into the apoptosome. Following apoptosome assembly and activation of pro-caspase-9 (initiator caspase), the downstream caspases-3 and -7 (effector caspases) are cleaved and thereby activated. This leads to the execution of the apoptotic program, culminating in the dismantling of the cell (1)(2)(3)(4). Apoptosis also occurs through the extrinsic (cytochrome c-independent) Fas/FasL-mediated pathway, which merges with the intrinsic pathway at the level of the effector caspases-3 and -7 (5). The findings that exogenous cytochrome c, either microinjected directly into the cytoplasm or delivered into the cytoplasm by electroporation, activates apoptosis without the requirement for additional apoptotic stimuli support the critical role of cytochrome c in apoptosis (6)(7)(8). Related studies have demonstrated that apoptosis in tumor cells is activated by cytochrome c delivered by nanoparticles, including nanotubes or polylactic-co-glycolic acid (PLGA) microspheres (9,10). This suggests that the cytoplasmic delivery of exogenous cytochrome c through suitable carriers, with subsequent apoptosis activation, is a potential therapeutic approach against cancer. Contrary to necrosis, apoptosis does not induce an immune response of the surrounding tissue, which may be of clinical significance.
Cell-penetrating peptides (CPPs) are a group of peptides that are often ~20 amino acids long and contain a cluster of basic residues. Based on their property of translocating across the hydrophobic cell membrane, they are also capable of delivering protein-and DNA-based macromolecules and drug molecules to cells without the loss of biological activity of the conveyed materials. CPPs are intensively studied and considered as important carriers in drug delivery (11)(12)(13)(14)(15). Antennapedia (Antp) is one member of the family of CPPs. Antp was originally derived from the 60 amino acid long homeodomain of the Drosophila transcription factor Antennapedia (16). Later on, its translocation ability was narrowed down to a 16-mer, termed as penetratin (Antp PTD, 43-58 residues, RQIKIWFQNRRMKWKK) present in the homeodomain (17). In the present study we describe the effects of Antp-SMCC-cytochrome c, a conjugate molecule synthesized from cytochrome c and Antp on apoptosis activation and proliferation inhibition in HeLa cervical tumor cells. Materials and methods Cell culture and compounds. HeLa cervical cancer cells (obtained from Dr G. Marra, Institute of Molecular Cancer Research, University of Zurich) were routinely cultured in Iscove's modified Dulbecco's medium (IMDM)-21980 (Invitrogen, Basel, Switzerland) containing 10% fetal calf serum (Oxoid, Basel, Switzerland) at 37˚C and in an atmosphere of 5% carbon dioxide and 95% humidity. Horse heart cytochrome c was purchased from Sigma-Aldrich Chemie GmbH (Buchs, Switzerland) and a stock solution (20 mg/ml, 1.63 mM) was prepared in sterile water and stored at -20˚C. The 19-mer synthetically synthesized Antp peptide was purchased from Bachem (Bubendorf, Switzerland) and solutions were prepared in phosphate-buffered saline (PBS) containing 2 mM tributylphosphine prior to use. This Antp peptide (amino acid sequence, Ser-Gly-Arg-Gln-Ile-Lys-Ile-Trp-Phe-Gln-Asn-Arg-Arg-Met-Lys-Trp-Lys-Lys-Cys) was biotinylated at the 5'-carboxy terminus and functionalized at the 3'-amino terminus with a trifluoroacetate group. Sulfo-succinimidyl 4-(N-maleimidomethyl)cyclohexane-1-carboxylate (SMCC) was purchased from Pierce Biotechnology Inc. (Lausanne, Switzerland) and solutions were freshly prepared in PBS. The pan-caspase inhibitor peptide z-VAD-fmk was purchased from Enzo Life Sciences (Laufen, Switzerland) and a stock solution in dimethyl sulfoxide (DMSO) was stored at -20˚C. Conjugate synthesis. The Antp-SMCC-cytochrome c conjugate synthesis was a two-step reaction, where sulfo-SMCC was used as a cross-linker molecule (also referred to as a bifunctional coupling reagent). The conjugate synthesis was performed as follows: In the first step, cytochrome c was incubated with crystalline sulfo-SMCC in PBS at a molar ratio of protein molecules to succinimidyl groups of 1:4 for 60 min under continuous stirring at room temperature. This coupled the sulfo-SMCC covalently to cytochrome c. Excess sulfo-SMCC was removed by overnight dialysis at 4˚C against PBS. In the second step, the sulfo-SMCC-coupled cytochrome c was incubated with freshly prepared Antp solution containing 2 mM tributylphosphine (to prevent dimerization of the Antp peptides) at a molar ratio of cytochrome c-SMCC:Antp of 1:5 for 48 h under continuous stirring at 4˚C. The reddish conjugate solution was then filtered [Millex-HV polyvinylidene fluoride (PVDF) 0.45-µm pore-size sterile filter]. 
The concentration of cytochrome c in the conjugate was determined by a cytochrome c (human) enzyme-linked immunosorbent assay (ELISA) kit (Enzo Life Sciences) according to the manufacturer's instructions. Cell lysates and immunoblot analysis. Immunoblot analysis was performed in cell lysates to assess apoptosis on the basis of the treatment-induced proteolytic cleavage of the 116 kDa PARP-1 precursor into its 89 kDa fragment. Proteolytic PARP-1 cleavage is an acknowledged measure of ongoing apoptosis. Cell lysates were produced from untreated HeLa control cultures or HeLa cultures treated with either the Antp-SMCC-cytochrome c conjugate or the non-conjugated compounds (cytochrome c, Antp) for 24 h, washed in PBS and lysed according to standard laboratory protocols. In certain cultures the pan-caspase inhibitor peptide z-VAD-fmk was added (10 or 20 µM) 2 h before the addition of Antp-SMCC-cytochrome c. The protein concentration of cell lysates was determined using the BCA Protein Assay kit (Pierce Biotechnology Inc.). For immunoblot analysis (performed following standard laboratory protocols), 20 µg cell lysate protein was separated using 10% sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE), followed by blotting onto a PVDF membrane (Amersham Biosciences, Otelfingen, Switzerland). Proteins were detected by the specific primary antibodies and the respective secondary antibodies: horseradish peroxidase (HRP)-conjugated anti-mouse (M15345; BD Transduction Laboratories, Lexington, KY, USA) or HRP-conjugated anti-rabbit (7074, Cell Signaling Technology Inc./BioConcept, Allschwil, Switzerland). The primary antibodies used were PARP-1 (9542, Cell Signaling; recognizing the 116 kDa full-length PARP-1 and the cleaved 89 kDa fragment) and anti-mouse β-actin (A5441, Sigma) or anti-rabbit α/β-tubulin (2148, Cell Signaling) as sample loading controls. Complexes were visualized by enhanced chemiluminescence (Amersham Biosciences) and autoradiography. A HeLa cell culture treated with 0.8 mM H2O2 for 6 h served as the positive control sample for apoptosis. Clonogenic assay. The sensitivity of HeLa cells to the treatments was determined by the clonogenic assay. HeLa cells (500 cells in 2 ml culture medium) were plated in 35 mm cell culture plates. Then, 24 h after plating, the cells were treated with various concentrations of either the conjugate or the non-conjugated compounds for 24 h. Afterwards, the drug-containing medium was replaced with drug-free medium. Seven days after treatment, cells were fixed with 25% acetic acid in ethanol and stained with Giemsa. Colonies of ≥50 cells were scored visually. Each experiment was performed three times. Clonogenic survival was presented as the percentage of the untreated control as a function of the compound concentration. Results Antp-SMCC-cytochrome c conjugate activates caspase-dependent apoptosis. Immunoblot data (Fig. 1A) revealed that, in comparison with the untreated control sample, the treatment of HeLa cells with Antp-SMCC-cytochrome c resulted in the cleavage of the 116-kDa PARP-1 precursor into an 89-kDa cleaved fragment (a measure of ongoing apoptosis). A concentration of cytochrome c (contained in the conjugate and measured by cytochrome c-specific ELISA) as low as 5 µg/ml was sufficient to result in PARP-1 cleavage, i.e. to activate apoptosis. By contrast, PARP-1 cleavage was not observed when HeLa cells were treated with either cytochrome c or Antp alone at concentrations of up to 1,250 µg/ml or 270 µg/ml, respectively (Fig. 1B).
This indicates that apoptosis is activated by treatment with the Antp-SMCC-cytochrome c conjugate but not with Antp or cytochrome c alone. The 2-h pretreatment of HeLa cultures with 10 or 20 µM z-VAD-fmk and the subsequent treatment with Antp-SMCC-cytochrome c (5 µg/ml) eliminated the Antp-SMCC-cytochrome c-induced apoptosis. This was manifested by the failure to detect PARP-1 precursor cleavage (Fig. 1C). As a broad-spectrum caspase inhibitor peptide, z-VAD-fmk irreversibly inhibits the activity of the majority of the members of the caspase family, indicating that the Antp-SMCC-cytochrome c-induced apoptosis was caspase-dependent. Antp-SMCC-cytochrome c conjugate inhibits clonogenic survival. The Antp-SMCC-cytochrome c conjugate reduced the clonogenic survival of HeLa cells (Fig. 2A). A concentration as low as 1.3 µg/ml cytochrome c (contained in the conjugate) was sufficient to completely block the clonogenic potential of HeLa cells. By contrast, cytochrome c alone (≤1,250 µg/ml) or Antp alone (≤275 µg/ml) did not produce a substantial negative effect on clonogenic survival (Fig. 2B and C). Discussion Cytochrome c has been shown to activate apoptosis when directly microinjected or delivered into tumor cells via electroporation or nanoparticles. CPPs, including Antp, facilitate the penetration of various biomolecules and particles into cells. On this basis, we synthesized the conjugate molecule Antp-SMCC-cytochrome c from the respective compounds (cytochrome c and Antp) using the sulfo-SMCC crosslinker and determined the effects of this Antp-SMCC-cytochrome c conjugate on survival, i.e. apoptosis activation and proliferation, in HeLa cervical cancer cells. The aim of the present study was to determine whether apoptosis in HeLa tumor cells is activated by exogenous cytochrome c delivered into the cytoplasm through the CPP Antp in the form of a conjugate molecule consisting of Antp covalently linked to cytochrome c. In the current study, we demonstrated that cytochrome c covalently conjugated to Antp and applied to HeLa cervical cancer cell cultures activates caspase-dependent apoptosis and inhibits proliferation, whereas neither cytochrome c nor Antp alone affected survival and proliferation. Therefore, we conclude that the inhibitory effects on survival and proliferation are attributable to cytochrome c delivered to HeLa cells via Antp. This suggests that the Antp-aided delivery of cytochrome c into tumor cells may be a candidate strategy for activating apoptosis and consequently inhibiting the survival and proliferation of tumor cells. In a pilot set of experiments, we demonstrated that the presence of non-conjugated cytochrome c alone in the culture medium did not activate apoptosis or substantially reduce the clonogenic potential at concentrations of up to 1,250 µg/ml, suggesting that cytochrome c does not accumulate in the cytoplasm. This suggestion is supported by findings that cytochrome c is unable to translocate across membranes on its own and therefore requires the so-called translocases in the outer membrane (TOM) complex for the translocation across the mitochondrial outer membrane (18). The presence of (non-conjugated) Antp (concentrations ≤270 µg/ml) alone in the culture medium had no effect on apoptosis and clonogenic potential. This suggests that Antp is not harmful in this experimental setting, although it is known that CPPs can be toxic to cells at higher peptide concentrations due to membrane perturbation (19).
The key finding in the present study was that, unlike non-conjugated cytochrome c and Antp, the incubation of HeLa cultures with the Antp-SMCC-cytochrome c conjugate resulted in the activation of apoptosis and a reduction of the clonogenic potential of HeLa cells. Antp-SMCC-cytochrome c-induced apoptosis is caspase-dependent, since it was inhibited by the pan-caspase inhibitor z-VAD-fmk. The following series of events that eventually lead to apoptosis may be proposed on the basis of the results of the current study. The Antp-SMCC-cytochrome c conjugate translocates across the cellular membrane and accumulates in the cytoplasm, where the conjugate is hydrolyzed into its components (the SMCC crosslinker is pH-sensitive). Cytochrome c is then assembled into the apoptosome, which, in turn, finally results in the activation and execution of apoptosis. This implies that the structural integrity and the biological function of cytochrome c are not compromised by the chemical modifications made during Antp-SMCC-cytochrome c conjugate synthesis and its subsequent hydrolysis. Studies have shown that injection of ~10 fg cytochrome c is sufficient to activate apoptosis (6), corresponding to an estimated intracellular cytochrome c concentration of ~20 µM (7). Whether and to what extent cytochrome c molecules with covalently bound SMCC retain functional integrity in terms of proper apoptosome formation remains unclear. Likewise, the possible effects of the other products of the hydrolysis with respect to apoptosis activation and clonogenic survival are unknown, but may be marginal. It is important to acknowledge that the results of the present study should be considered as proof-of-concept only, and that more detailed studies should be performed. However, hypotheses regarding important features related to antitumor studies may be proposed. Conventional chemotherapy is an indispensable therapeutic option for the treatment of a number of malignancies. It kills tumor cells through the activation of the apoptotic machinery by the use of foreign-to-body chemical or biological compounds. These compounds are by definition toxic and are frequently of limited bio-tolerability and bio-degradability. Clinicians and patients are therefore often confronted with limitations, including adverse side-effect profiles. Cytochrome c as the therapeutically active compound against tumor cells appears appealing and may be a candidate alternative to conventional chemotherapy. It is intrinsic to cells and not toxic, yet it is able to activate apoptosis when delivered to cells from outside in femtogram quantities. Exogenous cytochrome c as the 'therapeutically' active compound may help overcome certain types of chemotherapy resistance. Resistance to chemotherapeutic compounds emerges through the expression of multidrug resistance drug efflux transporters or drug detoxifiers, or through the enhanced repair of damaged DNA (20). This leads to ineffective mitochondrial cytochrome c release due to the absence of apoptotic stimuli, to ineffective apoptosome assembly and caspase activation, and eventually to ineffective apoptosis execution. The absent release of intrinsic cytochrome c may be compensated for by exogenously delivered cytochrome c, thereby overcoming chemoresistance. It may also be hypothesized that exogenous cytochrome c does not cause the acquisition of drug resistance in tumor cells, a major problem of conventional chemotherapies.
Despite its intriguing characteristics, there are critical issues with the concept of CPP-aided cytochrome c delivery. One is that CPPs have limited target specificity; CPPs are likely to deliver their cargo not only to tumor cells, but also to normal cells. Further studies are required to render CPP-aided delivery target cell-specific. An alternative to CPP-aided cytochrome c delivery may be cytochrome c delivery via tumor cell-targeted immunoliposomes; however, this approach may suffer from limitations associated with the intrinsic disadvantages of endocytosis-based mechanisms. Another issue is what the potential clinical application of the CPP-aided cytochrome c therapy may be. We performed this study with HeLa cervical cancer cells; therefore, it may be applied as a therapy for inoperable, local cervical cancers or advanced primary inoperable vulvar and vaginal cancers that are easily accessible to, for instance, an Antp-SMCC-cytochrome c-containing ointment. A similar application may also be suitable for superficial cancers, including skin cancer.
The Potential Use of Forensic DNA Methods Applied to Sand Fly Blood Meal Analysis to Identify the Infection Reservoirs of Anthroponotic Visceral Leishmaniasis Background In the Indian sub-continent, visceral leishmaniasis (VL), also known as kala azar, is a fatal form of leishmaniasis caused by the kinetoplastid parasite Leishmania donovani and transmitted by the sand fly Phlebotomus argentipes. VL is prevalent in northeast India where it is believed to have an exclusive anthroponotic transmission cycle. There are four distinct cohorts of L. donovani exposed individuals who can potentially serve as infection reservoirs: patients with active disease, cured VL cases, patients with post kala azar dermal leishmaniasis (PKDL), and asymptomatic individuals. The relative contribution of each group to sustaining the transmission cycle of VL is not known. Methodology/Principal Findings To answer this critical epidemiological question, we have addressed the feasibility of an approach that would use forensic DNA methods to recover human DNA profiles from the blood meals of infected sand flies that would then be matched to reference DNA sampled from individuals living or working in the vicinity of the sand fly collections. We found that the ability to obtain readable human DNA fingerprints from sand flies depended entirely on the size of the blood meal and the kinetics of its digestion. Useable profiles were obtained from most flies within the first 24 hours post blood meal (PBM), with a sharp decline at 48 hours and no readable profiles at 72 hours. This early time frame necessitated development of a sensitive, nested-PCR method compatible with detecting L. donovani within a fresh, 24 hours blood meal in flies fed on infected hamsters. Conclusion/Significance Our findings establish the feasibility of the forensic DNA method to directly trace the human source of an infected blood meal, with constraints imposed by the requirement that the flies be recovered for analysis within 24 hours of their infective feed. Introduction Leishmaniasis is a disease caused by the parasitic protozoan Leishmania. The disease is transmitted by the bite of the female Phlebotomine sand fly that requires a blood meal for egg production. Visceral leishmaniasis (VL), also known as "kala-azar", is the fatal form of Leishmaniasis that is characterized by bouts of fever, weight loss, enlargement of the spleen and liver and anemia. Approximately 300,000 new cases occur annually, with mortality rates of about ten percent (WHO, http://www.who.int/leishmaniasis/en). VL is caused by two closely related Leishmania species; L.
infantum in the Mediterranean basin, North Africa and Central and South America, and L. donovani in the Indian subcontinent and east Africa. According to WHO, in India alone, around 10,000 cases were reported in 2014. Given the high rate of nonreported cases, the actual number is estimated to be around 100,000 (http://www.who.int/ neglected_diseases/news/SEARO_poised_to_defeat_VL/en/), the majority of whom live in poor, rural settings in the northeast state of Bihar [1]. In northeast India, VL is transmitted by Phlebotomus argentipes [2]. While the infection reservoirs for VL in Europe, North Africa, and Brazil are animals, mainly dogs [3,4], in the Indian subcontinent the disease is considered to have an anthroponotic transmission cycle, and no non-human reservoirs have been confirmed. Nonetheless, the relative importance of the different infected human populations in sustaining the transmission cycle is not known. There are four distinct groups of L. donovani-exposed human subjects that can serve as potential reservoirs for VL: active patients, cured cases, asymptomatic individuals and patients with post-kala azar dermal leishmaniasis (PKDL). PKDL is a complication of VL characterized by the appearance of skin nodules post-VL treatment. PKDL patients were thought to be the main reservoir for kala azar as their skin nodules contain large numbers of parasites that can be picked up by the vector [5]. As the low prevalence of PKDL cases in India, approximately 15% of VL patients [6], is unlikely to maintain the intensity of transmission observed, the potential contribution of asymptomatic cases has also been considered. Incident asymptomatic infections are far more frequent than incident disease, ranging from 6.1:1 to 17.1:1 in high-endemic villages of India and Nepal [7]. At least in the case of canine VL, it is known that asymptomatic infections can be highly transmissible to vector sand flies [8], and mathematical modeling suggests a major role of asymptomatics in driving transmission of human VL in India [9]. Identifying the human-subjects groups serving as infection reservoirs may be possible via xenodiagnostic studies using safe, laboratory-reared colonies of vector sand flies. Early studies employing direct xenodiagnosis of human VL patients were critical in helping to establish P. argentipes as the natural vector of L. donovani transmission in India [10]. And as mentioned, sand fly infections following exposure of the flies to nodular PKDL lesions has been accomplished on two occasions [5,11]. Extending xenodiagnostic studies to include human subjects from across the infection spectrum would establish the relative potential of each exposure group to maintain the transmission cycle. However, this approach does not directly identify the source of an infected blood meal in sand flies captured from a transmission focus. In addition, xenodiagnostic studies require substantial infrastructures to establish and maintain a large working colony of P. argentipes that is safe to feed on human subjects, and the willingness of human volunteers to submit to this protocol. Another way to address this question is by using forensic DNA methods. This approach would involve recovering human DNA from blood-fed L. donovani-infected flies captured in high VL transmission areas. The DNA profiles would then be matched to reference samples taken from infected individuals in the same areas. 
In the forensic community, the most commonly used method for human identification is the analysis of highly polymorphic Short Tandem Repeat (STR) markers. Forensic STR kits presently available have been developed to enable the generation of DNA profiles from as little as 100 picograms of DNA. These kits can type multiple highly polymorphic STRs in a single reaction, producing profiles that are unique in a population and that can be matched to a reference database for source identification. Recovering human DNA and obtaining interpretable STR profiles from blood-sucking insects has been successfully demonstrated in mosquitos and has been used in epidemiologic studies. STR identification was used to determine the feeding preferences of Aedes aegypti, the mosquito vector of the dengue virus, thus demonstrating the role played by a migrating population in spreading the virus [12,13]. A more recent study determined that Culicinae mosquitos can be relevant to a criminal investigation when present at a crime scene, as human DNA can be successfully typed from these insects 56 hours after a blood meal has been taken [14]. The application of a similar approach to sand flies has not been previously reported, and could be challenging due to the fact that the blood meals of these insects are smaller and their digestion is faster compared to mosquitos. The volume of a fully engorged sand fly like P. argentipes is 0.63-0.73 μl [15] whereas the volumes of blood meals in an engorged mosquito vary between 2-3 μl and remain >1 μl even 48 hours post blood meal (PBM) [16,17]. The objective of the current study was to determine the feasibility of using a forensic DNA approach to identify the human source of a sand fly blood meal in a colonized population of P. argentipes. We conclude that a useful DNA fingerprint can be obtained within the first 24 hours of blood feeding, and that this time frame is compatible with detection of L. donovani in sand flies engorged on an infected host. Ethics statement The Office of Human Subjects Research Protections came to a determination of 'Excluded from IRB Review' per the requirements of 45 CFR 46 and NIH policy to obtain sand flies with human blood for the project, 'Application of forensic DNA methods to sand fly blood meal analysis'. Exempt #: 13124. All hamster studies were carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The protocol was approved by the Animal Care and Use Committee of the NIAID, NIH (protocol number LPD 68E). For anesthesia and sedation of hamsters, we used Telazol (100 mg/ml stock) and xylazine (20 mg/ml stock) given IP at a dose of 50 mg/kg Telazol and 5 mg/kg xylazine. Sand fly colonies and Leishmania parasites The laboratory colony of P. argentipes originated from Aurangabad in Maharashtra state in India and was established and maintained as a colony at the Department of Entomology at the Walter Reed Army Institute of Research. The L. donovani Indian strain Mongi (MHOM/IN/83/Mongi-142) was used in this study. The parasites were passed in hamsters and grown as promastigotes in Medium 199 as described elsewhere [18]. All hamster studies were carried out in strict accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The protocol was approved by the Animal Care and Use Committee of the NIAID, NIH (protocol number LPD 68E).
All hamsters were maintained at the NIAID animal care facility under specific pathogen-free conditions. Sand fly feeding on human volunteers and infected hamsters Two adult human volunteers were used for this study, which, under a determination by The Office of Human Subjects Research Protections, NIH, was excluded from IRB review. To obtain human DNA profiles from blood-fed flies, approximately 50 two- to six-day-old Phlebotomus argentipes females were placed in a cylinder of clear polycarbonate plastic (1.5 inches high and 2 inches in diameter) closed at one end with a polycarbonate disk. The open end of the cup was covered with a piece of fine mesh and held in place with an O-ring. The cylinder was attached to the leg of two subjects (D and P) with the open end facing their skin. Flies were allowed to feed for one hour at room temperature. Blood-fed females were kept in 26°C incubators and sampled immediately and one, two, three and five days post feeding (T-0, 1, 2, 3 and 5, respectively). After CO2 anesthetization, each fly was put into an individual Eppendorf tube and frozen at -80°C until processed. For the detection of L. donovani in flies fed on infected hamsters, 5x10^7 Ficoll-purified Leishmania donovani metacyclics were intravenously injected into Syrian golden hamsters. After approximately two months, when the hamsters had lost around 30% of their weight, they were anesthetized, their abdomens were shaved, and they were exposed to 100-150 P. argentipes flies for approximately one hour at 26°C in the dark. Non-fed females were removed from the cage and blood-fed flies were CO2-anesthetized and immediately processed for DNA analysis. Some flies were maintained alive for microscopic evaluation of their infections at later time points. Preparation of DNA Whole flies were put in 1.5 mL Eppendorf tubes and DNA was extracted using a QIAamp DNA Investigator kit (Qiagen #56504; reference protocol for hair and fingernail clippings). Flies were incubated at 56°C for one hour on a thermomixer (900 rpm), then homogenized with pestles, followed by incubation for an additional hour at 56°C. The elution was passed a second time through the column for maximal DNA recovery. Final elution volume was 30 μl. DNA yields and concentrations are summarized in Table 1. Reference DNA samples of the two subjects on whom the flies fed were collected using buccal swabs. DNA from the reference swabs was extracted using the EZ1 DNA Investigator Kit (Qiagen-952034) on a BioRobot EZ1 [19]. Total DNA from all samples was quantified with a NanoDrop 1000 (Thermo Scientific). PCR amplification of Leishmania and human DNA; DNA fingerprint analysis For the detection of L. donovani in blood-fed P. argentipes and specifically in flies with fresh blood meals, a "nested PCR" approach was used. The target gene was NADH dehydrogenase subunit 5 (ND5), which is located on the maxicircle DNA [20] and was previously used for Leishmania genotyping. The primer sequences for the external PCR round [18] were: Fwd, 5'-GAYGCDATGGAAGGACCDAT-3' and Rev, 5'-CCACAYAAAAAYCAYAANGAACA-3'. The PCR protocol was 25 cycles of 94°C 30 sec, 60°C 30 sec and 72°C for 30 sec. The PCR products were purified using the Wizard SV Gel and PCR Cleanup System (Promega A11-20) and were used as templates for a second internal PCR. The primer sequences specifically designed for this work were: Fwd, 5'-ATACATGCAGCAACCTTAGTTG-3' and Rev, 5'-CATATTGTACTAAATGCAACATACC-3'. The PCR protocol was 35 cycles of 94°C 30 sec, 62°C 30 sec and 72°C for 20 sec.
Human DNA was quantified with Real Time PCR using Applied Biosystems' Quantifiler Human DNA Quantification Kit (#4482911) on an ABI 7900 Real Time PCR machine according to the manufacturer's instructions. Ten μl of P. argentipes DNA extract and ~250 pg of reference DNA were used as templates for STR amplification using AmpFlSTR Identifiler Plus (Applied Biosystems 4427368) following the manufacturer's instructions. Capillary electrophoresis of the PCR products was performed on an ABI PRISM 3130 Genetic Analyzer, and profiles were analyzed with GeneMarkerHID 1.9 software. Electropherogram interpretation was performed mimicking standard operating procedures followed in forensic DNA analysis [21], taking into account only data above the analytical threshold (AT), which defines the minimum height requirement, expressed in relative fluorescent units (RFU), above which detected peaks can be reliably distinguished from background noise. The AT is empirically determined by an internal validation, and peaks below this level are not considered reliable and are not used for interpretation. Loci without peaks above the AT were not used for a comparison with the reference data. The random match probabilities (RMP) (or chance of a coincidental match) of the complete and partial STR profiles obtained were calculated using population allele frequencies published in the Identifiler Plus user manual (an illustrative sketch of this product-rule calculation is given below). Results Quantity of human DNA in sand flies is proportional to blood meal size Eight P. argentipes flies were processed at each time point, four from each of the populations fed on the two human volunteers. The amount of total DNA that was extracted from the flies was relatively stable throughout the experiment (Fig 1B and Table 1). On day zero (T-0), total DNA was 492.75 ± 217.8 ng and 482.6 ± 240.7 ng for flies with full and partial blood meals, respectively. On the following days post feeding, total DNA amounts were 402 ± 130.8, 313.50 ± 40.5, 387.75 ± 141.6 and 656.57 ± 338 ng at one, two, three and five days (T-1, 2, 3 and 5) post feeding, respectively (Fig 1B and Table 1). On day zero, the amount of human DNA in fully engorged flies was more than three orders of magnitude lower than the total DNA from the whole fly, and the human DNA rapidly declined with each following day. The decline in human DNA was correlated with the decline in the blood meal size (Fig 1A and 1B). Fully engorged flies at T-0 yielded an average of 1.5±1.3 ng human DNA. Interestingly, flies that fed on subject D had bigger blood meals than those fed on subject P, and this correlated directly with the amount of human DNA that was recovered: 2.4±1.3 and 0.6±0.2 ng human DNA from individuals D and P, respectively (Table 1, T-0 full blood meals). We did not observe differences in blood meal size at T-0 between flies that were partially fed on the two individuals, nor between these groups of flies in the amounts of blood remaining in the following days. The average human DNA amounts decreased roughly ten-fold with each day PBM. Flies collected one and two days after the blood meal yielded 0.055±0.07 and 0.013±0.02 ng human DNA, respectively (Fig 1B and Table 1). Three days post feeding, the flies had completely digested the blood meals (Fig 1A), and only one fly from this time point yielded a detectable amount of human DNA (0.004 ng).
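The random match probabilities reported in the following paragraphs rest on the standard forensic product rule over the readable loci. Before turning to those profile results, here is a minimal Python sketch of that calculation for a hypothetical partial profile; the allele frequencies are invented for illustration and the sketch ignores subpopulation (theta) corrections, so it is not the Identifiler Plus calculation itself.

```python
def random_match_probability(profile, allele_freqs):
    """Product-rule RMP over the readable loci of an STR profile.

    profile      : dict mapping locus -> (allele1, allele2)
    allele_freqs : dict mapping locus -> {allele: population frequency}
    Returns the probability that a random, unrelated person shares the profile.
    """
    rmp = 1.0
    for locus, (a1, a2) in profile.items():
        p, q = allele_freqs[locus][a1], allele_freqs[locus][a2]
        if a1 == a2:
            rmp *= p * p          # homozygous locus: p^2
        else:
            rmp *= 2 * p * q      # heterozygous locus: 2pq
    return rmp


# Hypothetical three-locus partial profile with made-up frequencies.
profile = {"D8S1179": ("13", "14"), "TH01": ("9", "9"), "FGA": ("22", "24")}
freqs = {
    "D8S1179": {"13": 0.30, "14": 0.20},
    "TH01": {"9": 0.15},
    "FGA": {"22": 0.19, "24": 0.14},
}
rmp = random_match_probability(profile, freqs)
print(f"RMP = 1 in {1 / rmp:,.0f}")
```

Because each additional readable locus multiplies in another small factor, partial profiles with fewer loci remain useful for identification but discriminate less strongly than full profiles.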
Human DNA was not detected five days post feeding in any of the flies. Obtaining human DNA profiles is constrained to 0-48 hours PBM One third of the DNA extraction volume was used for the multiplex amplification of STR markers. An average of 0.49±0.43 ng DNA from fully engorged flies on day 0 was used for PCR. That amount enabled quality profiles for 7/8 (87.5%) of the flies (Fig 1C; Table 2). PCR amplification of 0.48-1.43 ng human DNA from subject D yielded full STR profiles in all 4 T-0, fully engorged samples, generating an RMP of one in 2.4x10^21 (Table 2). The RMP of a sample represents the overall chance for a random unrelated person to have the same genotype in the population, and it is calculated based on allele frequencies. Only 0.1 to 0.29 ng of DNA from subject P was used for amplification, which resulted in three usable profiles out of four flies. Two were partial profiles yielding an RMP of 1 in 1.6x10^12 and 1 in 1.3x10^20 for flies b and d, respectively. The third profile was full, with an RMP of 1 in 6x10^20 (Tables 1 and 2). For samples collected the following day (T-1), an average of 0.018±0.0023 ng human DNA was used for PCR. Still, 5/8 (62.5%) of the flies produced usable human DNA profiles. Three flies fed on subject D, from which 0.003, 0.002 and 0.031 ng of human DNA were used for PCR, produced partial profiles with RMPs of 1 in 3.4x10^10, 1 in 6.2x10^18 and 1 in 1.8x10^15, respectively. From subject P, 0.034 and 0.063 ng human DNA from two flies yielded partial and full profiles with RMPs of 1 in 1.1x10^16 and 1 in 6.2x10^20, respectively. The rest of the samples from this time point did not have sufficient human DNA to generate profiles suitable for comparison. For flies sampled at T-2, only one fed on subject P had enough human DNA to produce an STR profile. Using 0.007 ng of this DNA, a partial profile was generated which still gave an RMP of 1 in 3.5x10^12. Three days post feeding, the flies had completely digested the blood meals (Fig 1A). Only one fly from this time point yielded detectable amounts of human DNA (0.0003 ng), which was too low to produce an STR profile suitable for comparison (Table 1). No human DNA was found in any of the flies collected 5 days PBM. Altogether, the results show that above 0.5 ng of human DNA, high-quality, full STR profiles can be obtained. This corresponds to fully engorged flies sampled on the day of the bite. The presence of a partial blood meal, due either to incomplete engorgement or to blood meal digestion, and yielding between 0.004 and 0.5 ng human DNA, can generate partial STR profiles that are still sufficient for human identification. The results summarized in Table 1 suggest that the sensitivity of the assay is down to the single-cell range, since the amount of DNA in a human cell is 0.006 ng. Thus, the first twenty-four hours PBM is the time frame in which human DNA profiles can be generated from most P. argentipes sand flies. We have also determined the effect of storage conditions on the integrity of human DNA that can be obtained from the flies. Flies were fed on a human blood meal source and collected immediately after the feeding (T-0) and 1 day PBM (T-1). At each time point, 12 flies were put in 96% ethanol; half of the flies were placed at -80°C and the other half were stored at 4°C for five days. The average total human DNA obtained from flies collected at T-0 was 7.63±1.46 and 7.27±5.64 for flies kept at -80°C and 4°C, respectively; at T-1, the total human DNA amounts were 3.71±3.32 and 4.37±3.81, respectively (Table 3).
Thus, storage under conditions more applicable to the availability of resources in the field did not compromise human DNA integrity. Notes to Table 2: (++) both alleles of a heterozygous locus were readable, or, for homozygous loci, the single peak was above the stochastic threshold (ST); (+/-) only one allele of a heterozygous locus was readable, or, for homozygous loci, the RFU value of the allele was below the ST; (-/-) neither allele was detectable; (Δ) total number of markers for which at least one allele was readable; (*) the RMP was calculated for each sample based on population-specific allele frequencies. Detection of L. donovani in fresh blood meals from P. argentipes The findings discussed above indicate that flies must be collected within the first 24 hours after the blood meal if human DNA profiles are to be reliably obtained. It was therefore important to determine whether the number of parasites present in a fresh blood meal obtained from a potentially infectious reservoir host, and prior to their expansion as promastigotes, is above the threshold detection limits of the PCR used to identify infected flies. When tested on serial dilutions of cultured L. donovani promastigotes mixed with a single P. argentipes fly, ND5-targeted PCR resulted in specific amplification with no false positives. However, significant amplification was observed only in DNA from flies mixed with 10^3 or a greater number of parasites (Fig 2A). To increase the sensitivity of the assay, a nested PCR approach was developed that allowed ND5 gene detection from flies mixed with as few as 1 parasite (Fig 2B). The ND5 nested-PCR assay was tested in two separate experiments (Fig 3A and 3B) on flies collected on the same day as the feeding (T-0). A total of ninety-two flies were tested, 47 of which had fed on an L. donovani-infected hamster and the remainder on a naïve hamster as a control. No PCR amplification was detected from the control flies in either the external or the nested amplifications. Out of the 47 flies fed on the infected hamster, 8 yielded positive PCR amplification for the ND5 target, 5 in the first experiment and 3 in the second experiment. To further evaluate the reliability of the detection method, infections were allowed to develop in a portion of the flies. When flies were collected 9-11 days PBM and their dissected midguts examined microscopically, parasites were observed only in 1 out of 9 and 1 out of 6 flies in experiments A and B, respectively, consistent with the low frequency of infected flies determined by the nested-PCR approach. Discussion This study both confirms the feasibility of applying forensic DNA methods to epidemiological studies of human VL and points out some of the limitations of this approach. It is clear that the ability to obtain human DNA profiles depends on the size of the original blood meal and the kinetics of its digestion (or loss). We have shown the minimum amount of human DNA that is needed for the amplification of a reliable human profile and provided a time frame in which this amount of DNA can be obtained from blood-fed P. argentipes flies. The data show that in the first 24 hours PBM the flies carry enough blood to obtain a human profile suitable for comparison. It becomes significantly less likely to obtain comparable profiles from flies analyzed at 48 hours or longer after the bite.
Although the role of secreted nucleases by midgut cells cannot be ruled out, the reduction in the efficiency of generating STR profiles each day PBM is more likely attributed to the excretion of the blood meal remnants from the gut. By comparison, STR human DNA profiles were recovered from all Culicinae mosquitos 48 hours PBM, and from 62% and 27% of the mosquitos at 56 and 72 hours PBM [14], indicating their slower rate of blood digestion and loss, and suggesting that DNA in the blood meals of blood sucking diptera is not completely degraded by nucleases even several days after the feeding. Should a fly feed off of two different individuals prior to being captured, the assay would yield a mixed profile. The interpretation of mixed profiles is more challenging and generally less informative than single source samples. Yet a mixed profile can still be useful for human identification, depending on the amount of DNA and the ratio between the two contributors, both subjects could be identified. The interpretation should be conducted following forensic STR interpretation procedures commonly used by practitioners (https://www.fbi.gov/aboutus/lab/biometric-analysis/codis/swgdam-interpretation-guidelines). The limited time frame in which flies can be used for human profile amplification raises a number of challenges in conducting an epidemiological study that attempts to determine the reservoir for anthroponotic VL. Studies conducted in high transmission areas in northeast India showed that among a random collection of 1397 P. argentipes flies, only 4 flies (0.28%) were both blood engorged and infected [22]. Moreover, as the age of the blood meals in those studies was not reported, the number of flies that would have been useful for DNA fingerprinting might have been lower. These findings and the present study suggest that in order to perform a successful epidemiological study that employs DNA fingerprinting, the sampling would need to be conducted on a much larger scale. Another issue associated with the short time frame in which STR profiling is effective is the potentially low number of the parasites present during the first 24 hours after the fly has acquired an infective blood meal, coinciding with only the very early stage of their transformation to and expansion as replicating promastigotes. To overcome this issue, a nested PCR targeting maxicircle kDNA was developed. A nested PCR targeting ribosomal DNA has been previously shown to substantially enhance the sensitivity and specificity of Leishmania detection in the skin compared to conventional techniques [23]. We chose to target maxicircle kDNA sequences because the copy number of the maxicircles is 20-50 per cell, which makes them more sensitive PCR amplification targets compared to chromosomal genes, though less sensitive than minicircle targets. On the other hand, as maxicircles are in much lower copy number than minicircles, contamination becomes less likely in environments routinely exposed to Leishmania. This observation concurs with Abbasi et al. [24], who reported inconsistencies and high rates of false positives when targeting the minicircles, particularly when applied to quantify parasite loads that were close to the detection threshold (1-10 parasites). The low number (17%) of PCR-positive flies detected in flies fed on infected hamsters was surprising considering the symptoms that indicated an advanced stage of VL in those hamsters (30% weight loss). 
This may raise a question about the sensitivity of the detection method, especially when applied to a fresh blood meal. However, this observation was consistent with the low number of L. donovani-positive flies detected when infections were allowed to develop further. The inconsistency with which blood fed flies picked up infections might indicate that the parasites were not acquired from peripheral blood but from focalized concentrations of parasitized cells in the skin. The final issue in the field application of this approach is the feasibility to obtain the appropriate reference DNA profiles so that the link between an infected fly with a readable STR profile and its human source can be made. Since sand flies are weak fliers and travel in short hops rather than in sustained flight, there is a high likelihood that their human blood meal source will be found living or working within close proximity to the location of the capture. While the infection histories of these individuals would no doubt also be obtained, their direct link to an infected blood meal could only be made using the forensic approach. It is important to add that the key unanswered question in the epidemiology of VL in India is whether healthy individuals with asymptomatic infections can transmit infection to the vector. There is still no uniformly accepted method to identify these individuals. For example, while a positive PCR for parasite DNA in peripheral blood is thought to provide the best evidence for sub-clinical infection, a negative PCR is meaningless for the purpose of identifying potential infection reservoirs if transmissions occur when flies pick up parasites in the skin. Lastly, based on the experience of the genome-wide association studies in which high quality genomic DNA was obtained from buccal swabs from over 2000 individuals in a high transmission area in Bihar [25], the selected sampling of individuals living or working close to the site of infected fly capture seems an achievable undertaking, and any ethical concerns can be fully met. In conclusion, this study demonstrates for the first time that the use of forensic DNA methods enables identification of the human source of a sand fly blood meal, and may therefore be used to directly trace the source of an infected blood meal in flies recovered from kala-azar endemic zones. Understanding the dynamics and epidemiology of anthroponotic transmission holds clear importance for the development of control strategies for human VL.
Comparison of Heuristic Algorithms in Identification of Parameters of Anomalous Diffusion Model Based on Measurements from Sensors In recent times, fractional calculus has gained popularity in various types of engineering applications. Very often, the mathematical model describing a given phenomenon consists of a differential equation with a fractional derivative. As numerous studies present, the use of the fractional derivative instead of the classical derivative allows for more accurate modeling of some processes. A numerical solution of the anomalous heat conduction equation with the Riemann-Liouville fractional derivative over space is presented in this paper. First, a differential scheme is provided to solve the direct problem. Then, the inverse problem is considered, which consists in identifying model parameters such as the thermal conductivity, the order of the derivative and the heat transfer coefficient. The data on the basis of which the inverse problem is solved are the temperature values on the right boundary of the considered space. To solve the problem, a functional describing the error of the solution is created. By determining the minimum of this functional, unknown parameters of the model are identified. In order to find a solution, selected heuristic algorithms are presented and compared. The following meta-heuristic algorithms are described and used in the paper: Ant Colony Optimization (ACO) for continuous functions, the Butterfly Optimization Algorithm (BOA), the Dynamic Butterfly Optimization Algorithm (DBOA) and the Aquila Optimizer (AO). The accuracy of the presented algorithms is illustrated by examples. Introduction With the increase in the computing power of computers, all kinds of simulations of various phenomena occurring, among others, in physics, biology and technology are gaining in importance. The considered mathematical models are more and more complicated and can be used to model various processes in nature, science and engineering. In the case of modeling anomalous diffusion processes (e.g., heat conduction in porous materials) or processes with long memory, fractional derivatives play a special role. There are many different fractional derivatives, among which the following are the most popular: Caputo, Riemann-Liouville and Riesz. The authors of study [1] present a model dedicated to the risk of corporate default, which can be described as a fractional self-exciting model. The model and methods introduced in the study were used to carry out a validation on real market data. As a result, the fractional derivative model turned out to be the better one. Ming et al. [2] used the Caputo fractional derivative to simulate China's gross domestic product. The fractional model was compared with the model based on the classical derivative. Using the fractional derivative, the authors built a better and more precise model to predict the values of gross domestic product in China. Other applications of fractional derivatives in modeling processes in biology can be found in the article [3]. The authors presented the applications of the Atangana-Baleanu fractional derivative to create models of such processes as Newton's law of cooling, a population growth model and a blood alcohol model. In the article [4], the authors used the Caputo fractional derivative to investigate and model the population dynamics between tumor cells and macrophages. The study also estimated unknown model parameters based on samples collected from a hospitalized, chemotherapy-naïve patient with non-small cell lung cancer. De Gaetano et al.
[5] presented a mathematical model with a fractional derivative for Continuous Glucose Monitoring. The paper also contains the numerical solution of the considered fractional model. Based on experimental data from diabetic patients, the authors determine the order of the fractional derivative for which the model best fits the data. The research shows that the fractional derivative model fits the data better than the integer derivative model (both first and second order). More about fractional calculus and its application can be found in [6][7][8]. In order to implement more and more accurate and faster computer simulations, it is necessary to improve various types of numerical methods or algorithms that solve direct and inverse problems. Solving the inverse problem allows to design the process and select the input parameters of the model in a way that make possible obtaining the desired output state. Such tasks are considered difficult due to the fact that they are ill conditioned [9]. Sensor measurements often provide additional information for inverse issues. Based on these measurements, the input parameters of the model are selected and the entire process is designed. In the study [10] a variational approach for reconstructing the thermal conductivity coefficient is presented. The authors also cite statements regarding the existence and uniqueness of the solution. Numerical examples are also provided. In the article [11] the solution of the inverse problem consists in identifying the coefficients of the heat conduction model based on temperature measurements from sensors. In addition, several mathematical models were compared, in particular fractional models with classical model. Under the study, the parameters like order of fractional derivative as well as thermal conductivity and heat transfer coefficient were identified. Considerations regarding solving the inverse problem are also included in the article [12]. The authors present the approach of the solution from the Deep Neural Network, in which they used deep-learning methods. It allowed for learning all free parameters and functions through training. The backpropagation of the training data can be one of the methods for training the deep network. More examples of inverse problems in mathematical modeling and simulations can be found in [13][14][15][16][17][18][19][20]. In this article, the mathematical model of heat conduction with Riemann-Liouville fractional derivative is presented. In the provided model, the boundary conditions of the second and third order are adopted. Then, a solution of direct problem is shortly described. To solve this problem a finite difference scheme is derived. The inverse problem posed in this article consists in the reconstruction of the third order boundary condition and the identification of such parameters as order of fractional derivative and thermal conductivity. In the process of developing a procedure that solves the inverse problem, a fitness function is created. It describes the error of the approximate solution. In order to identify the parameters, the minimum of this function should be found. The following algorithms are used and compared to minimize the fitness function: Ant Colony Optimization (ACO), Dynamic Butterfly Optimization Algorithm (DBOA) and Aquila Optimization (AO). The presented procedure has been tested on numerical examples. 
Anomalous Diffusion Model We consider an anomalous diffusion equation in the form of a differential equation with a fractional derivative over the spatial variable: In this approach, the considered anomalous diffusion equation describes the phenomenon of heat flow in a porous medium [11,21,22]. In Equation (1) we assume the following notation: T [K] - temperature, x [m] - spatial variable, t [s] - time, c [J/(kg K)] - specific heat, ρ [kg/m³] - density, β ∈ (1, 2) - order of the derivative, and the scaled heat conduction coefficient wλ [W/(m^(3−β) K)], where w is a scale parameter. The heat conduction coefficient λ had to be scaled to keep the units consistent. To Equation (1) an initial condition is added: On the left side of the spatial interval the homogeneous boundary condition of the second order is taken: and for the right boundary of the spatial interval the boundary condition of the third order is assumed: The symbols T_∞ and h appearing in Equation (4) denote the ambient temperature and the heat transfer coefficient. In Equation (1) there is a fractional derivative with respect to space, which is defined as the Riemann-Liouville derivative [23]: Numerical Solution of Direct Problem In order to solve the direct problem for model (1)-(4), a finite difference scheme is used. The considered area is discretized by creating a mesh S = {(x_i, t_k) : x_i = x_L + i∆x, t_k = k∆t}, where ∆x = (x_R − x_L)/M, ∆t = t_end/K, i = 0, …, M and k = 0, …, K. Then the Riemann-Liouville derivative has to be approximated [23]: as well as the boundary conditions (3) and (4): where T_∞ is the ambient temperature, T^k_i is the approximate value of the function T at the point (x_i, t_k), and h is a function describing the heat transfer coefficient. Using Equations (6)-(8) we obtain a difference scheme (a system of equations). By solving this system, the values of the function T are determined at the mesh points. Inverse Problem and the Procedure for Its Solution The problem considered in this article concerns the inverse problem. It consists in establishing the input parameters of the model in a way that allows obtaining boundary temperatures corresponding to the measurements from the sensors. The identified parameters are: the thermal conductivity λ, the order of the derivative β and the heat transfer function h in the form of a second-degree polynomial. In the presented approach, after solving the direct problem for fixed values of the unknown parameters, we obtain an approximation of T and compare it with the measurement data. In this way the fitness function is created: where N is the number of measurements, T_j(λ, β, h) are the temperature values at the measurement points calculated from the model, and T^m_j are the measurements from the sensors. To find the minimum of function (9) we use selected metaheuristic algorithms described in Section 5. Meta-Heuristic Algorithms In this section, we present selected metaheuristic algorithms for finding the minimum of a function. These algorithms are: Ant Colony Optimization (ACO) for continuous function optimization, the Dynamic Butterfly Optimization Algorithm (DBOA) and the Aquila Optimizer (AO). ACO for Continuous Function Optimization The inspiration for this minimum-search algorithm was the observation of the habits of ants while searching for food. In the first stage, the ants randomly search the area around their nest. In the process of foraging for food, ants secrete a chemical called a pheromone.
Thanks to this substance, the ants have a chance to communicate with each other. The amount of secreted substance depends on the amount of food found. If the ant has successfully found a food source, the next step is to return to the nest with a food sample. The animal leaves a pheromone trail that will allow other ants to find the food source. This mechanism was adapted to create the ACO algorithm for continuous function optimization [24]. More on the algorithm and its applications can be found, among others, in articles [25][26][27][28]. There are three main parts to the algorithm: • Solution (pheromone) representation. Points from the search area R n are identified as pheromone patches. In other words, the pheromone spot plays the role of a solution. Thus, k-th pheromone spot (or approximate solution) can be represented as x k = (x k 1 , x k 2 , . . . , x k n ). Each solution (pheromone spot) has its quality calculated on the basis of fitness function F(x k ). In each iteration of the algorithm, we store a fixed number of pheromone spots in the set of solutions (establish at the start of the algorithm). • Transformation of the solution by the ant. The procedure of constructing a new solution, in the first place, consists in choosing one of the current solutions (pheromone spots) with a certain probability. The quality of the solution is a factor that determines the probability. The relationship here is as follows: with the increase in the quality of the solution, the probability of selection increases. In this paper, the following formula is adopted to calculate the probability (based on the rank) of the k-th solution: where L denotes number of all pheromone spots, and ω is expressed by the formula: The symbol rank(k) in the Equation (11) denotes the rank of the k-th solution in the set of solutions. The parameter q is a parameter that narrows the search area. In case of small value of q, the choice of the best solution is preferred. The greater q, the closer the probabilities of choosing each of the solutions. After choosing k-th solution, it is required to perform Gaussian sampling using the formula: where µ = x k i is i-th coordinate of k-th solution and σ = ξ Assignment of probability to pheromone spots according to the Equation (10). 8: for ant m = 1, 2, . . . , M do 9: The ant chooses the k-th (k = 1, 2, . . . , L) solution with probability p k . 10: for coordinate j = 1, 2, . . . , n do 11: Using the probability density function (12) in the sampling process, the ant changes the j-th coordinate of the k-th solution. 12: end for 13: end for 14: Calculation the value of the fitness function F for M new solutions. 15: Adding M new solutions to the set of archive of old, sorting the archive by quality and then rejection of the M worst solutions. 16: end for 17: return best solution x best . Dynamic Butterfly Optimization Algorithm Another of the presented heuristic algorithms is an improved version of the Butterfly Optimization Algorithm (BOA), namely the Dynamic Butterfly Optimization Algorithm (DBOA) [29]. In order to communicate, search for food, connect with a partner, and to escape from a predator, these animals use the sense of smell, taste and touch. The most important of these senses is smell. Thanks to the sense of smell butterflies look for food sources. Sensory receptors, called chemoreceptors, are scattered all over the body of a butterfly (e.g., on the legs). 
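Before continuing with the butterfly-based algorithms, the ACO solution-construction step described above can be sketched in Python. This is a minimal sketch, not the authors' implementation: the rank-based weights and the Gaussian spread follow the standard ACO for continuous domains (ACO_R) formulation, which is an assumption where the displayed Equations (10)-(12) are not reproduced above.

```python
import numpy as np

def aco_r_step(archive, fitness, n_ants, q=0.1, xi=0.85, rng=None):
    """One solution-construction step of ACO for continuous domains.

    archive : (L, n) array of current solutions ("pheromone spots"), assumed
              sorted so that row 0 has rank 1 (best quality).
    fitness : callable mapping an (n,) vector to the scalar error to minimize.
    Returns the archive after adding n_ants new solutions and keeping the L best.
    """
    rng = np.random.default_rng() if rng is None else rng
    L, n = archive.shape
    ranks = np.arange(1, L + 1)
    # Rank-based weights (Eq. (11)-style): a better rank gives a larger weight,
    # and a small q concentrates the choice on the best solutions.
    w = np.exp(-(ranks - 1) ** 2 / (2.0 * (q * L) ** 2)) / (q * L * np.sqrt(2 * np.pi))
    probs = w / w.sum()                                # selection probabilities, Eq. (10)

    new_solutions = np.empty((n_ants, n))
    for m in range(n_ants):
        k = rng.choice(L, p=probs)                     # the ant picks a pheromone spot
        mu = archive[k]
        # Spread around the chosen spot: mean absolute distance to the rest of
        # the archive (standard ACO_R choice, assumed here).
        sigma = xi * np.abs(archive - mu).sum(axis=0) / (L - 1)
        new_solutions[m] = rng.normal(mu, sigma)       # Gaussian sampling, Eq. (12)

    pool = np.vstack([archive, new_solutions])
    order = np.argsort([fitness(x) for x in pool])     # sort the archive by quality
    return pool[order][:L]                             # reject the worst solutions
```

Iterating this step and returning the best archive member reproduces the outline of Algorithm 1 described above.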
Scientists studying the life of butterflies have noticed that these animals locate the source of a fragrance with great precision. In addition, they can distinguish fragrances and recognize their intensity. Those were an inspiration for the development of the Butterfly Optimization Algorithm (BOA) [30]. Each butterfly emits a specific fragrance of a given intensity. Spraying the fragrance allows other butterflies to recognize it and then communicate with each other. In this way, a "collective knowledge network" is created. The global optimum search algorithm is based on the ability of butterflies to sense the fragrance. If the animal cannot sense the fragrance of the environment, its movement will be random. The key concept is fragrance and the way it is received and processed. The concept of modality detection and processing (fragrance) is based on the following parameters: stimulus intensity (I), sensory modality (c) and power exponent (a). I is the intensity of the stimulus. In BOA, fitness function is somehow correlated with the intensity of the stimulus I. Hence, it can be shown that the more fragrance a butterfly emits (solution quality is better), the easier it is for other butterflies in the environment to sense it and be attracted to it. This relationship is described as follows: where f denotes fragrance, c is the sensory modality, I denotes the stimulus intensity, and a is the power exponent, which depends on the modality. In this article, we assume values for the parameters a and c in the range [0, 1]. The parameter a is a modality-dependent power exponent. It has a variability in absorption and its value may decrease in subsequent iterations. Thus, the parameter a can control the behavior of the algorithm, its convergence. The parameter c is also important in the perspective of the BOA operation. In theory c ∈ [0, ∞), while in practice it is assumed that c ∈ [0, 1]. The values of a and c have a significant impact on the speed of the algorithm. Considering this, it should be noted that an important step here is the appropriate selection of these parameters. It should be carried out once for various optimization tasks. In the BOA we can distinguish the following stages: • Butterflies in the considered environment emit fragrances that differ in intensity, which results from the quality of the solution. Communication between these animals takes place through sensing the emitted fragrances. • There are two ways of movement of a butterfly, namely: towards a more intense fragrance emitted by another butterfly and in a random direction. • Global search is represented by: where x old is the position of the butterfly (agent) before the move, and x new is the transformation position of the butterfly, x best is the position of the best butterfly in the current population, and f is the fragrance of a butterfly x old and r denotes a number from the range [0, 1] selected in a random way. • Local search move is formulated by: where x r1 , x r2 are randomly selected butterflies from the population. At the end of each iteration modifying the population of agents (butterflies), the local search algorithm based on mutation operator (LSAM) is run. This is a significant modification compared to BOA. In this article, the operation of LSAM consisted in the selection of several individuals (solutions) and their transformation with the use of the mutation operator. In case of obtaining better solution after mutation, it replaces the old one. The LSAM algorithm is presented as pseudocode in Algorithm 2. 
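Before the LSAM pseudocode of Algorithm 2 is given below, the two BOA moves of Equations (14) and (15) can be sketched as follows. Since the displayed equations are not reproduced above, the exact expressions here follow the standard BOA formulation and should be read as an assumption; the switch probability p_switch decides between the global and the local move.

```python
import numpy as np

def boa_move(x, x_best, population, fitness_value, c=0.01, a=0.1, p_switch=0.8, rng=None):
    """One butterfly move of the Butterfly Optimization Algorithm (sketch)."""
    rng = np.random.default_rng() if rng is None else rng
    # Fragrance, Eq. (13): the stimulus intensity I is tied to the fitness value.
    fragrance = c * abs(fitness_value) ** a
    r = rng.random()
    if rng.random() < p_switch:
        # Global search, Eq. (14): move towards the best butterfly.
        return x + (r ** 2 * x_best - x) * fragrance
    # Local search, Eq. (15): random walk relative to two random butterflies.
    j, k = rng.choice(len(population), size=2, replace=False)
    return x + (r ** 2 * population[j] - population[k]) * fragrance
```

In the DBOA variant described above, each iteration additionally applies the LSAM mutation operator to a few of the best agents and keeps a mutated solution only if it improves the fitness.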
More information regarding the applications of the butterfly algorithm can be found in [31][32][33]. Algorithm 2 Pseudocode of LSAM operator. 1: x r -random solution among the top half best agents in population (obtained from BOA). 2: Fit r = F(x r )-value of the fitness function for x r . 3: I-number of iterations, ξ-mutation rate. 4: Iterative part. 5: for iteration i = 1, 2, . . . , I do 6: Calculate: if Fit new < Fit r then 8: x r = x new , Fit r = Fit new . 9: else 10: Set a random solution x rnd from the population, but not x r . 12: if Fit new < Fit rnd then 13: x rnd = x new 14: end if 15: end if 16: end for Algorithm 2 includes the process of transforming the individual coordinates of the solution x = (x 1 , x 2 , . . . , x n ) with the use of the mutation operator. The transformation consists in drawing a number from the normal distribution and replacing the old coordinate with a new one. For j-th coordinate we use normal distribution: for iteration i = 1, 2, . . . , I do for k = 1, 2, . . . , N do Calculate value of fragnance for x k with the use of Equation (13). 5: end for Set the best agent x best among the butterflies. for k = 1, 2, . . . , N do Set a random number r from range [0, 1]. if r < p then 10: Convert solution x t k in accordance with the Equation (14). else Convert solution x t k in accordance with the Equation (15). end if end for 15: Change value of the parameter a. Adopt the LSAM algorithm to convert the agents population with mutation rate ξ. end for return x best . Aquila Optimizer Another of the considered algorithms is Aquila Optimizer (AO). This algorithm is a mathematical representation of the hunting behavior of a genus of bird called Aquila (family of hawks). Four main techniques can be distinguished in the way these predators hunt: • Expanded exploration. In the case that a predator is high in the air and wants to hunt other birds, it tilts vertically. After locating the victim from a height, Aquila begins nosediving with increasing speed. We can express this phenomenon with the use of the following equation: where x new is solution after transformation, x best is the best solution so far and symbolizes position of the prey, i is current iteration, I is number of maximum iteration and rd is random number from [0, 1]. In this case x best can also be defined as the optimization goal or approximate solution. Vector x mean is mean solution from all population: • Narrowed exploration. This technique involves circling the prey in flight and preparing to drop the earth and attack the prey. It is also known as short stroke contour flight. This is described in the algorithm by the equation: where x new and x best denotes the same as in expanded exploration point, x random is a random solution from population and rd is a random number from interval [0, 1]. Term r cos φ − r sin φ simulates spiral flight of Aquila. Expression Levy D is random value of the Levy flight distribution: where s, β are constants u, v denote random numbers from range [0, 1], and σ is formulated as follows [34]: In above equation Γ denotes gamma function. In order to determine the values of the parameters r and φ the following formula is used: where r 1 is a fixed integer from {1, 2, . . . , 30}, V, ξ are small constants, D 1 is an integer from {1, 2, . . . , n}. • Expanded exploitation. This hunting technique begins with a vertical attack on a prey, which location is known within some approximation defining the search area. 
Thanks to this information, Aquila gets as close to its prey as possible. It can be described as follows: where x new is the solution after transformation, x best is the best solution at the moment and x mean is the mean solution in all population determined with the use of the formula (18). As before, rd denotes a random number from range [0, 1], while lB, uB are lower and upper bound, α and δ are constants parameters of exploitation regulation. • Narrowed exploitation. The characteristic feature of this technique are the stochastic movements of the bird, which attacks the prey in close proximity. It can be described by the formula: where x new denotes solution before transformation, QF is quality function: G 1 and G 2 are described by: We can adjust the algorithm with the above parameters. The Aquila's food-gathering behavior consists of the four hunting techniques previously described. The Formulas (17)-(26) describing four transformations consists in AO algorithm. Algorithm 4 shows description of implementation of the AO algorithm. More about the Aquila Optimizer can be found in [34,35]. Iterative part. 5: for iteration i = 1, 2, . . . , I do 6: Determine values of the fitness function F for each agent in the population. 7: Establish the best solution x best in the population. 8: for k = 1, 2, . . . , N do 9: Calculate mean solution x mean in the population. 10: Improve parameters G 1 , G 2 , QF of the algorithm. 11: if iteration i ≤ 2 3 I then 12: if rd < 0.5 then 13: Perform step expanded exploration (17) by updating solution x k . 14: In the result solution x new,k is obtained. 15: if F(x new,k ) < F(x k ) then make substitution x k = x new,k 16: end if 17: if F(x new,k ) < F(x best ) then make substitution x best = x new,k 18: end if 19: else 20: Perform step narrowed exploration (19) by updating solution x k . 21: In the result solution x new,k is obtained. 22: if F(x new,k ) < F(x k ) then make substitution x k = x new,k . 23: end if 24: if F(x new,k ) < F(x best ) then make substitution x best = x new,k . 25: end if 26: end if 27: else 28: if rd < 0.5 then 29: Perform step Expanded exploitation (23) by updating solution x k . 30: In the result solution x new,k is obtained. 31: if F(x new,k ) < F(x k ) then make substitution x k = x new,k . 32: end if 33: if F(x new,k ) < F(x best ) then make substitution x best = x new,k . Numerical Example and Test of Algorithms In this section, we present a numerical example illustrating the effectiveness of the algorithms described above. On this basis, the algorithms are compared with each other regarding the inverse problem in the heat flow model. As described in the Section 4, the unknown model parameters that need to be identified are: λ-thermal conductivity, β-order of derivative and h-heat transfer function. Temperature measurements on the right boundary of the considered area ( Figure 1) are supplementary data necessary to solve the inverse problem. The process should be modeled in a way that allows obtaining temperature values from the mathematical model adjusted to the measurement data. The calculations in the inverse problem are performed on the grid ∆x × ∆t = 100 × 1995. In the considered example, the following data are assumed in the model (1)-(4): In the case of heat transfer function, the error between the exact function h, and the recreated h is defined by the following formula: In Table 1 the results obtained for individual algorithms are presented. 
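For concreteness, the fitness function (9) that each of the compared algorithms minimizes in this example can be sketched as follows. Here solve_direct_problem is a hypothetical stand-in for the finite-difference solver of Section 3, and the parameter layout (λ, β and three coefficients of the second-degree polynomial h) is an assumption.

```python
import numpy as np

def fitness(params, t_meas, T_meas, solve_direct_problem):
    """Sum of squared errors between model and sensor temperatures, Eq. (9)."""
    lam, beta, h0, h1, h2 = params
    h = lambda t: h0 + h1 * t + h2 * t ** 2          # heat transfer function (assumed in time)
    T_model = solve_direct_problem(lam, beta, h, t_meas)   # model temperatures, shape (N,)
    return float(np.sum((T_model - T_meas) ** 2))

# Any of the minimizers above (ACO, BOA/DBOA, AO) is then run on
#   lambda p: fitness(p, t_meas, T_meas, solve_direct_problem)
# over box constraints for (lam, beta, h0, h1, h2), with beta restricted to (1, 2).
```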
Evaluating the tested algorithms according to the criterion of the value of the fitness function F (9), it is concluded that the DBOA algorithm turned out to be the most appropriate. The value of the fitness function for this algorithm is definitely and significantly lower than in the other cases. Also, the reconstruction errors of the parameters λ and h are the smallest for the DBOA algorithm. The second place belongs to the ACO algorithm. Based on the results, it can be seen that minimizing the fitness function is difficult, and the inverse problem is ill-posed. The value of the fitness function (9) is strongly dependent on changes in the values of the searched parameters. We compare the reconstructed temperature values at the measurement points with the measurement data afterwards. The Table 2 presents that the best results are obtained for the DBOA algorithm, and the worst for the BOA algorithm. Generally, these values are not high. Hence, it can be concluded that the reconstructed temperature is well matched to the measurement data, but also that the set problems are ill-posed and difficult to minimize. An important parameter evaluating the obtained results is matching the temperature values at the measurement points with the measurement data. Figures 4 and 5 show graphs of reconstructed temperature and graphs of measurement data for each of the algorithms. As can be seen, the reconstructed temperature values are well matched to the measurement data, despite the fact that the reconstructed values of the searched parameters λ and h differ significantly for considered algorithms. This proves that the graph of the objective function is flat in the vicinity of the exact solution. Thus, the considered inverse problem is difficult to solve. And the found solution (reconstructed parameter values) may contain significant errors. Conclusions The paper presents the inverse problem of heat flow consisting in the identifying parametric data of the model with given temperature measurements.The unknown parameters of the model are: thermal conductivity, order of fractional derivative and heat transfer function. To solve inverse problem, the function describing the error of the approximate solution should be minimized. Four meta-heuristic algorithms were used and compared, such as: ACO, DBOA, AO and BOA. DBOA turned out to be the best in terms of the value of the minimized function. In the case of DBOA, the value of the minimized function was 0.45, which is a satisfactory result. In the case of other algorithms, these values were much higher: ACO ∼ 273; BOA ∼ 2501 and AO ∼ 482. The DBOA also turned out to be the best in terms of errors in reconstruction model parameters and fitting reconstructed temperature to measurement data. In the case of DBOA, the error of reconstruction the temperature at the measurement points is equal to 0.0131, while for the other algorithms this error was of the order of 10 −1 . The considered problems turned out to be difficult to solve. The graph of the fitness function is very flat in the vicinity of the searched solution. Thus, even significant differences in the values of the reconstructed parameters have little impact on the differences in the values of the fitness function.
6,533.8
2023-02-01T00:00:00.000
[ "Engineering", "Physics", "Mathematics" ]
Sufficient conditions for certain subclasses of meromorphic p-valent functions In the present paper, we obtain certain sufficient conditions for meromorphic p-valent functions. Several corollaries and consequences of the main results are also considered. Let where U is an open unit disk. A function f(z) in Σ_p is said to be meromorphically p-valent starlike of order δ if and only if for some δ (0 ≤ δ < p). We denote by Σ*_p(δ) the class of all meromorphically p-valent starlike functions of order δ. Further, a function f(z) in Σ_p is said to be meromorphically p-valent convex of order δ if and only if for some δ (0 ≤ δ < p). We denote by Σ^k_p(δ) the class of all meromorphically p-valent convex functions of order δ. A function f(z) belonging to Σ_p is said to be meromorphically p-valent close-to-convex of order δ if it satisfies for some δ (0 ≤ δ < p). We denote by Σ^c_p(δ) the subclass of Σ_p consisting of functions which are meromorphically p-valent close-to-convex of order δ in U*. The object of the present paper is to obtain some sufficient conditions for meromorphic p-valent functions. In the proofs of our main results, we need the following Jack's Lemma [9]: Lemma 1.1. Let the (non-constant) function w(z) be analytic in U with w(0) = 0. If |w(z)| attains its maximum value on the circle |z| = r at a point z_0, then z_0 w'(z_0) = m w(z_0), where m is a real number and m ≥ n, where n ≥ 1. Main Results With the aid of Lemma 1.1, we derive the next two theorems. Proof: Let the function w be defined by Then, clearly, w is analytic in U with w(0) = 0. We also find from Suppose there exists a point z_0 ∈ U such that |w(z_0)| = 1 and |w(z)| < 1 when |z| < |z_0|. Then by applying Lemma 1.1, there exists m ≥ n such that Then by using (2.4) and (2.5), it follows that which contradicts the given hypothesis. Hence |w(z)| < 1, which implies or equivalently This completes the proof of Theorem 2.1. ✷ Theorem 2.2. Let the function f ∈ Σ_p satisfy the inequality where (α, β ∈ R, λ ≥ 1, p, n ∈ N). Proof: Let the function w be defined by Then by using (3.4) and (3.5), it follows that This evidently completes the proof of Theorem 2.2. ✷ Corollaries and Consequences In this concluding section, we consider some corollaries and consequences of our main results (Theorem 2.1 and Theorem 2.2).
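The starlike and convex conditions referred to above are usually written as follows; this is a standard formulation from the literature on meromorphically p-valent functions, and the paper's displayed definitions (including the close-to-convex condition and the series form of Σ_p) may be normalized differently.

```latex
% Standard formulations (assumed); 0 <= delta < p and U* = { z : 0 < |z| < 1 }.
\[
  f \in \Sigma_p^{*}(\delta) \iff
  -\operatorname{Re}\,\frac{z f'(z)}{f(z)} > \delta ,
  \qquad
  f \in \Sigma_p^{k}(\delta) \iff
  -\operatorname{Re}\!\left(1 + \frac{z f''(z)}{f'(z)}\right) > \delta ,
  \qquad z \in U^{*}.
\]
```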
604.8
2014-05-21T00:00:00.000
[ "Mathematics" ]
Learning Generalizable Light Field Networks from Few Images We explore a new strategy for few-shot novel view synthesis based on a neural light field representation. Given a target camera pose, an implicit neural network maps each ray to its target pixel’s color directly. The network is conditioned on local ray features generated by coarse volumetric rendering from an explicit 3D feature volume. This volume is built from the input images using a 3D ConvNet. Our method achieves competitive performances on real MVS data with respect to state-of-the-art neural radiance field based competition, while offering a roughly 50 times faster rendering. ABSTRACT We explore a new strategy for few-shot novel view synthesis based on a neural light field representation.Given a target camera pose, an implicit neural network maps each ray to its target pixel's color directly.The network is conditioned on local ray features generated by coarse volumetric rendering from an explicit 3D feature volume.This volume is built from the input images using a 3D ConvNet.Our method achieves competitive performances on real MVS data with respect to state-of-the-art neural radiance field based competition, while offering a roughly 50 times faster rendering. Index Terms-Novel view synthesis, neural light field, volumetric rendering INTRODUCTION The ongoing research in computer vision and artificial intelligence has long sought to enable machines to understand 3D given limited observations [1][2][3][4][5][6].This ability is in fact crucial for many downstream 3D based machine learning, vision and graphics tasks.Among these, novel view synthesis is a particularly prominent problem with numerous applications in free viewpoint and virtual reality, as well as image editing and manipulation. While most traditional approaches require depth information, coarse geometric proxies or dense samplings of the input views, deep learning based approaches rely on deep neural network's generalization abilities across view points and 3D scenes to achieve novel view synthesis from minimal visual input.In this context, the recently popularized implicit neural representations offer numerous advantages in modelling 3D shape [1] and appearance [2,4,7] in comparison to their traditional alternatives.In particular, Neural Radiance Fields [2] (NeRF), notably their generalizable versions (e.g.[5,7]), provide impressive novel view synthesis performances.However, the rendering of these methods requires sampling hundreds of points along each target pixel ray, and evaluating densities and view-dependent colors for all these points through a multi-layer perceptron (MLP), which increases the time and memory requirements. To reduce this complexity, we propose to use an implicit neural network operating in ray space rather than the 5D Euclidean × direction space, thus alleviating the need for per ray multi-point evaluation and physical rendering.For a given target pixel, an MLP (i.e.light field network) maps its ray coordinate and ray features to the color directly.Key to efficient generalization, and differently from [4], we build the ray features by computing and merging 3D convolutional feature volumes from the input images.These features are then rendered volumetrically into a coarse ray feature image, as illustrated in figure 2. 
Our method is trained end-to-end and evaluated using real multi-view stereo data (DTU [8]).We achieve competitive results in comparison to generalizable encoder-decoder NeRF models, while providing orders of magnitude faster rendering (see table 3). RELATED WORK We discuss existing work that is most relevant to few-shot novel view synthesis in this section. Early deep learning based approaches used 2D convolutional encoder-decoder architectures mapping the sparse inputs to the target images [9][10][11].These methods were outperformed by 3D aware convolutional approaches [12][13][14].Although many of these could learn to generate 360-degree views from very sparse inputs especially for synthetic central object data, most of them could not scale to high resolution images, complex scenes, and real data such as MVS datasets (DTU [8]). Implicit neural radiance fields (NeRF) [2] emerged later on as a powerful representation for novel view synthesis.It presented initially however a few limitations such as compu-Fig.2: Overview: Given an input image, a 3D feature volume is built with a ConvNet (first black cube) and re-sampled into a volume representing the target view frustum (red cube).Target feature volumes originating from different input views are aggregated using learnable weights and rendered with αcompositing.Finally the light field network maps a ray stemming from a target camera origin T to the corresponding pixel color of the target image. In particular, recent methods proposed to augment NeRFs with 2D [7,19,20] and 3D [5] convolutional features collected from the input images, allowing extra-scene generalization and feed-forward prediction.However, they still need to evaluate hundreds of query points per ray during inference, which makes them slow to render.Methods such as [17,21] try to alleviate NeRFs' rendering complexity by learning view independent radiance features.[21] combines it with a single ray-dependant specular component, while Yu et al. [17] predict radiance spherical harmonic coefficients instead.Furthermore, Sitzmann et al. [4] introduced a neural light field representation that maps rays i.e. target pixels directly to their colors without any need for physical rendering.The method was implemented in the auto-decoding setup, which means it requires test time optimization.It also uses a hypernetwork for conditioning, which is expensive to scale to bigger images in compute. Following [4], we explore here a tangent strategy to NeRFs, consisting in bypassing 3D implicit radiance modelling all together.Differently from [4] however, we propose a more efficient local conditioning mechanism for the light field network, which allows real scene generalization, and offers optimization-free inference. METHOD Given one or few images {I i } of a scene or an object with their known camera parameters, i.e. camera poses {R i , T i }, R i ∈ SO(3), T i ∈ R 3 , and intrinsics K ∈ R 3×3 , our goal is to generate images {I t } for novel target views , i.e. new camera poses {R t , T t }.A summary of our method is illustrated in figure 2. We present in the remaining of this section the components of the two stages of our method, namely the convolutional stage, and the neural light field network. 
Feature volume re-sampling Following seminal work (e.g.[5,13]), we build an explicit volume of features from an input image I i using a fully convolutional neural network E consisting of a succession of a 2D convolutional U-Net and several 3D convolutional blocks: where I i ∈ R H×W ×3 , H and W being the height and width of the input RGB image, and and C being respectively the height, width, depth, and the number of channels of the 3D feature volume. Using the the input feature volume F i aligned with the input image, we would like to create a feature volume F t/i aligned to the target image, that could be used subsequently to render a target feature image given the target camera pose {R t , T t }.Following the principles of volumetric rendering [2], in order to recreate a target image of dimensions H V × W V , we need to evaluate N points {p z u,v } N z=1 along each ray r u,v with direction d u,v , where u ∈ 1, H V and v ∈ 1, W V : where [2], z n and z f being the depth near and far bounds of the visual frustum.K is the intrinsic camera matrix.The target volume F t/i is obtained then as the resampling of input volume F i with trilinear interpolation, using points {p z u,v } aligned rigidly to the input camera coordinate frame: where F t/i ∈ R H V ×W V ×N ×C and {R i , T i } is the input camera pose.In practice, we normalize the aligned points' coordinates prior to sampling as F i is assumed to represent features in the input view normalized device coordinate (NDC) space. Feature Aggregation and rendering As different input views provide different information about the observed scene, we merge subsequently the 3D features obtained from the various inputs.We note that all target feature volumes {F k t/i } k provided by input images {I k i } k are represented in the same target view camera coordinate frame.Inspired by attention mechanisms, we propose to learn a 3D confidence measure per input view in the form of a weight volume W i ∈ R H V ×W V ×D .This volume is obtained as one of the channels of the input volume features W i = F i (1) (i.e.W t/i = F t/i (1)).After resampling the input features {F k i } k into the target ones {F k t/i } k , we use the resampled weights {W k t/i } k normalized with Softmax across the input views to compute a weighted average of the target volumes: where index k is over the number of input views, and F t ∈ R H V ×W V ×N ×C−1 .This aggregation allows our method to use an arbitrary number of input views at both training and testing.Following volumetric rendering [2], we generate a target feature image F for a given target view differentiably using α-compositing of the target feature volume F t along the depth dimension.We assume one of the target feature channels to represent volume density σ = F t (1) ∈ R H V ×W V ×D .We recall that the dimensions of tensor F t span the pixels of the target feature resolution H v × W v in the first two dimensions, and N points sampled along each ray for the third dimension.The rendered target feature image then writes: where T represents transmittance, δ z = t z+1 − t z and F ∈ R H V ×W V ×C−2 .In order to reduce the memory cost and increase the rendering speed of our method, the size of the rendered feature image is chosen to be lower than the size of the target image resolution, i.e.H V = H/4 and W V = W/4. 
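A minimal sketch of the aggregation (Equation 5) and α-compositing (Equation 6) steps just described, written with PyTorch tensors. This is not the authors' code: the choice of which channel carries the per-view confidence weight and the density, and the ReLU on the density, are assumptions.

```python
import torch

def aggregate_and_render(volumes, deltas):
    """Softmax-weighted view aggregation followed by alpha-compositing.

    volumes : (K, Hv, Wv, N, C) target-aligned feature volumes, one per input view;
              channel 0 is used here as the learned confidence weight W.
    deltas  : (N,) distances between consecutive depth samples along each ray.
    Returns the rendered feature image of shape (Hv, Wv, C-2).
    """
    weights = torch.softmax(volumes[..., 0], dim=0)              # (K, Hv, Wv, N), Eq. (5)
    feats = (weights.unsqueeze(-1) * volumes[..., 1:]).sum(0)    # (Hv, Wv, N, C-1)

    sigma = torch.relu(feats[..., 0])                            # density channel
    alpha = 1.0 - torch.exp(-sigma * deltas)                     # (Hv, Wv, N)
    # Transmittance: probability that a ray reaches sample z without being absorbed.
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[..., :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[..., :-1]
    render_w = (alpha * trans).unsqueeze(-1)                     # compositing weights, Eq. (6)
    return (render_w * feats[..., 1:]).sum(dim=-2)               # (Hv, Wv, C-2)
```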
Neural Light Field The convotulional rendered features produce a low resolution feature image representative of all rays making up the target view.We propose to learn a light field function f to upsample and refine these first stage results.Given a ray r u,v with direction d u,v corresponding to the target image pixel coordinates (u, v), with (u, v) ∈ 1, H × 1, W , we encode rays using Plücker coordinates similarly to Sitzmann et al. [4]: where r u,v ∈ R 6 .This representation ensures a unique ray encoding when the origin T t moves along direction d u,v .We recall that the expression of d u,v as a function of the target camera pose {R t , T t } can be found in equation 2. The feature F u,v of a ray r u,v at the final image resolution H × W is obtained from the lower resolution rendered feature image F ∈ R H V ×W V ×C−2 through a learned upsampling.Specifically, the rendered feature image undergoes two successive 2D convolutions and up upsamplings to produce a feature image at the desired resolution F ∈ R W ×H×C−2 .The final target RGB image I t = {c u,v } u∈ 1,H ,v∈ 1,W is predicted from the concatenation of the ray coordinate and its feature with an MLP accordingly: Notice that while convolution equipped NeRF [2] methods (e.g.[5,7]) require querying H × W × N 3D points through their implicit neural radiance fields, our light field network only needs to evaluate H × W rays, which enables our method to train potentially faster, and render orders of magnitude faster compared to [5,7] (see Table 3). Training Objective Our model is fully differentiable and trained end-to-end.We optimize the parameters of the convolutional network E and the light field network f jointly, by back-propagating a combination of a fine loss L r and two coarse losses Lr and Ld : L r and Lr are the L2 reconstruction losses of the final light field predicted image I t and the first stage prediction Ĩt respectively: We additionally regularize the gradient of the low resolution depth image dt rendered from the density volume σ of the first stage thusly: where T and α are detailed in equation 6. Implementation details We implemented our method with the PyTorch framework on a Quadro RTX 5000 gpu.We optimize with the Adam solver using learning rate 10 −4 in training and 10 −5 in fine-tuning.The depth of the convolutional feature volume is set to D = 32, and the number of channels C = 32. Comparison on DTU dataset We demonstrate the capability of our method to generate novel views from sparse input views using the DTU benchmark [8].Following the PixelNeRF [7] experimental settings, the data is split into 88 training scenes and 16 testing scenes.Each scene contains 49 views, including 4 views for testing as suggested by MVSNeRF [5] and GeoNeRF [22].Our training does not require mask supervision, thus all evaluation are performed on full resolution image(400 × 300) rather than only foreground. For quantitative comparison, we report the peak signal-tonoise ratio (PSNR), structural similarity (SSIM) and learned perceptual image patch similarity (LPIPS) reconstruction metrics in Table 1 for 3 and 6 view inputs averaged across all testing scenes.We report numbers of PixelNeRF(PN) and MVSNeRF(MN) from RegNeRF [23].We also show qualitative comparisons for 6 view inputs in figure 3.While our method is robust and competitive with NeRF based counterparts, it seems to lack some high frequency details.We defer this limitation to future work. 
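Returning to the ray parameterization of the method, the Plücker encoding of Equation (7) can be sketched as follows. The pinhole pixel-to-direction convention and the assumption that R_t maps camera to world coordinates are ours; only the (direction, moment) structure of the 6-vector is taken from the description above.

```python
import numpy as np

def plucker_rays(K, R_t, T_t, H, W):
    """Encode every pixel ray of a target camera with Plücker coordinates.

    Each ray is represented by the 6-vector (d, T x d), which stays unchanged
    when the origin T slides along the ray direction d.
    """
    u, v = np.meshgrid(np.arange(W), np.arange(H))                # pixel grid, shape (H, W)
    pix = np.stack([u + 0.5, v + 0.5, np.ones_like(u)], -1)       # homogeneous pixel coords
    cam_dirs = pix @ np.linalg.inv(K).T                           # camera-space directions
    d = cam_dirs @ R_t.T                                          # world-space directions (R_t assumed cam-to-world)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)
    m = np.cross(np.broadcast_to(T_t, d.shape), d)                # moment vector T x d
    return np.concatenate([d, m], axis=-1)                        # (H, W, 6)
```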
Per-scene fine-tune results Table 2 shows a quantitative comparison of our method with the recent few-shot novel view synthesis state of the art, with test time optimization. We outperform all methods in the PSNR and SSIM metrics, including the conditional baselines PixelNeRF (PN) [7] and MVSNeRF (MN) [5], and the unconditional baselines DietNeRF (DN) [15] and RegNeRF (RN) [23]. Figure 4 shows a qualitative comparison to MVSNeRF and PixelNeRF with 6 input views after fine-tuning. We obtain overall comparable performances with the generalizable methods [5,7]. We recall again that the competing methods here require renderings that are orders of magnitude slower than ours. Rendering time comparison As shown in Table 3, compared with PixelNeRF [7] and MVSNeRF [5], our method requires less inference time on the DTU dataset with 3 input views. Ablation We propose an ablative analysis showing the importance of the light field stage in our method. Specifically, we disable the latter (ours w/o lf), and we render the final image directly from the target-view-aligned convolutional feature volume. Table 4 shows numerical comparisons for 3 and 6 input views on DTU [8], and Figure 5 shows qualitative comparisons for 6 input views. CONCLUSIONS We proposed a method for generating novel views from few input calibrated images with a deep neural network that predicts in a single forward pass. We learn an implicit neural light field function that models ray colors directly. In comparison to [4], we proposed a more efficient local ray conditioning and an optimization-free inference. Our method outperforms the baselines and provides competitive performances compared to locally conditioned radiance fields (e.g., [5,7]), while being roughly 50 times faster at rendering. Fig. 1: Our method enables fast generation of novel views from sparse input images without 3D supervision in training. We generate the above novel views for objects (ShapeNet dataset) and a scene (DTU dataset) never seen at training. Fig. 3: Qualitative comparison without test time optimization from 6 input views on the DTU dataset [8]. Fig. 4: Qualitative comparison with test time optimization from 6 input views on the DTU dataset [8]. Table 1: Quantitative comparison of reconstructed images on the DTU dataset [8] without test time optimization. Table 2: Quantitative comparison of reconstructed images on the DTU dataset [8] with test time optimization. Table 3: Comparison of rendering complexity.
3,620.4
2022-07-24T00:00:00.000
[ "Computer Science" ]
Copula modeling for discrete random vectors Copulas have now become ubiquitous statistical tools for describing, analysing and modelling dependence between random variables. Sklar’s theorem, “the fundamental theorem of copulas”, makes a clear distinction between the continuous case and the discrete case, though. In particular, the copula of a discrete random vector is not fully identi able, which causes serious inconsistencies. In spite of this, downplaying statements may be found in the related literature, where copula methods are used for modelling dependence between discrete variables. This paper calls to reconsidering the soundness of copula modelling for discrete data. It suggests a more fundamental construction which allows copula ideas to smoothly carry over to the discrete case. Actually it is an attempt at rejuvenating some century-old ideas of Udny Yule, who mentioned a similar construction a long time before copulas got in fashion. Introduction In Yule [53], one can read: "Two association tables that are not directly comparable owing to the di erent proportions of A's and B's in the data from which the tables were compiled may be rendered directly comparable by multiplying the frequencies in rows and columns by appropriate factors, [...] reducing the original tables to some arbitrarily selected standard form" (p. 588). The standard form that he recommends is the table whose margins have been made uniform. Likewise, in their extensive study of association coe cients in ( × )contingency tables, Goodman and Kruskal [21, p. 747] mentioned transforming all marginals to / for facilitating interpretation. Later, Mosteller [36] developed: "We might instead think of a contingency table as having a basic nucleus which describes its association and think of all tables formed by multiplying elements in rows and columns by positive numbers as forming an equivalence class -a class of tables with the same degree of association" (p. 4). And: "we might especially arrange the table to have uniform margins on each side in the case of a two-way table so as to get a clearer look at the association that is actually occurring" (p. 6). If one identi es two-way contingency tables with bivariate discrete distributions, then the above ideas have much in common with copulas: one tries to capture the dependence structure between the two variables apart from the marginal distributions by making these into uniforms, hence uninformative. The observation is notable, as it has been known at least since Marshall [32] that the notion of copula ts poorly in the discrete framework. Here 'copula' refers to the classical de nition [7, De nition 1. 3.1] which, in the bivariate case, reads: Such copulas naturally arise in statistical modelling through the celebrated Sklar's theorem [50]: Theorem 1.1 (Sklar). Let F XY be the distribution function of a bivariate random vector (X, Y), with marginal distribution functions F X and F Y . Then there exists a copula C such that, for all (x, y) ∈ R , F XY (x, y) = C(F X (x), F Y (y)). (1.1) If F X and F Y are continuous, then C is unique; otherwise C is uniquely determined on Ran F X × Ran F Y only. Conversely, for any univariate distribution functions F X and F Y and any copula C, the function F XY de ned by (1.1) is a valid bivariate distribution function with marginals F X and F Y . The popularity of copulas for dependence modelling largely follows from quotes like 'Copulas allow us to separate the e ect of dependence from e ects of the marginal distributions'. 
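Before continuing, the 'Conversely' part of Theorem 1.1 can be illustrated with a small simulation. This is a hedged sketch with assumed margins and a Gaussian copula, not an example taken from the paper: gluing an exponential and a normal margin with a Gaussian copula of parameter rho yields a valid bivariate distribution whose margins are exactly F_X and F_Y, while the dependence is governed by the copula alone.

```python
import numpy as np
from scipy.stats import norm, expon

rng = np.random.default_rng(0)
rho = 0.7
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=100_000)
u, v = norm.cdf(z[:, 0]), norm.cdf(z[:, 1])       # (U, V) distributed as the Gaussian copula C_rho
x, y = expon.ppf(u), norm.ppf(v, loc=2, scale=3)  # X = F_X^{-1}(U), Y = F_Y^{-1}(V)

print(np.mean(x), np.std(y))   # approx. 1 and 3: the chosen margins are preserved
# Any copula-based dependence measure of (x, y), e.g. Spearman's rho, depends on C_rho only.
```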
Clearly, if C is unique, then it unequivocally characterises how the two marginals F X and F Y interlock for producing the joint behaviour of (X, Y), while staying ignorant of what those marginals are. For instance, if X and Y are independent (X ⊥ ⊥ Y) and if C is unique, then from (1.1) C must be the 'independence copula' (or 'product copula') 2) and this regardless of F X and F Y . It is often overlooked that the situation is this appealing only in the case of continuous margins. When there is no one-to-one correspondence between the joint distribution F XY and the copula C, i.e., for X and/or Y discrete, the above argument falls apart. Instrumental to copula ideas is the vector (F X (X), F Y (Y)). If X and Y are both continuous, then, through 'Probability Integral Transform' (hereafter: PIT), F X (X) and F Y (Y) have uniform distributions U [ , ] , and the copula C is their joint distribution. Clearly one can plug any increasing transformations of X and/or Y into PIT with the same output. Hence copulas are invariant under increasing transformations of the margins [37,Theorem 2.4.3], that is, 'margin-free'. Any copula-based dependence measure, such as Kendall's or Spearman's correlations [37,Chapter 5], is then margin-free as well. Now, in the case X and/or Y discrete, Ran F X and/or Ran F Y are just countable subsets of [ , ]. The distributions of F X (X) and/or F Y (Y) are thus not U [ , ] , and their joint distribution cannot be a copula as described by De nition 1.1. It is actually a subcopula, i.e., a function satisfying the main structural properties of copulas but whose support is only a strict subset of I containing 0 and 1 [37, De nition 2.2.1]. Any such subcopula can be extended into a copula [37,Lemma 2.3.5]: the gaps in I \(Ran F X × Ran F Y ) can be lled in a way preserving the properties of copulas; however there are uncountably many ways of doing so and C in (1.1) is not identi able. Such unidenti ability does cause serious inconsistencies, which [18] systematically investigated following preliminary warnings in [32], and questions the soundness of copula modelling for discrete data. This is discussed in Section 2, while a further invitation to calling current practice into question is o ered in Section 3. In particular, we claim that the concept of 'copula' should be given a more fundamental meaning, not limited to De nition 1.1 but agreeing with it in the continuous case. The key is to apprehend copulas from a di erent perspective, a detailed discussion of which is given in Section 4. Then we propose a construction which, rejuvenating Yule's, Goodman and Kruskal's and Mosteller's conceptions, allows a seamless extension of the main ideas of copula theory to the discrete case; rst in the case of a bivariate Bernoulli distribution (Section 5) and then gradually generalising it to bivariate discrete distributions with nite support (Section 6) and nally with in nite support (Section 8). Along the way, we bridge the gap between the copula literature and methods of contingency tables analysis. In particular, the role of the odds-ratio is reinforced and Yule's colligation coe cient establishes itself as the appropriate dependence parameter in a discrete copula-like setting; algebraic and geometric representations of contingency tables are leveraged; parallels between discrete copula modelling and the problem of matrix scaling are drawn; and a novel visual display of bivariate discrete distributions ('confetti plot') is proposed. 
New parametric models of dependence between discrete random variables are also introduced in Section 7. Copulas on discrete distributions Most reasons which make copula modelling attractive and e ective in the continuous case, break up in the discrete case: "everything that can go wrong, will go wrong" [10, p. 641]. A major downside is that copulas lose their margin-free nature when applied to discrete random vectors -whereas the whole copula methodology came into being in the rst place for exploiting the bene ts of margin-freeness [47]. In particular, in the discrete case, copula-based measures of dependence (e.g. Kendall's or Spearman's) are margin-dependent [Proposition 2.3 in 18, 32, Section 4.2]. Worse, a given copula model per se may or may not be intrinsically meaningful or even compatible with some marginals [12,54]. All in all, the case of discrete random vectors seems indeed misaligned with the very essence of the whole copula methodology. A telling example is the following. Let X ∼ Bern(π X ), Y ∼ Bern(π Y ) for two probabilities π X , π Y ∈ ( , ), and X ⊥ ⊥ Y (independent). Then, for reconstructing the bivariate Bernoulli F XY it is enough to plug in (1.1) any copula C such that as it is seen by inspection. Indeed Sklar's theorem states that C is only identi able on Ran }, but given that C is xed by trivial constraints along the sides of I, only what happens at ( − π X , − π Y ) may re ect (in)dependence. The product copula (1.2) naturally ful ls (2.1), but so does a wide spectrum of other copulas of miscellaneous shapes whose only common trait is to go through − π X , − π Y , ( − π X )( − π Y ) ∈ ( , ) -see Appendix for some illustration. Any conclusion drawn from such a copula-based bivariate Bernoulli model is highly questionable, as its central element C may interchangeably characterise independence or dependence of drastically di erent strength and nature, yet compatible with (2.1). It is true that, if one takes two discrete variables X and Y and binds them together through a copula C that we have picked, the 'Conversely'-part of Sklar's theorem guarantees that one has built a valid bivariate discrete distribution F XY with the right marginals. But there is no special link between C and F XY . For instance, [12] explains how the bivariate Bernoulli distribution on which [18, Example 13] tted a FGM copula could have been obtained all the same from a Plackett or an Ali-Mikhail-Haq copula, or from the reader's 'peculiar favourite copula family' [12, p. 128], making it futile to mention any speci c copula model at all in this case. Transformations of the margins to uniforms The root of all trouble is that it is not true that F X (X) ∼ U [ , ] for X discrete. Though, the U [ , ] -distribution of F X (X) and F Y (Y) in the continuous case is clearly what prompted De nition 1.1, and the widespread belief that copula methods are based on transformations of the margins into uniforms. Thus the very foundation of copula theory seems un t for the discrete framework. Nonetheless, in order to make the discrete case forcibly t into the classical copula framework, a common practice has been to 'jitter' the original discrete variables with some uniform random noise. The so-created arti cial continuous random vector has a unique copula, known as the checkerboard copula C . Arguably, C retains some of the dependence structure of the original discrete vector [6,18,38,45,46], and is a valid copula extension of the underlying subcopula [11,19,20]. 
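This 'root of all trouble', and the jittering construction just mentioned, can be checked numerically. The following is a minimal sketch with an assumed Bernoulli margin.

```python
import numpy as np

# For a discrete X, F_X(X) is not uniform on [0, 1].
rng = np.random.default_rng(1)
p = 0.3
x = rng.binomial(1, p, size=100_000)          # X ~ Bern(0.3)
Fx = np.where(x == 0, 1 - p, 1.0)             # F_X(X) takes only the two values 0.7 and 1.0
print(np.unique(Fx, return_counts=True))      # two atoms -- clearly not U[0, 1]

# Jittering ("checkerboard" construction): spread each atom uniformly over the
# jump of F_X; the result is U[0, 1], but the copula extension this induces is
# only one choice among many.
v = rng.random(x.size)
Fx_minus = np.where(x == 0, 0.0, 1 - p)       # left limit F_X(x-)
u = Fx_minus + v * (Fx - Fx_minus)
print(u.mean(), u.var())                      # approx. 1/2 and 1/12
```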
However, C is just a particular choice -and not always the most natural one -among all the copulas satisfying (1.1), and by itself does not solve any of the problems exposed above [45, Remark 1.5(a)]. [35, Section 4] explicitly asked: 'Why does one transform the marginals to a uniform distribution?', and failing to nd any compelling mathematical answer (among other things) lead him to reject the idea of copulas altogether. Yet, it has been widely acknowledged since then [10], but even long before [24, p. 69], that the choice of transforming the margins to uniforms is driven by convenience only. Now, given that transforming to uniform is precisely the stumbling block of copula methods for discrete variables, one may sensibly ask: why stick to an inessential choice initially made for convenience only, if it is no more convenient at all in the situation of interest? Sklar's theorem establishes that the valuable information for understanding the joint behaviour of (X, Y)in particular their dependence -can be captured by some incidental copula C evaluated on Ran F X × Ran F Y . A naive interpretation of this puts C in the foreground, urging us to 'guess' what it might be; whereas it is actually of no importance, making the value of such guesswork unclear. In fact, there is no reason to extend the unique subcopula of a discrete vector to a copula, and any justi able analysis of the underlying dependence structure should be undertaken at the subcopula level, or equivalent. Consider again the bivariate Bernoulli case. What Sklar's theorem fundamentally says is that the whole dependence structure can be described by one single number; see the lines following (2.1). Naturally this agrees with any basic analysis of a ( × )-contingency table, where it is well-known that the dependence/association in the table is captured by a single degree of freedom -cf. the χ -test. It is not clear what would be the bene t of playing on a whole bivariate function C for modelling or characterising that dependence, knowing that one single number de nes it entirely. [8] argued that this number should be the odds-ratio because it is 'margin-free' (he did not use that term, though, but see his Corollary 2). [41] calls this 'variationindependence' between the marginal parameters (π X , π Y ) and the odds ratio ω, and further shows (his Theorem 6.3) that any such margin-free dependence parameter must be a one-to-one function of ω. In Section 5, it will indeed be shown that there is a one-to-one correspondence between the odds ratio and the value C( − π X , − π Y ) singled out by Sklar's theorem in this situation. It is also easy (Section 5.6) to show that, given π X and π Y , the full bivariate pmf can be reconstructed from the value of ω only. Hence the marginal distributions coupled with the margin-free dependence parameter ω unequivocally de nes the bivariate distribution of interest. Clearly, the single number ω entirely ful ls what we would like the role of a copula to be, while by no means being related to De nition 1.1. Transformation to uniform marginals is thus clearly not a necessary step for making sense of the main ideas behind copula modelling. Indeed, in Section 4, an alternative perspective on copulas is given, not relying explicitly on PIT. Avoiding PIT allows the concept to be readily adapted to the discrete case as well, while keeping all the pleasant properties of usual copula modelling. Copulas as equivalence classes of dependence Let (X, Y) be a continuous vector with distribution F XY . 
For simplicity, assume that X and Y are both supported on [ , ] (without loss of generality, one can imagine that we observe X ∈ R and Y ∈ R on the inverse logit scale, and copulas are invariant to monotonic transformations of the margins in any case) and that F XY admits a density f XY with marginal densities f X and f Y on the unit square I. Let F = {f : I → R, s.t. f ≥ , I f = }, the set of all bivariate probability densities on I, and S the set of all di erentiable strictly increasing functions from [ , ] to [ , ]. See that (S, •), where • denotes function composition, is a group, and so is (S × S, •.), where •. denotes componentwise composition: For any (Φ, Ψ) ∈ S × S, de ne g Φ,Ψ : F → F as , (u, v) ∈ I. Clearly g Φ,Ψ (f XY ) is the joint density of (Φ(X), Ψ(Y)), i.e., the version of f XY whose marginal distributions have been individually distorted by Φ and Ψ. The class [f XY ] contains all those 'marginally distorted' densities. What these densities have in common, compared to other classes, can then only be the constituent of f XY 'between' the margins, that is, what 'glues' them together. This is exactly the de nition of 'dependence': 'the information on the law of a random vector which remains to be determined once the marginal laws have been speci ed' [52]. Each equivalence class in F is thus representative of a certain dependence structure. The elements of F are really those which deserve the name 'copula', as they genuinely are the links ('copulae' in Latin) which cement marginals inside bivariate densities. To avoid confusion with De nition 1.1, though, we will call the element [f ] ∈ F the 'nucleus' of f to align with [36]; see Section 1. A nucleus [f ] is, in some sense, a bivariate density which has been entirely stripped from its marginals. Now, the abstract concept of a bivariate density with no marginals is di cult to visualise. Hence, for understanding the inner dependence structure of the vector (X, Y), one may want to exhibit a simple re-embodiment of [f XY ] into a proper density by pasting on it some default marginals. If one targets uniform marginals, then, by PIT, this is the element in which we recognise the density c of the copula C of F XY . Clearly the choice of uniform margins for reembodying [f XY ] into a proper density is totally arbitrary. It seems just sensible, for interpretation and visualisation purpose, to keep things as uncomplicated as possible. That said, uniforms and/or PIT do not play any role when de ning the concept of nucleus, which is really what copulas are all about. The construction can thus be adapted mutatis mutandis to discrete distributions, as detailed below. The Bernoulli copula . The bivariate Bernoulli distribution Consider again the case of two Bernoulli random variables X ∼ Bern(π X ) and Y ∼ Bern(π Y ) sharing (potentially) some dependence. The corresponding bivariate Bernoulli distribution, say p, is typically presented under the form of a ( × )-table: degenerate table). De ne P × the set of all such bivariate Bernoulli probability mass functions, where each p ∈ P × is identi ed to the matrix Now, as x,y pxy = , one can actually identify P × to the 3-dimensional simplex, here a regular tetrahedron whose vertices are the degenerate distributions d , d , d and d ; see Figure 5.1. We will call this tetrahedron the Bernoulli tetrahedron. Note that, as we assume < π X , π Y < , the 4 vertices and the edges d d , d d , d d and d d are not admissible elements of P × . . 
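Before developing the Bernoulli case, the claim of Section 4 that all marginally distorted versions of f_XY share the same dependence can be verified numerically. The sketch below uses an assumed bivariate normal sample and Spearman's rho as the copula-based measure; the distortions Φ and Ψ are arbitrary increasing functions.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=50_000)
x, y = z[:, 0], z[:, 1]

phi = lambda t: 1 / (1 + np.exp(-3 * t))      # an increasing distortion Phi
psi = lambda t: np.exp(t)                     # an increasing distortion Psi
print(spearmanr(x, y)[0])                     # roughly 0.58 for this sample
print(spearmanr(phi(x), psi(y))[0])           # identical value: the nucleus is unchanged
```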
Marginal transformations Mimicking Section 4, we seek to isolate the constituent of p which remains invariant to 'monotonic distortion' of the margins. In the continuous case, by such a distortion was meant the vector (Φ(X), Ψ(Y)), whose density (4.1) can take miscellaneous shapes. Here, this 'transformation trick' does not work: for X ∼ Bern(π_X), Φ(X) remains the same two-point distribution (1 − π_X, π_X) (only the 'labels' change), and the same holds for Ψ(Y). Now, one can see (4.1) from a more basic perspective, considering the group action g_Φ,Ψ as just a mechanism reassigning the initial probability mass differently over I. Under the effect of g_Φ,Ψ, the point (u, v), initially assigned the probability f_XY(u, v) du dv, would now get the probability f*_XY(u, v) du dv, where f*_XY is given by (4.1) - in a sense, we regard this as the effect of a transport map taking the initial probability measure onto another one, as opposed to the initial interpretation as the resulting probability measure after transformation of the random variables. Note that the normalisation by Φ′(Φ⁻¹(u)) Ψ′(Ψ⁻¹(v)) just guarantees ∫ f*_XY = 1. This more fundamental interpretation of (4.1) carries over to the Bernoulli framework. Indeed, for some ϕ > 0, define a distorted distribution for X as Bern(π*_X), where π*_X = ϕπ_X/(1 − π_X + ϕπ_X). Clearly, for ϕ > 1, π*_X > π_X: some of the probability initially assigned to X = 0 has been transferred to the next value, X = 1; and reversely for ϕ < 1. Like above, the factor 1/(1 − π_X + ϕπ_X) is just a normalisation, guaranteeing π*_X ∈ [0, 1] for all ϕ > 0. When the margin Y is similarly distorted, the initial joint probability distribution is re-assigned through table (5.1) by a similar process of transferring probability weight between adjacent cells. The organisation of the cells, in particular their order along each margin, is not altered: the marginal distortions are monotonic in that sense. In effect, the distorted table is obtained by multiplying the rows and columns of (5.2) by positive values (and renormalising). This totally concords with what [53] and [36] urged; see Section 1. Specifically, define D⁽¹⁾_2×2 the set of all (2 × 2) diagonal matrices whose entry (1, 1) is equal to 1 and whose other diagonal entry is positive, and for any ϕ, ψ > 0, set Φ = diag(1, ϕ) and Ψ = diag(1, ψ). Equipped with the matrix multiplication ·, (D⁽¹⁾_2×2, ·) is a group, and so is (D⁽¹⁾_2×2 × D⁽¹⁾_2×2, ·.), where ·. is componentwise matrix multiplication. Similarly to Section 4, g_Φ,Ψ is a group action of (D⁽¹⁾_2×2 × D⁽¹⁾_2×2, ·.) on P_2×2. Any p ∈ P_2×2 induces an orbit [p], and the quotient set of P_2×2 under this action is the set of all those equivalence classes. Any such [p] is a class of bivariate Bernoulli distributions (5.2) sharing the same 'core' structure once we strip them of their margins. Bernoulli nucleus and Bernoulli copula probability mass function For any distribution p ∈ P_2×2, define the odds ratio ω(p) = p_00 p_11/(p_01 p_10); one can check that p ∼ p* if and only if ω(p) = ω(p*), establishing that the elements of the quotient set are again classes of equivalent dependence. We call [p] the nucleus of p, which contains the full information about how the Bernoulli marginals are glued together inside p. In the Bernoulli tetrahedron, the sets of distributions p ∈ P_2×2 sharing common odds ratios ω ∈ (0, ∞) are doubly-ruled surfaces corresponding to sections of hyperboloids of one sheet [15, Section 3]. E.g., Figure 5.1 shows the surface corresponding to ω = 1, that is, all bivariate Bernoulli distributions for which X ⊥⊥ Y. For interpretation purposes, it may be insightful to define a representative of [p], that is, a particular 'simple' bivariate Bernoulli distribution with odds ratio ω(p).
Again, a natural choice is the element of [p] with uniform margins, as [21] suggested (Section 1). Simple algebra reveals that, for p ∈ P_2×2 such that ω(p) = ω ≥ 0, there is a unique element in [p] with Bernoulli(1/2)-margins, which is the table p̄ with p̄_00 = p̄_11 = √ω/(2(1 + √ω)) and p̄_01 = p̄_10 = 1/(2(1 + √ω)). (5.5) This representative is akin to the copula density in the continuous case, hence we call p̄ the Bernoulli copula probability mass function (copula pmf). Note that the values in (5.5) were mentioned in [4, 11.2-14], while a similar 'copula' was investigated in [51]. In the Bernoulli tetrahedron, all p ∈ P_2×2 with the same marginal distributions must lie on a straight line orthogonal to two opposite edges of the tetrahedron [15, Section 4]. Denote w and m the limiting elements of (5.5) obtained for ω = 0 and ω → ∞, respectively. Given that X ⊥⊥ Y ⟺ ω = 1, the independence Bernoulli copula pmf is evidently π, the table assigning probability 1/4 to each of the four cells, and clearly X ⊥⊥ Y ⟺ p̄ = π. This can be contrasted with the observation made in Section 2 that a continuous copula C gluing two independent Bernoullis as in (1.1) need not be the independence copula. Structural zeros The limit values ω = 0 and ω = ∞ occur when (at least) one of the entries of (5.2) is 0. The value ω = 0 arises in distributions in which p_00 = p_11 = 0 (case (i)) or in which exactly one of p_00 and p_11 is 0 (cases (ii)), the ×'s in (5.7) representing non-zero elements. In the terminology of [29], case (i) corresponds to absolute association, whereas cases (ii) correspond to complete association. Clearly (i) represents 'perfect dependence' (negative, in this case), but it is not that clear for (ii) as there is no one-to-one correspondence between X and Y. Therefore, the fact that the odds ratio ω = 0 and the corresponding copula pmf w do not distinguish between (i) and (ii) may appear puzzling. Yet it is easily seen that 'absolute association' (i) is only possible if π_X + π_Y = 1. Whenever π_X + π_Y ≠ 1, any sense of 'perfect dependence' automatically translates into 'complete association'. Being a marginal feature, the distinction between (i) and (ii) must be ignored by the copula pmf. In the Bernoulli tetrahedron, w lies in the closure of the faces corresponding to cases (ii), as one can approach w arbitrarily closely while staying on either of the two faces. In these critical cases (ii), it is thus necessary to extend the nuclei of the corresponding distributions to their closure for them to include the copula pmf w. Further characterisation will be given in more generality in Section 6. The case ω = ∞ and p̄ = m is treated in perfect analogy. Yule's colligation coefficient Suppose that (U, V) is a bivariate Bernoulli vector with joint pmf (5.5) for some ω ≥ 0. One can check that Pearson's correlation between U and V is Υ = (√ω − 1)/(√ω + 1), which is exactly Yule's 'colligation coefficient' [53, pp. 592-593]. Hence Υ can be regarded as the Bernoulli analogue of Spearman's ρ in the sense that it is Pearson's correlation computed after copula transformation. In fact, as √ω = (1 + Υ)/(1 − Υ), the copula pmf p̄ (5.5) can be written under the even simpler form p̄_00 = p̄_11 = (1 + Υ)/4 and p̄_01 = p̄_10 = (1 − Υ)/4. The effect of Υ on p̄ is thus linear in nature. The value of Υ acts as a ruler along the segment wm in Figure 5.1: from Υ = −1 at w to Υ = 1 at m, via Υ = 0 at π. Kendall's τ corrected for the occurrence of ties, i.e. 'τ_b' [18, Definition 3], is, for the bivariate Bernoulli case, τ_b = (p_00 p_11 − p_01 p_10)/√(π_X(1 − π_X)π_Y(1 − π_Y)). This, computed on the copula pmf (5.5), reduces down to τ_b = Υ again. The above observations suggest that Yule's Υ is a very natural, if not the canonical, dependence parameter in the (2 × 2)-table framework. This is noteworthy as, although it was originally Yule's preferred association measure [53, p. 592], this coefficient has by now largely passed into oblivion.
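As a concrete illustration of (5.5) and of Yule's colligation coefficient, the following short Python sketch (an illustration only; numpy is assumed available and the function names are not from the paper) builds the 2 × 2 copula pmf for a given odds ratio ω, checks that its margins are Bernoulli(1/2), and verifies numerically that Pearson's correlation of the resulting table equals (√ω − 1)/(√ω + 1).

```python
import numpy as np

def bernoulli_copula_pmf(omega):
    """2x2 copula pmf of (5.5): Bernoulli(1/2) margins and odds ratio omega."""
    s = np.sqrt(omega)
    diag = s / (2.0 * (1.0 + s))       # cells (0,0) and (1,1)
    off = 1.0 / (2.0 * (1.0 + s))      # cells (0,1) and (1,0)
    return np.array([[diag, off], [off, diag]])

def yule_colligation(pmf):
    """Pearson correlation of the two Bernoulli components of the 2x2 table `pmf`."""
    x = np.array([0.0, 1.0])
    px, py = pmf.sum(axis=1), pmf.sum(axis=0)
    ex, ey = px @ x, py @ x
    exy = x @ pmf @ x                  # E[XY] = p_11
    sx = np.sqrt(px @ x**2 - ex**2)
    sy = np.sqrt(py @ x**2 - ey**2)
    return (exy - ex * ey) / (sx * sy)

omega = 4.0
pbar = bernoulli_copula_pmf(omega)
print(pbar.sum(axis=0), pbar.sum(axis=1))   # both margins equal (1/2, 1/2)
print(yule_colligation(pbar))               # (sqrt(4) - 1)/(sqrt(4) + 1) = 1/3
```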
Construction of arbitrary bivariate Bernoulli distributions with given copula pmf Analogously to the 'Conversely' part of Theorem 1.1, one can wonder if it is always possible to construct a bivariate Bernoulli distribution p whose marginals are Bern(π_X) and Bern(π_Y) (0 < π_X, π_Y < 1) and whose dependence structure is prescribed by a certain Bernoulli copula (5.5). The answer is affirmative. The value of p_11 compatible with the prescribed margins and odds ratio ω is given in (5.9) (see also Figure 5.1 for the geometric picture), and the other values follow by substitution. In particular, p_00 = 1 − π_X − π_Y + p_11. As p_11 in (5.9) is an increasing function of ω, so is p_00. At the same time, 'the value C(1 − π_X, 1 − π_Y) singled out by Sklar's theorem', described below (3.1), is precisely p_00, establishing the one-to-one correspondence between C(1 − π_X, 1 − π_Y) and ω. Characterising the dependence in a bivariate Bernoulli vector by ω, or any monotonic function thereof, is thus totally consistent with Sklar's theorem. For ω = 0, we must have either p_00 = 0 or p_11 = 0 (or both). By obvious substitution, one gets the corresponding distributions according to whether π_X + π_Y = 1, π_X + π_Y > 1 or π_X + π_Y < 1, respectively. These distributions correspond to the lower Fréchet bound for the given margins. Consider now a general bivariate discrete vector (X, Y) with finite supports S_X and S_Y of respective cardinalities R and S. Let p be its joint probability mass function, defined by p_xy = P(X = x, Y = y), (x, y) ∈ S_X × S_Y, and p_X = (p_0•, p_1•, . . . , p_{R−1}•) and p_Y = (p_•0, p_•1, . . . , p_•{S−1}) its marginal distributions: p_x• = Σ_{y∈S_Y} p_xy = P(X = x) and p_•y = Σ_{x∈S_X} p_xy = P(Y = y). Let P_R×S be the set of all such bivariate discrete distributions p with p_x• > 0 ∀x ∈ S_X and p_•y > 0 ∀y ∈ S_Y, identified with the (R × S)-matrices of their cell probabilities. Any such distribution can be regarded as a point in the (RS − 1)-dimensional simplex [13]. Odds ratio matrix As Ran F_X × Ran F_Y here consists of (R − 1)(S − 1) informative locations (i.e., strictly inside the unit square), Sklar's Theorem implies that one must be able to entirely describe the inner dependence structure of p by (R − 1)(S − 1) parameters, naturally in agreement with the usual breakdown of degrees of freedom in comparable (R × S)-contingency tables. Those (R − 1)(S − 1) parameters can be a family of odds ratios [2]. [41, p. 119] spells out that those odds ratios are margin-free ('variation-independent' of the marginal parameters). [1, p. 55] stressed that 'given the marginals, the odds ratios determine the cell probabilities'; in other words, the full distribution can be entirely reconstructed by coupling the marginal distributions and the margin-free set of odds ratios. Again, those entirely fulfil here the desired role of classical copulas, making any explicit resort to the latter purposeless. Remark 6.1. Although they were ruled out in (5.2)-(5.4) when assuming 0 < π_X, π_Y < 1, cases of 0/0 may arise in (6.1). Then the corresponding entry of Ω(p) may be left undefined. Admitting some slight lack of rigour, we identify two odds ratio matrices whose well-defined entries are all equal, i.e., an undefined entry in a matrix is assumed to be equal to whatever the corresponding entry may be in the other. Marginal transformations, nucleus and copula probability mass function Define D⁽¹⁾_Q×Q the set of all diagonal Q × Q matrices whose entry (1, 1) is equal to 1 and whose other diagonal entries are positive. Similarly to (5.3), for any Φ ∈ D⁽¹⁾_R×R and Ψ ∈ D⁽¹⁾_S×S, let g_Φ,Ψ(p) be the matrix Φ p Ψ renormalised so that its entries sum to 1: the matrix Φ multiplies the rows of p and the matrix Ψ multiplies the columns of p, which is akin to the 'marginal distortions' of Section 5.2. This defines a group action on P_R×S, which induces orbits [p]. Free from any sense of marginal distributions, the orbits [p] must again be equivalence classes of dependence.
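This invariance is straightforward to check numerically. The sketch below is an illustration only, taking the local odds ratios of adjacent cells as one possible choice for the family (6.1) and assuming numpy is available: it distorts the rows and columns of a small table by arbitrary positive constants and recovers exactly the same odds ratio matrix.

```python
import numpy as np

def local_odds_ratios(p):
    """(R-1) x (S-1) matrix of local odds ratios of adjacent cells of the table p."""
    return (p[:-1, :-1] * p[1:, 1:]) / (p[:-1, 1:] * p[1:, :-1])

rng = np.random.default_rng(0)
p = rng.random((3, 4))
p /= p.sum()                                   # a 3x4 bivariate pmf with no zeros

# distort the margins: multiply rows and columns by positive constants, renormalise
phi = np.diag([1.0, 2.5, 0.3])
psi = np.diag([1.0, 4.0, 0.7, 1.8])
p_star = phi @ p @ psi
p_star /= p_star.sum()

print(np.allclose(local_odds_ratios(p), local_odds_ratios(p_star)))   # True
```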
Indeed, for any two p, p* ∈ P_R×S, p ∼ p* ⇒ Ω(p) = Ω(p*), as all odds ratios are preserved by g_Φ,Ψ, exactly as in (5.4). This holds true for any undefined elements of Ω(p) as well, as g_Φ,Ψ leaves the zeros of p unaffected. Hence [p] will again be called the nucleus of the discrete pmf p. If all entries of Ω(p) are defined and positive, then Ω(p) = Ω(p*) ⇒ [p] = [p*]. Like in Remark 5.1, though, one may find two distributions p_1, p_2 ∈ P_R×S with Ω(p_1) = Ω(p_2) but [p_1] ≠ [p_2] when Supp(p_1) ≠ Supp(p_2), that is, when p_1 and p_2 show a different pattern of structural zeros. Again, the preponderant role of structural zeros in the dependence structure appears clearly. Now one may wish to single out the member of [p] with uniform marginals for embodying the dependence pattern in p by a simple element of the class. That element will be called the 'copula pmf' of p, leading to the following definition of a discrete copula, the obvious analogue of Definition 1.1. Definition 6.1 (discrete copula). An (R × S) copula pmf is a bivariate pmf on S_X × S_Y whose margins are discrete uniform, i.e., p̄_x• = 1/R for all x and p̄_•y = 1/S for all y; the set of all such copula pmfs is denoted C_R×S. Remark 6.2. A very similar definition is given in [30], who investigated such 'discrete copulas' in the case R = S. The 'copula pmf' here coincides essentially with the bistochastic matrix of their Proposition 2. See also [33,34,39] and [7, Section 3.1.1]. Existence and uniqueness of the copula pmf Defining the copula pmf of p as the member of [p] with uniform margins raises the question of the existence and uniqueness of such an element in [p]. This question is linked to the algebraic problem of 'matrix scaling': 'Given a nonnegative matrix A, can we find diagonal matrices D_1 and D_2 such that D_1 A D_2 is doubly stochastic?' [48] showed that the answer is affirmative if A is a positive square matrix, a result later generalised to nonnegative and/or non-square matrices; see [25] for a review. This allows a simple necessary and sufficient criterion for the existence and uniqueness of the copula pmf of a given p ∈ P_R×S to be formulated. When all p_xy's are positive in p, it can directly be deduced from [48,49] that the copula pmf exists and is unique. Hence the non-trivial case is again when structural zeros are present in p, with their layout being the essential feature. Define the support Supp(p) = {(x, y) ∈ S_X × S_Y : p_xy > 0}, and let N(p) collect the rectangular blocks ν_X × ν_Y ⊆ S_X × S_Y of structural zeros of p; Theorem 6.1 distinguishes three cases (a), (b) and (c) according to the layout of those zeros. Case (a) is the 'easy' case, which covers all (R × S)-distributions with no structural zeros (N(p) = ∅), but not only: p is allowed to have structural zeros (N(p) ≠ ∅), provided those are not too 'prominent' in the specified sense. The unique p̄ is obviously the copula pmf of p. From (6.4), Supp(p̄) = Supp(p), that is, the pattern of structural zeros (if any) is the same in p̄ and p. Case (b) is the critical case. In case (b(i)), the matrix p can be made block-diagonal by some permutations of its rows and columns. Then, each sub-block of non-zero elements of p can be dealt with separately when adjusting the margins, and it remains possible to write p̄ under the form (6.3), that is p̄ ∈ [p], and Supp(p̄) = Supp(p). In the Bernoulli case, this corresponds to 'absolute association' ((i) in (5.7)). By contrast, in case (b(ii)), the matrix p cannot be made block-diagonal. For complying with the uniform-margins constraint, new zeros must be created in p̄, which must therefore be a limit point of [p] in the sense (6.5). Then p̄ ∈ Cl([p]) and Supp(p̄) ⊂ Supp(p). The new zeros are created on the blocks complementary to the critical zero blocks ν*_X × ν*_Y. But it holds true that Ω(p̄) = Ω(p) (in the sense of Remark 6.1) and p̄ ∈ C_R×S, hence p̄ is again the unique copula pmf of p. In the Bernoulli case, this corresponds to 'complete association' ((ii) in (5.7)).
Finally, case (c) establishes the non-existence of a copula pmf when the structural zeros form a bulky subset of p. As the zeros cannot be turned into positive values by (6.3) and are frozen, there do not remain sufficiently many degrees of freedom for adjusting the marginals. Pragmatically, the dependence between X and Y is so overly dictated by the structural zeros that an approach based on odds ratios is pointless. The above observations allow us to state: Corollary 6.1 (Existence and uniqueness of the copula pmf). The bivariate discrete distribution p ∈ P_R×S admits a unique copula pmf p̄ if and only if |ν_X|/R + |ν_Y|/S ≤ 1 for all (ν_X × ν_Y) ∈ N(p). By definition, the copula pmf p̄ has discrete uniform margins, is such that Ω(p̄) = Ω(p) (in the sense of Remark 6.1) and Supp(p̄) ⊆ Supp(p). Finally, the following result provides an interesting characterisation of that copula pmf: under the condition of Corollary 6.1, p̄ is the element of C_R×S closest to p in Kullback-Leibler divergence. Proof. See [5], Theorems 3.2 and 3.3. The copula pmf p̄ is thus the bivariate discrete distribution with uniform marginals which is the closest to the initial p in terms of the Kullback-Leibler divergence. Not only does this provide a quantitative description of the copula pmf p̄ (as opposed to the rather qualitative definition based on the concept of 'nucleus' in Section 6.2), it also allows a clear parallel to be drawn between the proposed discrete copula and its continuous counterpart. Indeed, it is known that a similar characterisation of the copula of a continuous vector exists, under mild conditions, in terms of I-projection [42,44]. Iterated proportional fitting procedure Unlike in the Bernoulli case (5.5), p̄ is usually not available in closed form in the general (R × S)-case. (Specific models do lead to closed-form copula pmfs, though; see Section 7.) However, one can easily extract p̄ from any p ∈ P_R×S by iterated proportional fitting (IPF). This consists of alternately normalising the rows and columns of p to have uniform marginals. In effect, it reproduces (6.5), alternately left- and right-multiplying the current version of p by diagonal matrices Φ_k and Ψ_k, which leaves all odds ratios unaffected. Hence IPF perfectly fits in our framework: the output of any iteration remains in [p]. The convergence of IPF was investigated in [14,26] and [43]; see also [4, Section 3.6] and [41, Section 12.2]. The IPF seeded on any p ∈ P_R×S indeed converges to its copula pmf p̄ provided that it exists, i.e. under the condition of Corollary 6.1. The convergence of the IPF procedure is geometric in cases (a) and (b(i)), and arithmetic in case (b(ii)) [5]. The R package mipfp [3] implements the IPF. Construction of arbitrary bivariate discrete distributions with given copula pmf Similarly to Section 5.6, one may want to construct an (R × S)-discrete distribution with particular marginal distributions p_X and p_Y and a dependence structure driven by a copula pmf p̄. The existence and uniqueness of such a distribution is (partially) established by the following result, analogous to Theorem 6.1. This result guarantees the existence of the requested distribution p in the 'easy' cases: no zeros in p̄, or zeros not lying on rows and columns carrying large target marginal weights, or a block-diagonal copula pmf p̄. It does not say that such a p does not exist in the other cases. In fact, such a distribution may exist, as evidenced by (5.10) in the Bernoulli case.
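The IPF extraction of the copula pmf described in Section 6.4 can be sketched in a few lines of numpy. The version below is a minimal illustration only, with an arbitrary tolerance and iteration cap and assuming no all-zero rows or columns; the mipfp package cited above provides a complete implementation.

```python
import numpy as np

def copula_pmf_ipf(p, tol=1e-12, max_iter=10_000):
    """Iterated proportional fitting of the (R x S) pmf `p` towards uniform margins."""
    q = np.asarray(p, dtype=float).copy()
    R, S = q.shape
    for _ in range(max_iter):
        q *= (1.0 / R) / q.sum(axis=1, keepdims=True)   # rescale rows to 1/R each
        q *= (1.0 / S) / q.sum(axis=0, keepdims=True)   # rescale columns to 1/S each
        if (np.abs(q.sum(axis=1) - 1.0 / R).max() < tol and
                np.abs(q.sum(axis=0) - 1.0 / S).max() < tol):
            break
    return q

p = np.array([[0.20, 0.05, 0.05],
              [0.10, 0.30, 0.10],
              [0.02, 0.08, 0.10]])
pbar = copula_pmf_ipf(p)
print(pbar.sum(axis=1), pbar.sum(axis=0))   # uniform margins, 1/3 everywhere
```

Each iteration only multiplies rows and columns by positive constants, so the output stays in the orbit [p] (structural zeros and odds ratios are untouched), which is exactly why the limit is the copula pmf whenever it exists.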
However, reconstructing p then is not achieved through the transformation (6.3), as some zeros of p̄ must be turned back into positive probabilities; hence p ≁ p̄, as p belongs to an orbit of which p̄ is only a limit point. Existence and uniqueness of p in those cases remain an open question; however, a geometric perspective similar to Remark 5.2 suggests positive conclusions. Yule's coefficient By analogy with Section 5.5, one can define a margin-free measure of overall concordance in p as Pearson's correlation coefficient computed on p̄. We call such a coefficient Yule's coefficient Υ, which can again be regarded as the discrete analogue of Spearman's ρ. Suppose that U is discrete uniform on {1/(R+1), 2/(R+1), . . . , R/(R+1)}, V is discrete uniform on {1/(S+1), 2/(S+1), . . . , S/(S+1)}, and their joint pmf is given by the copula pmf p̄. It can then be checked that Pearson's correlation between U and V is the coefficient Υ given in (6.6). The extreme copula pmfs m and w, given in (6.7), are the Fréchet bounds analogous to (5.6). Note that any p ∈ P_R×R represented by a diagonal (respectively anti-diagonal) matrix is easily seen to admit m (respectively w) as copula pmf. Those fall into case (b(i)) of Theorem 6.1 and correspond to (positive or negative) 'absolute association'; cf. Section 5.4. There also exist non-diagonal distributions p ∈ P_R×R, belonging to case (b(ii)) of Theorem 6.1, which admit m or w as copula pmf as well. Those would be akin to 'complete association'. As in the Bernoulli case, the distinction between 'absolute' and 'complete' association is only a marginal feature which must be ignored by the copula. For instance, positive 'absolute association', i.e. probability weight concentrated on the main diagonal of p, is only possible if p_X ≡ p_Y. By contrast, dependence as strong as can be between unequal discrete marginals must turn into 'complete association'. As a result, Υ = ±1 without distinction between 'absolute' and 'complete' association. Now, when R ≠ S, |Υ| cannot reach 1. Indeed, if X and Y do not take the same number of values, the associated copula pmf can never approach either of the diagonal forms m or w, for the obvious reason, preventing any sense of 'perfect dependence'. In that case, the maximum value attained by |Υ| occurs when p̄ is the pmf associated with the Fréchet bounds in the class of (R × S)-bivariate discrete distributions with uniform margins [16, 'Exemple I']. It is also clear that, like Spearman's ρ, Υ only detects monotonic dependence ('concordance') between X and Y. In particular, for max(R, S) > 2, Υ can be 0 even when X and Y are not independent. Genuine measures of dependence ∆, in the sense of ∆ = 0 ⟺ X ⊥⊥ Y, may be defined along the same lines as in [17]. Example 6.1. The copula pmf being, by definition, margin-free, its construction only requires a sense of order for the 'values' of X and Y. In particular, if X and/or Y are ordinal random variables, then it remains meaningful to construct their copula pmf in order to understand their dependence. This is illustrated here through data on congenital sex organ malformations cross-classified by maternal alcohol consumption, from a study described in [22] and shown as a confetti plot in Figure 6.1. The adverse effect of maternal alcohol consumption on the risk of congenital malformation appears clearly, and is quantified by a positive value of Yule's coefficient Υ. Entirely margin-free, such a copula-based measure of association between two ordinal random variables does not rely on assigning scores to each category, as is otherwise necessary [28, Section 2.3] - the 'labels' X ∈ {0, 1} and Y ∈ {0, 1, 2, 3, 4} in (6.8) have no impact whatsoever.
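For a given copula pmf, Yule's coefficient of this section is simply Pearson's correlation of the two discrete-uniform components. A short sketch (an illustration only, with the supports written as above and numpy assumed available) makes the computation explicit and recovers Υ = 0 for the independence copula pmf and Υ = 1 for the diagonal one when R = S.

```python
import numpy as np

def yule_coefficient(pbar):
    """Pearson correlation of (U, V) when their joint pmf is the copula pmf `pbar`."""
    R, S = pbar.shape
    u = np.arange(1, R + 1) / (R + 1.0)    # support of U: 1/(R+1), ..., R/(R+1)
    v = np.arange(1, S + 1) / (S + 1.0)    # support of V: 1/(S+1), ..., S/(S+1)
    pu, pv = pbar.sum(axis=1), pbar.sum(axis=0)
    eu, ev = pu @ u, pv @ v
    euv = u @ pbar @ v
    su = np.sqrt(pu @ u**2 - eu**2)
    sv = np.sqrt(pv @ v**2 - ev**2)
    return (euv - eu * ev) / (su * sv)

R = 4
print(yule_coefficient(np.full((R, R), 1.0 / R**2)))   # independence copula pmf: 0
print(yule_coefficient(np.eye(R) / R))                 # diagonal copula pmf: 1
```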
This seems desirable, as [21, p. 740] noted: "We feel that the use of arbitrary scores to motivate measures is infrequently appropriate." Parametric discrete copulas Paralleling the continuous case, one can construct parametric models of copula pmfs. In fact, any parametric continuous copula readily gives rise to a discrete copula pmf of any dimension (R × S), as described in Section 7.1. One may also think of specific discrete copulas originating from particular bivariate discrete distributions, such as the Binomial copula (Section 7.2) or the truncated Geometric copula (Section 7.3). Discrete versions of classical continuous copulas Given a continuous copula C, define, for u = 0, . . . , R − 1 and v = 0, . . . , S − 1, p̄_uv = C((u+1)/R, (v+1)/S) − C(u/R, (v+1)/S) − C((u+1)/R, v/S) + C(u/R, v/S), (7.1) the C-volume of the corresponding cell of the regular mesh. Then, as C has uniform margins on I, it follows, for any u, v, that Σ_{v=0}^{S−1} p̄_uv = 1/R and Σ_{u=0}^{R−1} p̄_uv = 1/S. Hence the (R × S)-discrete distribution p̄ = [p̄_uv], u = 0, . . . , R − 1, v = 0, . . . , S − 1, is a copula pmf as defined by Definition 6.1. For simple parametric continuous copulas C, such a p̄ can be written in closed form. For instance, it can be checked that the (R × S)-discrete version of the FGM copula is, for θ ∈ [−1, 1], p̄_uv = (1/(RS)) [1 + θ(1 − (2u+1)/R)(1 − (2v+1)/S)]. (7.2) For θ = 0 in (7.2), one finds p̄_uv = 1/(RS) for all (u, v), which is the (R × S)-independence copula pmf (7.3). Remark 7.1. The discrete copula pmfs derived from a continuous one through (7.1) are obtained by overlaying C on the regular mesh {0, 1/R, . . . , (R−1)/R, 1} × {0, 1/S, . . . , (S−1)/S, 1} over the unit square I. The discretisation is thus carried out 'in the copula world', keeping all marginals uniform. The so-produced discrete copula pmfs p̄ can then be used, in a second step, for modelling dependence and/or constructing new bivariate discrete distributions with prescribed marginals. The idea of two distinct building blocks, the marginals on one side and the dependence/copula on the other, is maintained. By contrast, when writing (1.1) for a bivariate discrete distribution F_XY with a certain continuous copula C, the discretisation is achieved by overlaying C on the mesh Ran F_X × Ran F_Y set by the margins of F_XY. So the discretised copula is defined by the margins, which explains why dependence and marginal distributions can never be separated in such models. The two approaches coincide for a continuous vector (X, Y), though, as then the mesh Ran F_X × Ran F_Y reduces down to the whole unit square I, akin to a 'continuous regular mesh'. The Binomial copula Let (X_1, Y_1), . . . , (X_n, Y_n) be independent copies of a bivariate Bernoulli random vector with pmf (5.1). [31, Section 3] defined the bivariate Binomial as the distribution of the vector (X, Y) = (Σ_{k=1}^n X_k, Σ_{k=1}^n Y_k). Then, it can be checked that the odds ratios (6.1) are functions of ω only, where ω is the odds ratio of the initial bivariate Bernoulli (3.1). For n fixed, the dependence structure in a bivariate Binomial is thus only driven by the one parameter ω, and the corresponding Binomial(n)-copula, which is a ((n + 1) × (n + 1))-discrete distribution with uniform margins, is a one-parameter model. For instance, if n = 2, the bivariate Binomial distribution and its odds ratio matrix (6.2) can be written down explicitly. Through some algebra one can make the margins of p into uniforms through (6.3), and one obtains the corresponding copula pmf in closed form for ω ≠ 1. For ω = 1, of course, p̄ = π, the (3 × 3)-independence copula pmf (7.3). One also sees that, for ω = 0 or ω = ∞, p̄ = w or p̄ = m, the Fréchet lower and upper bounds (6.7) in dimension 3. One also obtains a closed-form expression for Yule's coefficient (6.6) of this copula pmf, which equals Υ = −1 for ω = 0 and Υ = 1 for ω = ∞. This family of Binomial copulas is thus comprehensive, as it allows all values of Yule's coefficient from −1 to 1.
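The discretisation (7.1), of which the FGM expression (7.2) is a closed-form instance, amounts to computing C-volumes over the cells of a regular mesh. The following sketch is an illustration only (our function names, FGM chosen as the example copula, numpy assumed available):

```python
import numpy as np

def fgm_copula(u, v, theta):
    """FGM copula C(u, v) = uv[1 + theta(1 - u)(1 - v)], theta in [-1, 1]."""
    return u * v * (1.0 + theta * (1.0 - u) * (1.0 - v))

def discretise_copula(C, R, S, **kwargs):
    """(R x S) copula pmf (7.1): C-volume of each cell of the regular mesh on [0, 1]^2."""
    u = np.linspace(0.0, 1.0, R + 1)[:, None]    # 0, 1/R, ..., 1
    v = np.linspace(0.0, 1.0, S + 1)[None, :]    # 0, 1/S, ..., 1
    F = C(u, v, **kwargs)                        # copula evaluated on the mesh nodes
    return F[1:, 1:] - F[:-1, 1:] - F[1:, :-1] + F[:-1, :-1]

pbar = discretise_copula(fgm_copula, R=4, S=6, theta=0.7)
print(pbar.sum(axis=1))     # 1/4 in every row: the margins are uniform by construction
print(pbar.sum(axis=0))     # 1/6 in every column
```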
The truncated Geometric copula Let (X_1, Y_1), (X_2, Y_2), . . . again be a sequence of independent replications from the bivariate Bernoulli distribution (5.1). [31, Section 6] defined the bivariate Geometric distribution as the distribution of the vector (X, Y) where X is the number of 0's before the first 1 in the sequence X_1, X_2, . . ., and Y the number of 0's before the first 1 in the sequence Y_1, Y_2, . . .. The pmf of (X, Y) is given in (7.4). Truncating both components at a level determined by some N ≥ 1 yields a vector whose pmf (7.5) is supported on an (N × N)-grid; denote p̃_N the corresponding matrix in P_N×N. For all integers N, it follows from Corollary 6.1 that this bivariate discrete distribution admits a unique copula pmf. So, letting N grow, the dependence structure of a discrete bivariate vector supported on ℕ × ℕ can be represented by a unique continuous copula. Yet, this unique copula is not any of the copulas C satisfying (1.1). Indeed, analogously to Remark 7.1, (1.1) reconstructs the bivariate discrete distribution F_XY by overlaying a copula C on the mesh Ran F_X × Ran F_Y over I. Such a copula is not unique and is indissociable from the margins. By contrast, the above construction singles out one unique copula which represents the 'core' of F_XY in the spirit of the marginal transformations described in Section 6.2. It is independent of the margins, as it is a representation of all the odds ratios ω_xy (6.1) for (x, y) ∈ ℕ+ × ℕ+. The bivariate discrete distribution F_XY can thus be broken down into its marginal distributions on one hand, and its unique copula on the other, like in the continuous case. A difference is that here, the combination of the copula and the marginals is not carried out by (1.1), but by a continuous version of IPF [25, Sections 6.3 and 6.4, and references therein]. The Geometric copula Consider the truncated Geometric distribution p̃_N given by (7.5), with the two marginal parameters of the underlying bivariate Bernoulli both set to 1/2. For any N ≥ 2, one gets a copula pmf in C_N×N involving only one parameter ω, like (7.7) for small N. As N → ∞, those discrete pmfs turn into a continuous distribution with uniform margins, as pictured in Figure 8.1 (the two panels corresponding to two different values of ω). For ω > 1, this copula admits a singularity along the main diagonal of the unit square I. This is reminiscent of the Marshall-Olkin copula [37, Section 3.1.1], a link to which could have been expected here given that the Marshall-Olkin bivariate Exponential distribution is the limit version of the bivariate Geometric distribution introduced above [31, Section 6]. The Geometric copula, however, remains a representative of the inner dependence structure in the discrete vector (X, Y) whose pmf is (7.4), and is not the Marshall-Olkin copula, as appears clearly when ω < 1. Then, the limiting Geometric copula density is seen to be identically null on the main diagonal of I, forming some sort of 'inverse singularity'. The Poisson copula Let Z_1 ∼ P(λ_1), Z_2 ∼ P(λ_2) and Z_0 ∼ P(λ_0) be three independent Poisson random variables, with λ_1, λ_2 > 0 and λ_0 ≥ 0. Then define (X, Y) = (Z_1 + Z_0, Z_2 + Z_0), whose pmf over (x, y) ∈ ℕ × ℕ is given in (8.1), so that X ∼ P(λ_1 + λ_0) and Y ∼ P(λ_2 + λ_0). The odds ratios (6.1) reduce down to functions of ω ≐ λ_0/(λ_1 λ_2): it is seen that the dependence structure in such a bivariate Poisson vector only depends on that parameter. If the bivariate Poisson distribution is understood as a limiting version of a bivariate Binomial [31, Section 4], then this ω would indeed be akin to the odds ratio in the constituting initial bivariate Bernoulli distribution.
Acting as in the previous section, one can first truncate X and Y at N − 1, for obtaining discrete copula pmfs, and then let N tend to infinity for obtaining the Poisson copula densities shown in Figure 8.2. Like in any bivariate discrete distribution built on such an idea of 'trivariate reduction' (8.1), the components X and Y of a bivariate Poisson vector can only show positive association. How to construct bivariate discrete distributions with Poisson marginals showing negative association has been a challenging problem for a long time. For instance, [23] noted: "we have been unable to discover explicitly in the literature any examples of bivariate Poisson distributions in which the correlation is negative." [40] proposed a classical copula construction based on (1.1), thus subject to caution following the discussion in Section 2, in particular the impossibility of ever disjointing margins and dependence. By contrast, one can couple any two Poisson distributions with any continuous copula through IPF (Sections 6.4-6.5). Figure 8.3 shows confetti plots of three bivariate discrete distributions with Poisson marginals and negative association, coupled through (a) a Clayton copula with θ < 0; (b) a Gaussian copula with ρ < 0; and (c) a Geometric copula with ω < 1 (Figure 8.1). This illustrates that the proposed discrete copula approach shares with its continuous counterpart the same flexibility for constructing 'new' bivariate distributions with arbitrary marginals and dependence structure. Concluding remarks The classical definition of a copula (Definition 1.1) stems implicitly but manifestly from the Probability Integral Transform result. Hence it is fundamentally grounded in the continuous framework, and there is little surprise that classical copula ideas lead to many inconsistencies when applied to discrete random vectors. What may appear surprising is that a large part of the previous literature in the field has tried to make such an inherently continuous concept forcibly fit the discrete case as well, in spite of those inconsistencies. In this paper it is argued that the very essence of a copula should not be imprisoned in Definition 1.1. Fundamentally, a copula is akin to an equivalence class of distributions sharing the same dependence structure. Defining such equivalence classes, called nuclei, does not require resorting to PIT and hence smoothly carries over to the discrete case. This paper describes that 'discrete copula' construction. All the pleasant properties of copulas for modelling dependence are maintained in the presented discrete framework, such as the margin-freeness of anything copula-based or the flexibility in constructing bivariate distributions with arbitrary marginals and dependence structure. Existence and uniqueness of the copula probability mass function, analogous to the copula density in the continuous case, are established under mild conditions. The ideas are first introduced in the bivariate Bernoulli case, then generalised to distributions supported on {0, 1, . . . , R} × {0, 1, . . . , S}, for some finite R and S, and finally to bivariate distributions supported on ℕ × ℕ. Interestingly, the dependence structure in such an (ℕ × ℕ)-supported distribution may still be captured by a classical continuous copula, and that copula is unique. However, that copula is not to be understood through Sklar's theorem (1.1).
The construction gives rise to new continuous copulas, such as the Geometric copula and the Poisson copula, representing the dependence structure in bivariate Geometric and bivariate Poisson distributions, respectively. Purely discrete copulas are also introduced, such as the Binomial copula. Finally, we note that it is straightforward to generalise the bivariate concepts expounded in this paper to higher dimensions. Definition 6.1 (discrete copula) can be formulated in terms of a multi-dimensional array with uniform margins in every dimension. Higher-order odds ratios keep their valuable properties, in particular margin-freeness [41, Section 6.2]. The matrix scaling problem is well studied for higher-dimensional arrays as well [25, Section 6.1, and references therein], and the multi-dimensional IPF algorithm is implemented in the R mipfp package [3]. Acknowledgement The author would like to thank the Editor, the Associate Editor and two anonymous referees for their helpful suggestions which greatly helped to improve the quality of the paper. A Appendix Let C_1 and C_2 be two copulas such that C_1 ≺ Π ≺ C_2 (concordance ordering; that is, C_1(u, v) ≤ uv ≤ C_2(u, v), ∀(u, v) ∈ I). Consider the mixture copula defined as C_α,β = α C_1 + β C_2 + (1 − α − β) Π, for α, β ∈ [0, 1] with α + β ≤ 1. Call ξ = (C_2(1 − π_X, 1 − π_Y) − (1 − π_X)(1 − π_Y)) / (C_2(1 − π_X, 1 − π_Y) − C_1(1 − π_X, 1 − π_Y)) and assume ξ ∈ (0, 1). Then it can be checked that, for any α ≤ ξ and β = ((1 − ξ)/ξ) α, the copula C_α,β is such that C_α,β(1 − π_X, 1 − π_Y) = (1 − π_X)(1 − π_Y), (A.1) and thus satisfies Sklar's theorem for two independent Bernoulli random variables X ∼ Bern(π_X) and Y ∼ Bern(π_Y). For any C_1, C_2, the independence copula Π corresponds to (α, β) = (0, 0), but there exist infinitely many other copulas, of various natures and shapes, which satisfy (A.1). For example, for π_X = π_Y = 1/2, the 6 copula densities shown in Figure 9.1 are equally valid for representing the independence between X and Y. Yet, their appearances - and consequently any qualitative or quantitative assessment of the underlying dependence based on them - are dramatically different.
12,878.6
2020-01-01T00:00:00.000
[ "Mathematics" ]
Detecting and Mitigating Adversarial Examples in Regression Tasks: A Photovoltaic Power Generation Forecasting Case Study : With data collected by Internet of Things sensors, deep learning (DL) models can forecast the generation capacity of photovoltaic (PV) power plants. This functionality is especially relevant for PV power operators and users as PV plants exhibit irregular behavior related to environmental conditions. However, DL models are vulnerable to adversarial examples, which may lead to increased predictive error and wrong operational decisions. This work proposes a new scheme to detect adversarial examples and mitigate their impact on DL forecasting models. This approach is based on one-class classifiers and features extracted from the data inputted to the forecasting models. Tests were performed using data collected from a real-world PV power plant along with adversarial samples generated by the Fast Gradient Sign Method under multiple attack patterns and magnitudes. One-class Support Vector Machine and Local Outlier Factor were evaluated as detectors of attacks to Long-Short Term Memory and Temporal Convolutional Network forecasting models. According to the results, the proposed scheme showed a high capability of detecting adversarial samples with an average F1-score close to 90%. Moreover, the detection and mitigation approach strongly reduced the prediction error increase caused by adversarial samples. Introduction Wind and solar energy are the most acceptable and promising resources of renewable energy due to their potential and availability. In particular, photovoltaic (PV) facilities have experienced an enormous technological advance over the last few years, exploiting the advantages of using recent architectures such as Internet of Things (IoT) and Cloud Computing [1]. IoT sensors can collect variables such as weather conditions, system temperature, and generated power in PV power plants, which may indicate faults and contribute to the understanding of the plant's generation capacity. By accessing this information online, operators can be prepared for promptly handling unexpected events and variations [2]. Machine learning (ML) is an important building block for successfully integrating PV power plants into smart grids. ML algorithms can underpin solutions to analyze and predict the power grid behavior from data collected by IoT sensors. In PV systems, alongside other goals, ML has been explored to forecast their generation capacity. Accurate generation predictions make power grids more reliable amid fluctuations in demand and capacity, avoid power outages, prevent plant managers from penalties, and save costs [3]. More specifically, deep learning (DL) models have been applied to forecast PV power generation with encouraging results [4,5]. Although the use of these forecasting models contributes to more active, flexible, and intelligent smart grids [6], they may be vulnerable to adversarial examples. In these attacks, adversaries add maliciously crafted noise to legitimate input samples, driving the DL model to make wrong predictions [7]. This fact draws attention to the physical and cyber security of this kind of facility [8], especially when considering that industry practitioners are not equipped with measures to protect, detect, and respond to attacks on their ML models [9]. Different schemes [10][11][12] have been proposed recently to defend ML algorithms against adversarial examples. Studies [10,11] made use of adversarial training. 
In this technique, data used for model training include adversarial samples especially crafted to make the model more resilient against this kind of attack. Conversely, Abdu-Aguye et al. [12] proposed an approach that detects adversarial samples during the test phase. Despite their encouraging results, these studies only focused on protecting ML models designed for classification tasks. The literature still lacks defense schemes for regression models, which can also be deeply affected by these attacks. In PV systems, these attacks represent a severe threat. An adversarial sample might make the forecasting model predict a much higher or lower generation capacity than the correct one. As operators use these predictions to coordinate multiple power plants that operate together to meet the energy demand, a high prediction error will eventually lead to wrong decisions, which can cause large-scale failures [13]. This work proposes a novel scheme to detect and mitigate adversarial samples inputted into DL regression models that forecast PV power generation. First, the approach extracts multiple features from the inputs forwarded to the forecasting system. These inputs are observations about the power plant generation capacity over time. The extracted features range from basic statistics such as minimum, maximum, and mean to spectral measures such as Hurst exponents. Their objective is to build a time series profile that allows for distinguishing natural observations from maliciously crafted ones. Then, a one-class classifier is employed to classify the feature vector as legitimate or malicious. If malicious behavior is detected, the observations are replaced by the last set of observations classified as legitimate. This means that the approach mitigates the attack, preventing the adversarial samples from reaching the forecasting system. The results showed that the proposed scheme could detect most of the adversarial samples and significantly reduce the error increase caused by the attacks. The main contributions of this paper are: The remainder of this paper is organised as follows: Section 2 presents the background about time series and adversarial ML along with the related work. In Section 3, the proposed approach is discussed. Section 4 shows the materials and methods used during the proposal's evaluation, while Section 5 discusses the results. Finally, Section 6 draws the final conclusions. Time Series A time series is a data sequence in a particular period. These data can produce different values at distinct moments in time. Formally, it can be defined as an ordered set of observations X = [x_1, x_2, . . . , x_T] in which T corresponds to the length of the series [14]. The forecasting task consists of finding a function f that predicts the h-th future value at any time t, i.e., x_{t+h}, based on i past values: x̂_{t+h} = f(x_t, x_{t−1}, . . . , x_{t−i+1}), (1) where i represents the input window size and h the forecast horizon. When the latter is equal to one, the forecasting task is referred to as a one-step-ahead forecast. Otherwise, it is known as a multi-step-ahead forecast. In a supervised training of f, t must also satisfy the condition t ≤ T − h. Moreover, time series can present seasonality, which occurs when regular patterns are captured in the series. Seasonal events are phenomena that occur, for instance, daily at a certain time, every day, or in a certain month every year. Adversarial Machine Learning Solutions that rely on ML might suffer attacks based on adversarial examples [7].
Adversarial inputs are very similar to benign ones but tailored to maximize the model's prediction error. Three aspects of these attacks are worth discussing in this section: their classification, adversarial example generation, and defense strategies. Attack Classification An attack may be classified according to its specificity. In targeted attacks, the attacker focuses on specific system instances (e.g., specific users, periods, or inputs). Conversely, in untargeted attacks, the attacker aims at any instance (indiscriminate attacks). Adversarial examples can be used at the training or test phases of the ML pipeline. A poisoning attack, also known as causative, occurs when the attacker can access and modify training data. Data access attacks are also related to the training phase but are more restricted. In these attacks, the attacker can access but not modify the training data. They may then use the retrieved data to induce substitute learning models useful for attacks in the test phase. An exploratory attack occurs when the attacker can modify only the test data [15]. The attacker's knowledge is another relevant feature, which might differ according to the level of access to the system components: training data, feature space, and learning algorithm. The latter may also involve the knowledge of the loss function and the trained hyper-parameters. The attacker's knowledge can be classified according to the access to these three components [16]: • A white-box attack, which implies that the attacker has access to the entire set of components. • A black-box attack, which implies that the attacker lacks substantial knowledge about the system components. • A gray-box attack, which lies between the previous attacks. In this case, the attacker may have partial access to the training data, knowing the training algorithm or the feature space. When the attacker lacks knowledge about the learning algorithm, an alternative is defining a surrogate/substitute model. This leads to the concept of transferability, which means that adversarial examples designed for a specific model can also affect another model [17]. Adversarial Examples Generation The attacker generates an adversarial input to fool the ML model based on their knowledge about the target. By accessing training data or gathering information about the model, the attacker can make inputs that look like the legitimate ones but carry a perturbation specially crafted to explore the model's vulnerabilities [18]. Most of the methods for crafting adversarial examples were originally designed for images. The Fast Gradient Sign Method (FGSM) [19] is one of the most notable methods, being also the basis for later methods [20,21]. In an FGSM attack, the perturbation η is given by Equation (2): η = ε · sign(∇_x J(θ, x, y)), (2) where ε corresponds to the coefficient that controls the perturbation magnitude, x to the input to the model, y to the output associated with x, θ to the weights of the adversarial model, and J(.) to the loss function. The malicious sample to be inputted to the target results from adding η and x. FGSM is computationally cheap since it only needs the gradient sign, which can be quickly obtained. Although it was designed to compute adversarial image perturbations, Santana et al. [13] showed that FGSM is also effective at making adversarial examples to degrade the prediction performance of DL models in PV power generation forecasting.
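Because FGSM only requires the sign of the input gradient, it can be written in a few lines once a differentiable substitute model is available. The sketch below is an illustration only, using a linear substitute model with a squared loss (so the gradient is analytic) rather than the exact substitute adopted later in this work; all names and values are hypothetical.

```python
import numpy as np

def fgsm_perturbation(x, y, w, b, eps):
    """FGSM for a linear regression substitute f(x) = w.x + b with squared loss.

    J(theta, x, y) = (w.x + b - y)^2, so grad_x J = 2 (w.x + b - y) * w and
    eta = eps * sign(grad_x J), as in Equation (2).
    """
    residual = w @ x + b - y
    grad_x = 2.0 * residual * w
    return eps * np.sign(grad_x)

rng = np.random.default_rng(1)
w, b = rng.normal(size=8), 0.1     # hypothetical substitute-model weights
x = rng.random(8)                  # legitimate input window (8 past observations)
y = 0.5                            # output associated with x
eta = fgsm_perturbation(x, y, w, b, eps=0.1)
x_adv = x + eta                    # adversarial example fed to the target model
print(np.abs(eta).max())           # entries are perturbed by +/- eps
```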
Defense Approaches In the image processing literature, defense approaches include network distillation [22], adversarial retraining [23], randomisation [24], denoising [25], and adversarial example detection during test time [26]. The main idea behind adversarial example detection relies on training a classifier to detect adversarial inputs, distinguishing them from legitimate ones. ML algorithms might be used for this task. They have been successful in detecting attacks in traditional computer systems and are also promising to detect attacks in smart grids and in time series [27][28][29]. In this kind of proposal, data about the target's behavior are gathered and then used to feed a ML-based classifier, which learns to classify the behavior as malicious or legitimate according to its characteristics [30,31]. As the types of attack change so does the type of analyzed data. For example, to detect network-based attacks, variables related to network traffic such as the packets per second rate or average packet size are investigated. False data injection in smart grids, on the other hand, can be detected through the analysis of measurements about the power grid state such as current flow and voltage magnitude. In short, these proposals gather data that are sensitive to an attack and use them to distinguish legitimate from malicious instances. The same rationale may be successfully applied to detect adversarial examples in the domain of PV generation. Addressing the topic of adversarial examples in smart grids and time series, Chen et al. [32] evaluated the adversarial examples impact on feed-forward Neural Network (NN) and Recurrent Neural Network (RNN) models for simulated data on power quality classification and load forecasting, respectively. Based on the results, the authors encouraged more discussion towards increasing the robustness of models implemented in power systems. Fawaz et al. [7] adapted FGSM and Basic Iterative Method to univariate time series classification and performed attacks against DL models. These attacks achieved an average reduction in the model's accuracy of 43.2% and 56.89%, respectively, and the experiments showed that FGSM allows real-time adversarial sample generation. The authors claim that their work is the first to consider the vulnerability of DL models concerning time series examples. Niazazari and Livani [10] performed attacks on a multiclass Convolutional Neural Network (CNN) trained on simulated data. The targeted model classifies power grid events such as line energization, capacitor bank energization, or fault. The attacks were generated using FGSM and Jacobian-based Saliency Map Attack (JSMA) algorithms and showed significant potential to make the CNN-based model misclassify the tampered input. Karim et al. [11] proposed using an adversarial transformation network to attack 1-Nearest Neighbor Dynamic Time Warping (1-NN DTW) and Fully Convolutional Network (FCN) models, trained on 42 classification datasets, showing their susceptibility to adversaries. They used the retraining defense strategy to improve the models' robustness. Abdu-Aguye et al. [12] proposed using OCSVM to classify samples as original or perturbed. The work was based on the attacks and datasets presented in [7]. The authors claimed to reach 90% detection accuracy on most datasets and up to 97% in the best case. Table 1 summarizes the comparisons among the reviewed studies. 
This work addresses an important limitation found in the reviewed literature: the lack of protection for regression models. In other words, most researchers in adversarial ML are devoted to tackling classification focused on image processing tasks. As observed in a previous work [13], adversarial examples can also affect regression models, which are usually the core of PV generation forecasting. Among the related works, only Chen et al. [32] addressed this possibility, but they did not propose a defense solution against these attacks. All other proposals are aimed at attacks against classification models. Proposed Approach Attacks involving adversarial examples against forecasting models consist of multiple steps. They begin with the attacker exploring any vulnerability that allows for accessing training data. Using these data, the attacker induces a model to craft malicious perturbations. Then, the attacker needs to find a breach to tamper with the data inputted into the forecasting model. After achieving this goal, the attacker can add malicious perturbations into the input data and complete the attack. As this attack requires breaking into multiple systems through various steps, multiple defense mechanisms are needed to tackle it. Detecting malicious inputs and preventing forecasting models from processing them may provide protection when other defense lines have already been violated. This work proposes an approach based on one-class classifiers to detect and replace malicious inputs over power generation data in a PV plant. It assumes that adversarial examples can be distinguished from legitimate ones because they are intentional anomalies [33]. Even malicious inputs crafted to be as similar as possible to legitimate ones might carry distinguishable characteristics. Figure 1 provides an approach's overview. When new data instances from the power plant are forwarded to the Generation Forecasting Module, they are first assessed by the Attack Detection Module. This module organizes the data instances in windows of length i, which is the input window length of the forecasting model, as explained in Section 2.1. Then, the Attack Detection Module extracts the following features from each window: Minimum, Mean, Median, Maximum, Standard deviation, Ratio between Mean and Maximum, Ratio between Minimum and Maximum, Entropy, Correlation, Detrended fluctuation analysis (DFA), and Hurst Exponent. They make up a statistical profile of each window, which is intended to evince the differences between legitimate and maliciously crafted data. After being extracted, the feature vector feeds an ML-based detector, more specifically, a one-class classifier. This kind of ML model is usually employed for anomaly detection. The most important one-class classifier's characteristic is the need for samples from only one class to be trained. In this work, the one-class classifier is trained using only legitimate data. As it might be hard to find samples from malicious data, this aspect of one-class algorithms is particularly useful for the proposed approach. The one-class classification model then analyzes the feature vector extracted from the input window and classifies it as legitimate or malicious. When a malicious input is detected, the window is replaced by the most recent window classified as legitimate. Therefore, it prevents malicious data from being forwarded to the Generation Forecasting Module, while ensuring that the forecasting process keeps receiving inputs. 
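A minimal sketch of this detect-and-mitigate workflow is given below, assuming scikit-learn and numpy are available. Only a subset of the listed features is extracted, and the window length, hyper-parameters, helper names, and toy data are illustrative choices rather than the configuration evaluated in this work.

```python
import numpy as np
from sklearn.svm import OneClassSVM

def window_features(window):
    """Statistical profile of one input window (a subset of the features listed above)."""
    w = np.asarray(window, dtype=float)
    return np.array([w.min(), w.mean(), np.median(w), w.max(), w.std(),
                     w.mean() / (w.max() + 1e-9), w.min() / (w.max() + 1e-9)])

def fit_detector(legit_windows, nu=0.1):
    """Train the one-class classifier on legitimate windows only."""
    X = np.vstack([window_features(w) for w in legit_windows])
    return OneClassSVM(nu=nu, kernel="rbf", gamma="scale").fit(X)

def detect_and_mitigate(detector, window, last_legitimate):
    """Return the window to forward to the forecasting model and the updated fallback."""
    label = detector.predict(window_features(window).reshape(1, -1))[0]
    if label == 1:                            # classified as legitimate
        return window, window
    return last_legitimate, last_legitimate   # malicious: reuse last legitimate window

# toy usage with synthetic daylight-shaped windows of 12 observations (illustrative)
rng = np.random.default_rng(2)
train = [np.clip(np.sin(np.linspace(0, np.pi, 12)) + 0.05 * rng.normal(size=12), 0, None)
         for _ in range(200)]
det = fit_detector(train)
clean = train[0]
suspect = clean + 0.5 * np.sign(rng.normal(size=12))   # crude adversarial-like noise
forwarded, fallback = detect_and_mitigate(det, suspect, last_legitimate=clean)
```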
Finally, the Generation Forecasting Module employs a DL model to make the predictions. Dataset The power generation samples are obtained from a PV plant that started to operate in November 2019 at the State University of Londrina campus (Brazil). This power plant is a typical IoT system that contains sensors connected to the Internet through a wireless network and transmits data to be processed in the cloud. More specifically, the plant has 1020 solar panels and sensors that collect observations about solar power generation every 15 min. Thus, 96 observations about the plant performance are collected each day. The plant's generation capacity is 489.6 MWh/year. Some variations in the collected samples may occur primarily due to two factors. Firstly, they depend on the weather condition. Rainy or cloudy days show a considerable disparity in sample values collected on sunny days. Secondly, the quality of the collection is also subject to interference from dirt that can accumulate on the solar panels, such as leaves from trees that surround the PV plant. These variations are also meaningful to calibrate the forecasting models. All collected data are transmitted online to a private cloud maintained by the plant vendor, where the data are stored and can be accessed for operation and control. For the training of forecasting models, data from December 2019 to June 2020 was chosen. Moreover, 20% of the training data was used for hyper-parameter tuning of each model. For testing, data from July and August were employed. Threat Model Our threat model is based on targeted gray-box attacks that use FGSM for crafting adversarial examples. A targeted attack means that not all inputs are maliciously manipulated. The attacker picks specific inputs or targeted instances to manipulate according to some criteria. Modeling the behavior of attackers is an intricate task and exhaustive options are possible. For practical purposes, three different patterns were defined to select attacked instances: (1) Random: the attacker picks the targeted instances at random. In this pattern, the attack can be confused with the plant intrinsic noise; (2) Intermittent (inter): every targeted instance is followed by a non-attacked instance and vice versa. In [34], this pattern showed to be hardly detected by an estimation-based detector; (3) Sinusoidal (sin): a group of targeted instances is followed by a group of nonattacked instances and vice versa. This function takes as an argument the instance index in radians. If the result is negative or zero, the instance is attacked. This pattern corresponds to a smoother variation of the intermittent pattern. Figure 2 depicts the attack patterns. To understand how gray-box FGSM attacks can be launched against a forecasting model, it is necessary to recall first that the models addressed in this work have a training and a test phase. During the training phase, they use the training data to induce a regression model F. Then, during the test phase, they make predictions by using historical data inputted to F. In gray-box attacks [15], the attacker has limited knowledge about the target. Following this idea, it is assumed that the attacker can access a significant portion of the training data but has no knowledge about the model F induced by the target. Moreover, the attacker cannot modify the training data but can tamper with inputs during the test phase. To overcome the lack of knowledge about F, the attacker induces a substitute model F , exploring the cross-technique transferability. 
This means that the attacker can analyze the attack circumstances and choose an algorithm that better fits their need to induce F′. In this work's scenario, an attacker could install malware or plug a rogue device at different points, ranging from the PV power plant to the cloud-based servers. If the algorithm behind the attack is a big consumer of CPU, memory, disk, or network resources, the defense systems that monitor these parameters can detect it. In this sense, employing a lightweight solution is an attacker's strategy to stay unnoticed. A costly ML algorithm can also make the requirements to run the attack very strict, hindering its execution. For being simpler than DL, successful in other adversarial scenarios [17], and still differentiable, logistic regression (LR) was adopted to build the substitute model. Based on this strategy, the attacker uses the first half of their training data to build F′. Then, to compute the perturbation η in Equation (2), the attacker uses the second half of the training data as x and y along with the model F′. After calculating η, the attacker is ready to manipulate inputs and generate adversarial examples in the forecasting model's test phase. For a given legitimate input x_test that is forwarded to the forecasting model, the attacker will make an adversarial sample x′_test according to Equation (3): x′_test = x_test + η. (3) This adversarial example x′_test is inputted to the forecasting model instead of x_test, increasing F's prediction error. During the experiments, the ε value in Equation (2) was varied from 0.05 to 2 with steps of 0.05. In FGSM attacks, ε determines the attack magnitude. As for the one-class classifier at the Attack Detection Module's core, OCSVM and LOF were explored. This kind of classifier has been successfully applied to fault detection in smart electric power systems [6]. OCSVM [36] creates hyperplanes (n-dimensional planes) that set boundaries around a region containing as much as possible of the training data. By doing so, OCSVM can identify whether an instance is within this area. LOF estimates a score, named the Outlier Factor, which reflects the level of abnormality of each observation in a dataset [37]. It works based on the idea of local density. The k-Nearest Neighbors algorithm is applied to the data, and each data instance is given a locality, which is used to estimate the clusters' density. During the experiments, the OCSVM hyper-parameter ν varied from 0.1 to 0.4 in 0.05 steps. As for LOF, the contamination alternated between 2.5 × 10⁻⁴ and 5 × 10⁻⁴, and the number of neighbors varied from 20 to 45 in steps of 5. Generation Forecasting Module Studies related to data analysis in time series have been carried out for a long time [38,39]. ML-based applications have become more popular due to their high performance on data inference, outperforming even classical statistical models [40]. More specifically, DL techniques have played a fundamental role in reducing the error of regression approaches. This work explores TCN and LSTM, both DL models, to make predictions on time series. LSTM is a type of RNN. Unlike some traditional neural networks, LSTM can remember the most useful information. This is possible thanks to its architecture. The networks that comprise the LSTM are connected in the form of loops. This process allows information to persist in the network. It also has a gating mechanism for learning long-term dependencies without losing short-term capability [41].
In particular, this neural network has been achieving important contributions in photovoltaic power generation forecasting [42]. Evaluation Metrics To compute the forecasting model performance, the Root Mean Squared Error (RMSE) was assessed for the test sets. This error metric penalizes undesirably large deviations more heavily [43]. F1-score was calculated to evaluate the detector. This metric describes the relation between two other metrics for classifiers, recall and precision. Precision measures the percentage of instances classified as adversarial examples that are truly malicious. Recall measures the effectiveness of the approach in identifying adversarial examples. Adversarial Examples' Mitigation This section assesses whether the Attack Detection Module effectively reduces the adversarial examples' impact on the prediction error. Alongside the detection mechanism, the mitigation approach is evaluated here. The mitigation function blocks samples classified as malicious and, at the same time, has to be able to replace them with samples that keep the prediction error low. The tests were carried out as follows. First, the Generation Forecasting Module was executed to make predictions under non-attack and attack scenarios without the Attack Detection Module's aid. The same data used for the tests in Section 5.2 were employed here. The results show that LSTM had a better performance in terms of RMSE. LSTM obtained the lowest error in several scenarios: without attack and for attacks with ε values of 0.05, 0.15, and 0.2. TCN outperformed LSTM only for ε = 0.1. The second part of the tests reintroduced the Attack Detection Module in the pipeline. A remarkable reduction of RMSE for both models (LSTM and TCN) was observed using OCSVM or LOF at the Attack Detection Module's core. Figure 3 presents the results for all these scenarios. Table 3 presents the RMSE obtained with all attack patterns, grouped by ε. TCN outperformed LSTM in all scenarios where the Attack Detection Module was present in the pipeline. This result suggests that TCN benefits more from the mitigation scheme than LSTM. The fact that LSTM is solidly grounded on the time series's sequential information can explain this outcome. As the mitigation scheme uses the most recent legitimate input, when the current input is malicious, the time series' sequence is eventually broken. TCN, which uses local and global information of the time series, handles this characteristic of the mitigation strategy better. It is noteworthy that the error increase for the scenario with ε = 0.15 was substantially reduced when the Attack Detection Module was used. In this scenario, the Attack Detection Module based on LOF reduced the increase in TCN's prediction error caused by adversarial examples from 711.21% to 19.70%. Attack Detection Module's Efficacy in Detecting Adversarial Examples OCSVM and LOF were applied as detectors using different hyper-parameters to find the most suitable classifier for the Attack Detection Module. Figure 4 shows box plots for LOF F1-Scores obtained by varying the Number of Neighbors (20, 25, 30, 35, 40, and 45) and Contamination (2.5 × 10⁻⁴ and 5 × 10⁻⁴) over different attack patterns (random, intermittent, and sinusoidal) and ε values (0.05 to 2 in 0.05 steps). In box plots, boxes depict the range between the upper and lower quartiles, while horizontal lines inside the boxes represent the median. Vertical lines extending from the boxes illustrate the variability outside the quartiles. Individual points represent outliers.
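A sweep of this kind can be outlined as follows: the one-class detectors are fitted only on legitimate feature vectors and then scored with the F1 metric on a mixed set of legitimate and adversarial inputs. The synthetic data and the scoring loop below are illustrative assumptions; only the hyper-parameter grids follow the ranges reported above.

```python
import numpy as np
from sklearn.metrics import f1_score
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(2)

# Synthetic feature vectors: legitimate windows vs. shifted "adversarial" ones.
legit_train = rng.normal(0.0, 1.0, size=(500, 10))
legit_test = rng.normal(0.0, 1.0, size=(200, 10))
adv_test = rng.normal(0.6, 1.0, size=(200, 10))   # stands in for FGSM-perturbed inputs
X_test = np.vstack([legit_test, adv_test])
y_true = np.r_[np.zeros(200), np.ones(200)]        # 1 = adversarial

def detect_f1(detector):
    pred = detector.predict(X_test)                # +1 inlier, -1 outlier
    return f1_score(y_true, (pred == -1).astype(int))

results = {}
for nu in np.arange(0.1, 0.45, 0.05):              # OCSVM nu grid: 0.1 to 0.4 in 0.05 steps
    ocsvm = OneClassSVM(nu=nu, kernel="rbf").fit(legit_train)
    results[("OCSVM", round(float(nu), 2))] = detect_f1(ocsvm)

for n_neighbors in range(20, 50, 5):               # LOF neighbors: 20 to 45 in steps of 5
    for contamination in (2.5e-4, 5e-4):           # LOF contamination values
        lof = LocalOutlierFactor(n_neighbors=n_neighbors, contamination=contamination,
                                 novelty=True).fit(legit_train)
        results[("LOF", n_neighbors, contamination)] = detect_f1(lof)
```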
The results for different numbers of neighbors show that lower values for this hyper-parameter deliver better results. The best average F1-score found in these tests was 86.05%, obtained with 20 neighbors. In contrast, the contamination hyper-parameter tests pointed out that the best average performances were obtained with the highest value for this hyper-parameter. With Contamination = 5 × 10⁻⁴, the detector reached an average F1-Score of 87.94%. The standout LOF outcome was obtained by combining 40 neighbors and contamination = 5 × 10⁻⁴, which resulted in an F1-Score of 95.86% for detecting adversarial examples following a sinusoidal pattern. To check whether there is a statistically significant difference between the performance of both classifiers, LOF and OCSVM, Friedman's statistical test and the post-hoc test of Nemenyi were used. In this evaluation, three metrics were compared: F1-Score, precision, and recall. The Critical Difference (CD) indicates that the difference between two algorithms is significant if the gap between their ranks is larger than the CD; otherwise, no significant difference is found between them. Diagrams for these three metrics are presented in Figures 6-8. The metrics were collected considering all attack patterns and ε values, and the tests were carried out at a 95% confidence level. According to the statistical tests, there was a statistically significant difference between the two models, since the CD is equal to 0.57 and the distance between their ranks is equal to 1. The CD value of 0.57 is the same for all scenarios, as the number of experiments and algorithms used is also the same. Consequently, there is a statistically significant difference between the two models for all three metrics in all cases. The detector efficacy focusing on the influence of ε values and attack patterns was also analyzed. The experiments showed that LOF reached a higher F1-score for lower ε, while OCSVM outperformed LOF for higher ε values, as Figure 9 shows. Clear differences in detection performance were not observed for each attack pattern. Figure 10 presents a box plot that depicts the F1-Score results obtained with LOF and OCSVM considering the three attack patterns (random, inter, and sin). OCSVM achieved a higher median F1-Score than LOF for the three attack patterns. In fact, the OCSVM median was very close to the third quartile of LOF, and the minimum values of OCSVM were very close to the LOF median. Lastly, the influence of the detection model, attack magnitude (ε), and attack pattern on the Attack Detection Module's efficacy was investigated. The Pearson correlation coefficient was employed to identify a linear relationship between each factor and the detection performance. A coefficient value of 0 means no correlation; a value close to −1 or 1 represents full correlation. The obtained correlations were 0.016, 0.244, and 0.651, for attack pattern, detection model, and attack magnitude, respectively. This result suggests that the detection performance is most affected by the attack magnitude, while the attack pattern and the detection model have a low correlation with the detector efficacy. Feature Importance Seeking to provide more insights into what distinguishes FGSM adversarial samples from legitimate ones, the importance of each feature inputted to the Attack Detection Module was analyzed. Spectral Feature Selection for Supervised and Unsupervised Learning (SPEC) [44] was used to this end. Figure 11 shows the features sorted by their importance.
Hurst exponent (hurst), median, entropy, ratio between mean and maximum (ratio-mean-max), and correlation (corr) were the most promising ones, with roughly the same importance. Mean, standard deviation (std), and maximum (max) showed slightly worse performance than the best ones. DFA, despite still scoring a relatively high importance, is clearly less important than the features above. In short, the computed feature importances suggest that a great part of the features contribute significantly to distinguishing legitimate and adversarial samples, except for those related to the minimum value (the minimum itself and the ratio between minimum and maximum). Discussion Considering two different classifiers (LOF and OCSVM), three attack patterns (random, intermittent, and sinusoidal), and a broad range of attack magnitudes, the results showed that the proposed approach was consistently effective at detecting adversarial examples over several situations. The approach successfully detected low-magnitude attacks, which are particularly challenging due to their small difference from legitimate samples, and achieved excellent performance in detecting high-magnitude attacks. The variation of attack patterns did not affect the detection capacity. Moreover, OCSVM and LOF both had a good performance, but OCSVM was statistically superior to LOF. Abdu-Aguye et al. [12] also achieved high accuracy at detecting FGSM adversarial examples with an OCSVM-based scheme. Unlike this work, their scheme focused on defending classification models and did not vary attack patterns and attack magnitudes. Despite these methodological differences, the high efficacy reported by both studies suggests that one-class classifiers are a promising option to address this issue. Almost all features extracted from the forecasting model input proved to be good indicators of artificial manipulation within the analyzed data. First, this suggests that adversarial examples affect the analyzed window's basic features, such as minimum, maximum, and median. Moreover, this result implies that features related to the time series' spectral behavior, such as the Hurst exponent, are influenced by these artificial manipulations. Combining the detection approach with a mitigation mechanism allowed a significant reduction in the error increase caused by adversarial samples. For high-magnitude attacks, the error increase plummeted from figures above 700% to roughly 20%. The results were also relevant for low-magnitude attacks, dropping from above 200% to around 45% in the worst case. Both TCN and LSTM could benefit from using the attack detection and mitigation mechanism, but slightly better results were found for TCN. Other studies [10,11] that followed a different mitigation strategy (e.g., adversarial training) also reported a positive impact on the target model's robustness towards adversarial examples. Nevertheless, with a simple mitigation scheme backed by an effective detection approach, this work achieved a positive outcome for attack mitigation without requiring adversarial examples during the training phase. Conclusions DL models are great options to forecast the generation capacity of a PV power plant, but they are vulnerable to adversarial examples: as the results showed, the forecasting error under attack increased by up to 962.21% when compared to the forecasting error under non-attacked conditions.
On the other hand, detecting and discarding these examples reduces the damage to the forecasting model's accuracy: in the worst case, the error increased by 77.60% when compared to the forecasting error under non-attacked conditions. In this sense, schemes that detect adversarial examples and mitigate them should not be neglected, in order to avoid the malfunction of the power plant. Future work includes investigating other methods to defend regression models against adversarial examples and testing the proposed scheme on different attack methods and domains. Furthermore, the proposed mitigation approach will be extended, as it is a likely point of improvement for further reducing the error increase caused by attacks. Conflicts of Interest: The authors declare no conflict of interest. Abbreviations The following abbreviations are used in this manuscript:
7,659.4
2021-09-26T00:00:00.000
[ "Environmental Science", "Engineering", "Computer Science" ]
Asymptotic behaviour of Stokes flow in a thin domain with a moving rough boundary We consider a problem that models fluid flow in a thin domain bounded by two surfaces. One of the surfaces is rough and moving, whereas the other is flat and stationary. The problem involves two small parameters ϵ and μ that describe film thickness and roughness wavelength, respectively. Depending on the ratio λ=ϵ/μ, three different flow regimes are obtained in the limit as both of them tend to zero. Time-dependent equations of Reynolds type are obtained in all three cases (Stokes roughness, Reynolds roughness and high-frequency roughness regime). The derivations of the limiting equations are based on formal expansions in the parameters ϵ and μ. Introduction The fundamental problem in lubrication theory is to describe fluid flow in a gap between two adjacent surfaces which are in relative motion. In the incompressible case, the main unknown is the pressure of the fluid. Having resolved the pressure, it is possible to compute other fundamental quantities such as the velocity field and the forces on the bounding surfaces. To increase the hydrodynamic performance of various lubricated machine elements, for example journal bearings and thrust bearings, it is important to understand the influence of surface roughness. In this connection, one encounters various approaches, commonly based on the equation proposed by Osborne Reynolds in 1886 [1]. Although a number of averaging methods considering surface roughness have been proposed over the last 40 years (e.g. [2][3][4]), homogenization has prevailed as the proper way to average [5,6]. Homogenization is a rigorous mathematical theory that takes into account information about local effects on the microscopic level [7]. This study is concerned with the asymptotic behaviour of Stokes flow in a narrow gap described by two small parameters ϵ and μ. The parameter ϵ is related to the distance between the surfaces, whereas μ is the wavelength of the periodic roughness. In many problems involving two small parameters, the way in which the parameters tend to zero is of primary importance, and the limiting equations may differ depending on whether ϵ tends to zero faster than, slower than, or at the same rate as μ. Using formal asymptotic expansions in the evolution Stokes equations, we show that three different asymptotic solutions, i.e. three different flow regimes, exist in the limit as ϵ > 0 and μ > 0 tend to zero, depending on whether the limiting ratio λ = lim_{(ϵ,μ)→(0,0)} ϵ/μ equals zero, a positive number or ∞. In all three flow regimes, the limiting pressure is governed by a two-dimensional equation of Reynolds type whose coefficients take into account the fine microstructure of the surface, i.e. a homogenized equation. The situation can be summarized as follows: Stokes roughness regime. The case when 0 < λ < ∞. One finds that the coefficients of the homogenized equation are obtained by solving three-dimensional so-called cell problems which depend on the parameter λ. Reynolds roughness regime. The case when λ = 0. The cell problems are two-dimensional and the proposed averaged equation appears in, for example, [3,8,9]. The same limiting equations are obtained if one lets λ → 0 in the Stokes roughness. High-frequency roughness regime. The case when λ = ∞. We obtain a limiting equation that is particularly simple and cheap to treat. The same limiting equations are obtained if one lets λ → ∞ in the Stokes roughness. This work is closely related to the studies by Bayada & Chambat [4,10] and Benhaboucha et al.
[11], who considered the stationary case, i.e. only the flat surface is moving. The main novelty is the treatment of the unstationary case (the rough surface is moving) as well as the way that and μ tend to zero. The paper is organized as follows: §2 is devoted to the formulation of the problem and basic notations. Section 3 contains a summary of the main results of this work. In §4, we define the formal asymptotic expansions, the corresponding change of variables, domains and differential operators for the problem. Section 5 is concerned with the Stokes roughness, with constant ratio /μ = λ. This is the case analysed in [4,10]. Section 6 is devoted to the case = μ 2 , which corresponds to Reynolds roughness. We apply the asymptotic expansion method in one parameter and derive the homogenized Reynolds equations. The last section deals with the case μ = 2 , which belongs to the high-frequency roughness regime. We obtain the classical Reynolds equations with truncated film thickness. We note that neither = μ 2 nor μ = 2 is covered in [4,10], whereas [11] only covers = μ 2 . Evidently, as mentioned in [4,10], identical equations are obtained if one lets λ → 0 and λ → ∞ in the Stokes roughness regime. However, from a mathematical point of view, there is no apparent reason why taking limits in such different ways would yield the same result. For clarity, the main results are presented as 'theorems' and their derivations as 'proofs', although the method of formal expansion is not rigorous by mathematical standards. Choosing this style, we hope to make the paper accessible to a wider audience. We stress, however, that all calculations (including limit processes) can be made rigorous. Problem formulation and basic notations This study is concerned with thin film hydrodynamic lubrication of rough surfaces. For simplicity, we suppose that one of the surfaces is rough and moves with velocity v = (v 1 , v 2 , 0) and that the other is flat and stationary. As the rough surface is moving, the film thickness varies in both space and time, thus rendering the problem unstationary. A point in space (R 3 ) is denoted as x = (x 1 , x 2 , x 3 ), and t is a time variable that belongs to the interval [0, T]. The problem considered is the evolution Stokes system ∂u ∂t − ν u + ∇p = 0 (2.1) and div u = 0, where ν (viscosity) is a constant, and u = (u 1 , u 2 , u 3 ) (velocity field) and p (pressure) are unknown. We shall write where and μ are two small parameters. The basic idea of the homogenization method is to treat x , y , t and τ as independent variables. Equations (2.1) and (2.2) are assumed to hold in a moving space domain Ω μ (t), defined by where ω is an open connected set in R 2 with smooth boundary, outward unit normally denoted byn and the function H(x , y , t, τ ) describes the geometry of the upper surface. H is assumed to be Y-periodic in y , Y = [0, 1] × [0, 1] being the cell of periodicity and T-periodic in τ . More precisely, where h 0 describes the global film thickness, whereas the Y-periodic function h per represents the roughness. Thus, is related to the film thickness, whereas μ is the wavelength of the roughness. Moreover, we define the 'minimum film thickness' The boundaries of Ω μ (t) are We assume the following no-slip boundary conditions: where g = (g 1 , g 2 , g 3 ) is some given function and the initial condition where ∇H = (∂H/∂x 1 , ∂H/∂x 2 , 0). For convenience, we use the notation for integrals of a function f . 
Moreover, we denote by e 1 , e 2 , e 3 the standard basis vectors in R 3 . Finally, to ensure the existence of u, we must require some compatibility between the boundary conditions and H. To this end, it is assumed that g is a C 1 vector field defined on R 3 such that divg = 0, g(x 1 , x 2 , 0) = (0, 0, 0), for all (y , t, τ ). Formal asymptotic expansion in and μ We analyse the asymptotic behaviour of the equations of motion (2.1) and (2.2). We define the following expansions for u and p: where x , z, y and τ are defined by (2.3) though subsequently treated as independent variables. As the roughness is periodic, it is assumed that u n,m (x , z, y , t, τ ) and p n,m (x , z, y , t, τ ) are Y-periodic in y and T-periodic in τ . It is convenient to define also the following domains: The boundaries of Ω(y , t, τ ) and Ω * (t) are Note that B(x , t, τ ) and B * (x , t) do not have lateral boundaries because of the periodicity of Y. The boundary of Y z is denoted by (a) Differential operators Figure 1 describes the case when and μ tend to zero with constant ratio 0 < λ < ∞. That is, we assume that μ = /λ, where λ is a positive constant. We define the following asymptotic expansions: Stokes roughness Inserting (4.1) and (4.2) into (2.1) and equating terms of the same order using gives Similarly for (2.2), we have and 0 : div The boundary conditions are and initial conditions are (a) Analysis of equations The main result pertaining to the Stokes roughness is as follows. Theorem 4.1. The leading term u 0 in expansion (4.1) for u is given by where α i (i = 0, 1, 2) is a solution of the periodic cell problems (4.12) and (4.13) and the leading term p −2 in expansion (4.2) for 2 p is a solution of the boundary value problem and . We are looking for solutions u 0 of the form (4.8) and p −1 of the form where α i = α i (x , z, y , t, τ ) and q i = q i (x , z, y , t, τ ) are to be determined. Clearly, (4.8) and (4.11) satisfy (4.4) and (4.6) if and The above systems of equations are called cell problems, whose solutions α i and q i are Y-periodic, and the boundary conditions are Multiplying (4.7) by φ(x ) ∈ C 1 (ω) and integrating by parts using the Gauss-Green theorem, we obtain As φ is arbitrary, it holds that div By integrating (4.8), we obtain Figure 2 describes the case when the wavelength of the roughness is much greater than the film thickness, i.e. μ . This case can be studied by assuming that is a function of μ such that lim μ→0 ε(μ) μ = 0. Reynolds roughness For simplicity, we shall assume that = μ 2 . We postulate the following expansions for u and p : Plugging (5.1) and (5.2) into (2.1) and equating terms of the same order using The boundaries conditions are and initial conditions are (a) Analysis of equations The main result is as follows. Theorem 5.1. The leading term u 0 in expansion (5.1) for u is given by where A and b are given by (5.22) and (5.23), and From (5.6), we deduce u 0 3 = 0 in Ω because of the boundary conditions for u 0 3 . Thus, (5.5) in component form becomes Hence, the first two equations may be written as Integrating (5.13) with respect to z and taking into account the boundary values of u 0 , we get (5.9). Integrating (5.9) once more, we obtain Multiplying (5.7) with φ(y ) and integrating over B using the Gauss-Green theorem gives for all smooth and Y-periodic φ. 
Hence, Inserting (5.14) into (5.15), it is seen that p −3 can be written in the form (5.12), where q i is periodic solutions of Multiplying (5.8) with φ(x ) ∈ C 1 (ω) and integrating gives As φ is arbitrary, it holds that div x u 0 z y + ∂H y ∂t = 0 in ω (5.19) and (u 0 z y −ḡ z y ) ·n = 0 on ∂ω, (5.20) where Figure 3 illustrates the case when the roughness wavelength is small compared with the film thickness, i.e. (a) Analysis of equations The main result is as follows. The matrix A λ and vector b λ are macroscopic quantities known as 'flow factors'. They are calculated by solving local problems on a periodic cell, thus taking into account the local geometry, i.e. the roughness, of the problem. The expression A λ ∇p + b λ comes from averaging the first two components of the velocity field u z y . As the flow is governed by an equation which is a generalized form of the Reynolds equation, one can say that the thin film approximation is valid on the macroscopic scale in all three cases. In the Stokes roughness regime, the thin film approximation is not valid on the microscopic scale-the local problems are periodic analogues of the Stokes equation and three dimensional. Consequently, the calculation of the flow factors comes at a high cost. However, as λ tends to zero, the solution of the local problems asymptotically satisfies problems that are local variants of the classical Reynolds equation. Thus, the thin film approximation is valid also on the microscopic level in the Reynolds roughness regime. In conclusion, one can say that some information about the flow on the microscopic level is lost at the extreme cases λ = 0 and λ = ∞. In fact, A ∞ and b ∞ retain no information about the roughness (except the minimum height) and the cell problems have trivial solutions. The limiting equation in the high-frequency regime is exactly the classical Reynolds equation, which has been well studied. It can be interpreted as though the flow is prevented from entering the thin valleys of the rough surfaces. The information loss in the case λ = 0 is due to the thin film approximation on the microscopic level. It would be interesting to compare A λ , b λ to A 0 , b 0 for small values of λ as well as the corresponding flow patterns. We hope to accomplish such a study in the future including numerical simulations. As to previous studies, the present result reduces to the stationary case when ∂h 0 /∂t = 0 and ∂h per /∂τ = 0. Compare with eqn (17) in [4] and theorem 3.1 in [11] for the Stokes roughness; eqn (25) in [4] and theorem 3.2 in [11] for the Reynolds roughness; and eqn (20) in [4] for the highfrequency roughness. Note that, in the unstationary case, time plays only the role of a parameter in all three limiting equations. Although the original equation (2.1) contains the term ∂u/∂t, the time derivative of the unknown solution does not appear in the limiting equation nor in the local problems.
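Although the present work is purely analytical, the limiting equations lend themselves to straightforward numerical treatment. As an illustrative sketch only, the fragment below solves a one-dimensional stationary analogue of the classical Reynolds equation, d/dx(h³ dp/dx) = 6νv dh/dx, by finite differences; the film profile, viscosity, sliding speed, and boundary values are assumptions chosen for demonstration and are unrelated to the cell problems and flow factors derived above.

```python
import numpy as np

# 1D stationary analogue of the classical Reynolds equation:
#   d/dx( h(x)^3 dp/dx ) = 6 * nu * v * dh/dx,   with p(0) = p(1) = 0.
nx = 201
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]

nu = 0.05                                   # viscosity (illustrative)
v = 1.0                                     # speed of the moving surface (illustrative)
h = 0.1 + 0.05 * np.cos(2 * np.pi * x)      # film thickness profile (illustrative)

h3 = h ** 3
h3_mid = 0.5 * (h3[:-1] + h3[1:])           # h^3 at the cell interfaces i +/- 1/2

A = np.zeros((nx, nx))
rhs = np.zeros(nx)
for i in range(1, nx - 1):
    A[i, i - 1] = h3_mid[i - 1]
    A[i, i] = -(h3_mid[i - 1] + h3_mid[i])
    A[i, i + 1] = h3_mid[i]
    rhs[i] = 6.0 * nu * v * (h[i + 1] - h[i - 1]) / (2.0 * dx) * dx ** 2
A[0, 0] = A[-1, -1] = 1.0                   # Dirichlet ends: p = 0

p = np.linalg.solve(A, rhs)                 # pressure distribution along the film
```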
3,423
2014-07-08T00:00:00.000
[ "Mathematics" ]
Regional WebGIS User Access Patterns based on a Weighted Bipartite Network With the rapid development of geographic information services, Web Geographic Information Systems (WebGIS) have become an indispensable part of everyday life; correspondingly, map search engines have become extremely popular with users and WebGIS sites receive a massive volume of requests for access. These WebGIS users and the content accessed have regional characteristics; to understand regional patterns, we mined regional WebGIS user access patterns based on a weighted bipartite network. We first established a weighted bipartite network model for regional user access to a WebGIS. Then, based on the massive user WebGIS access logs, we clustered geographic information accessed and thereby identified hot access areas. Finally we quantitatively analyzed the access interests of regional users and the visitation volume characteristics of regional user access to these hot access areas in terms of user access permeability, user usage rate, and user access viscosity. Our research results show that regional user access to WebGIS is spatially aggregated, and the hot access areas that regional users accessed are associated with specific periods of time. Most regional user contact with hot accessed areas is variable and intermittent but for some users, their access to certain areas is continuous as it is associated with ongoing or recurrent objectives. The weighted bipartite network model for regional user WebGIS access provides a valid analysis method for studying user behaviour in WebGIS and the proposed access pattern exhibits access interest of regional user is spatiotemporal aggregated and presents a heavy-tailed distribution. Understanding user access patterns is good for WebGIS providers and supports better operational decision-making, and helpful for developers when optimizing WebGIS system architecture and deployment, so as to improve the user experience and to expand the popularity of WebGIS.  Corresponding author: wuhuayi @whu.edu.cn INTRODUCTION With the rapid development of Internet technology, Web Geographic Information Systems (WebGIS) are becoming more and more important in people's daily life.The main reason users access WebGIS is to query geographical location, traffic routes, and information about surrounding areas of a location at a specified distance (Zhang, 2004;Wu, 2004).Groups of WebGIS users display certain access patterns, implying that the regularities found in user behaviours as documented in user access records can make online behaviour empirically understandable and predictable.By analyzing users' WebGIS access logs, we can measure user access interests and access patterns for WebGIS, to support WebGIS provider decisionmaking for better operations, and help developers to optimize WebGIS system architecture and deployment, thus improving the user experience and expanding the popularity of WebGIS.Therefore, the discovery of access regularities in WebGIS user access logs is significant and important for the empirical understanding of regional users. 
In recent years, user access regularities in WebGIS have become an extremely active research area.Scholars have executed studies deploying basic statistical measures to online map applications (Lin, 2009); Zheng (2009) carried out a research program based on mined location-based information, such as tracked user activities using GPS trajectories, and user geographic diaries, to help clients understand user personal lifestyle characteristics.These works can also provide recommending services based on the similarity of tracks for different users (Zeng, 2008).Xia (2014) indicated that the user access to spatial data was intermittently active during the day and relatively calm during the night; and the accessed content is spatiotemporal related.Li (2012) indicated that access to tiled spatial data (tiles) was aggregative and outburst.The researches above all focused on the individual access behaviour or group users' access behaviour in WebGIS.However, they have not associated regional characteristics in user access behaviour with and the accessed content. In our work, we established a weighted bipartite network model to explore regularities in regional users' access behaviour.First, the accessed geographic information (tiles) was clustered to form hot access areas according to regional characteristics, then we analyzed the accessed interests and the regional characteristics of users when accessing hot access areas.Our results show that regional users WebGIS access patterns exhibit spatiotemporal regularity in both interests and visiting volume. A WEIGHTED BIPARTITE NETWORK MODEL FOR REGIONAL USER ACCESS TO A WEBGIS A bipartite network is one means to represent and analyse complex networks, and is consists of two types of nodes as well as the edges that connect nodes (Latapy, 2008).Many scholars use the bipartite network model to describe mutual relationships in the real world, such as a network for movies and actors (Watts, 1998), the network for authors and literature (Newman, 2001) and the network for audience and songs (Lambiotto, 2005).In this paper, we propose a weighted bipartite network model for regional user WebGIS access as a means to quantify the relationship between regional users and the content accessed. The bipartite network model for regional user WebGIS access is represented as a weighted bipartite graph , where the node set V contains two types of nodes: m represents regional user nodes RU ru ,ru , ,ru , ,ru ,ru 1 2 i m 1 m and n represents hot access area nodes HA ha ,ha , ,ha , ,ha ,ha 1 2 j n 1 n .We used an adjacency matrix W i 1,2, ,m and j 1,2, ,n w ij to express the accessed hot areas in relation to regional users; each element ij w in the matrix represents the weight of edge (i, j), and is the access frequency of an individual regional user access to an individual hot access area. 
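As a minimal sketch of how such a weighted bipartite network can be assembled from access logs, the fragment below counts, for every (regional user, hot access area) pair, the number of accesses and stores them in the adjacency matrix W; the log format and identifiers are hypothetical.

```python
import numpy as np

# Hypothetical log records: one (user_id, hot_access_area_id) pair per access.
log = [("u1", "ha1"), ("u1", "ha1"), ("u1", "ha2"),
       ("u2", "ha2"), ("u3", "ha1"), ("u3", "ha3"), ("u3", "ha3")]

users = sorted({u for u, _ in log})
areas = sorted({a for _, a in log})
u_idx = {u: i for i, u in enumerate(users)}
a_idx = {a: j for j, a in enumerate(areas)}

# W[i, j] = access frequency of regional user i to hot access area j (edge weight).
W = np.zeros((len(users), len(areas)), dtype=int)
for u, a in log:
    W[u_idx[u], a_idx[a]] += 1
```

Node degree and node strength, defined next, are then simple reductions over the rows and columns of W.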
Node degree is defined as the number of edges which connect the node with other nodes.In this paper, ru k and ha k represent the degree of regional user nodes and the degree of hot access area nodes, respectively, as seen in Equations ( 1) and ( 2): 1, 2, , , 1, 2, , 1 Node strength is defined as the sum of the weights of all the edges connected to the node, in this paper, ru s and ha s represent the strength of a regional user node and the strength of a hot access area node, respectively, as in Equations ( 3) and ( 4) : Figure 1 illustrates an example of a weighted bipartite network for regional users and a hot access area; it consists of eight regional user nodes and seven hot access area nodes (Ma, 2008) (Zhao, 2012).The edge weight represents the access frequency for a regional user's access to a hot access area.As Figure 1 shows, the regional user 1 ru accessed two hot access areas 1 ha and 2 ha thus the regional user' node degree is 2 while the edge weights of the two hot access areas are 150 and 200, respectively.Thus, the strength of the regional user node is 350.The data sample used in this paper is the access logs of user from Beijing in a public geospatial information service "TIANDITU".The logs are from February 7th to February 16th, 2014.The date the number of individual users visiting the site, and frequency of visits from these access logs are shown in Table 1. Clustering hot access area Due to vast number of tiles accessed by regional users, it is difficult to analyze the access characteristics of each tile individually, so we used a k-means algorithm (Yu, 2010) (Qiu, 2010) to cluster the accessed tiles by regional users in Beijing according to the geographic attributes of the tiles.After a number of experiments, the accessed tiles for each day are clustered into seven classes, the square sum of distance between clustered groups is 94.7% for all clustered groups; indicating that is cluster grouping is an appropriate classification for accessed tiles. The ratio of each individual access area and all access to WebGIS hot access areas is shown in Figure 2. ANALYSIS OF REGIONAL USER ACCESS PATTERNS In this section, the weighted bipartite network for "regional user and hot access area" is used to analyze the access pattern of regional users in Beijing.Based on the user access logs from February 7th to February 16th, 2014, we established ten weighted bipartite networks for regional users and hot access areas as G 1, 2, ,10 i i . Access interest scope: In the weighted bipartite network model, the degree of regional user node ru k represents the number of hot access areas that the regional user accessed.ru k can reflect the geographical scope of the access interests of a regional user.The regional user node degree distribution of the ten weighted bipartite networks G 1, 2, ,10 i i is shown as in Figure 3.The distribution function Pkdescribes the distribution of the node degrees that represent the probability of a randomly selected node whose degree is k (Hu, 2009).The results show that the maximum degree value of the regional user nodes is seven in the ten weighted bipartite networks, indicating that less than 0.0005% of regional users visited all hot access areas.However, there are different regional characteristics in user access to tiles: most regional users access only a few concentrated hot access areas; while at the same time a few regional users access multiple dispersed hot access areas. 
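Following the definitions above, node degree, node strength, and the empirical degree distribution P(k) shown in Figure 3 can be obtained directly from the adjacency matrix W; the small matrix below is a synthetic placeholder rather than data from the study.

```python
import numpy as np

# A small placeholder for the adjacency matrix built in the previous sketch.
W = np.array([[2, 1, 0],
              [0, 1, 0],
              [1, 0, 2]])

k_ru = (W > 0).sum(axis=1)   # regional user node degree: number of hot areas reached
s_ru = W.sum(axis=1)         # regional user node strength: total number of accesses
k_ha = (W > 0).sum(axis=0)   # hot access area node degree
s_ha = W.sum(axis=0)         # hot access area node strength

# Empirical degree distribution P(k) over regional user nodes (as in Figure 3).
degrees, counts = np.unique(k_ru, return_counts=True)
P_k = dict(zip(degrees.tolist(), (counts / counts.sum()).tolist()))
```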
.The results also show that more than 85% of regional users access to only one hot access area, and that 90% of regional users access no more than two hot access areas.These results indicate that there is a spatial aggregation pattern in regional user access to tiles. 4.1.2 Interest strength in regional user access pattern: In the weighted bipartite network model, the edge weight represents the access frequency of regional user i ru to hot access area j ha .The strength of regional user node ru s represents the number of total accesses, and reflects the interest strength in regional user access to tiles.The strength of the regional user node distribution of the ten weighted bipartite networks G 1, 2, ,10 i i is shown as in Figure 4. Distribution function Ps describes the distribution of the node strength of a regional user, representing the probability of a randomly selected node whose strength is s (Wu, 2011).Figure 4 indicates that more than 80% of regional users did less than or equal to 100 access to the WebGIS, while less than 20% of regional users did 100 to 1000 access to the WebGIS.However, interest strength distribution of regional users presents a heavy-tailed distribution. Characteristics of regional user access to hot access areas Based on the weighted bipartite network model for regional user and hot access area, we analyzed the hot access areas and the access regional user preferences in terms of user permeability, usage rate, and the viscosity of a hot access area. User permeability of a hot access area: User permeability of a hot access area refers to the proportion of regional users who accessed the hot access area to all regional users.It indicates the popularity of a hot access area to regional users.In the weighted bipartite network model, the user permeability of hot access area j ha can be expressed as j ha UP , calculated by Equation ( 5).The degree ha j k represents the access frequency of the hot access area, and m represents the total number of regional users who accessed the hot access area.(longitude is 20˚E and is 26˚N) and Class70 (longitude is 129˚E and latitude is 41˚N) are low.This shows that most regional user access is concentrated in specific hot access areas identified with hot news happenings, or in residential areas where the users are located, while a few regional users have a special goals and areas in mind when accessing WebGIS.For different hot access areas, their regional user permeability and user usage rate are positively correlated, for example Class41 to Class50 with high regional user permeability also have high user usage rates.That indicates popular hot access areas have high usage rates.Hot access areas appear continuously across a continuous time period as seen in Figure5 and 6; the content accessed by users in a region is temporally associated. 
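As a brief illustration of Equation (5), user permeability can be computed from the same adjacency matrix by counting, for each hot access area, the fraction of regional users with at least one access; the matrix values below are synthetic.

```python
import numpy as np

# Placeholder adjacency matrix: rows = regional users, columns = hot access areas.
W = np.array([[2, 1, 0],
              [0, 1, 0],
              [1, 0, 2]])

m = W.shape[0]               # total number of regional users
k_ha = (W > 0).sum(axis=0)   # degree of each hot access area node

# Equation (5): user permeability of hot access area j, UP_j = k_ha_j / m.
UP = k_ha / m
```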
4.2.3 Access viscosity of regional user: Access viscosity of a regional user is defined as the average access frequency of a hot access area.We find that the higher the access viscosity, the greater the popularity of a hot access area.In the weighted bipartite network model, the access viscosity of a regional user to a hot access area These results indicate that there are huge numbers of users that access some hot access areas, but with a low average access frequency and lower access viscosity; however there are a few users who access some access areas with high access viscosity and a higher average access frequency.This indicates that access from most users to hot access areas is not consecutive, while at the same time, access from some users, with a clear purpose or goal, to specific areas is consecutive. CONCLUSION In this paper, the weighted bipartite network for regional user and hot accessed areas was used to describe the relationship between regional user access and hot access areas.Based on WebGIS access logs from regional users in Beijing from February 7th to February 16th in 2014, we analyzed the access interests of regional user and characteristics of regional users when visiting hot access areas.The proposed weighted bipartite network can be used in studying user behaviour in WebGIS, to quantitatively analyse user access characteristics.The proposed access patterns present a spatiotemporal aggregated of access interests, as interest strength of regional users presents a heavytailed feature; popular hot access areas have high usage rates and the content accessed by users in a region is temporally associated; access from most users to hot access areas is not consecutive, while access from some users with a special purpose to specific areas is consecutive.The research results provide an empirical reference a support for WebGIS decision making and planning.In future work, we will study the model of regional user access patterns, to mine more of user access feature patterns in different regions, especially focusing on the spatiotemporal characteristics in user access patterns. Figure 1 . Figure 1.Weighted bipartite network example for regional user and hot access area Figure 2 indicates that the hottest access areas are from Class 41 to 50, whose longitude range is [114˚E, 117˚E] and the latitude range is [39˚N, 41˚N].It also indicates that these areas are queried for location-based services more often than other areas. Figure 2 . Figure 2. Ratio of each individual access area and all access to WebGIS hot access areas Figure 3 . Figure 3. Regional user node degree distribution of the ten weighted bipartite networks Figure 4 . Figure 4. 
Regional user node strength distribution of the ten weighted bipartite networks. Figure 5 shows the user permeability of hot access areas, in which the x-axis represents the class number of the hot access areas, from Class1 to Class70, and the y-axis represents the user permeability UP_ha_j for Class j. As Figure 5 shows, the user permeability of Class22 to Class36 (longitude range [107˚E, 118˚E], latitude range [16˚N, 23˚N]) and Class41 to Class50 (longitude range [114˚E, 117˚E], latitude range [39˚N, 41˚N]) is high, but the user permeability of Class1 (longitude 20˚E, latitude 26˚N) and Class70 (longitude 129˚E, latitude 41˚N) is low. This shows that most regional user access is concentrated in specific hot access areas identified with hot news happenings, or in residential areas where the users are located, while a few regional users have special goals and areas in mind when accessing WebGIS. Figure 5. User permeability of hot access areas. Figure 6. Usage rate of hot access areas. The user access viscosities of hot access areas are shown in Figure 7. The user access viscosities of hot access areas near Class1 (longitude 20˚E and latitude 26˚N) and Class70 (longitude 129˚E and latitude 41˚N) are high, while the regional user access viscosities of Class22 to Class36 (longitude range [107˚E, 118˚E], latitude range [16˚N, 23˚N]) and Class41 to Class50 (longitude range [114˚E, 117˚E], latitude range [39˚N, 41˚N]) are low. Figure 7. Regional user viscosity rate for hot spots. Table 1. Statistical results of access to a WebGIS of users in a
3,736.6
2015-07-10T00:00:00.000
[ "Computer Science", "Geography" ]
The Impact of Floating Raft Aquaculture on the Hydrodynamic Environment of an Open Sea Area in Liaoning Province, China: The sea area of Changhai County in Dalian City is a typical floating raft aquaculture area, located in Liaoning Province, China, where a key issue in determining the scale and spatial layout of the floating raft aquaculture is the assessment of the impact of aquaculture activities on the hydrodynamic environment. To address this issue, we established depth-averaged two-dimensional shallow water equations and three-dimensional incompressible Reynolds-averaged Navier–Stokes equations for the open sea area described in this paper. The impact of floating rafts for aquaculture on hydrodynamic force was reflected in the numerical model by changing the Manning number, where scenarios with different aquaculture densities were taken into account. Finally, the water exchange rate of the floating raft aquaculture area in the study area was calculated. It was found, through a comparison between the simulated value and the measured value obtained via layered observation, that the two values were in good agreement with each other, indicating that the model exhibits great accuracy. In addition, the calculation results for scenarios before and after aquaculture were compared and analyzed, showing that from low-density to high-density aquaculture zones, the variation in flow rate was greater than 80% at the peak of a flood tide. The water exchange rates of the water body after 1 day, 4 days, and 8 days of water exchange were also calculated, and the results show that they had been reduced by 17.92%, 13.59%, and 1.63%, respectively, indicating that the existence of floating rafts for aquaculture indeed reduced the water exchange capacity of the water body. The model described in this paper can serve as a foundation for other studies on aquaculture in open sea areas, and it provides a theoretical basis for the scientific formulation of marine aquaculture plans and the rational optimization of the spatial layout. Introduction Changhai County, Dalian, is located on the eastern side of the Liaodong Peninsula in the northern waters of the Yellow Sea in Liaoning Province, China (as shown in Figure 1), and its geographic coordinates are 122°17′ E-123°13′ E, 38°55′ N-39°35′ N. As the only county completely located on islands in Northeast China, Changhai County has an area of water covering 10,324 square kilometers, which is an ideal habitat for temperate marine organisms, such as fish, shrimp, shellfish, and algae. In recent years, with the rapid development of the marine aquaculture industry, floating raft and cage aquaculture industries have emerged in this open sea area, which has brought not only a great deal of economic benefits to residents, but also huge challenges to marine hydrodynamics and ecological environment protection. Some aquaculture farmers excessively pursue high yields with a lack of scientific and reasonable justification, so they tend to increase the scale and density of aquaculture in a disorderly manner. Due to the over-crowded raft areas, the rafts and facilities have
As a result, it is impossible for algae and bait to be evenly distributed with the hydrodynamic force, which would support the growth of marine organisms. This results in a phenomenon where the cultured organisms that were longitudinally arranged in a raft area appear to grow well, while those in the middle grow slowly or even die due to a lack of bait. Some aquaculture operators who have been cultivating scallops, oysters, or sea cucumbers in floating rafts and cages have gradually realized the severity of the problem. Thanks to such changes in their awareness, they are looking for a scientific and reasonable solution to the problem, with the ultimate goal of determining the degree of impact of the overall structure for aquaculture, including rafts, floaters, ropes and cages, and even the cultured organisms, on the hydrodynamic environment of an open sea area. The solution of this problem could provide technical support for the scientific formulation of a sowing density plan for cultured organisms, the rational selection of the location of an aquaculture area, and the precise placement of bait casting devices in bait-deficient zones. In recent years, some researchers have carried out relevant studies on the mechanisms of interactions between raft placement and hydrodynamic environments in raft aquaculture areas; however, most of the studies have focused on the changes in water quality and the sediment environment or used field observations and model tests. For example, Zhao et al. [1] conducted a simulation-based assessment of the impact of the deep-sea cage aquaculture of Lateolabraxjaponicus on water quality and the sediment environment in the Yellow Sea of China based on a three-dimensional Lagrangian particle tracking model. Water quality simulations indicated that deep sea cages account for 26% of the total dissolved inorganic nitrogen and 19% of the active phosphorus content. The model results indicated that the installation of all deep-sea cages will lead to acceptable In recent years, some researchers have carried out relevant studies on the mechanisms of interactions between raft placement and hydrodynamic environments in raft aquaculture areas; however, most of the studies have focused on the changes in water quality and the sediment environment or used field observations and model tests. For example, Zhao et al. [1] conducted a simulation-based assessment of the impact of the deep-sea cage aquaculture of Lateolabraxjaponicus on water quality and the sediment environment in the Yellow Sea of China based on a three-dimensional Lagrangian particle tracking model. Water quality simulations indicated that deep sea cages account for 26% of the total dissolved inorganic nitrogen and 19% of the active phosphorus content. The model results indicated that the installation of all deep-sea cages will lead to acceptable levels of water quality, but that sediments may become polluted. The coupled model can be used to predict the environmental impacts of deep-sea cage farming and provide a useful tool for designing the layout of the integrated multi-trophic aquaculture of organic extractive or inorganic extractive species. Klebertet et al. [2] carried out field monitoring and modeling for the three-dimensional deformation of a large circular flexible sea cage in high currents using an acoustic Doppler current profiler (ADCP) and an acoustic Doppler velocimeter (ADV). The results showed a reduction of 30% in the cage volume for a current velocity above 0.6 m/s. 
The measured current reduction in the cage was 21.5%. Moreover, a simulation model based on super elements describing the cage shape was applied, and the results showed good agreement with the cage deformations. Dong et al. [3] conducted an experimental study involving an internationally advanced experimental model of fluidstructure interactions, which described the fluid-structure interactions of flexible structures, in a study on the cage aquaculture of Thunnusorientalis. They measured the drag force, cage deformation, and flow field inside and around a scaled net cage model composed of different bottom weights under various incoming current speeds in a flume tank. Results indicated that the drag force and cage volume increased and decreased, respectively, with the bottom weight. Owing to the significant deformation of the flexible net cage, a complex fluid-structure interaction occurred and a strong negative correlation between the drag force and cage volume was obtained. Furthermore, an area where the current speed was often reduced was identified. The intensity of this reduction depended on the incoming current speed. The results of this study can be used to understand and design optimal flexible sea cage structures that can be used in modern aquaculture. In addition, a team led by Dong used model-scale test and full-scale sea test techniques [4] to determine the hydrodynamic characteristics of a sea area near a cage aquaculture area for silver salmon. In that study, the results of model-scale and full-scale tests were compared, showing that under the impact of lower currents, only bottom mesh deformation was found. As for the observed trends, the resistance, cage deformation, and cross-sectional area estimated based on the depth data from the full-scale test were generally consistent with the results converted from the model-scale test using the law of similarity. However, the resistance value of a full-sized cage converted from the model-scale test was larger than the depth estimated based on the depth data from the full-scale test. Conversely, the result from the model-scale test was smaller than the estimate from the full-scale test. In the future, cage deformation should be investigated at higher flow rates, and resistance should be measured at full scale to verify the results of model-scale tests and hydrodynamic model tests. Sintef et al. [5] also observed and investigated the turbulence and flow field changes in sea cages for commercial salmon aquaculture and their wakes in their study, where an acoustic Doppler current profiler (ADCP) installed on the seabed was used to measure the flow rate and turbulence on a layered basis, and an acoustic Doppler velocimeter(ADV) was used to measure the velocity inside the sea cages; dissolved oxygen sensors and echo sounders were also arranged in the sea cages to measure fish distribution, in order to facilitate the acquisition of data. The final results showed that a reduction in strong currents in the wakes near the cages and the existence of high-turbulence columns in the upper part of the water were both caused by the cages. Measurements performed in the cages indicated that although fish aggregation reduced water flow, there was no evidence that fish generated secondary radial and vertical flows within the cages.Ji et al. 
[6] observed, in a study on a gulf ecosystem for shellfish aquaculture, that in a crowded area with suspended shellfish, the sedimentation effect of organisms was very obvious, and the hydrodynamic effect was obviously insufficient. Hatcher et al. [7] conducted a measurement in the mussel aquaculture area located in the Upper South Cove, Canada, and found that the settlement of the raft aquaculture area was more than twice that of the control area without aquaculture. Bouchet and Sauriau [8] found, in an ecological quality assessment on a shellfish aquaculture area in the Pacific Ocean, that the suspended aquaculture system resulted in higher organic matter enrichment compared with a bottom sowing culture. With the rapid development of computer technology, mathematical models have been widely adopted in numerical-simulation-based studies on marine aquaculture. Panchanget et al. [9] stated that the mathematical modeling of hydrodynamic force and particle tracking can be an effective method with which to study the laws of diffusion and transport of pollutants in aquaculture areas, and the fate and traceability of materials. Xing et al. [10] studied the impact of an aquaculture area on the distribution of the vertical structure of the water flow with a hydrodynamic model and found that the distribution of the vertical structure was mainly controlled by the bottom friction of the aquaculture area. Durateet et al. [11] calculated the hydrodynamic characteristics of the estuary in Galicia based on a three-dimensional numerical model. It was found, through the analysis of the residual current field, that raft aquaculture can reduce the flow rate of the residual current by at least 40%, which facilitates the development of harmful algal blooms, posing a serious threat to cultured organisms and the aquatic environment. Shiand Wei [12] simulated an aquaculture area in Sanggou Bay with an optimized POM and found that the high-density aquaculture and related facilities in Sanggou Bay reduced the flow rate by nearly 40% on average and increased the average half-exchange time by 71%. In summary, the valuable technical studies conducted by these researchers will greatly inspire our later studies. The original intention of this work was to solve some problems with floating raft aquaculture areas. In this study, a typical floating raft aquaculture area located in Changhai County, Liaoning Province, was chosen as the research area on the basis of the successful establishment of the hydrodynamic model and tracer model in these area of Liaodong Bay, in order to quantitatively explain the impact of floating raft aquaculture on the hydrodynamic environment of an open sea area. Compared with the sea area of Liaodong Bay, the study area features a higher degree of openness. Aiming to comprehensively understand the temporal and spatial distribution and variation characteristics of hydrodynamic force in the waters near the floating raft aquaculture area located in Changhai County, Dalian, the project team simulated and analyzed the hydrodynamic field and water exchange rate in the sea area near the floating raft aquaculture area. In this study, depth-averaged two-dimensional shallow-water equations and three-dimensional incompressible Reynoldsaveraged Navier-Stokes equations were established for the open sea area. We described the impact of rafts (floaters, ropes, cages, cultured organisms, etc.) on hydrodynamic force in the aquaculture area by changing the Manning number of the seabed. 
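To illustrate how changing the Manning number translates into additional flow resistance in a depth-averaged model, the sketch below evaluates the quadratic bottom-friction term with the Chezy coefficient C = M·H^(1/6); the current speed, water depth, and the two Manning numbers compared are illustrative assumptions rather than the calibrated values used in this study.

```python
import numpy as np

RHO = 1025.0     # seawater density [kg/m^3]
G = 9.81         # gravitational acceleration [m/s^2]

def bed_shear_stress(u, depth, manning_number):
    """Quadratic bottom friction of a depth-averaged model:
    tau_b = rho * g * |u| * u / C^2, with Chezy coefficient C = M * H**(1/6)."""
    chezy = manning_number * depth ** (1.0 / 6.0)
    return RHO * G * np.abs(u) * u / chezy ** 2

u = 0.5          # depth-averaged current speed [m/s] (illustrative)
depth = 15.0     # total water depth [m] (illustrative)

tau_open = bed_shear_stress(u, depth, manning_number=32.0)  # plain seabed (assumed M)
tau_raft = bed_shear_stress(u, depth, manning_number=15.0)  # raft-covered zone (assumed M)
print(tau_raft / tau_open)  # a lower Manning number yields a higher effective resistance
```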
Finally, the model was verified with the observed hydrodynamic data, and the results show that the model has great accuracy, stability, and universality, and it can provide an accurate prediction of the hydrodynamic environment of aquaculture in the raft area. Observational Data The project team set up a temporary tide-level observation station, T1, in the coastal waters of Dalian, and conducted tide-level observations for three months, from 00:00, 1 August 2021 to 23:00, 31 October 2021. Two continuous observation stations, P1 and P2, were set up for ocean current observation, where a total of 25 h of layered and synchronous continuous ocean current observations were carried out, from 11:00, 13 September 2021 to 12:00, 14 September 2021. The specific coordinates of the stations are shown in Table 1, and their locations are shown in Figure 2. Refer to Section 3.1 for the specific observation values below. Model and Methods The model is based on the solution of the three-dimensional incompressible Reynolds-averaged Navier-Stokes equations. First, the integration of the horizontal momentum equations and the continuity equation over depth for the following two-dimensional shallow water equations was carried out [13][14][15][16]. Based on the aforesaid principle, the commercial model encapsulation platforms used in this study mainly included Hydro info, a water conservancy information system developed by Dalian University of Technology, China, and Mike, a commercial water simulation computing system developed by the Danish Hydraulic Institute (DHI). Based on the above model and methods, the specific implementation process was completed, as follows. First, in order to accurately analyze the hydrodynamic conditions of the water area near Changhai County, two-dimensional models of the Yellow Sea and the Bohai Sea and the waters near Changhai County were created, where the open boundary of an open water area was driven by the time series file of the tidal level. Then, in order to reflect the hydrodynamic conditions of the sea area near the aquaculture area located in Changhai County in more detail, a three-dimensional model of a small area of interest in Changhai County was created with a nesting method [17,18] based on the tidal-level drive after the calibration of the two-dimensional model of Changhai County, where the calculation range mainly covered the area contained by the four control points C, D, E, and F shown in Table 2, and the locations of the control points are shown in Figure 3a. Hydrodynamic Model The study area is located along the northern coast of the Yellow Sea (see Figure 1), where tidal currents play a dominant role in various flow components. The three-dimensional Navier-Stokes equations for the free-surface flow of incompressible fluid in the Cartesian coordinate system were used for description; on this basis, the horizontal momentum equations and the continuity equation for the three-dimensional shallow water form were integrated in the range H = η + h to obtain the depth-averaged two-dimensional shallow water continuity equation ∂η/∂t + ∂(Hū)/∂x + ∂(Hv̄)/∂y = 0 (1), where η represents the sea surface fluctuation (tidal level) relative to the still sea surface; h represents the still water depth (the distance from the seabed to the still sea surface); H = η + h represents the total water depth; ū and v̄ are the depth-averaged velocity components in the x and y directions; C_z = M·H^(1/6) is the Chezy coefficient; n = 1/M is Manning's roughness coefficient, and M represents the Manning number. Equations (1)-(3) are the basic governing equations for solving the hydrodynamic elements. In order to comply with the uniqueness of solutions, the definite conditions must be given. (1) Initial Conditions The cold-start mode was used, meaning that the initial conditions were considered irrelevant to the final result of the calculation. In this study, the initial flow rate and tidal level were both determined as 0. (2) Boundary Conditions The dry-wet variation of the grid nodes at the moving boundary was taken into consideration in this work [19][20][21]. 1 Open boundary condition: The open boundary condition is also known as the water boundary condition, where, on this boundary, either the flow rate is given, or the time series condition of the tidal level is given.
Then, in order to reflect the hydrodynamic conditions of the sea area near the aquaculture area located in Changhai County in more detail, a three-dimensional model of a small area of interest in Changhai County was created with a nesting method [17,18] based on the tidal-level drive after the calibration of the two-dimensional model of Changhai County, where the calculation range mainly covered the area contained by the four control points C, D, E, and F shown in Table 2, and the locations of the control points are shown in Figure 3a. Hydrodynamic Model The study area is located along the northern coast of the Yellow Sea (see Figure 1), where tidal currents play a dominant role in various flow components. The three-dimensional Navier-Stokes equations for the free-surface flow of incompressible fluid in the Cartesian coordinate system were used for description; on this basis, the horizontal momentum equations and the continuity equation for the three-dimensional shallow water form were integrated in the range H = η + h to obtain the depth-averaged two-dimensional shallow water continuity equation, where η represents the sea surface fluctuation (tidal level) relative to the still sea surface; h represents the still water depth (the distance from the seabed to the still sea surface); H = η + h represents the total water depth; C z = M H^(1/6) = H^(1/6)/n is the Chezy coefficient; n = 1/M is Manning's roughness coefficient, and M represents the Manning number. Equations (1)-(3) are the basic governing equations for solving the hydrodynamic elements. In order to comply with the uniqueness of solutions, the definite conditions must be given. (1) Initial Conditions The cold-start mode was used, meaning that the initial conditions were considered irrelevant to the final result of the calculation. In this study, the initial flow rate and tidal level were both determined as 0. (2) Boundary Conditions The position of the land-water boundary follows the water level, and the dry-wet variation of the grid nodes in the moving boundary was taken into consideration in this work [19][20][21]. 1 Open boundary condition: The open boundary condition is also known as the water boundary condition, where, on this boundary, either the flow rate is given, or the time series condition of the tidal level is given.
For the open boundary in this work, calculation was performed in the following form of tidal harmonic analysis: where ω i represents the angular velocity of the ith tidal constituent; f i and u represent the intersection factor and epoch correction of the ith tidal constituent, respectively; H i and g i represent harmonic constants, which are the amplitude and epoch of each tidal constituent, respectively; V 0 represents the time angle of a tidal constituent. The time series data of the tide level at the open boundary of the model were also verified according to the global tide module. 2 Closed boundary condition: The normal flow rate at the shoreline of the given water body should be 0. The Euler Model for Residual Current Calculation As the most important environmental dynamic factor in coastal waters, the residual current plays a crucial role in the transport and diffusion of substances in seawater. In studies on ocean dynamics, Eulerian velocity is usually used to calculate residual currents. The Euler residual current in the ocean can be simply defined as the mean Eulerian velocity, which can be calculated with the following equation: where U E and V E represent the mean Eulerian velocities in directions x and y, respectively; n represents the number of cycles used in the calculation; t 0 represents the start time of calculation; T represents the current cycle; u(x 0 ,t) and v(x 0 , t) represent the component velocities in directions x and y. The numerically discrete form of Equation (6) is described below: where N = nT/∆t and ∆t represent the time step of numerical simulation. Mathematical Model of Water Exchange The tracer method was used to simulate the degree of water exchange [22][23][24], where a dissolved non-degradable and conservative substance was set in the sea area, and its concentration diffusion under the action of hydrodynamic force was investigated. For the transport of the tracer, the convection-diffusion equation based on Eulerian substance transport was used, as shown below: where C represents the substance concentration; D x and D y represent the substance diffusion coefficients in directions x and y, respectively; F represents the substance attenuation coefficient, which is zero (F = 0) for the conservative substance; S represents the point source concentration. The substance diffusion coefficient was calculated with the following equation: where E x = E y represents the horizontal turbulent viscosity coefficient; σ T represents the Prandtl number, which was determined as 1.0 in this study. After a certain period of time, the percentage of the total amount of substance diffused from the system to open water divided by the total amount of initial substances in the system should be the water exchange rate of the overall system. The statistical calculation expression is provided below. where EX represents the water exchange rate; C represents the substance concentration; H represents the total water depth; i represents the node number in the statistical domain; n represents the total number of nodes in the statistical domain; j represents the time number. Grid Creation The calculation grid was generated with the Surface Water Model System(SMS 10.1). This grid generation program can realize a flexible and variable resolution in the horizontal direction of the grid and a large gradient, and it can create a highly smooth grid at a location where a flow tends to be generated around an island. 
In addition, it can partially increase the density in areas with complex terrain, such as coastal areas, estuaries, and wetlands. The entire two-dimensional simulated domain of Changhai County consists of 32,560 nodes and 63,579 triangular elements. Figure 3a shows the calculation domain and grid distribution of the established two-dimensional model of the sea area near Changhai County. The entire simulated domain of the three-dimensional model [25] consists of 12,539 nodes and 24,281 triangular elements. Figure 3b shows the calculation domain and grid distribution of the three-dimensional model of a small area of Changhai County. Model Calculation Settings (1) The calculation of hydrodynamic force The calculation time step of the model was adjusted according to the CFL conditions to ensure that when the model calculation was converged, the minimum time step was 5.0 s. The seabed friction was controlled by the Manning number, with a specific value of 32-42 m 1/3 /s. In many applications, a constant eddy viscosity can be used for the horizontal stress terms. Alternatively, Smagorinsky proposed to express sub-grid-scale transport by effective eddy viscosity related to a characteristic length scale. The sub-gridscale eddy viscosity is given in [26], and the specific expression is shown below: where c s is a constant; l is a characteristic length; and the deformation rate is given by The minimum calculation time step of the three-dimensional small-scale local model was 1.0 s. In the vertical grid, the sigma hierarchical function was adopted, and the impact of the rafts on the hydrodynamic force was described with a double-resistance model with the introduction of a secondary drag coefficient, where the frictional resistance of the seabed was controlled by secondary drag coefficient C f , and the specific expression was determined by assuming a logarithmic profile between the seabed and a point at a distance of DZ b above the seabed as follows: where κ = 0.4 is the von Kármán constant; Z 0 represents the length scale of the roughness of the riverbed; when the boundary surface is rough, Z 0 depends on the roughness height, where Z 0 = mk s , the approximate value of m is 0.033, k s is the roughness height, ranging between 0.01 m and 0.30 m, and the value was determined as 0.05 m. In summary, the average value of the secondary drag coefficient was 0.01. (2) The calculation for the aquaculture area The location of the selected aquaculture area, the range of the calculation domain, and the distribution of the seabed topography are shown in Figure 1. The aquaculture area is located in a sea area near the Changshan Archipelago in the southeast of Changhai County, and its boundaries are shown in Table 3 below. In the post-aquaculture model, unstructured grids were also used to divide the horizontal calculation domain and locally densify the sea area where the aquaculture area was located, with a grid scale of 30 m. In other areas along the shoreline, the grid scale ranged between 50 m and 100 m; in sea areas far away from the aquaculture area, the maximum grid scale was 400 m, and the calculation domain contained 17,642 triangular grids and 9011 nodes. The Manning field considering aquaculture areas of different densities is shown in Figure 4 below. (3) Assessment of water exchange capacity In this study, the water exchange rate was used as an index to describe the water exchange capacity of the aquaculture area. 
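The defining expressions for the Euler residual current and the water exchange rate, referenced earlier as Equations (6)-(9), likewise did not survive extraction; in the standard form assumed here they read:

\[
U_E = \frac{1}{nT}\int_{t_0}^{t_0+nT} u(x_0,t)\,dt \approx \frac{1}{N}\sum_{j=1}^{N} u\!\left(x_0,\, t_0 + j\Delta t\right), \qquad N = \frac{nT}{\Delta t},
\]

with \(V_E\) defined analogously from \(v\), and

\[
EX_j = \left(1 - \frac{\sum_{i=1}^{n} C_{i,j}\, H_{i,j}}{\sum_{i=1}^{n} C_{i,0}\, H_{i,0}}\right)\times 100\%,
\]

where, on a non-uniform grid, each term should additionally be weighted by the area associated with node i. These are generic forms consistent with the symbol definitions given above, not formulas quoted from the paper.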
A dissolved conservative substance was placed in the sea area where the aquaculture area was located, which would be carried by the water body and could not be degraded. The convection and diffusion of the conservative matter directly reflect the form of movement of the water body. Based on the above considerations, in this study, a conservative substance with a concentration of 1.0 was placed in the aquaculture area; the concentration of substances in open water was set at 0.0; the attenuation coefficient was set as F = 0, and the point source concentration was set as S = 0. The substance diffusion coefficient was equal to the turbulent viscosity coefficient of water flow (σ T = 1.0). Model Verification Results The actual calculation and simulation period of the model was from 0:00, 1 August 2021 to 23:00, 31 October 2021. Figure 5 shows a time-curve-based comparison between the simulated and measured values of the tidal level during the period from 0:00, 8 August 2021 to 23:00, 15 September 2021. Figure 6 shows the comparison between P1 and P2 in terms of flow rate, flow direction, and measured value during the period from 11:20, 13 September 2021 to 12:30, 14 September 2021. To fully represent the calculation results of the numerical model, Figure 7 shows the flow rates and flow direction fields of the top, middle, and bottom layers at the same moment during a spring tide and a neap tide in the three-dimensional calculation model for Changhai County. A comparison between Figures 5 and 6 shows that the results calculated by the model are in good agreement with the measured values, where the error is within an acceptable range. According to the verification results of tidal currents and the flow field distribution maps at different moments, as shown in Figure 5, the mathematical model can reflect the flow field in the sea area near Changhai County in a more realistic manner, indicating that the model has reasonable boundaries and parameters and can be used for the calculation of subsequent working conditions. Figure 7 shows that although grids of different scales were used in the calculation of the two-dimensional and three-dimensional models, the flow rate and flow direction in the entire spatial calculation domain reasonably changed, featuring a strong gradient and no sudden change, which indicates that the model is stable and can be used as the basis for the calculation of subsequent working conditions. In order to verify the accuracy, the root-mean-square error (R) was used to quantitatively analyze the error between the calculation results of the model and the measured values, where R represents the mean deviation between the results of the model and the observed data. R is calculated as R = [ (1/n) Σ (M i − O i)² ]^(1/2), where M i represents the calculated value of the model; O i represents the observed value; and n represents the number of observed values. After calculation, the root-mean-square error in the tidal level of T1 is less than 0.18 m, generally indicating that the simulated value is in good agreement with the measured value, and the model has great accuracy and reliability. Tracer model verification is indeed a relatively important part of the assessment of water exchange capacity, which has been verified in other studies [27,28], as described below.
In these studies, one of the major tasks was the numerical simulation of the transportation of water pollutants in Liaodong Bay, the northernmost bay in the Bohai Sea in China. On the basis of a comprehensive understanding of the natural conditions of the sea area of Liaodong Bay, the impact of point source input was introduced to establish a convection-diffusion model for pollutant transport in the sea area of Liaodong Bay, with which the distribution of different nutrient elements in the sea area was simulated, and where major water quality indicators included NH 3 -N and COD. The accuracy and stability of the model were verified through a comparison between the simulated values and the concentration levels of the elements obtained by field sampling and analysis in the sea area, indicating that the model features a great ability to reproduce and predict the concentration field in the sea area of Liaodong Bay. In the following few years, the distribution of PO 4 -P, a water quality indicator, in Liaodong Bay was reproduced, and model verification was performed, providing the simulation results of the hydrodynamic field, half-exchange time, and concentration field in Liaodong Bay at different typical moments. Finally, this model was adopted in the simulation and assessment for the identification of marine pollution accidents, and it delivered satisfactory results. Simulation Results of Tidal Current and Residual Current in the Aquaculture Area The results regarding the distribution of the flow field in the sea area near the aquaculture area are shown in Figures 8-11, indicating that the flow field in the aquaculture area exhibits the characteristics of reciprocating motion, where the main flow direction is NW-SE, and the flow rate magnitude during a flood tide and an ebb tide is 1.0 m/s. In accordance with the simulation result of the hydrodynamic field in the sea area of Changhai County, the Euler residual current field in the sea area near the aquaculture area was obtained. Figure 12 shows the Euler residual current fields in the cycle of a spring tide before and after the implementation of aquaculture activities, indicating that the mean residual current intensity in the aquaculture area was approximately 0.018 m/s; the direction was NE; and there was generally no significant variation before and after the implementation of aquaculture activities. In order to quantitatively and clearly reflect the impact of surface roughness (Manning) on the calculation results, the sensitivity of the Manning number was analyzed. For the sea area near the floating raft aquaculture area, which is 180-3650 m from the shoreline, the simulated values of flow rate and flow direction at the peak of a flood tide and at the peak of an ebb tide during a spring tide in one tidal cycle were compared before and after the implementation of aquaculture, where the Manning settings under the two working conditions were as shown in Part 2.
The results show that the changes in flow rate and flow direction were generally significant in the study area, especially during the flood tide, wherein the flow rate changed by more than 80% within 750 m in the aquaculture area; the mean change in flow rate was approximately 10%, and the number of points where the flow direction changed by more than 45° accounted for around 20% of the total number. This indicates that, after the establishment of aquaculture activities, the decreased Manning number (i.e., increased roughness) of the aquaculture area, together with the effect of flow resistance arising from the raft net, indeed significantly affected the flow rate and flow direction of the sea area near the aquaculture area. Simulation Results of Water Exchange Rate Based on the aforesaid hydrodynamic simulation results, a mathematical model of water exchange, described in Equations (7)-(9), was used to study the water exchange capacity of the aquaculture area. In addition, in order to enable the model to reflect the variation in water exchange capacity before and after the implementation of aquaculture activities in a clearer and more sensitive manner, the scope of the aquaculture area was appropriately magnified according to the actual sea area, and the calculation domain and grid were rearranged and refined, where the average grid scale was 50 m, and the minimum grid scale for the key areas of interest in the aquaculture area was 30 m. Figures 13 and 14 show the initial field distribution of the tracer concentration, and the overall water exchange rate-time curves before and after the implementation of aquaculture activities, respectively. Table 4 shows the statistics for the overall water exchange rate-time curves before and after the implementation of aquaculture activities, indicating that the water exchange rate after the implementation of aquaculture decreased compared with that before implementation. Before the implementation of aquaculture, the water exchange rates after 1, 4, and 8 days of water exchange were 27.90%, 61.83%, and 76.48%, respectively; after the implementation of aquaculture, the water exchange rates after 1, 4, and 8 days of water exchange were 22.90%, 53.43%, and 75.23%, respectively.
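These figures directly give the relative reduction in water exchange caused by the rafts, which is the quantity discussed later in the paper:

(27.90% − 22.90%)/27.90% ≈ 17.92%, (61.83% − 53.43%)/61.83% ≈ 13.59%, (76.48% − 75.23%)/76.48% ≈ 1.63%,

after 1, 4, and 8 days of exchange, respectively.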
Figure 14. Water exchange rate-time curves before and after the implementation of aquaculture.
Tidal Current Conditions The hydrodynamic force calculation results indicate that the hydrodynamic force in the waters near Changhai County is mainly in the NW-SE direction during a spring tide, during which the tidal currents of flood and ebb tides rotate counterclockwise. During the spring tide, the tidal field shows that the tidal direction in the open waters of Changhai County was NW, the tidal field was stable, and the flow rate generally ranged between 0.50 m/s and 0.85 m/s. The nearshore current is a coastal current in essentially the same direction as the shoreline and has a lower flow rate, ranging between 0.2 m/s and 0.4 m/s. This is mainly because the flow rate is significantly reduced by the bottom friction due to the shallow water in the near-shore area. The local coastal waters are affected by the coastline, with a maximum flow rate of 1.2 m/s during a flood tide; during an ebb tide, the flow direction in the open waters is SE, and the flow rate ranges between 0.45 m/s and 0.90 m/s. Affected by the topography, coastal waters generally feature lower flow rates, with a maximum flow rate of approximately 1.1 m/s. Conditions of the Aquaculture Area The calculation of the characteristics of the flow rate in the aquaculture area shows that flow rates in the aquaculture area usually range between 0.2 m/s and 0.4 m/s. There is a small island in the SE direction in the aquaculture area, so the maximum flow rate in the aquaculture area is up to 0.7 m/s; the flow rate gradually increases from the shoreline to the sea, and it reaches 1.0 cm/s in the part of the aquaculture area that is closer to the shoreline. A comparison with the flow rate before the implementation of aquaculture activities shows that, with regard to the degree of flow resistance imposed by the floating rafts, the variation in flow rate ranges between 2.87% and 84.58% at the peak of a flood tide, and between 2.65% and 20.89% at the peak of an ebb tide from a low-density zone to a high-density zone of the aquaculture area.
This indicates that the variation in flow rate caused by the floating rafts in the sea area near the aquaculture area of Changhai County is significantly greater during a flood tide than that during an ebb tide, and the flow resistance rate at the peak of a flood tide is greater than 80%. Therefore, aquaculture operators and marine environmental protection workers should pay attention to the impact of floating rafts for aquaculture. Even in open sea areas, during the setting of the orientation and density of a floating raft aquaculture area, it is crucial to first investigate the hydrodynamic conditions and the impact of aquaculture activities on the hydrodynamic conditions in the sea area, in order to scientifically implement aquaculture activities and rationally determine the layout while protecting the marine environment. Residual Current Conditions Residual current distribution plays a decisive role in the transport and diffusion of bait, nutritive salts, and other related substances in an aquaculture area. According to a comparison with the residual current before the implementation of aquaculture activities, the extent of variation ranged between 3.01% and 84.74% during a spring tide, and it ranged between 9.46% and 78.50% during a neap tide, indicating that the extents of variation in residual current during tides are essentially the same; they should not be underestimated. Therefore, to accurately understand the distribution of algae and bait in the floating raft aquaculture area, we must calculate and analyze the residual current in the sea area based on accurate hydrodynamic analysis. In this way, we can understand the characteristics of the transport and diffusion of floating and suspended substances in the sea area in real time, thereby providing guidance for the formulation of aquaculture plans and production activities. Water Exchange Conditions The quantitative calculation shows that, due to the aquaculture activities, the water exchange rates of the open sea area decreased by 17.92%, 13.59%, and 1.63% compared with those before implementation; moreover, the half-exchange cycle of the water body appeared in 2.3 d and 3.9 d, respectively, before and after the implementation of aquaculture. This indicates that even floating rafts for aquaculture located in an open sea area have a certain impact on the water exchange capacity, and the specific extent of such impact is closely related to various factors, such as the density, size, scope, and location of rafts in the aquaculture area. Conclusions In this study, a numerical simulation method was applied to a floating raft aquaculture sea area to quantitatively calculate and assess the changes in the hydrodynamic environment of the open sea area. The model is based on the solution of the three-dimensional incompressible Reynolds-averaged Navier-Stokes equations. Then, the integration of the horizontal momentum equations and the continuity equation over depth for the twodimensional shallow water equations was carried out. In the hydrodynamic model, in order to generalize the impact of rafts on the hydrodynamic force in the aquaculture area, the Manning number of the seabed-namely the seabed roughness-in the two-dimensional mode was changed; in the three-dimensional mode, a double-resistance model of the top and bottom layers was used, with the introduction of a secondary drag coefficient. 
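The secondary drag coefficient mentioned here (its expression did not survive extraction) is assumed to take the usual logarithmic-profile form

\[
C_f = \left(\frac{\kappa}{\ln\!\left(\Delta z_b / z_0\right)}\right)^{2}, \qquad z_0 = m\,k_s,
\]

with κ = 0.4, m ≈ 0.033, and k_s = 0.05 m as given in the model settings; this is the standard parameterization rather than a formula quoted from the paper.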
The final verification and results show that the numerical model proposed in this paper can satisfactorily simulate and predict the hydrodynamic conditions of the sea area near the aquaculture area in Changhai County; the three-dimensional flow field can reflect the variation in the spatially stratified hydrodynamic indexes of the dynamic environment in a more realistic way, which can also reflect the hindering effect of rafts on hydrodynamic force in a more accurate way, and the model features great accuracy and stability. According to the working conditions before and after the implementation of aquaculture activities, the impact of the floating rafts on the hydrodynamic environment and water exchange capacity was compared and analyzed. The results indicate that the flow resistance rate was greater than 80%; the maximum decrease in the water exchange rate was close to 20%. The quantitative results sufficiently show that, even if floating rafts are arranged with a certain density in a completely open sea area, they have a great impact on the hydrodynamic conditions of the sea area. Therefore, aquaculture operators and marine environmental protection workers must pay sufficient attention to the impact of floating rafts for aquaculture on the hydrodynamic conditions of sea area. The establishment of the method in this paper provides a basic model for the rational arrangement of a fully open raft aquaculture area and the scientific determination of breeding density, and it offers a quantitative numerical calculation method for the assessment of the water exchange capacity in aquaculture areas containing flexible objects [29] (such as rafts, vegetation, etc.).It also provides aquaculture operators with technical support in making scientific and effective decisions regarding aquaculture. In future studies, spatial modeling for floating rafts, mainly including floaters, external aquaculture nets for hanging cages and organisms, will be added, and a fluid-structure interaction-based multiphase flow (volume of fluid, VOF) model will be used to simulate the impact of floating rafts for aquaculture on the dynamic environment and water exchange, in order to provide more accurate and comprehensive technical support for the rational arrangement of aquaculture orientation and the scientific setting of aquaculture density. Furthermore, after the accurate determination of the impact of a raft aquaculture area on the hydrodynamic conditions, it is also possible for aquaculture operators to reasonably select a site for the installation of bait casting and distribution devices, which can thus help to considerably increase the production efficiency of raft aquaculture, guarantee a stable income for aquaculture operators, and improve the social and economic benefits of raft aquaculture in sea areas in Changhai County and even in other open sea areas with floating raft aquaculture. Data Availability Statement: This research did not report any data that are linked to publicly archived datasets analyzed or generated during the study.
10,558.8
2022-10-04T00:00:00.000
[ "Environmental Science", "Engineering" ]
Filterless and Compact ANy-WDM Transmission System Based on Cascaded Ring Modulators To cope with the exponential increase in internet services and corresponding data traffic, especially data centers and access networks require new high data rate transmission methods with low cost, very small package and low energy consumption. In this paper, we demonstrate a filterless, agnostic Nyquist wavelength division multiplexing (ANy-WDM) transmission system based on cascaded ring modulators and a comb source. The single ring modulator acts as a filter, filtering one of the n WDM lines generated by the comb. The same ring modulator modulates k time division multiplexed (TDM) channels on the single wavelength. Since each WDM channel, consisting of k time domain channels, has a rectangular bandwidth, the aggregated symbol rate of the superchannel modulated by this system corresponds to the optical bandwidth of all n WDM channels together. The approach is very simple and compact. Since no optical filters, delay lines or other special photonics or high bandwidth electronics is needed, an integration into any photonics platform is straightforward. Thus, the proposed method might enable very compact, ultra-high data rate transmission devices for future data centers and access networks. I. INTRODUCTION According to the Cisco Annual Internet Report, the total number of internet users in 2023 will be 5.3 billion, and the number of machine-to-machine connections will increase to 14.2 billion. Additionally, 77% of the internet connections will rely on mobile devices [1]. To satisfy these data demands, the capacity of intra- and inter-data center communications as well as access networks has to be maximized [2]. Data rates of up to 1.6 Tbit/s will become essential in data centers in the near future, for instance [3], and even the peak data rates in 6G and beyond wireless cells will increase to 1 Tbit/s [4]. However, besides high data rates, low cost is required, especially in data centers and access networks. Therefore, an integration of the transceivers on a low cost photonic platform is needed. Today, the systems mainly rely on high-bandwidth photonic and electronic devices. For increasing data rates, the bandwidth of these devices and their electrical energy consumption will increase [5], which makes an integration on a low cost platform challenging. An alternative might be optical superchannels [6]. These superchannels can be realized by orthogonal frequency division multiplexing (OFDM) [7] or Nyquist wavelength division multiplexing (Nyquist-WDM) [6]. However, these methods still require complex electronic-photonic signal processing, high bandwidth photonics, optical delay lines, optical filters and so on [8], [9], [10]. Recently, it has been shown that integrated comb sources can lead to a cost effective, high data rate transmission with reduced energy requirements [11]. For the high data rate modulation of superchannels with any kind of signals by very simple electronic and photonic devices, the agnostic sampling transceiver was presented [12].
FIGURE 1. Concept of the conventional ANy-WDM transmission system. It is based on a WDM filter or DMUX to select between the n comb lines. In each branch an MZM modulates the k TDM channels on the selected wavelength. A MUX is used to combine the n WDM channels before transmission. In the receiver, coherent detectors with local oscillators are used to receive the transmitted signals. Please note that here the receiver side shows the reception of all k TDM channels at a single wavelength. For the reception of a single TDM channel at one single wavelength, one of the branches would be sufficient. WDM Filter: wavelength division multiplexing filter, MUX: wavelength division multiplexer, RF: radio frequency generator, CD: coherent detector, and DSP: digital signal processing.
In this concept, an optical superchannel is achieved by the combination of n WDM channels, generated by a comb source, each of which is modulated with k TDM Nyquist channels. The transmitter and receiver for these superchannels basically consist of a modulator. For transmitting higher order modulation formats, an IQ-modulator is required in the transmitter, and for the demultiplexing a single Mach-Zehnder modulator (MZM) is sufficient. The MZM used at the transmitter or receiver may have single-drive [12] or dual-drive ports [13]. Within this concept, analog and/or digital signals with different bandwidths and data rates can be transmitted, multiplexed, and processed into a superchannel with no need for high bandwidth electronics and photonics. Even Nyquist channels within a rectangular bandwidth can be generated and processed. Hence, the signal transmission is agnostic and can achieve the maximum possible symbol rate in the bandwidth of the superchannel [12]. Like Nyquist-WDM and OFDM, the agnostic transceiver method [12] enables the transmission of the maximum possible symbol rate in the Nyquist bandwidth. However, compared to OFDM, no broadband transmitters and receivers and no complex signal processing are necessary [7], and compared to Nyquist-WDM, no sophisticated broadband analog to digital conversion [9] or special source [14] and optical filter is needed. However, the modulation of the n WDM channels in the transmitter requires the use of WDM filters [15], [16] or arrayed waveguide gratings (AWGs) [17] to select between the wavelengths. Thus, n optical branches are established. In each optical branch, a single MZM can be used for the k TDM channels. Therefore, the aggregated data rate from the whole system depends on n parallel optical branches. This makes the system complex. Additionally, MZMs require quite a lot of chip space and power. The integrated MZM used for the agnostic sampling transceiver in [18], for instance, had a length of 3.2 mm for each arm and a power consumption of > 1 pJ/bit. Since the radius of a ring modulator is only a few micrometers and its power consumption is much lower, in the femtojoule range, integrated ring modulators might be a much better solution for compact devices [19]. Cascaded silicon ring modulators have successfully been used for the modulation of WDM channels generated by a frequency comb [20], [21], [22]. Each ring modulator works as a modulator and filter for a single WDM channel. Thus, integrated WDM filters/AWGs and DMUXs are not needed.
Here, we demonstrate how this concept can be extended for the generation of Nyquist superchannels with n WDM channels without any guard band, each consisting of k TDM channels and each modulated by its own ring modulator, which we have called filterless agnostic Nyquist-WDM (ANy-WDM). The system does not require any filter, optical delay line, special electronics or photonics. Therefore, it can easily be integrated into any photonics platform. Additionally, because of the small radius, low power consumption, and high bandwidth of integrated ring modulators, the transmitters can be very compact, making them especially attractive for data center and access network applications. II. CONCEPT OF THE ANy-WDM SYSTEM To show the principle of operation of the proposed filterless ANy-WDM system based on the cascaded optical ring modulators, the conventional ANy-WDM system based on multiplexers and demultiplexers is described first.
FIGURE 2. Concept of the filterless ANy-WDM transmission system. The receiver is the same as the conventional one, but the transmitter is based on cascaded ring modulators to multiplex the k TDM signals with n comb lines. Each ring modulator works as a filter, selecting the wavelength from the comb lines, and as a modulator, modulating the TDM channels on the selected wavelength. Please note that the ANy-WDM receiver of the proposed system is the same as that shown in Fig. 1.
A. CONVENTIONAL ANy-WDM SYSTEM The idea of a conventional ANy-WDM transmission system [12] is shown in Fig. 1. It is based on a WDM filter or AWG to select between the n wavelength lines for n optical branches. In each branch, a single MZM is used for the modulation of k TDM channels [12]. These k TDM channels can be processed in the electrical domain and modulated with one single modulator. The multiplexing of the k TDM channels is based on the orthogonality of k sinc-pulse sequences, each of which is time shifted to the zero crossing of the previous one. In the equivalent frequency domain, the single sinc pulse sequence corresponds to a frequency comb with k lines, the frequency separation f and the bandwidth B = k × f. Thus, the sequences are time shifted by 1/B and the aggregated symbol rate of all k channels together corresponds to B. The frequency comb can be generated by l = (k − 1)/2 sinusoidal electrical frequencies [23], [24]. All these electrical frequencies are multiplied with the data of the single channel in the electrical domain. The next channel has a time shift of 1/B to the previous one. This corresponds to a phase shift of the electrical frequencies of φ = 2π/k. Therefore, the sinusoidal frequencies of the next channel can be phase shifted by φ and modulated with the data. All the k channels are summed up and used to drive the single modulator [12]. This is done in each single branch and all n WDM channels are multiplexed together in a wavelength division multiplexer (MUX) or AWG to build the k × n superchannel, before transmission. Since all channels can be multiplexed in the wavelength domain without any guard band, the maximum possible aggregated symbol rate in the superchannel corresponds to the bandwidth n × B. In Fig. 1 the receiver for the detection of all k TDM channels at a single wavelength is shown. For the detection of a single channel C p,q , one of the branches would be sufficient. In each of the branches a Mach-Zehnder modulator driven with a number of l = (k-1)/2 sinusoidal radio frequencies with the phase shift (p-1) × φ is used to select the p-th (p = 1, 2, . . .
, k) TDM channel. A single intensity modulator is sufficient also for the demultiplexing of higher order modulation formats [25]. The modulator multiplies the incoming superchannel with a sinc-pulse sequence with a time shift defined by the phase shift of the radio frequencies [12]. However, this will demultiplex all TDM channels with the same time shift in all n WDM channels. To select the single TDM channel at a single wavelength q (q = 1, 2, . . . , n) from the n × k WDM-TDM superchannel, a local oscillator (LO) and a low bandwidth coherent detector is required. The low bandwidth of the coherent detector filters out all other mixing products of the LO signal with the superchannel. The same can be achieved with an electronic filtering after detection. This local oscillator signal can be generated by a single LO with the correct wavelength, or by one line extracted from a second integrated comb source. The required bandwidth of the MZM is only B/2 and that of the coherent detector B/(2k) [26]. Please note that the bandwidth of the transmitted superchannel is n×B. Thus, for the transmitter and especially receiver, low bandwidth equipment can be used to process very high bandwidth superchannels, leading to a reduction of the SNR requirements, power consumption and costs [27]. B. PROPOSED FILTERLESS ANy-WDM SYSTEM The basic concept of our method is demonstrated in Fig. 2 and enables a drastic reduction of the hardware requirements in the transmitter. It is based on a comb source and cascaded ring modulators without any WDM filters/AWGs, delay lines or MUXs. Only one single optical branch with serial ring modulators is needed to achieve the optical superchannel. The receiver, however, is the same as that for the conventional system, saving all the advantages especially for data center and access applications. The comb source, preferably an integrated one, generates n lines, which define the number of WDM channels. This comb source can be based on an integrated ring resonator, for instance, which enables a very precise frequency locking between the different center frequencies of the channels [28]. A phase-locking, however, is not required. For higher bandwidths modulators and a lower number of channels, the resonator based comb source might even be replaceable with dual-drive modulator [13]. A frequency jitter between the comb lines up to several percent of the channel bandwidth is tolerable and these lines do not have to be phase-locked [12]. The following single ring modulator works as a filter, by selecting one of the n WDM lines and as a modulator, by modulating k orthogonal TDM channels on the WDM line. For the wavelength selection, the resonance wavelength of each ring modulator is adjusted with a heater, or with a bias to the modulator to the specific wavelength of the comb lines. To stabilize the temperature for all ring modulators, a temperature compensation system can be employed [29]. Alternatively, a heater can be integrated with each ring [20]. However, this increases complexity and power consumption. The heat cross-talk can be effectively suppressed in densely packed photonic chips by applying an air-filled trench between the ring modulators [30]. For the multiplexing of the k orthogonal TDM channels only, electrical phase shifters and mixers are required [12] and each ring modulator modulates these channels to the corresponding wavelength. 
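A minimal sketch of how such a composite electrical drive signal could be generated for one ring modulator is given below (Python; the sampling rate, symbol count, and random data are placeholders, and no band-limiting or pulse shaping is applied, unlike in the simulation described later):

import numpy as np

fs = 448e9                # placeholder waveform sampling rate, Hz (16 samples per 28 GHz period)
f_rf = 28e9               # single RF tone for k = 3 channels (l = (k - 1)/2 = 1)
k = 3
n_sym = 64                # number of 28 GBd symbols per channel (placeholder)
sps = int(round(fs / f_rf))          # samples per symbol of the 28 GBd data
t = np.arange(n_sym * sps) / fs

rng = np.random.default_rng(0)
drive = np.zeros_like(t)
for p in range(k):                                   # TDM channel index p = 0 .. k-1
    data = rng.choice([-1.0, 1.0], size=n_sym)       # BPSK symbols of channel p
    data_wave = np.repeat(data, sps)                 # rectangular (unshaped) 28 GBd data
    phase = 2.0 * np.pi * p / k                      # phase step phi = 2*pi/k between channels
    drive += data_wave * np.cos(2.0 * np.pi * f_rf * t + phase)

# 'drive' is the summed signal that would feed a single ring modulator.

At the receiver, the same construction with a single phase-shifted tone driving the MZM selects one of the three TDM channels, as described below for the simulation.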
Please note that the single TDM channel can be an analog or a digital one, it can be a Nyquist or a normal data channel, and it can have any modulation format, so the transmission is completely agnostic [12]. Each subsequent ring modulator is adjusted to the next adjacent wavelength and will modulate the next WDM channel, consisting of k TDM sub-channels. The number of TDM and WDM channels, their bandwidth and shaping can be chosen freely. However, to transmit with the maximum possible symbol rate in a given bandwidth, the single TDM channels should be Nyquist shaped with a rectangular bandwidth, which corresponds to the frequency spacing between the lines of the comb, as shown in Fig. 2. Compared to an MZM, ring modulators can have a very small radius [19]. Since no special photonic components and only low bandwidth equipment are needed, the transmitter can be very compact and easily be integrated into any photonic platform to generate and receive Tbit/s data signals. III. SIMULATION SETUP For the proof of concept, the simulation setup in Fig. 3 was defined using the Lumerical software package. Due to limitations of the software, the simulations were restricted to a back-to-back configuration. However, in experiments with an MZM integrated on a silicon-on-insulator platform, we have shown the transmission of 48 Gbit/s superchannels over 30 km of fiber without any pre- or post-compensation. For long fibers the dispersion may lead to problems for the transmission of the signal. However, since the coherent detector receives the amplitude and phase of the signal, this can be compensated by an electronic post- or pre-compensation. For the transmitter part, a comb laser source with 10 dBm power and 1 MHz linewidth is utilized to generate n = 5 lines with frequency spacing f = 84 GHz, as shown in Fig. 4(a). The central frequency of the comb is adjusted to be at 193.516 THz (in the C-band of optical communications). The selected wavelength is modulated with k = 3 orthogonal TDM channels with binary phase shift keying (BPSK) in a rectangular bandwidth, resulting in a superchannel with 15 TDM-WDM channels. In the electrical domain, the TDM channels are modulated onto one single RF frequency of 28 GHz, phase shifted by 0°, 120°, and 240° for the three orthogonal channels. Each phase shifted version of the RF is multiplied with a 28 GBd BPSK signal bandlimited to 14 GHz. All three channels are added together in the electrical domain before driving the single ring modulator. Thus, the aggregated symbol rate per wavelength channel is 84 GBd. The transmitter is based on a PN ring modulator with a 10 µm radius working under a reverse bias configuration. The PN ring modulator in carrier depletion mode was used because it offers a better modulation depth with higher bandwidth [19]. From the simulated intensity transfer function in Fig. 4(b) it can be seen that the modulation depth increases with reverse voltage. A modulation depth of −34.8 dB at −2 V DC voltage is obtained. The resonance point of each of the ring modulators is adjusted to match the selected wavelength of the comb lines, as can be seen in Fig. 4(c). Please note that we have assumed that the temperature of the cascaded ring modulators was stabilized during all simulations. The 3 dB electrical bandwidth of the ring modulator is 42 GHz at −2 V DC bias, as presented in Fig. 4(d). The center wavelength of the next ring modulator is adjusted to meet that of the next comb line and again modulated with three distinct TDM channels with an aggregated data rate of 84 GBd.
To demultiplex the single TDM channel from the 15 WDM-TDM superchannel, a Mach-Zehnder modulator driven with a sinusoidal frequency of 28 GHz and a phase shift of 0 • , 120 • or 240 • selects the single TDM channel. The WDM channel is defined by the wavelength of the local oscillator (LO) and the phase shift of the sinusoidal wave defines the TDM channel. A coherent detector with a baseband bandwidth of 14 GHz demodulates the channel (amplitude and phase). In the simulation the noise of the photodiodes was 1e-22 A/Hz. The gain and noise figure of the electrical amplifier is 33 dB and 3 dB, respectively. IV. RESULTS AND DISCUSSIONS In this section, simulation results of the proposed system for the fifth wavelength (ring modulator # 5, please see Fig. 3) are presented. The other wavelengths show similar results. For the simulation, the 15 channels in the superchannel were demultiplexed simultaneously by using a 1:15 power splitter in the receiver. Figure 5 depicts the rectangular spectrum shape of the multiplexed superchannel for n = 5 wavelength channels, each of which modulated with k = 3 TDM channels with total optical power of −21 dBm, without any amplification. As can be seen, there is no guard spacing between the comb lines and each single TDM channel has an almost rectangular bandwidth that corresponds to the comb spacing. The carrier at each wavelength has a higher amplitude, because the ring modulator is working at a negative bias. This bias adds an additional DC offset to the transmitted data signal of the three TDM channels and is still there after the optical modulation with the ring modulator. To suppress the carrier, the ring modulator should work at 0 V DC bias. However, because of software inefficiencies, this was not possible in the simulation. That a 0 V bias is indeed possible has been shown by experiments [31], [32]. For the demultiplexing of the WDM channel, the wavelength of the LO is adjusted to that wavelength and for the demultiplexing of the time domain channel, the sinusoidal frequency of the receiver has a phase shift of 240 • (please see Fig. 3). The comparison between the transmitted 28 GBd Nyquist BPSK signals (black) and the received data(red) of the three TDM channels (p = 1, 2, 3) is shown in Fig. 6(a), (b), and (c). The reception of all transmitted signals at the receiver side is demonstrated without any kind of pre-or post-compensation. Experimental transmission results with BER vs. SNR curves for agnostic signals can be found in [12]. The out-of-band noise contributions from the coherent detector and the electrical amplifier are cancelled by the rectangular electrical filter. As shown, the received signals are close to the transmitted ones. Clear eye diagrams with BERs of around 10 −5 , 10 −6 , and 10 −6 are achieved as shown in Fig. 6(d), (e), and (f) respectively. The measured BERs can be calculated by using (1) depending on the Q factor and the error function (erfc). The Q-factor can be measured based on the signal to noise ratio (SNR), the electrical bandwidth of the receiver B c and the optical bandwidth B o [33]. where: In Tab.1, the simulation results for the log(BER) for all 15 channels are presented. All values of the BER for all channels meet the acceptable BER of 10 −3 [34], [35]. The overall aggregated data rate of the proposed system is 420 Gbit/s for the BPSK data signals and with PAM-4 it will be 840 Gbit/s. 
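The aggregate figures quoted here follow directly from the channel plan:

n × k × R_s = 5 × 3 × 28 GBd = 420 GBd; with BPSK (1 bit/symbol) this corresponds to 420 Gbit/s, and with PAM-4 (2 bits/symbol) it doubles to 840 Gbit/s.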
Although the modulation of PAM-4 signals with ring modulators was demonstrated in many references such as [36] and [37], the simulation software was limited to BPSK. Recently, ring modulators with 110 GHz bandwidth have been demonstrated [37], suggesting 2.2 Tbit/s superchannels for PAM-4 modulation with three TDM and five WDM channels. V. CONCLUSION In conclusion, a compact filterless ANy-WDM transmission system was presented. It is based on a comb source and cascaded ring modulators for an agnostic Nyquist transmission. The multiplexing and demultiplexing of a WDM-TDM superchannel with 5 wavelength-domain and 3 time-domain channels have been shown. The aggregated data rate of this 15-channel BPSK superchannel was 420 Gbit/s. The system is very compact; it does not need any WDM filters, delay lines, or complicated or high-bandwidth photonics or electronics. Thus, integration into any photonics platform is straightforward. We believe that the system would be especially advantageous for data center and access network applications.
4,965.8
2022-01-01T00:00:00.000
[ "Computer Science", "Physics" ]
In Vitro and In Vivo Evaluation of pH-Sensitive Hydrogels of Carboxymethyl Chitosan for Intestinal Delivery of Theophylline Chitosan is a natural polymer which has limited solubility. Chitosan gets solubilized at acidic pH but is insoluble at basic pH. In the present study, carboxymethyl chitosan (CMC) was prepared, which shows high swelling in basic pH and thus can delay the drug release and can act as a matrix for extended release formulations. CMC was characterized by FTIR and NMR. pH-sensitive hydrogels of theophylline were formulated using CMC and carbopol 934. Hydrogels were evaluated for swelling, drug content, in vitro drug release, and in vivo performance in rabbits. The swelling studies showed little swelling in acidic pH (432% at the end of two hours) and extensive swelling in basic pH (1631% at the end of 12 hours). The release profile of formulation I, containing CMC and carbopol in a 1 : 1 ratio, showed sustained release. In vivo studies showed that the release of theophylline from the prepared hydrogel formulation (Test) exhibits more prolonged action when compared to the marketed sustained release formulation (Standard). The studies showed that the pH-sensitive hydrogel of CMC can be used for extended release of theophylline in the intestine and can be highly useful in treating symptoms of nocturnal asthma. Introduction Chitosan is a biodegradable and biocompatible polymer. Since chitosan is insoluble in water, the use of chitosan in a basic environment is limited and hence delivery of drugs to the intestine is not possible. A derivative of chitosan, that is CMC, is soluble in water [1,2]. Amphoteric polyelectrolyte hydrogels possess both positive and negative charges, and many researchers are using amphoteric polyelectrolyte hydrogels to develop controlled delivery systems such as an insulin pump for diabetics, matrices for molecular recognition or separation, and so forth. A lot of research is being carried out on stimuli-sensitive polymer hydrogels. Among stimuli-sensitive systems, pH- or temperature-responsive hydrogels have been extensively studied in the biomedical field, because these two factors can be easily controlled and are applicable both in vitro and in vivo [3,4]. CMC is an amphoteric polyelectrolyte and has various applications due to its unique chemical, physical, and biological properties, especially its excellent biocompatibility. It is used to prepare wound dressings, artificial bone, and skin, and is also used as a bacteriostatic agent and blood anticoagulant. It has also demonstrated good pH and ion sensitivity in aqueous solutions due to abundant -COOH and -NH 2 groups [5]. Recent studies have shown that CMC has been used in the preparation of nanoparticles for the treatment of cancer [6,7]. The use of CMC has also been explored for the delivery of antimicrobial agents [8] and proteins [9]. Extended release matrix tablets have been studied using chitosan and carbopol [10]. The authors of this work have previously investigated CMC hydrogels to deliver methylprednisolone [11]. They found that the hydrogels show minimal swelling in acidic pH. Considering this behavior of the hydrogels, the present study was carried out. Carbopol 934 is a polymer which is sensitive to pH and was used in the present study along with chitosan.
To combine the advantages of synthetic and natural polymers and at the same time maintain the properties of natural polymers such as biodegradation and bioactivity, amphoteric polyelectrolyte hydrogels with pH sensitivity were synthesized with CMC and carbopol 934 in this work. The swelling behavior of the hydrogel under different pH was studied. The release behavior of theophylline was investigated when it was loaded into the pH-sensitive hydrogels. Theophylline is an antiasthmatic drug, and the dosing of theophylline is complicated because it shows extensive variation in bioavailability among patients. About 75% of people with asthma have symptoms that disrupt both the length and depth of their nighttime sleep at least once a week. The number of inflammatory cells in the airways is highest in the early morning, with a peak at 4 AM. In one study, patients with nocturnal asthma were found to have a 20% decrease in lung function overnight compared with 4% in nonasthmatics. Hence, changing the timing or dosage of the medications may improve the symptoms one experiences at night. Many different asthma medications have been specifically studied for their effectiveness at night. Theophylline comes in both short-acting and slow-release formulations, taken once or twice a day. It comes as a pill or in granules which should be swallowed whole, so as not to release too much medication at one time. The main drawback of this type of dosage form is that when blood levels are too high, unpleasant side effects may occur, such as nausea, vomiting, abdominal pain, jitteriness, insomnia, and rapid or irregular heartbeat. Theophylline, if delivered to the intestine, can be useful in the treatment of nocturnal asthma, and a single dose of a slow release or extended release theophylline preparation given at night may provide effective control of nocturnal asthma symptoms. In the present paper, an attempt was made to formulate a pH-sensitive hydrogel from CMC containing theophylline, which has not been attempted so far, and to evaluate it in vitro and in vivo. The polymer exhibits pH-dependent swelling, that is, it swells and releases the drug depending on the pH range; hence sustained or extended drug delivery is possible in the basic environment of the gastrointestinal tract. CMC, since it is soluble in water, undergoes extensive swelling in basic pH compared to acidic pH, and drug release is maximum in the intestine; thus it can be highly helpful in controlling symptoms of nocturnal asthma. Experimental Theophylline was a gift sample from Strides Arco lab Limited, Bangalore. Chitosan (MW = 3.5 × 10^5, >80% deacetylated) was purchased from Sigma Aldrich, USA. Carbopol 934 was purchased from Loba Chemie Pvt. Ltd., India. All other chemicals were of analytical grade. There is no conflict of interests for any financial gain, as the chemicals were purchased from the companies. Preparation of CMC. Chitosan solution was prepared in acetic acid and methanol, and acetic anhydride was added under stirring at room temperature. The mixture was stored overnight at room temperature to give a rigid gel. The prepared gel was agitated with 0.5 M NaOH in ethanol at room temperature overnight. The solution was precipitated by addition of concentrated NH 4 OH solution and filtered. The product was washed with 75% ethanol and dried in a desiccator. The product formed after drying is N-acetyl chitosan. N-acetyl chitosan was suspended in 50% NaOH and kept at −20 °C overnight.
The product was transferred to 2-propanol and chloroacetic acid was added in portions under stirring. After stirring at room temperature for 2 hr, the reaction mixture was heated to 60 °C for another 2 hr. Dialysis was carried out against deionized water for 3 days, and the product obtained was dried in a desiccator. The dried product was CMC [12][13][14]. Fourier Transform Infrared (FTIR) Spectral Analysis. The prepared hydrogel was subjected to FTIR analysis by the KBr pellet method using a Fourier transform infrared (FTIR) spectrophotometer (Perkin Elmer Spectrum 100, Japan). This was employed to ascertain the compatibility of the drug with the excipients. Scanning Electron Microscopy (SEM). SEM studies were carried out on hydrogel samples after coating with gold-palladium, using a scanning electron microscope (JEOL, Japan). 2.5. Differential Scanning Calorimetry. Differential scanning calorimetry was performed on a pure sample of theophylline and its formulation using a Shimadzu DSC-50 apparatus. Differential scanning calorimetric thermograms of 2 to 3 mg samples were recorded at a heating rate of 5 °C/min in an open aluminium pan over the range of 25 °C-300 °C. Nuclear Magnetic Resonance (NMR). Nuclear magnetic resonance studies were carried out on CMC to determine whether the conversion of chitosan to CMC had occurred, using 13C NMR on an NMR spectrometer (DSX-300, Bruker, India). The solid-state (without solvent) NMR was done at 75 MHz. Estimation of Theophylline Content of the Hydrogels. An amount of hydrogel containing 20 mg of theophylline was placed in pH 7.4 phosphate buffer solution for 24 hours. In the pH 7.4 phosphate buffer solution the hydrogels swell and the drug is released. At the end of 24 hours, the amount of theophylline present in the pH 7.4 phosphate buffer was determined spectrophotometrically at 272 nm. The method was validated for linearity, accuracy, and precision, and obeyed Beer's law in the concentration range 2-14 µg/mL. Swelling Studies. The pH-dependent swelling property of the hydrogel was studied by immersing the dry hydrogels in aqueous solutions, first in pH 1.2 HCl buffer for 2 hr and then in pH 7.4 phosphate buffer for another 8 hr. At regular intervals, the hydrogels were removed from the aqueous solution, blotted with filter paper to remove excess surface water, weighed, and returned to the same container until equilibrium was observed [13,15]. The degree of swelling (Wt) was calculated at different times from the dry and swollen weights of the hydrogel. 2.9. In Vitro Drug Release Studies. In vitro drug release from the hydrogels was carried out in triplicate at 37 ± 0.1 °C in a USP XXII dissolution apparatus type II (six basket dissolution tester, USP XXII, TDT-08L, Electrolab, Mumbai, India) at a rotation speed of 50 rpm. A sample of hydrogel equivalent to 300 mg of theophylline was used in each test. Drug release from the hydrogel was studied in 900 mL of dissolution medium (2 hr in pH 1.2 HCl buffer and 10 hr in pH 7.4 phosphate buffer). Samples of dissolution fluid were withdrawn through a 0.45 µm filter every hour and assayed at 272 nm for theophylline content using a Shimadzu UV-1700 double beam spectrophotometer [13]. The release data obtained were fitted to the Korsmeyer-Peppas equation, log %R = log K + n log t, where %R is the amount of drug released at time t, K is the release rate constant, and n is the time exponent. A graph of log %R versus log t was plotted.
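The swelling equation referenced above was lost in extraction, and the Korsmeyer-Peppas relation just quoted is simply a straight line in log-log coordinates. The sketch below (Python) illustrates both calculations; the percent-swelling formula is an assumed, commonly used form rather than the paper's own expression, and every numerical value (weights, release percentages) is a hypothetical placeholder, not data from the study.

```python
import numpy as np

# --- Degree of swelling (assumed formula) --------------------------------
# Assumed common form: Wt(%) = (Ws - Wd) / Wd * 100, where Ws is the swollen
# weight at time t and Wd is the initial dry weight. Values are hypothetical.
dry_weight = 0.050                                                 # g
swollen_weights = {1: 0.15, 2: 0.27, 4: 0.45, 8: 0.70, 10: 0.82}  # time (h) -> g
for t, ws in swollen_weights.items():
    print(f"t = {t:>2} h   swelling = {(ws - dry_weight) / dry_weight * 100:7.1f} %")

# --- Korsmeyer-Peppas fit: log %R = log K + n log t ----------------------
# A linear fit in log-log space gives the slope n and the intercept log K.
t_rel = np.array([1, 2, 4, 6, 8, 10, 12], dtype=float)         # time (h), hypothetical
release = np.array([12, 20, 33, 44, 53, 61, 68], dtype=float)  # cumulative %R, hypothetical
mask = release <= 60                     # only the early (~60%) portion is usually fitted
n, log_k = np.polyfit(np.log10(t_rel[mask]), np.log10(release[mask]), 1)
print(f"release exponent n = {n:.2f}")   # ~0.45-0.89 indicates anomalous (non-Fickian) transport
print(f"rate constant   K = {10 ** log_k:.2f}")
```

Restricting the fit to the early portion of the release curve is a common convention for this model and is an assumption here, not a statement about how the authors performed their regression.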
The intercept on the Y-axis gave the value of log K (and hence the release rate constant K), and the slope gave the value of n, the time exponent. The parameters n and K were used to determine the release mechanism. In Vitro Wash-Off Test for Mucoadhesion. About 50 hydrogel particles were spread over sheep intestinal mucosa (2 × 2 cm), which served as the biological substrate for studying the mucoadhesive nature of the hydrogels. The prepared hydrogel was passed through sieve number 20; the coarser particles retained on the sieve were counted and used for the study. The instrument used was a USP disintegration apparatus from which the six tubes were removed; the mucosa was fixed to the base of the apparatus. The medium chosen was pH 7.4 phosphate buffer, and the hydrogel particles adhering to the mucosa were counted at 5 min intervals. The study was carried out for 30 min [16]. In Vivo Studies. The in vivo release studies were conducted on albino rabbits weighing 2.5-3 kg. The animals were divided into two groups of 6 rabbits each, one serving as the standard group and the other as the test group. Written approval was obtained from the Institutional Ethical Committee of JSS Medical College and Hospital and JSS College of Pharmacy, Mysore, India. Detailed verbal and written information on the study was provided to the Veterinary Surgeon, Central Animal Facility, JSS Medical College and Hospital, and permission was obtained. The animals were fasted for 12 hours; the capsules were then introduced into the oesophagus and washed down with 5 mL of distilled water in order to avoid possible damage caused by chewing. Blood samples were collected from the ear vein at 1, 2, 4, 8, 16, and 24 hr after oral administration [17]. The blood samples were centrifuged and the plasma was stored at −20 °C for further analytical determination. To the plasma samples, isopropyl alcohol was added and the mixture was vortexed for 30 sec. The drug was extracted with 2 mL of chloroform and vortexed at high speed for 1 min. After centrifugation at 1000 rpm for 5 min, the organic layer was evaporated and the residue was reconstituted with 100 mL of the mobile phase. This solution was injected into the HPLC system for analysis. The instrument used was a Shimadzu LC-2010AHT. Acetonitrile (7.5%) in 0.2% acetic acid solution was used as the mobile phase, with a C18 column kept at ambient temperature. The injection volume was 20 µL at a flow rate of 1.5 mL/min [18]. The sample run time was 8 min. The in vivo studies were conducted on the prepared optimized hydrogel formulation (test) and on the marketed sustained release formulation, Theobid SR tablet from Cipla (standard). Stability Studies. Stability studies were conducted on the optimized formulation of CMC hydrogels to assess their stability with respect to physical appearance, drug content, swelling, and drug release characteristics after storage at 25 °C/60% RH and 30 °C/65% RH for 6 months, as per ICH Q1A(R2) guidelines. Preparation of CMC. Chitosan is a unique polysaccharide derived from the deacetylation of chitin. When chitosan is converted into O-carboxymethyl chitosan (O-CMC) by introducing -CH2COOH groups onto the -OH groups along the chitosan molecular chain, an amphoteric polyelectrolyte containing both cationic and anionic fixed charges is obtained [16]. By varying the degree of deacetylation and carboxymethyl substitution of the chitosan, CMC can be obtained. Carboxymethyl substituents were observed on the amino and hydroxyl sites of the modified chitosan. The preparation was carried out in two steps.
First, N-acetyl chitosan was prepared using acetic anhydride; carboxymethylation was then carried out to obtain CMC. As reported in the literature, NaOH at 50% concentration gives a better degree of substitution; hence, a 50% concentration was used [16]. The prepared CMC is a white, free-flowing powder and shows good solubility in both water and organic solvents, which extends its range of applications. CMC shows characteristic pH-dependent behavior. This property, together with its water solubility, was used in preparing pH-sensitive hydrogels in the present work. Preparation of CMC Hydrogels. CMC is amphoteric in nature and contains positively charged groups; these interact with the negatively charged carboxylic groups of carbopol to form interpolyelectrolyte complexes (IPECs) stabilized by cooperative ionic bonds. Moreover, interpolymer interactions are possible between counter-charged groups within the same macromolecule and, of course, between chains of different macromolecules. Due to its good solubility over a wide range of pH values, the CMC solution could be readily blended with the polyacrylic acid solution, and homogeneous hydrogels were obtained. The cross-linking was carried out at room temperature. Different formulations (F1-F5) were prepared by varying the concentration of CMC while keeping the concentration of polyacrylic acid constant. For formulations F1 to F5, the concentration of carboxymethyl chitosan was increased gradually: 1%, 1.25%, 1.5%, 1.75%, and 2%, respectively. FTIR Studies. FTIR studies were carried out for carboxymethyl chitosan and chitosan, and the spectra are given in Figure 1. The spectra showed signals of nonmodified chitosan at 1,653 and 1,560 cm−1 for the C=O stretching (amide) and N-H bending (amine), respectively. Other characteristic peaks of chitosan, the O-H stretch, C-H stretch, and C-O stretch, were present at 3,400-3,600, 2,800-2,900, and 1,020-1,180 cm−1, respectively. The spectrum of carboxymethyl chitosan is similar to that of the original chitosan, with a new peak appearing at 1,703 cm−1, which is assigned to the carbonyl groups. This confirmed the conversion of chitosan to carboxymethyl chitosan. Carboxymethyl chitosan showed the disappearance of the -NH2-associated band at 1595 cm−1, which can be ascribed to the characteristic deformation vibration of the primary amine N-H, and the appearance of new intense peaks at 2922-2852, 1466, and 721 cm−1, which can be attributed to the methyl groups and the long carbon segment of the quaternary ammonium salt. The characteristic peaks of the hydroxyl and secondary hydroxyl groups between 1152 and 1030 cm−1 did not change. Theophylline (pure drug) and the hydrogel formulation were subjected to FTIR spectroscopy to ascertain whether there was any interaction between the drug and the polymers used. The characteristic peaks of the pure drug were compared with the peaks obtained for the formulation. It was observed that similar characteristic peaks appear with minor differences, at 1654 cm−1 (C=O stretching, amide), 1596 cm−1 (C=C stretching, aromatic), and 1307 cm−1 (C-O stretching), for theophylline and for the formulation, as shown in Figure 2. Hence, it can be concluded that the drug is in the free state and there is no interaction between the drug and the polymers used. 13C NMR Studies. The 13C NMR spectrum of chitosan was studied as given in the literature and compared with the spectrum of CMC. The chitosan spectrum shows peaks at 177.9 ppm and at 25 ppm, which are assigned to the carbonyl carbon of -COCH3 and the methyl carbon (-CH3), respectively.
These signals are less intense than the other signals. The signal at 101.3 ppm is assigned to the hydrogen-bonded carbon of chitosan, and the signals at 59.6, 73.1, 81.1, 78.6, and 64 ppm are assigned to the carbons of glucopyranose [19]. The 13C NMR spectrum of carboxymethyl chitosan was recorded and is shown in Figure 3. The spectrum shows the signal shifted from 101.3 ppm to 105.9 ppm because of the electron-withdrawing effect of the carboxymethyl substituents. Since various different units occur in the structure of carboxymethyl chitosan, many of the signals in the spectrum of chitosan appear split in the spectrum of carboxymethyl chitosan. Thus, the signals at 60.1, 73.8, 73.2, 82.2, 78.2, and 63.9 ppm are split and shifted relative to those detected in the spectrum of the parent chitosan. The signal observed at 180.7 ppm is assigned to the carbonyl carbons of the carboxymethyl groups, while the one detected at 177.9 ppm corresponds to the carbonyl carbon of -COCH3 of the parent chitosan. The methylene (-CH2) carbons give rise to the signals at 53 and 57.4 ppm, respectively. However, no signal was detected at 53 ppm in the spectrum of carboxymethyl chitosan (Figure 3), and the weak signal at 58.4 ppm can probably be assigned to the methylene (-CH2) bonded to the amino group (-NH) [19]. These features are taken as evidence that carboxymethylation occurred at the hydroxyl as well as the amino groups of chitosan, which is also supported by the FTIR studies. The spectra are shown in Figure 3. Scanning Electron Microscopy. Scanning electron microscopy was carried out in order to study the surface morphology, texture, and porosity of the hydrogels. The SEM photograph of the hydrogel clearly showed the rugged nature of the hydrogel particles. The SEM photograph is shown in Figure 4. 3.6. Differential Scanning Calorimetry. DSC studies of the pure drug and formulation F1 were carried out to determine possible interactions between the drug and the hydrogel. The DSC thermograms revealed no interaction between the drug and the polymers used, as there was no significant change in the melting point of theophylline. The obtained results are shown in Table 2. 3.7. Drug Content. The test for drug content was carried out to ascertain the amount of drug in the formulations. From the results obtained, it can be inferred that theophylline is properly distributed in the hydrogels. The drug content analysis showed that the drug is uniformly distributed, in the range of 74.5-88.6% of the total amount of drug added in the different formulations. Swelling Studies. The swelling behavior of CMC was studied. Swelling studies were conducted for 12 hr, but the swelling did not change significantly between 10 and 12 hr; hence, data for 10 hr are presented. The swelling in water mainly depends on the osmotic pressure difference between the inside of the gel and the surroundings, caused by the redistribution of mobile ions. The swelling was observed to be greater at basic pH because the increase in the number of mobile ions inside the gel produces a large osmotic pressure, which leads to swelling. The results of the swelling studies are shown in Figure 5. The % swelling was found to be in the range of 432.75-1631.56%. The results indicate that with an increase in pH from 1.2 to 7.4, a considerable increase in swelling was observed for all the hydrogel formulations, which may be due to the dissociation of the -COOH groups of CMC, thereby increasing the osmotic pressure inside the hydrogels and resulting in increased swelling [16].
Swelling increased as the ratio of CMC to polyacrylic acid was increased up to 1.5 : 1, that is, for formulations F1-F3, but with a further increase in the ratio, that is, 1.75 : 1 and 2.0 : 1 (F4 and F5), swelling decreased. This shows that CMC and polyacrylic acid have a synergistic effect up to a certain ratio, as a result of which swelling increases, but a further increase in the CMC amount resulted in reduced water uptake, which may be attributed to an antagonistic effect that decreases swelling. Swelling strongly depends on the extent of cross-linking. At lower cross-linking, the network is loose with a greater hydrodynamic free volume, so the chains can accommodate more solvent molecules, resulting in higher swelling. In this study it was found that swelling increased when the pH was changed from acidic to basic, which confirms that the prepared hydrogel was pH-sensitive. The effect of time on swelling is shown in Figure 5. In Vitro Drug Release Studies. The in vitro release studies were carried out for all the formulations in both acidic and basic media. The release studies were carried out in pH 1.2 HCl buffer for the first two hours, to mimic the acidic conditions prevailing in the stomach. For the next 10 hours, the release studies were carried out in pH 7.4 phosphate buffer, to mimic the alkaline conditions of the intestine. For the initial 2 hours, that is, in the pH 1.2 HCl buffer, the percentage drug release was found to be low in all cases; this can be attributed to the fact that the hydrogel swells less in the acidic medium. When the dissolution medium was changed to pH 7.4 phosphate buffer, the release was found to increase with time. The drug release paralleled the swelling behavior: drug release decreased from F1 to F3 and increased again for F4 and F5. The effect observed is based on swelling; as swelling increases the drug release decreases, and when swelling decreases the drug release increases again, as shown in Figure 6. On the basis of the above studies, F1 was chosen as the optimized formulation, as it showed the desired sustained-release profile along with satisfactory drug content and swelling results. All formulations showed an initial burst effect, which may be attributed to diffusion of the drug caused by rapid gel swelling and to the release of drug adsorbed near the surface of the gel matrix. The % drug release at the end of 12 hr was found to be in the range of 37.21-98.47%. The data obtained from the in vitro release studies were fitted to various mathematical models. The regression coefficients (R2) obtained are given in Table 3. The value of n determined from the Korsmeyer-Peppas equation was in the range of 0.5-0.7, which indicates that drug release from the hydrogels followed a non-Fickian or anomalous mechanism (relaxation controlled). In Vitro Wash-Off Test for Mucoadhesion. The mucoadhesion study showed that all the hydrogel particles detached from the mucosa within 20 min. This shows that carboxymethyl chitosan does not have good mucoadhesive properties compared with the well-known mucoadhesive strength of the parent chitosan, which can be attributed to the better solubility of CMC in water and organic solvents. 3.11. In Vivo Studies. In vivo studies were carried out in albino rabbits for the Theobid SR tablet from Cipla (product A) and the theophylline-loaded CMC hydrogels (product B), both containing 300 mg of theophylline.
Blood samples were withdrawn at different time intervals and the plasma concentrations of theophylline were estimated; the profile is presented in Table 4. From the data obtained, it may be observed that after oral administration, a peak plasma concentration Cmax of 12.34 ± 2.42 µg/mL was observed for product A and 9.69 ± 4.12 µg/mL for product B. From the comparison of the mean plasma concentrations of products A and B, it was observed that product B has lower plasma concentrations. The plasma concentration of theophylline in all animals 24 hr after oral administration was below 20 µg/mL for both products. It was also observed that the therapeutic concentration range of theophylline was maintained for about 24 hr following a single oral dose for both products. The time taken to reach the peak plasma concentration, Tmax, was 5.0 ± 0.81 hr for product A and 6.0 ± 0.75 hr for product B. The mean elimination rate constant Kel was found to be 0.08410 h−1 for product A and 0.07813 h−1 for product B. Similarly, the mean elimination half-life t1/2 was 8.24 ± 4.7 hr for product A and 8.87 ± 5.74 hr for product B. The mean AUC0−24 values were 101.73 ± 16.5 µg·hr/mL for product A and 123.17 ± 21.5 µg·hr/mL for product B. The lower Cmax and prolonged Tmax of theophylline in rabbits indicate that drug release from product B is slow, thereby providing prolonged and controlled in vivo delivery of the drug. These in vivo absorption characteristics are in agreement with the observed in vitro release rate of the drug from the hydrogel. Stability Studies. Stability studies were conducted on the formulations for a period of 6 months. At specific time intervals, the samples were tested for drug content, swelling, and % drug release. The drug content of the samples stored at the two different conditions was analysed along with 95% confidence limits using SigmaPlot software 10.0, as shown in Figures 8 and 9. The stability results obtained at various intervals showed that the hydrogel prepared from CMC did not show a significant difference in physical appearance at the end of 6 months, except for a change of colour from light brown to brown. The % drug release, % swelling, and mucoadhesion did not change significantly by the end of the stability studies. From the data, it can be seen that the prepared hydrogels are stable under the two given conditions. Conclusion From the results obtained, it may be concluded that the prepared hydrogels were pH-sensitive and that the degree of swelling of the hydrogel depends on the concentration of the cross-linking agent as well as on the pH of the environment. Because theophylline is released mainly at basic pH, the peak theophylline concentration will be reached in the early morning, which is highly beneficial to patients suffering from nocturnal asthma. The in vitro and in vivo studies showed that drug release is slowed and prolonged compared with the commercially available formulation. Thus, the hydrogel prepared using CMC can be used to deliver theophylline in a sustained manner and can be used effectively against nocturnal asthma; CMC can also be useful for the delivery of drugs that are unstable at acidic pH.
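As a numerical aside on the in vivo results above: the reported elimination parameters are related by t1/2 = ln 2 / Kel, and AUC values of this kind are typically obtained with the trapezoidal rule. The sketch below checks that relationship against the reported Kel values; the concentration-time profile used to illustrate the AUC step is hypothetical and is not the data of Table 4.

```python
import numpy as np

# Reported mean elimination rate constants (h^-1); t1/2 should equal ln(2)/Kel.
for product, kel in [("A", 0.08410), ("B", 0.07813)]:
    print(f"product {product}: t1/2 = {np.log(2) / kel:.2f} h")   # ~8.24 h and ~8.87 h

# AUC by the linear trapezoidal rule over the sampling times used in the study.
# The concentrations below are hypothetical placeholders, not the Table 4 data.
t = np.array([1, 2, 4, 8, 16, 24], dtype=float)     # sampling times (h)
conc = np.array([4.8, 7.2, 9.4, 9.7, 6.1, 3.2])     # plasma concentration (ug/mL)
print(f"AUC(1-24) ~ {np.trapz(conc, t):.1f} ug*h/mL")
```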
6,255.2
2012-07-01T00:00:00.000
[ "Chemistry", "Materials Science", "Medicine" ]
Procyanidin B2 alleviates liver injury caused by cold stimulation through Sonic hedgehog signalling and autophagy Abstract Procyanidin B2 (PB2), a naturally occurring flavonoid abundant in a wide range of fruits, has been shown to exert antioxidant, anti-inflammatory and anticancer properties. However, the role of PB2 in the prevention of cold stimulation (CS)-induced liver injury has not been clarified. The present study was undertaken to determine the effects of PB2 on liver injury induced by cold stimulation and its potential molecular mechanisms. The present study showed that treatment with PB2 significantly reduced CS-induced liver injury, alleviating histopathological changes and lowering serum levels of alanine transaminase and aspartate transaminase. Moreover, treatment with PB2 inhibited the secretion of inflammatory cytokines and oxidative stress in cold-stimulated mice. PB2 reduced cold stimulation-induced inflammation by inhibiting TLR4/NF-κB and Txnip/NLRP3 signalling. Treatment with PB2 reduced oxidative stress by activating the Nrf-2/Keap1 and AMPK/GSK3β signalling pathways and autophagy. Furthermore, simultaneous application of the Shh pathway inhibitor cyclopamine demonstrated that PB2 targets the Hh pathway. More importantly, co-treatment with PB2 and cyclopamine showed better efficacy than monotherapy. In conclusion, our findings provide new evidence that PB2 has protective potential against CS-induced liver injury, which might be closely linked to inhibition of the Shh signalling pathway. the adrenal glands and consumption of glycogen in the liver, 5,6 indicating that sympathetic nerve activity and oxidative stress underlie this response. In addition, the rate of oxygen consumption in mammals increases during cold stress. Oxidative stress caused by CS results in the accumulation of oxygen free radicals. 7,8 Under CS conditions, liver tissue is the first to show abnormalities in body heat production or response. 9 More importantly, most liver injury caused by CS is accompanied by inflammatory responses and oxidative stress. Thus, anti-inflammatory and antioxidative interventions may be potential preventive measures against liver damage caused by CS. Cold stress affects many cellular processes, leading to physiological and immune responses. 10 There is evidence that cold stress can trigger inflammatory responses, releasing a large number of inflammatory cytokines. 11 Inflammatory responses lead to the activation of nuclear factor (NF)-κB, an inducible transcription factor expressed mainly in lymphocytes, which activates pro-inflammatory factors. It has been found that cold stress increases the mRNA expression of NF-κB and TNFα in quails. 12 Additionally, oxidative stress can accelerate inflammation by activating pro-inflammatory pathways, notably the NOD-like receptor protein (NLRP) 3 inflammasome pathway. The NLRP3 inflammasome, composed of NLRP3, the adaptor ASC and caspase-1, induces IL-1β as a major pro-inflammatory cytokine, which affects almost every cell type and mediates inflammation in multiple tissues. 13,14 It is well known that cold stress is closely associated with oxidative stress, leading to excessive accumulation of ROS. [15][16][17] In addition, as an important antioxidant transcription factor, nuclear factor (erythroid-derived 2)-related factor 2 (Nrf2) plays a protective role by regulating the expression of antioxidant proteins to resist oxidative damage. 18
The target genes of Nrf2 are involved in the synthesis of GSH and the elimination of ROS, and Kelch-like ECH-associated protein 1 (Keap1) is essential for regulating the activity of Nrf2. 19,20 Under normal circumstances, Nrf2 is continuously degraded in a Keap1-dependent manner by the ubiquitin-proteasome pathway. The degradation of Nrf2 is suspended in the presence of ROS, and stabilized Nrf2 accumulates in the nucleus and activates target genes that protect cells against oxidative stress. 21,22 More importantly, autophagy is an intracellular pathway through which lysosomes degrade and recycle proteins and organelles, and lysosomes are generally considered one of the main targets of ROS. 22 Recent reports indicate that autophagy is widely regarded as a key regulator of cell survival and homeostasis, and that a lack of autophagy promotes inflammatory responses and oxidative stress. 23,24 In particular, previous studies have shown that autophagy disorders can also exacerbate liver disease. Dihydromyricetin regulates autophagy through the Nrf2 and p62 signalling pathway, thereby reducing ethanol-induced liver damage. 25 A recent study also showed that autophagy has a crucial role in the regulation of non-alcoholic fatty liver disease. 26 Macrophage autophagy prevents liver fibrosis in mice. 27 Recent studies thus indicate important roles for autophagy, oxidative stress and inflammatory responses in various liver diseases. However, their exact roles in liver injury induced by CS have not been fully elucidated. Hedgehog signalling is essential for development during embryogenesis and regulates the wound-healing response and homeostasis of adult tissues. When ligands such as Sonic Hedgehog (SHH) bind to Patched (Ptc) receptors, this pathway is activated, leading to release of the Ptc-mediated inhibition of Smoothened (Smo). 28,29 Subsequently, Smo induces the activation and nuclear accumulation of Gli transcription factors, which in turn trigger the activation of a large number of target genes. 30,31 According to reports, Shh signalling is abnormally activated in various liver pathological conditions (such as inflammation, liver regeneration, liver fibrosis and HCC). [32][33][34][35] However, the mechanism of the Shh signalling pathway in CS-induced liver injury has not been elucidated. Procyanidins are a subclass of the flavonoid family that possess abundant biological functions, including anti-inflammatory, antitumor and antioxidant activities. [36][37][38] Procyanidin B2 (PB2), a B-type dimer of procyanidin, has been shown to possess greater antioxidant and anti-inflammatory effects than other procyanidins. A previous study showed that PB2 is an ingredient of the pericarp extract of Annona crassiflora, which exhibits hepatoprotective properties. 39 The protective mechanism of PB2 in acute liver damage induced by CCl4 was closely related to the inhibition of inflammatory responses and apoptosis. 40 Additionally, procyanidin can trigger autophagy of human HepG2 hepatoma cells via ROS generation. 41 However, the protective effect of PB2 on CS-induced liver injury has not been reported. Therefore, the present study was undertaken to investigate the effects of PB2 on CS-induced liver injury and the potential mechanisms, with particular attention to the activation of autophagy and the association with the Shh signalling pathway.
| MATERIALS AND METHODS All experimental procedures were performed in accordance with the guidelines established by Heilongjiang Bayi Agricultural University (Daqing, China) for the Care and Use of Experimental Animals. The Animal Ethics Committee of Heilongjiang Bayi Agricultural University approved the study protocol. | Animal model of cold stimulation-induced liver injury C57BL/6 male mice weighing 22-24 g, 5 weeks old, were purchased from Charles River Lab and raised under environmentally controlled conditions (temperature 24℃ ± 2℃, humidity 40% and a 12-hour light/dark cycle) with food and sterile water ad libitum for 1 week. After acclimatization, the cold-stimulated (CS) groups were transferred to a 4℃ climatic chamber for 4 hours per day during the daytime (8:00 AM to 8:00 PM) and then transferred back to room temperature. This chronic CS protocol lasted for 3 weeks. 42 Mice were randomly divided into the following groups (n = 6 mice per group): Control group, no treatment; Cold-Stimulated group (CS group); PB2 (50 mg/kg or 100 mg/kg, iG) groups; and PB2 (50 mg/kg or 100 mg/kg bw per day, iG) + CS groups. To further verify the effect of PB2 on the Hh pathway, mice were treated with the Hh inhibitor cyclopamine intragastrically (iG). These mice were randomly divided into the following groups (n = 6 mice per group): CS group; PB2 (100 mg/kg bw per day, iG) group; Cyclopamine (20 mg/kg bw per day, iG) group; PB2 (100 mg/kg bw per day, iG) + Cyclopamine (20 mg/kg bw per day, iG) group; Cyclopamine (20 mg/kg bw per day, iG) + CS group; and PB2 (100 mg/kg bw per day, iG) + Cyclopamine (20 mg/kg bw per day, iG) + CS group. At 24 hours after the last treatment, all mice were anaesthetized with pentobarbital intraperitoneally. Subsequently, liver tissue and serum were collected and used for biomarker profiling, histopathological analysis, ELISA or Western blot assays. | Histopathological evaluation and immunohistochemistry (IHC) staining Formalin-fixed, paraffin-embedded liver tissue samples were cut into 5 μm thick sections and then stained with haematoxylin-eosin or processed for IHC staining for Shh, Smo and Gli1, to evaluate liver pathological lesions under light microscopy. | Measurement of liver function indices and oxidative stress All mice were killed after the last cold stimulation treatment, and liver and blood were collected for biochemical analysis. ALT and AST levels in serum and liver were measured using the corresponding detection kits in accordance with the manufacturer's instructions. In addition, mouse liver tissue was homogenized and dissolved in extraction buffer. | Enzyme-linked immunosorbent assay (ELISA) Blood serum was collected for measurement of the inflammation biomarkers TNFα, IL-6 and IL-1β using ELISA kits according to the manufacturer's instructions (BioLegend), and the absorbance at 450 nm was read. | Western blot analysis Western blotting was performed as previously described by Xu et al. 43 In brief, total protein was extracted from the liver tissues using a protein extraction kit according to the manufacturer's protocol. Lastly, the membranes were visualized with enhanced chemiluminescence (ECL) reagent in a Western blotting analysis system with Image Lab (Bio-Rad).
| Statistical analysis All data were analysed using appropriate statistical methods with SPSS software version 25.0 (IBM) and the GraphPad Prism program (Prism 8.3.0; GraphPad Software). All data were tested for normality and homogeneity of variance using the Shapiro-Wilk and Levene tests, respectively. One-way ANOVA was performed for multiple comparisons with Bonferroni correction. A P-value <0.05 was considered statistically significant, and a P-value <0.01 was considered highly significant. | PB2 relieves CS-induced liver injury To investigate whether PB2 alleviated CS-induced liver injury in mice, the effect of various doses of PB2 on liver safety was first measured. Compared with the control group, treatment with vehicle or PB2 (50 and 100 mg/kg) had no significant effect on the serum activities of ALT and AST or on the hydroxyproline concentration of the liver tissue (Figure 1A,B). Histological analysis of liver slices from the four groups revealed normal morphology (Figure 1C,D). These results clearly demonstrated that treatment with PB2 at 50 and 100 mg/kg caused no liver toxicity in mice. (Similar results were obtained from three independent experiments. All data are presented as the mean ± SEM (n = 6 in each group); *P < 0.05 and **P < 0.01 vs the control group, ##P < 0.01 vs the CS group.) Cold stimulation significantly up-regulated serum ALT, AST and liver hydroxyproline levels, whereas PB2 treatment significantly reduced the serum activities of ALT and AST and the liver hydroxyproline levels in a dose-dependent manner (Figure 1E-G). Histological analysis showed that in the control group the liver tissue was well structured and the hepatocytes had clear cytoplasm and prominent nuclei. However, significant structural disturbances such as bleeding, neutrophil infiltration and hepatocyte necrosis were observed in the CS group, whereas PB2 treatment alleviated the pathological changes induced by CS (Figure 1H,I). These results clearly demonstrated that the hepatoprotective effect of PB2 treatment on CS-induced liver injury was dose-dependent. | PB2 suppressed inflammatory responses in CS-induced liver injury Expression of the inflammatory factors TNFα, IL-6 and IL-1β is related to liver injury. The serum levels of inflammatory cytokines induced by CS were measured using ELISA. Cold stimulation significantly stimulated the secretion of TNFα, IL-6 and IL-1β in the serum compared with the control group, whereas PB2 treatment reduced the production of inflammatory cytokines induced by CS (Figure 2A-C). The TLR4 signalling pathway is considered a key pathway that mediates inflammatory responses, and it functions upstream of NF-κB. Therefore, we examined the effect of PB2 on TLR4/NF-κB signalling under cold stimulation. The results showed that, compared with the CS group, PB2 significantly reduced the phosphorylation of NF-κB (p65), prevented the phosphorylation and degradation of IκBα and inhibited the up-regulation of TLR4 expression (Figure 2D-F). These results demonstrate that the attenuation of inflammatory responses by PB2 may be partly attributable to suppression of the TLR4/NF-κB signalling pathway. | PB2 treatment suppressed the Txnip/NLRP3 inflammasome signalling pathway in mice with CS-induced liver injury We investigated whether liver damage caused by CS also triggered activation of the NLRP3 inflammasome. Western blotting showed that cold stimulation significantly increased the abundance of NLRP3, ASC, cleaved-caspase-1 (p20) and mature-IL-1β (p17) proteins.
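A minimal sketch of the statistical workflow described in the Statistical analysis paragraph above (Shapiro-Wilk normality test, Levene test for homogeneity of variance, then one-way ANOVA with Bonferroni-corrected pairwise comparisons), written here with SciPy and statsmodels instead of SPSS/GraphPad; the group values are randomly generated placeholders, not the study's measurements.

```python
import numpy as np
from itertools import combinations
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# Hypothetical serum ALT values (U/L) for three groups of n = 6 -- placeholders only.
groups = {
    "Control": rng.normal(30, 5, 6),
    "CS":      rng.normal(80, 10, 6),
    "CS+PB2":  rng.normal(55, 8, 6),
}

# Normality per group (Shapiro-Wilk) and homogeneity of variance (Levene).
for name, values in groups.items():
    w, p = stats.shapiro(values)
    print(f"{name}: Shapiro-Wilk p = {p:.3f}")
stat, p_levene = stats.levene(*groups.values())
print(f"Levene p = {p_levene:.3f}")

# One-way ANOVA followed by Bonferroni-corrected pairwise t tests.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"one-way ANOVA p = {p_anova:.4f}")

pairs = list(combinations(groups, 2))
raw_p = [stats.ttest_ind(groups[a], groups[b]).pvalue for a, b in pairs]
adj_p = multipletests(raw_p, method="bonferroni")[1]
for (a, b), p in zip(pairs, adj_p):
    print(f"{a} vs {b}: Bonferroni-adjusted p = {p:.4f}")
```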
PB2 treatment dramatically inhibited the expression of NLRP3, ASC, cleaved-caspase-1 and mature-IL-1β proteins (Figure 2G,H), suggesting that PB2 inhibits inflammation partly through inhibition of the NLRP3 inflammasome. FIGURE 2 PB2 treatment suppressed the CS-activated inflammatory response in liver injury in mice. A-C, Effects of PB2 on CS-induced serum TNFα, IL-6 and IL-1β generation. Similar results were obtained from three independent experiments. D-F, Effects of PB2 on p-p65, p65, p-IκBα, IκBα and TLR4 protein expression were measured by Western blotting, and quantification of protein expression was performed by densitometric analysis. G-H, Effects of PB2 on NLRP3, ASC, cleaved-caspase-1 and mature-IL-1β protein expression were measured by Western blotting, and quantification of protein expression was performed by densitometric analysis. Similar results were obtained from three independent experiments. All data are presented as the mean ± SEM (n = 6 in each group). *P < 0.05 and **P < 0.01 vs the control group; ##P < 0.01 vs the CS group. | PB2 treatment alleviated oxidative stress in mice with CS-induced liver injury Cold stress alters homeostasis, which results in the production of ROS and alterations in the antioxidant defence system. 1 Oxidative damage is also one of the main factors in liver injury caused by CS in mice. Therefore, we examined whether PB2 improved cold-induced liver oxidative damage. CS increased the accumulation of MDA and ROS and the consumption of GSH and SOD, leading to oxidative damage in the liver of mice. However, PB2 treatment reversed these effects (Figure 3A-D). The expression of SOD1, CAT and HO-1 proteins was consistent with the above results (Figure 3E-H). To investigate further the protective mechanism of PB2 treatment against CS-induced liver injury, we analysed the involvement of the Nrf2/Keap1 and AMPK/GSK3β signalling pathways by Western blotting. PB2 treatment increased AMPK and GSK3β phosphorylation and enhanced Nrf2 and Keap1 expression compared with the CS group (Figure 3I-M). These results show that PB2 may protect against the oxidative stress damage induced by CS by enhancing the Nrf2/Keap1 and AMPK/GSK3β signalling pathways. | PB2 treatment induced autophagy in mice with CS-induced liver injury Autophagy can regulate inflammatory responses and oxidative stress and plays an essential role in the amelioration of liver injury induced by CS. We examined the protein expression of key autophagy genes, including Beclin-1 and LC3. The expression of these key autophagy proteins was decreased by CS but recovered by PB2 treatment, which was conducive to the activation of autophagy (Figure 4D-G). (Similar results were obtained from three independent experiments. All data are presented as the mean ± SEM (n = 6 in each group); *P < 0.05 and **P < 0.01 vs the control group, ##P < 0.01 vs the CS group.) This indicates that PB2-induced autophagy activation may be dependent on the PI3K/AKT/mTOR signalling pathway. | PB2 alleviates CS-induced liver injury by cooperating with the Shh pathway and autophagy To determine further the molecular mechanism of PB2 in CS-induced liver injury, the involvement of the Hh pathway was studied. | DISCUSSION Liver injury is a major health issue worldwide that can develop into liver fibrosis and even hepatic carcinoma. 44 During cold stimulation, changes in the antioxidant defence systems and anti-inflammatory responses are activated in the liver. 1 Therefore, reducing the inflammatory response and oxidative stress can prevent or treat liver injury.
It has been reported that proanthocyanidins have antioxidant and anti-inflammatory properties, which may be closely related to the defence mechanism of autophagy. We found that PB2 has a protective effect against CS-induced liver injury and, for the first time, that PB2 exerts effective antioxidant and anti-inflammatory effects on CS-induced liver injury through a mechanism that depends on Shh signalling and autophagy. Accumulating evidence suggests that cold stress is the most common source of environmental stress and can damage the nervous, cardiovascular and immune systems. 45,46 The liver is an important heat-generating organ, responsible for heat production during acute cold stimulation to maintain core body temperature. 47 Cold-stress-induced liver injury significantly increased serum ALT and AST levels and caused liver histopathological changes. Treatment with PB2 significantly prevented these changes, indicating that PB2 protects liver tissue from cold stimulation-induced injury. It is well known that cold stimulation causes liver damage by inducing oxidative stress and inflammation. 48,49 To further investigate the effect of PB2 on inflammatory responses and oxidative stress, the serum levels of inflammatory cytokines and the liver abundance of oxidative markers were measured. The present study showed that PB2 treatment significantly reduced the production of IL-1β, IL-6 and TNFα, reduced the levels of MDA and ROS, and reversed the consumption of GSH and SOD. FIGURE 4 Effect of PB2-mediated autophagy on liver injury induced by CS in mice. A, Effects of PB2 on Beclin-1 and LC3 protein expression were measured by Western blotting. B-C, Quantification of protein expression was performed by densitometric analysis. E, Effects of PB2 on PI3K, p-AKT, AKT, p-mTOR and mTOR protein expression were measured by Western blotting. D-G, Quantification of protein expression was performed by densitometric analysis. Similar results were obtained from three independent experiments. All data are presented as the mean ± SEM (n = 6 in each group). *P < 0.05 and **P < 0.01 vs the control group; ##P < 0.01 vs the CS group. Furthermore, PB2 treatment reversed the histological changes induced by CS in mice. These results indicate that PB2 treatment alleviated the hepatic inflammation, oxidative stress and pathological damage in mice. It is well known that the signalling cascade that produces pro-inflammatory cytokines is mainly regulated by TLR4/NF-κB-mediated signalling. 50 TLR4 recognizes endogenous ligands induced during the inflammatory response and then activates NF-κB through phosphorylation and degradation of IκBα, thereby producing inflammatory factors and ultimately causing liver inflammation. 51 Additionally, activation of NF-κB is the basic initial step that triggers activation of NLRP3, together with ROS produced by NF-κB-mediated inflammation. 52 After activation of NLRP3, the adapter protein ASC is required to further activate caspase-1. The maturation of the inflammatory cytokine IL-1β is related to the pathogenesis of liver injury. 53 Our results indicate that PB2 significantly inhibited the cold stimulation-induced protein expression of NLRP3 and ASC, caspase-1 cleavage and IL-1β maturation. Moreover, activation of Nrf2 can improve various diseases caused by inflammation and oxidative stress. 54,55 Under normal circumstances, Nrf2 is continuously degraded in a Keap1-dependent manner through the ubiquitin-proteasome pathway. 56
However, after exposure to stress inducers, the Nrf2 released from Keap1 translocates into the nucleus, heterodimerizes with the small Maf protein and activates cells through antioxidant response elements/electrophilic response elements. 22 There is increasing evidence that, during this process, AMPK leads to the accumulation of nuclear Nrf2 by inhibiting the phosphorylation of GSK3β, thereby attenuating stress-induced liver injury. 57 AMPK has the ability to maintain metabolic homeostasis and plays a key role in the survival of cells and organisms during metabolic stress; it also controls the redox state and mitochondrial function. 58 We speculate that PB2 may act through additional mechanisms to relieve CS-induced liver injury. A large number of experimental studies have shown that enhancing autophagy can reduce inflammation and ameliorate liver toxicity induced by LPS/GalN. 59 In addition, several reports suggest that enhanced autophagy can reduce APAP-induced liver toxicity by blocking oxidative stress. 60 However, the relationship between PB2, CS-induced liver injury and autophagy has not yet been investigated. Our results show that PB2 treatment induces autophagy by increasing the protein levels of Beclin-1 and LC3, whereas CS reduces these levels. Importantly, PI3K/Akt/mTOR signalling is crucial in the initial stage of autophagosome formation. 61 Initial activation of PI3K under stress conditions may increase free radicals in the vicinity of mitochondria. Proanthocyanidins induce autophagy in HepG2 cells through the inhibition of PI3K/AKT/mTOR signalling. 62 Therefore, it was necessary to explore the effect of PB2 on the PI3K/AKT/mTOR signalling pathway. In the current study, we found that PB2 attenuated the expression of PI3K, p-AKT and mTOR, which is conducive to the activation of autophagy, indicating that PB2-induced autophagy may be dependent on the PI3K/AKT/mTOR signalling pathway. The precise target of PB2 in the process of liver injury was further investigated, and the Hh pathway attracted our attention. The Hh pathway plays an important role during various types of liver injury, such as fibrosis, inflammation-related injury and carcinogenesis. 63,64 Emerging data show that Hh is a key regulator of adaptive and maladaptive responses to liver injury. 30 The severity of liver fibrosis parallels the level of Hh activity in patients with chronic liver diseases. 65 A recent study revealed that Hh signalling regulates hepatic inflammation in mice with non-alcoholic fatty liver disease. 66 Shh is the most studied ligand of the Hh signalling pathway and can interact with the receptor Patched in liver fibrosis and liver cancer cells. 67 Binding of Shh relieves the inhibitory effect of Patched on Smo, thereby promoting the activation and nuclear translocation of the transcription factor Gli1 and resulting in the expression of Shh target genes such as Smo and Gli1. 68 We therefore sought to understand the interaction between PB2 and the Shh signalling pathway in the liver under CS and used the Smo inhibitor cyclopamine to determine the role of PB2 in the Hh pathway in CS-induced liver injury. PB2 exerted almost the same inhibitory effect on the Hh pathway as cyclopamine did. These findings primarily indicate that the Hh pathway is the target of PB2 in liver injury. We also studied the relationship between Shh and autophagy activation and found that the Smo inhibitor cyclopamine increased the activation of autophagy, indicating that the addition of cyclopamine promotes the compensatory effect of autophagy activation.
Taken together, we suggest that PB2-mediated autophagy activation and Shh signalling inhibition improve liver injury caused by CS, and that Shh signalling inhibition may enhance compensatory autophagy activation. E-G, Quantitative analysis of the Western blotting results. Similar results were obtained from three independent experiments. All data are presented as the mean ± SEM (n = 6 in each group). *P < 0.05 and **P < 0.01 vs the control group; ##P < 0.01 vs the CS group; &&P < 0.01 vs the PB2 + cyclopamine group. In conclusion, we confirmed that PB2 can induce autophagy and inhibit the Shh signalling pathway to alleviate inflammation and oxidative stress by inhibiting Txnip/NLRP3 and TLR4/NF-κB and activating the Nrf2/Keap1 and AMPK/GSK3β signalling pathways, which improves CS-induced liver injury (Figure 8). Additionally, inhibiting the Shh signalling pathway may promote the compensatory effect of autophagy activation, thereby reducing the sensitivity of mice to CS and enhancing PB2-induced autophagy activation, which strengthens the ability of PB2 to protect against liver injury in mice. Overall, the study provides new insights into the functional mechanism of PB2 and the inhibition of the Shh signalling pathway in protecting the liver from inflammation and oxidative stress during CS-induced liver injury. ACKNOWLEDGEMENTS This study was supported by grants from the National Natural Science Foundation of China (No. 81700573; 81600504) and administered by the National Natural Science Fund Committee. Both funds facilitated the study design and data collection. CONFLICTS OF INTEREST The authors declare that there is no conflict of interest. DATA AVAILABILITY STATEMENT All data generated or analysed during this study are included in this article. FIGURE 8 Scheme of the protective effects of PB2 on CS-induced liver injury. A, Direct effects. PB2 induces autophagy activation and inhibits the Shh signalling pathway to alleviate inflammation and oxidative stress through inhibition of Txnip/NLRP3 as well as TLR4/NF-κB, and activation of the Nrf2/Keap1 and AMPK/GSK3β signalling pathways, which improves CS-induced liver injury. B, Compensatory effects. Inhibition of the Shh signalling pathway may promote the compensatory effect of autophagy activation, thereby reducing the sensitivity of mice to CS, and enhances PB2-induced autophagy activation, thereby enhancing the ability of PB2 to protect against liver injury in mice.
5,600
2021-06-21T00:00:00.000
[ "Biology" ]
Sol-Gel Derived Tungsten Doped VO2 Thin Films on Si Substrate with Tunable Phase Transition Properties Vanadium dioxide (VO2), with its semiconductor-metal phase transition characteristics, has shown great application potential in various optoelectrical smart devices. However, the preparation of doped VO2 films with a lower phase transition threshold on Si substrates needs more investigation for the exploration of silicon-based VO2 devices. In this work, VO2 films doped with different contents of W were fabricated on high-purity Si substrates, assisted by a post-annealing process. The films exhibited good crystallinity and uniform thickness. X-ray diffraction and X-ray photoelectron spectroscopy characterizations illustrated that the W element can be doped into the lattice of VO2 and leads to a small lattice distortion. Moreover, in situ FT-IR measurements indicated that the phase transition temperature of the VO2 films can be decreased continuously with increasing W doping content. Simultaneously, the doping leads to greatly enhanced conductivity in the film, which results in reduced optical transmittance. This work provides significant insights into the design of doped VO2 films for silicon-based devices. Introduction Phase transition oxides have attracted great attention due to the rich physics of phase transition phenomena and their huge application potential in various optoelectrical devices [1,2]. Vanadium dioxide (VO2) is one of the most intriguing prototypes. It exhibits a reversible semiconductor-metal phase transition, accompanied by giant and steep changes in resistivity, optical transmission, reflection, etc. Moreover, this phase transition can be triggered by many excitation sources, such as temperature, electric field, laser, and strain [3][4][5]. Thus, VO2 has been proposed for thermochromic windows, sensors, memristors, uncooled infrared focal planes for thermal imagers, etc. [6][7][8]. In particular, recent progress in terahertz (THz) technology indicates that VO2 is quite suitable for smart devices including modulators, switches, and filters for THz communications and imaging [9,10]. Silicon-based devices are fundamental to the present semiconductor industry. Thus, the preparation of VO2 films on silicon substrates is significant for their further application. There are already plenty of reported methods for the preparation of VO2 films, such as magnetron sputtering, pulsed laser deposition, chemical vapor deposition (CVD), and the sol-gel method [11][12][13][14]. Among these, the sol-gel method presents many advantages: it is simpler and faster, suitable for large-scale deposition, and convenient for composition design. It has been reported that both inorganic and organic sol-gel methods can be optimized to fabricate VO2 films [14][15][16]. In particular, Shi et al. developed a method to pre-treat the Si substrate with a hydrophilic solution and obtained enhanced hydrophilicity, so that the bonding of the Si substrate with the precursor V2O5 gel could be improved greatly. This provided a route to fabricate high-quality VO2 films by overcoming the contradiction between the hydrophobicity of the substrate and the hydrophilicity of the inorganic sol-gel [14]. Wu et al. proposed an organic sol-gel method to fabricate VO2 films, which was applicable to various hydrophobic substrates [16].
Furthermore, the sol-gel method shows unique convenience in the design and fabrication of VO2 films doped with tungsten (W), molybdenum (Mo), titanium (Ti), etc. In particular, doping VO2 films with W can decrease the phase transition threshold (temperature, laser pump fluence, etc.) remarkably [17][18][19][20][21], which paves the way for the application of VO2 films in low-power optoelectrical devices. Nevertheless, the preparation of W-doped VO2 films on Si substrates using the sol-gel method has rarely been reported. In this work, we used an inorganic sol-gel method to fabricate VO2 films doped with different contents of W. A precursor sol containing the W element was designed. Additionally, the Si substrates were pre-treated using a reported hydrophilic treatment process, after which they exhibited good compatibility with the sol. The gel films were annealed for crystallization and stoichiometry evolution of the vanadium oxides to eventually form the VO2 phase. The films presented tunable phase transition temperatures and optical switching properties. The results are significant for the fabrication and application of silicon-based VO2 devices. Experimental Section Preparation of W-doped VO2 films on Si substrate. The VO2 films doped with different contents of W were fabricated using an inorganic sol-gel method. Firstly, ammonium tungstate ((NH4)5H5[H2(WO4)6]·H2O) and V2O5 powder were mixed and melted at around 850 °C. Subsequently, the precursor sol was fabricated by pouring the molten mixture into deionized (DI) water, with a proportion of 1 g V2O5 per 40 mL of DI water. The single crystal Si (100) substrates (~2000 Ω cm resistivity) were pre-treated sequentially with ethyl alcohol and hydrophilic solutions [22]. Then, the precursor films were spin-coated onto the Si substrates and annealed at around 500 °C in a nitrogen atmosphere for 1.5 h. In this process, crystallization and phase evolution of the vanadium oxides occur, leading to the formation of the W-doped VO2 film. Characterization. The crystalline structures of the films were analyzed by X-ray diffraction (XRD, X'Pert, Philips, Amsterdam, The Netherlands) with a Cu Kα (λ = 0.154056 nm) radiation source. The morphologies of the products were investigated by scanning electron microscopy (SEM, S-4800, Hitachi, Tokyo, Japan), and the thickness of the film was determined from the cross-sectional SEM morphology. The vanadium valence states and chemical composition of the VO2 thin films were detected by X-ray photoelectron spectroscopy (XPS, Kratos, Manchester, UK) using an Al Kα (hv = 1486.6 eV) excitation source. The optical properties of the films were investigated with a Tensor 27 (Bruker, Bremen, Germany) spectrometer equipped with an adapted heating-control unit, and the hysteresis loops of the VO2 films were obtained by collecting the transmittance of the films at a fixed wavelength (4 µm). The sheet resistances of the films were measured using a four-point probe system (280SI) with a controllable heating system. Results and Discussion Figure 1a shows the XRD patterns of the VO2 films doped with different concentrations of W. A strong diffraction peak is observed at 2θ = 27.52° for both the undoped and the W-doped VO2 films, which can be indexed to the (011) plane of VO2. This indicates that the incorporation of W does not affect the preferential orientation of VO2 in the (011) direction on the single crystal Si substrate.
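Since the Cu Kα wavelength and the (011) diffraction angle are both stated above, the interplanar spacing follows directly from Bragg's law, d = λ / (2 sin θ). The short sketch below computes d for the reported undoped peak position and for a slightly shifted, hypothetical doped peak position to illustrate the lattice-expansion argument; the shifted angle is an assumption, not a measured value.

```python
import math

wavelength_nm = 0.154056          # Cu K-alpha wavelength stated in the Characterization section

def d_spacing(two_theta_deg: float) -> float:
    """First-order Bragg's law: d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2)
    return wavelength_nm / (2 * math.sin(theta))

d_undoped = d_spacing(27.52)      # reported (011) peak position of the undoped film
d_doped = d_spacing(27.40)        # hypothetical, slightly shifted peak for a W-doped film
print(f"d(011), undoped peak: {d_undoped:.4f} nm")
print(f"d(011), shifted peak: {d_doped:.4f} nm  (larger spacing -> lattice expansion)")
```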
No peaks of any other vanadium oxides (such as V2O5 and V2O3) are observed, revealing that the products have high purity. Additionally, there are no peaks related to ammonium tungstate or its derivatives, suggesting that the W atoms are incorporated into the crystal lattice of VO2, forming a substitutional solid solution. Furthermore, the magnified image at around 27.52° indicates that doping with W leads to a shift of the (011) peak with increasing W doping amount (as shown in Figure 1b). This can be ascribed to the larger atom size of W compared to V, which results in lattice expansion in VO2. It should be noted that the precursor films were annealed at around 500 °C in a nitrogen atmosphere for 1.5 h [14][15][16]. This annealing process has been widely reported for obtaining a high-purity VO2 phase. In this work, the W-doped films still demonstrate high quality. The typical SEM morphologies of the VO2 films deposited on the Si substrates with different W doping contents are presented in Figure 2. It is worth noting that W doping has a great influence on the morphology of the VO2 films. Figure 2a shows the SEM photograph of the film without W doping. Most of the particles are bonded together, and boundaries can hardly be seen. Moreover, the film shows obvious microcracks, which are harmful to the phase transition properties of VO2 films [23]. For the sample with a W doping of 0.61%, the film is uniform and compact with large grains of about 120 nm, and fuzzy boundaries can be seen. With 1.12% doping, the film is more compact, and the grain sizes are reduced slightly compared with the 0.61% W-doped VO2 film.
The typical SEM morphologies of the VO2 films deposited on the Si substrates with different W doping contents are presented in Figure 2. It is worth noting that W doping has a great influence on the morphology of the VO2 films. Figure 2a shows the SEM photograph of the film without W doping. Most of the particles are bonded together, and boundaries can hardly be seen. Moreover, the film shows obvious microcracks, which are harmful to the phase transition properties of VO2 films [23]. For the sample with a W doping of 0.61%, the film is uniform and compact with large grains of about 120 nm, and fuzzy boundaries can be seen. With 1.12% doping, the film is more compact, and the grain sizes are reduced slightly compared with the 0.61% W-doped VO2 film. In addition, the film exhibits clearer grain boundaries. Subsequently, as the W doping content increases to 1.62%, the grain size of the films is further reduced. This result suggests that W doping makes the VO2 film more compact and reduces the grain size. Furthermore, the cross-sectional shape of the W-doped film was observed, as shown in Figure 3, revealing a film thickness of about 404.3 nm. XPS was performed to investigate the composition and chemical state of the W-doped VO2 film deposited on the Si substrates. Figure 4a shows the wide-range survey spectrum of the 1.62% W-doped VO2 film. It reveals that the sample consists of vanadium, oxygen, nitrogen, carbon, silicon, and tungsten, where the silicon signal comes from the substrate, and the nitrogen and carbon are attributed to adventitious hydrocarbon contamination on the sample surface. In Figure 4b, the V2p peaks were fitted using a Shirley background. The V2p3/2 peak is separated into two components, meaning that two valence states of vanadium (+4 and +5) exist in the sample, where the binding energies of 515.81 eV and 516.98 eV correspond to V4+ and V5+, respectively. Both binding energies are consistent with the values reported for VO2 [24]. The nonstoichiometry of vanadium in the film can be attributed to oxidation at the surface of the sample when exposed to air. The fraction of the +4 valence state in the VO2 film on the Si substrates, evaluated from the peak areas of V4+ and V5+, is about 66.3%, illustrating that the main component of the W-doped film is VO2. Figure 4c shows binding energies of 35.18 eV and 36.98 eV for W4f7/2 and W4f5/2, respectively, revealing that the W ions in this sample exist as W6+ [25,26].
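As a minimal illustration of how the quoted +4 fraction follows from the fitted V2p3/2 peak areas (the area values below are placeholders, not the measured ones), the calculation reduces to a simple ratio of areas:

# Hypothetical fitted V2p3/2 peak areas in arbitrary units; the real values come from the XPS fit.
area_v4 = 66.3
area_v5 = 33.7

v4_fraction = area_v4 / (area_v4 + area_v5)        # fraction of V(4+) among V(4+) + V(5+)
print(f"V4+ fraction = {100 * v4_fraction:.1f} %")  # 66.3 %, the value reported in the text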
The optical properties of the VO2 films were investigated by infrared spectroscopy. Figure 5 shows the infrared transmittance at room temperature for VO2 films with W doping concentrations of 0% and 1.62%. Their infrared transmittances at a wavelength of 4 µm are 58% and 40%, respectively. The reason the transmittance decreases significantly with increasing W doping content could be as follows: the concentration of carriers generated by thermal excitation across the band gap of semiconducting VO2 increases with the W doping content, and the infrared shielding effect caused by these carriers reduces the infrared transmittance of the VO2 films [27]. Figure 6 displays the transmittance-temperature hysteresis loops at a fixed wavelength of 4 µm for VO2 films synthesized with different W doping contents. The corresponding first-order derivative curves are shown in the inset of Figure 6. The figures clearly illustrate the influence of W doping on the phase transition of the VO2 films, and the calculated results are summarized in Table 1. As can be seen from Figure 6 and Table 1, the Tc values are 67.35, 47.9, 40.4, and 33.35 °C and the hysteresis widths are 9.9, 5.2, 6.0, and 9.3 °C for the 0%, 0.61%, 1.12%, and 1.62% W-doped VO2 films, respectively. Several characteristics can be concluded. First, compared with the undoped film, the Tc of the doped VO2 films decreases effectively. With the introduction of ammonium tungstate, the V4+-V4+ pairs are destroyed by the partial substitution of V atoms with W atoms, which reduces the stability of the VO2 structure. Therefore, the phase transition temperature of the VO2 films is reduced [28]. Second, the hysteresis widths increase with increasing W doping content; however, all of them remain smaller than that of the undoped film. According to Lopez et al., the driving force of the phase transition comes from various defects in the film [29]. According to the research of Jing Du et al., the introduction of W probably enhances the nucleation density (ρ) of defects, so the bulk free energy (Δg_ex) per unit volume needed for nucleation decreases; since Δg_ex = c|T−T0| (where c is a constant, T0 is the Tc of the undoped film, and T represents the actual Tc of the film), |T−T0| decreases as Δg_ex decreases, and a smaller value of |T−T0| suffices to trigger the phase transition, therefore resulting in decreased hysteresis widths [30].
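The extraction of Tc and the hysteresis width from the loops in Figure 6 is not spelled out in the text; a common convention, sketched below under that assumption, is to take the peak of the first-order derivative of each branch as the branch transition temperature, report Tc as the mean of the heating and cooling values, and take the hysteresis width as their difference.

import numpy as np

def branch_tc(temperature, transmittance):
    # Transition temperature of one branch: peak of |d(transmittance)/dT|
    dT = np.gradient(transmittance, temperature)
    return temperature[np.argmax(np.abs(dT))]

def loop_parameters(T_heat, tr_heat, T_cool, tr_cool):
    tc_heat = branch_tc(T_heat, tr_heat)
    tc_cool = branch_tc(T_cool, tr_cool)
    tc = 0.5 * (tc_heat + tc_cool)        # reported transition temperature
    width = abs(tc_heat - tc_cool)        # hysteresis width
    return tc, width

# Usage on a synthetic, idealized loop (tanh-shaped branches), only to show the convention:
T = np.linspace(20, 90, 500)
heating = 0.6 - 0.2 * np.tanh((T - 72) / 2)     # hypothetical heating branch
cooling = 0.6 - 0.2 * np.tanh((T - 62) / 2)     # hypothetical cooling branch
print(loop_parameters(T, heating, T, cooling))  # roughly (67.0, 10.0)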
The electrical performance of the VO2 films was tested using the conventional four-point-probe method. Figure 7 shows the square resistance versus temperature curves of the VO2 films. It can be seen that the square resistances of the VO2 thin films decrease exponentially as the temperature increases. The undoped VO2 film has a rather high square resistance of 144 kΩ/□ at 30 °C and 0.28 kΩ/□ at 85 °C, so the square resistance transition spans approximately three orders of magnitude. For the 1.62% W-doped VO2 thin film, the square resistance is 64 kΩ/□ at 30 °C and 1.25 kΩ/□ at 70 °C. These results imply that W doping reduces not only the square resistance of the VO2 thin films but also the number of orders of magnitude spanned by the square resistance transition, which is consistent with previous reports [31][32][33]. The 3d1 configuration of the vanadium ions and the 3d1 electrons tied up in V4+-V4+ pairs give a high activation energy between the conduction and valence bands of undoped VO2 thin films, resulting in poor conductivity [34]. By doping with W, the partial substitution of V atoms with W atoms favors an enhancement of the electron concentration, and the Fermi level shifts toward the conduction band. Consequently, the activation energy of the W-doped VO2 thin films decreases, and the conductivity increases [35]. It can also be concluded that the transition temperature of VO2 thin films can be tuned effectively by W doping, in good agreement with what we previously obtained from FTIR.
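As a small arithmetic check on the transition magnitudes quoted above (using only the square resistances reported in the text), the number of decades spanned is the base-10 logarithm of the resistance ratio:

import math

# Square resistances quoted above, in kilo-ohms per square
undoped_decades = math.log10(144 / 0.28)   # 30 °C versus 85 °C
doped_decades = math.log10(64 / 1.25)      # 30 °C versus 70 °C, 1.62% W-doped film
print(f"undoped: {undoped_decades:.2f} decades, 1.62% W-doped: {doped_decades:.2f} decades")
# about 2.7 decades for the undoped film (approximately 3, as stated) and about 1.7 after doping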
Conclusions We used an inorganic sol-gel method, assisted by a post-annealing process, to fabricate VO2 films on Si substrates, and the films were doped with different contents of W. The results indicated that the films exhibited good crystallinity, uniform thickness, and a phase transition temperature that decreased continuously with W doping content. However, the doping also leads to enhanced conductivity in the film, which results in reduced optical transmittance. Thus, the balance between a lower phase transition temperature and the optical switching properties should be considered with regard to possible devices. This work provides significant insights into the design of doped VO2 films for silicon-based devices. Author Contributions: All authors contributed to the experimental design. Material preparation and data collection were performed by X.D. and Y.Z. Data curation was performed by Y.L. All authors commented on the preparation of the manuscript. All authors have read and agreed to the published version of the manuscript. Funding: This research received no external funding. Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: The data can be obtained by contacting the corresponding author.
Search of Potential Vaccine Candidates against Trueperella pyogenes Infections through Proteomic and Bioinformatic Analysis Trueperella pyogenes is an opportunistic pathogen, responsible for important infections in pigs and significant economic losses in swine production. To date, there are no available commercial vaccines to control diseases caused by this bacterium. In this work, we performed a comparative proteomic analysis of 15 T. pyogenes clinical isolates, by “shaving” live cells, followed by LC-MS/MS, aiming at the identification of the whole set of surface proteins (i.e., the “pan-surfome”) as a source of antigens to be tested in further studies as putative vaccine candidates, or used in diagnostic tools. A total of 140 surface proteins were detected, comprising 25 cell wall proteins, 10 secreted proteins, 23 lipoproteins and 82 membrane proteins. After describing the “pan-surfome”, the identified proteins were ranked in three different groups based on the following criteria: to be (i) surface-exposed, (ii) highly conserved and (iii) widely distributed among different isolates. Two cell wall proteins, three lipoproteins, four secreted and seven membrane proteins were identified in more than 70% of the studied strains, were highly expressed and highly conserved. These proteins are potential candidates, alone or in combination, to obtain effective vaccines against T. pyogenes or to be used in the diagnosis of this pathogen. Introduction Trueperella pyogenes is a Gram-positive bacterium that is part of the normal biota of skin and mucous membranes of upper respiratory, gastrointestinal, reproductive and urinary tracts of domestic and wild life animals [1]. However, it can be an opportunistic pathogen responsible for purulent infections, such as metritis, mastitis, pneumonia and abscesses, of special importance in livestock breeding animals because of the economic losses it generates [2]. Antimicrobial treatment is the main tool to control the infections caused by this microorganism so far [3]. However, the growing concern about the use of antimicrobials requires studying other alternatives for the control of the diseases caused by this pathogen [4]. Among them, vaccination is one of the most recommended measures, and should be considered a method of first choice to prevent T. pyogenes diseases [2]. Various approaches to stimulate a protective immunity against T. pyogenes infection in animals have been tried. Whole-cell vaccines based on killed or attenuated strains or culture supernatant have given inconsistent results [5][6][7]. Bacterial Strains and Culture Conditions Fifteen T. pyogenes isolates recovered from pigs totally or partially condemned at the slaughterhouse after veterinary inspection (Regulation 2004/854/EC) were studied. Those isolates were genetically characterized by our group in a previous work [4]. Samples were obtained from different locations with macroscopic lesions of pneumonia, endocarditis, arthritis, lymphadenitis, abscess or pyogranuloma-like lesions ( Table 1). All strains, maintained at −80 • C, were plated on Columbia CNA agar (Oxoid ltd., Hampshire, UK), supplemented with 5% (v/v) sterile defibrinated sheep blood. Plates were incubated under microaerophilic conditions (5% CO 2 ) at 37 • C for 24-48 h [23]. Once grown, the whole bacterial growth was inoculated in 45 mL of brain heart infusion (BHI, Oxoid ltd., Hampshire, UK) [24], and incubated at 37 • C for 48 h under aerophilic conditions. After this incubation, T. 
pyogenes reached an OD595 of 0.4, corresponding to mid-exponential phase. Table 1 footnotes: (a) All the isolates were recovered from pigs totally or partially condemned at the slaughterhouse after veterinary inspection (Regulation 2004/854/EC). Samples were obtained from lymph node (n = 4), lung (n = 2), joint (n = 2), liver (n = 2), heart (n = 2), spleen (n = 1), abscess (n = 1) and brain (n = 1), with macroscopic lesions of pneumonia, endocarditis, arthritis, lymphadenitis, abscess or pyogranuloma-like lesions. (b) Strains were isolated from carrier pigs which were reared under intensive or extensive farming conditions. (c) Pulsed-field gel electrophoresis (PFGE) patterns obtained after macrorestriction with the BcuI enzyme, showing the genetic relationship between T. pyogenes isolates [4]. (d) All the isolates analysed in the previous study were grouped within three main PFGE clusters (A-C) at 85% genetic similarity [4]. "Shaving" of Bacterial Live Cells and Peptide Extraction Bacteria from 45 mL cultures at mid-exponential growth phase (approximately 10^7 cells at OD595 = 0.4) were harvested by centrifugation at 3500× g for 10 min at 4 °C and washed three times with 20 mL PBS. Cells were resuspended in 0.4 mL of PBS/30% sucrose in a 1.5 mL tube. Proteolytic reactions were carried out with trypsin (Promega, Madison, WI, USA) at 5 µg/mL for 30 min at 37 °C with top-down agitation. The digestion mixtures were centrifuged at 3500× g for 10 min at 4 °C, and the supernatants (the "surfomes" containing the peptides) were filtered using 0.22 µm pore-size filters (Millipore, Burlington, MA, USA). The "surfomes" were re-digested with 2 µg trypsin for 2 h at 37 °C with top-down agitation. Salts were removed using Oasis HLB extraction cartridges (Waters, Milford, MA, USA). Peptides were eluted with increasing concentrations of acetonitrile/0.1% formic acid, according to the manufacturer's instructions. Peptide fractions were concentrated with a vacuum concentrator (Eppendorf, Hamburg, Germany) and kept at −20 °C until further analysis. Liquid Chromatography-Mass Spectrometry (LC-MS/MS) Analysis Peptide separation was performed by nano-LC using a Dionex Ultimate 3000 nano UPLC (Thermo Scientific, San Jose, CA, USA) equipped with a reverse-phase C18 75 µm × 50 mm Acclaim Pepmap column (Thermo Scientific) at 300 nL/min and 40 °C for a total run time of 85 min. The peptide mixture was previously concentrated and cleaned up on a 300 µm × 5 mm Acclaim Pepmap cartridge (Thermo Scientific) in 2% acetonitrile/0.05% formic acid for 5 min at a flow of 5 µL/min. Buffer A (0.1% formic acid) and Buffer B (80% acetonitrile, 0.1% formic acid) were used as the mobile phases for the chromatographic separation, according to the following elution conditions: 4-35% Buffer B for 60 min; 35-55% Buffer B for 3 min; 55-90% Buffer B for 3 min, followed by 8 min washing with 90% Buffer B and re-equilibration for 12 min with 4% Buffer B. Peptide positive ions eluted from the column were ionized by a nano-electrospray ionization source and analyzed in positive mode on a trihybrid Thermo Orbitrap Fusion (Thermo Scientific) mass spectrometer operating in Top30 data-dependent acquisition mode, with a maximum cycle time of 3 s. MS1 scans of peptide precursors were acquired in a 400-1500 m/z range at 120,000 resolution (at 200 m/z), with a 4 × 10^5 ion count target threshold.
For MS/MS, precursor ions were first isolated in the quadrupole at 1.2 Da, and then CID-fragmented in the ion trap with 35% normalized collision energy. Monoisotopic precursor selection was turned on. The ion trap parameters were: (i) the automatic gain control was 2 × 10^3; (ii) the maximum injection time was 300 ms; and (iii) only those precursors with charge state 2-5 were sampled for MS/MS. In order to avoid redundant fragmentations, a dynamic exclusion time was set to 15 s with a 10 ppm tolerance around the selected precursor and its isotopes. Database Searching and Protein Identification The mass spectrometry raw data were processed using Proteome Discoverer (version 2.1.0.81, Thermo Scientific). Charge state deconvolution and deisotoping were not performed. MS/MS spectra were searched with the SEQUEST engine against a database of Uniprot_Trueperella pyogenes_Jun2018 (www.uniprot.org, Taxonomy ID: 1661) containing all the strain sequences available to date, applying the following search parameters: trypsin digestion with 4 missed cleavages. Methionine oxidation was set as a variable modification. A value of 10 ppm was set for the mass tolerance of precursor ions, and 0.1 Da tolerance for product ions. Peptide identifications were accepted if they exceeded the filter parameter Xcorr score versus charge state with SequestNode Probability Score (+1 = 1.5, +2 = 2.0, +3 = 2.25, +4 = 2.5). All the identifications were manually inspected to eliminate protein redundancies using BLASTp. For those hits which resulted in homology, the one with the highest score was selected, and the others were discarded. The mass spectrometry raw data have been deposited in PeptideAtlas (www.peptideatlas.org) with the dataset identifier PASS01586. Bioinformatic Analysis of Protein Sequences Primary computational predictions of subcellular localization were carried out by using PsortB v3.0 (https://www.psort.org/psortb/). Feature-based algorithms were also used to contrast the PsortB predictions: TMHMM 2.0 (http://www.cbs.dtu.dk/services/TMHMM/) for searching transmembrane helices; SignalP 5.0 (http://www.cbs.dtu.dk/services/SignalP/) for type-I signal peptides (those proteins containing only a cleavable type-I signal peptide as the featured sequence were classed as secreted); and LipoP (http://www.cbs.dtu.dk/services/LipoP/) for identifying type-II signal peptides, which are characteristic of lipoproteins. Topological representations of membrane proteins (Figure 2) were performed with the web-based TOPO2 software (http://www.sacs.ucsf.edu/TOPO2/). Moreover, the algorithm VaxiJen (http://www.ddg-pharmfac.net/vaxijen/VaxiJen/VaxiJen.html), based on protein physicochemical properties, was used to predict in silico the protective capacity of the proteins included in the ranking. The VaxiJen model used was "bacterial", with the threshold fixed at 0.5 [25,26].
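As an illustrative, simplified sketch of how the prediction outputs listed above can be combined into the four surfome categories used in this work (the authors' exact decision order is not stated, so the rule ordering below is an assumption, and the function and argument names are hypothetical):

def classify_surface_protein(psortb_localization, n_tmd, has_signal_peptide_I, has_signal_peptide_II):
    # Assign a surfome category from PsortB, TMHMM, SignalP and LipoP outputs (assumed priority order)
    if psortb_localization == "Cellwall":
        return "cell wall"
    if has_signal_peptide_II:          # type-II signal peptide reported by LipoP
        return "lipoprotein"
    if n_tmd >= 1:                     # transmembrane helices reported by TMHMM
        return "membrane"
    if has_signal_peptide_I:           # only a cleavable type-I signal peptide (SignalP)
        return "secreted"
    return "cytoplasmic or unknown"

# Example: a protein with one predicted TMD and no signal peptides
print(classify_surface_protein("Unknown", 1, False, False))   # -> membrane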
Data Analysis and Statistics For the 15 strains, the "shaving" experiments were conducted in triplicate, with each replicate being an independent culture. Proteins were considered to be present in a given sample whenever they were detected in at least two out of the three biological replicates for each strain. Otherwise, proteins found in only one biological replicate were discarded from the overall count of identified proteins for a given strain. For further quantitative analysis, means and standard deviations were calculated using an Excel spreadsheet (Microsoft Excel 2011 v14.0.0 for Mac, Microsoft, Redmond, WA, USA). Z-scored values were calculated before the principal component and clustering analyses were performed. Principal component analysis was done using the R FactoMineR package. The factoextra package was used to represent these analyses, and the pheatmap package was used to cluster the data and represent the corresponding heatmaps. Non-detected proteins in samples were assigned a value of 0 to avoid the processing of NA (not available) data. Describing the "Pan-Surfome" of T. pyogenes For this study, the surface proteome of 15 clinical isolates (Supplementary Materials Table S1) was obtained by "shaving" bacterial live cells with trypsin and further LC-MS/MS analysis. We defined the "surfome" of each isolate as the set of predicted surface proteins identified in at least two biological replicates of the given isolate, and the global "pan-surfome" as the set of all the proteins found in the whole collection of strains. A total of 140 surface proteins were identified in the 15 T. pyogenes isolates analyzed, grouped in the following categories (Table 2): 25 were cell wall proteins (representing 17.9% of the total identified surface proteins); 10 (7.1%) were proteins possessing a signal peptide I, i.e., proteins secreted via the SPI secretory pathway; 23 (16.4%) were lipoproteins with a signal peptide II; and 82 (58.6%) were membrane proteins with one or more transmembrane domains (TMD). Table 2 also shows the range of proteins of each category detected per isolate.
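As a minimal sketch of the two-out-of-three presence rule and the "surfome"/"pan-surfome" definitions given above (an illustration only; the authors performed these steps in Excel and R, and the data structures here are assumptions):

from collections import defaultdict

def isolate_surfome(replicate_hits):
    # Surfome of one isolate: proteins detected in at least 2 of the 3 biological replicates
    counts = defaultdict(int)
    for replicate in replicate_hits:           # each replicate is a set of protein accessions
        for protein in replicate:
            counts[protein] += 1
    return {protein for protein, n in counts.items() if n >= 2}

def pan_surfome(all_isolates):
    # Pan-surfome: union of the surfomes over the whole strain collection
    result = set()
    for replicate_hits in all_isolates:
        result |= isolate_surfome(replicate_hits)
    return result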
Interestingly, 11 out of the 25 identified proteins predicted to be cell wall-attached, possessed an LAXTG sortase E-recognizing motif instead of the most common LPXTG, and the membrane protein sortase E was also identified. In addition, 10 of the identified cell wall proteins had an LSXTG motif (Supplementary Materials Table S1). On the other hand, 820 proteins were classified as cytoplasmic proteins, and 64 proteins were predicted by PsortB to be "unknown". For these, no exporting motif was identified after manual inspection and using other primary prediction algorithms. The membrane proteins were those exhibiting the highest expression frequencies: 31 proteins were found at least in the 50% of the analyzed isolates (37.8% of the membrane proteins found in the "pan-surfome"). In a second place, eight cell wall anchored proteins were identified in a minimum of 50% of the isolates (32% of the cell wall proteins described in the "pan-surfome"). Regarding the lipoproteins and secreted proteins categories, six (26.1%) and six (60%) proteins were found in or more than a half of the T. pyogenes strains, respectively. However, if we compare the identification frequencies of the different categories in relative terms, the secretory proteins and the cell wall category were the most prevalent ones, as these categories showed the highest number of proteins identified in a high proportion of isolates: six out of 10 proteins (60%) and eight out of 25 proteins (32%) in ≥50% of the isolates, respectively. Analysis of Differences in Surface Protein Abundances of T. pyogenes Clinical Isolates After protein identification, we determined the differences in the abundances of surface proteins within the 15 T. pyogenes isolates by a label free-based semi-quantitative analysis, based on chromatography peak areas (Supplementary Materials Table S1). First, we performed a principal component analysis (PCA) to evaluate differences in the overall pattern of surface protein abundance when the 15 isolates were compared (Supplementary Materials Figure S1). The measurement of the Euclidean distances showed that in general terms, the three replicates of each isolate were grouped, with some major dispersions in relative terms for isolates A and M, and major absolute dispersions for isolates B, E, and I (Supplementary Materials Figure S2). However, B and E strains were clearly separated from the rest in the principal component (PC) 1 axis, as well as isolate I to a lesser extent, thus indicating that their overall surface protein pattern was different in terms of protein abundances. Then, we represented in hierarchically-clustered heatmaps the z-scored abundances of the 140 identified surface proteins, grouped according to their subcellular localization: lipoproteins, cell wall, membrane, and secreted proteins (Figure 1). Regarding lipoproteins, the isolate I differed from the others in a relatively higher expression of almost half of the identified lipoproteins, followed by the isolate B to a lesser extent (Figure 1a). Additionally, isolate B was the one showing the highest abundances of many cell wall proteins (Figure 1b). A high diversity and variability of membrane protein abundances was found throughout all the isolates (Figure 1c). However, differences for secreted proteins were less clear (Figure 1d). Especially in these last two categories, there was a greater dispersion of values between replicates. 
In summary, it appeared that the discrimination in surface protein abundances among isolates was mainly due to lipoproteins and cell wall proteins (at least, regarding separation of B and I from the rest of isolates), and to membrane proteins to a lesser extent. Ranking of Proteins from the "Pan-Surfome" of T. pyogenes Based on Their Potential as Putative Vaccine Candidates Finally, the identified surface proteins were ranked in three groups (A, B and C, from best to worst) of a priori potentiality for further immunization and/or vaccination studies on the basis of previous works [16,[25][26][27] (Table 3). Briefly, the proteins were ranked according to the following parameters: to be surface expressed, highly conserved and widely distributed among isolates. To ensure the wide distribution of selected proteins among the different isolates, proteins were classified in the three groups: proteins present in more than 70% of the isolates were included in the group A (n = 16), proteins present in 50-70% of strains in the group B (n = 9), and in the group C, proteins present in 30-50% of strains (n = 15). Table 3. Rating of proteins according to their potentiality as putative antigens for further immunization and/or vaccination studies. A special mention is needed for membrane proteins. They are the most embedded ones in the membrane because they have transmembrane domains (TMD). They can be subdivided into membrane proteins with one TMD, which usually have domains in the extracellular side with hundreds of amino acid residues, and membrane proteins with more than one TMD, with a lower probability of having loops large enough to reach the surface and be accessible to antibodies. For this reason, membrane proteins were divided into two sub-groups: membrane proteins with one TMD (n = 8), and those with more than one TMD (n = 6) ( Table 3). Ranking Proteins Their topology was studied by means of TOPO2 after mapping the experimentally identified peptides (Appendix A, Supplementary Materials Dataset A1) on the predicted sequences. The membrane proteins included in the ranking were those with the majority of peptides oriented to the external side of the membrane (Figure 2), either with one or more than one TMD. Another criterion to select proteins as antigen candidates for further studies was that they were highly conserved among the isolates. The degree of homology in the amino acid sequence of each protein was compared with the 10 sequenced T. pyogenes strains that are published so far. All the proteins listed in the ranking showed a degree of homology in their amino acid sequence that ranged from 82.5% to 100% among all the completely sequenced isolates of T. pyogenes (data not shown). Finally, the algorithm VaxiJen was used to predict the protective capacity of those proteins included in the ranking. Most of the proteins which were previously included in the ranking reached 0.5 (ranged from 0.50 to 0.8727). Considering the average of the score per protein category (regardless of the group A, B or C), an average of 0.61 on the VaxiJen score was observed for cell wall proteins, secreted proteins and lipoproteins. A lower score was obtained for the membrane proteins, either with one TMD (0.55) or more than one TMD (0.55). 
Some proteins were removed from the rating because their VaxiJen scores were lower than 0.5, and therefore they were considered as potential non-antigens: in the category A, the membrane protein X4RDW5, annotated as a signal peptidase I and present in all the clinical isolates, was removed from the rating because its VaxiJen score was 0.43. Within the category C, the cell wall anchored protein X4QMI5, the secreted protein A0A0M4JYB5, the lipoproteins A0A0M5KPJ2 and A0A2G9KEH2 and the membrane proteins A0A2G9KB80, X4QR96 and A0A0M3SNU6 (0.40; 0.48; 0.45; 0.46; 0.48, 0.48 and 0.47, respectively) were also removed. The majority of the excluded proteins (shown in bold in Table 3) belonged to the category of membrane proteins. Discussion Surface proteins play an essential role in the interplay between cells and their environment, which is even more relevant for microorganisms causing infectious diseases, as many of these proteins are involved in virulence or pathogenicity [11,27]. Moreover, surface proteins, as being exposed, have the highest chances to raise an effective immune response and, therefore, to become ideal candidates for drug or vaccine development [28]. Proteomics offers an adequate tool for massive identification of proteins. Particularly, the "shaving" of bacterial live cells with proteases followed by LC-MS/MS analysis constitutes a powerful technique for the fast identification of the most abundant and exposed surface proteins, i.e., the "surfome" [12,19,29]. In the present study, we performed, for the first time, the "shaving" approach to identify the "pan-surfome" of T. pyogenes and to carry out a comparative analysis of the surface protein profile among several clinical isolates. A total of 140 surface proteins were identified, similar to that obtained for other pathogens [12,19,30,31]. It demonstrates the utility of the proteomic "shaving" of live cells for the detection of surface proteins and to describe the "pan-surfome" of T. pyogenes. As expected from previous applications of this proteomic approach, we also found a substantial number of cytoplasmic proteins. It is widely known that the identification of predicted cytoplasmic proteins in bacterial surface fractions is not strange and an ineluctable fact. It can be due to several reasons, as residual cell lysis, export by non-canonical secretion pathways (i.e., "moonlighting" proteins) or release via extracellular membrane vesicles [32]. When we compared the expression frequencies of the different categories in relative terms, the secreted and the cell wall proteins were those most prevalent, as those two categories exhibited the highest number of proteins identified in a high proportion of isolates (six out of 10 secreted proteins and eight out of 25 cell-wall proteins in ≥50% of the isolates, respectively). This indicates that those protein categories are most exposed on the surface of Gram-positive bacteria, as already reported [11]. Very similar results have been obtained by proteomic analysis in other Gram-positive species, such as Streptococcus pneumoniae, Streptococcus suis, Enterococcus faecalis and group A Streptococcus [12,19,30,31]. Noticeably, we identified 11 predicted cell wall proteins that possessed an LAXTG motif instead of the LPXTG that is the most common in Gram-positive bacteria. In addition, 10 out of the 25 identified cell wall proteins had an LSXTG motif. Additionally, we identified the membrane protein sortase E. 
This class of sortases has been recently described in bacteria with high G + C content [33,34]. Sortase E would act as the housekeeping sorting enzyme in T. pyogenes, as sortases A and E are never found in the same organism. Sortases E recognize substrates containing the LAXTG motif [35,36]. However, there is no published evidence that they can act on LSXTG, although it cannot be discarded. Considering that intra-species genetic variability can take place and that the protein expression pattern among isolates varies, finding a lot of proteins that are common to the majority of the analyzed isolates would not be expected [12,19,37]. In fact, in Streptococcus pneumoniae it has been reported that only 10.5% of the identified surface proteins were common to all the analyzed isolates [12] and in Streptococcus suis, Gómez-Gascón et al., did not identify any common protein in the 100% of the tested strains (n = 39) [19]. In this study, 51 surface proteins were identified in more than 50% of the strains. This also indicates a variability of protein expression pattern among all the isolates, as shown by the PCA. In addition, our hierarchically-clustered heatmap analysis showed that lipoproteins, cell wall proteins and membrane proteins to a lesser extent contributed to these differences among the clinical isolates, in a similar way to a recent study in S. suis [38]. Moreover, when we searched for a relationship between the protein pattern expression and the pulsed field gel electrophoresis (PFGE) clusters in our T. pyogenes isolates, no significant correlation was obtained (data not shown). Those findings coincide with the results showed previously by our group, i.e., a relatively high genetic diversity in this bacterial species [4]. We do not know whether there is a correlation or not between these differences and other biological features, such as antimicrobial susceptibility/resistance patterns. After describing the "pan-surfome", we ranked the identified proteins in three different groups (A, B, C; from best to worst) based on three criteria: to be (i) surface-exposed, (ii) highly conserved and (iii) widely distributed among different isolates. Following these criteria, 16 proteins were included in the group A, nine proteins were included in the group B, and 15 proteins were included in the group C. We assumed that proteins belonging to the categories of cell-wall anchored, secreted and lipoproteins are highly accessible to antibodies if they were identified with this proteomic procedure, as already demonstrated [7], and must be considered as the best option for further studies. Therefore, we included two, two and six cell-wall proteins in the groups A, B and C, respectively. For the lipoprotein category, three were included in the groups A and C and 2 were ranked in the group B. Regarding secreted proteins, four, two and one were included in the groups A, B and C, respectively. Although several membrane proteins could be included in any group, just seven, three and five proteins were included in the groups A, B and C, respectively. Membrane proteins are, in principle, more embedded and therefore less surface-exposed and accessible, unless they have domains large enough to reach the surface through the peptidoglycan layer. In addition, for some of these membrane proteins the identified peptides matched loops that are theoretically predicted to be in intracellular domains. 
This can be due either to misleading predictions by subcellular localization/topology algorithms [12,19,30,39], or to release of such domains because of residual lysis. These facts make membrane proteins, a priori, worse candidates than cell wall-anchored or lipoproteins. Finally, we used the algorithm VaxiJen, which is based on the physico-chemical properties of proteins, to predict the protective capacity of those proteins included in the ranking categories. According to the average score obtained in VaxiJen for cell wall proteins, secreted proteins and lipoproteins (score = 0.61) would be considered the best antigens to raise a high and effective immune response, in comparison with membrane proteins possessing one TMD (0.55) and membrane proteins with more than one TMD (0.55). Those findings agree with the statement published by other authors that cell wall-anchored proteins, lipoproteins and secreted proteins are the best options to this purpose [10,11]. Moreover, most of the proteins that were previously included in the ranking reached the threshold value of 0.5. Some proteins belonging to different categories were removed from the ranking as they had a VaxiJen score lower than 0.5 and were classified as non-antigens. Most of the excluded proteins belonged to the category of membrane proteins. According to our criteria, a total of 16 proteins (two cell wall proteins, three lipoproteins, four secreted proteins and seven membrane proteins) fulfilled requirements to be appropriate candidates for further immunization and vaccine studies. All of them were widely distributed (present in ≥70% of isolates) and highly conserved. It should be noted that pyolysin (Q9S0W7) was rated in the best group of our ranking, as it was identified in 100% of the analyzed isolates. This reveals that this protein is a good putative candidate to be considered in further studies to develop a vaccine or for being included in new diagnostic tools. This is not surprising because it is considered the major virulence factor of T. pyogenes encoded by the gene plo, which has been detected in all wild-type strains described until now [2]. Jost et al., in 2003, tested a vaccine based on PLO with the detection of specific antibodies in sera of immunized mice and showing protection against infection [40]. Other authors have developed a vaccine against multiple pathogens, T. pyogenes and C. perfringens, with really promising results [17,41]. However, different authors have questioned that the results obtained in murine models can be applied to pigs [42]. Nonetheless, the current tendency is genetic immunization, like the vaccine developed by Huang et al. in 2018 that consisted of a DNA vaccine containing genes encoding four different T. pyogenes virulence factors [43]. This work provides interesting information on the field of proteomics applied to the control of infectious diseases, since it makes it possible to find surface antigens for the development of new diagnostic tools and recombinant subunit vaccines against T. pyogenes, a not well known pathogen. 
Conclusions The proteomic "shaving" of live cells is a useful tool for the detection of common proteins and for describing the "pan-surfome" of Trueperella pyogenes. Moreover, two cell wall proteins (X4QWN2, X4R8M3), three lipoproteins (A0A0M3SNR1, A0A0M4K9G4, A0A0M4JY33), four secreted proteins (X4R0V4, A0A2G9KEL5, Q9S0W7, X4QUK6) and seven membrane proteins (A0A0M4K7E7, A0A2G9KDB2, A0A0M3SNZ9, A0A0M4KS30, A0A2G9KB86, A0A2G9KCY0, A0A0M4JWL1) were identified in more than 70% of the studied isolates, were highly expressed and were highly conserved. These proteins could be good putative antigen candidates, alone or in combination, in future vaccination studies, or could be included in appropriate tools to diagnose infections caused by this pathogen.
Numerical Solution of Blood Flow and Mass Transport in an Elastic Tube with Multiple Stenoses The simultaneous effect of a flexible wall and multiple stenoses on the flow and mass transfer of blood is investigated through numerical computation and simulation. The solution is obtained using the Marker and Cell technique on an axisymmetric model of Newtonian blood flow. The results compare favorably with physical observations, where the pulsatile boundary condition and double stenoses result in a higher pressure drop across the stenoses. The streamlines, the iso-concentration lines, the Sherwood number, and the mass concentration variations along the entire wall segment provide a comprehensive analysis of the mass transport characteristics. The double stenoses and pulsatile inlet conditions increase the number of recirculation regions and effect a higher mass transfer rate at the throat, whereby more mass is expected to accumulate and cause further stenosis. Introduction Caro et al. [1] postulated that atherosclerosis, which is a narrowing of the artery as a result of plaque build-up, may occur due to a shear-dependent mass transfer mechanism between blood cholesterol and the arterial wall. Cholesterol exists in blood in the form of low-density lipoproteins (LDLs), whose deposition along the walls of the artery is a key step in atherogenesis, which would lead to stenosis. Stenosis can affect the velocity of blood flowing through the artery and the blood pressure, and can lead to collapse of the heart, which could in turn have disastrous consequences. Thus, an understanding of the behavior of local mass transport in arterial stenosis is important in the study of the formation and development of atherosclerotic lesions, and for appropriate assessment of the possible correlation between the site of atherosclerotic lesions and the pattern of mass transport. Ethier [2] carried out computational modelling of mass transfer and studied its links to atherosclerosis. Other studies on mass transport and fluid flow in stenotic arteries with axisymmetric and asymmetric models have been carried out by [3][4][5][6]. In these studies, the arterial wall was considered rigid and the artery was assumed to have a single mild stenosis, in which the geometry of the stenosis is represented by the usual cosine curve along with the restriction that the ratio of the stenosis severity to the radius of the artery is very small. In reality, this is not the case: in many medical situations, the patient is found to have multiple stenoses in the same arterial segment. Investigations of the effect of multiple stenoses on blood flow have been carried out, amongst others, by [7][8][9][10]. These studies showed, from both experimental results and theoretical calculations, that the total effect of a series of noncritical stenoses is approximately equal to the sum of their individual effects, so that together they can become critical and produce symptoms of arterial insufficiency. The flow energy loss due to the presence of the stenoses, which is directly related to the pressure drop across them, increases with the number of stenoses but is not strongly dependent on the spacing between them. The authors of [11][12][13][14][15][16][17][18][19] have also investigated blood flow through multiple stenoses; however, these studies have not considered the mass transfer. Another aspect to be considered in arterial blood flow is the cyclic nature of the heart pump, which creates pulsatile conditions in the arteries, giving rise to unsteady flow.
It is observed that most CFD models of arterial hemodynamics make the simplifying assumptions of rigid walls and fully developed inlet velocities (cf. [13][14][15][16][17][18][19]). But arteries are not rigid tubes; they adapt to varying flow conditions by enlarging or shrinking. All of these physiological conditions make the model, and consequently its solution, almost impossible to obtain analytically and challenging to obtain computationally. Nandakumar and Anand [20] studied steady and pulsatile flow of blood through a channel with single as well as double stenoses, on the assumption that the pulsations of flow are damped in the small vessels so that the flow is effectively steady in the capillaries and the veins, while Liu and Tang [21] investigated the influence of distal stenosis on blood flow through curved arteries with two stenoses. But again, these studies on pulsatile flow have not considered the mass transfer. In another study, Layek et al. [22] investigated the effect of multiple stenoses on the flow of a Newtonian fluid in a rigid tube and opined that the disturbance created by the constrictions is mainly concentrated downstream of the last constriction. Considering the flow of a Newtonian fluid in a two-dimensional channel with a single constriction, Layek and Midya [23] concluded that the maximum stress and the length of the recirculation region associated with the two shear layers of the constriction increase with increasing area reduction of the constriction. They further concluded that the flow field separates after the symmetry-breaking bifurcation, and that the symmetry of the flow depends on the Reynolds number and the height of the constriction. The flow of a fluid having hematocrit-dependent viscosity past a tube with a partially overlapped constriction has been investigated by Layek et al. [24]. They observed that the peak value of the wall shear stress decreases with increasing haematocrit parameter, while a reverse trend is observed for the flow separation region. They also opined that the deformability of the wall reduces the wall shear stress compared with the rigid-wall case. All these studies [22][23][24] ignored the flow pulsatility and/or the consideration of multiple constrictions, as well as the mass transfer, which plays a pivotal role in the genesis and evolution of atherosclerosis. Based on the gap established above with regard to studies involving mass transfer, the following work seeks to analyze the flow and mass transfer characteristics of pulsatile blood flow through an artery with double stenoses. The fluid considered is Newtonian in an axisymmetric setting, while the pair of stenoses vary in severities, lengths, and the distance between them. The equation for the stenoses is given in an algebraic form which can represent both moderate and severe stenoses, instead of the usual cosine function which can only describe mild stenosis. The objective of the present study lies in the consideration of the transport of mass as well as momentum through a tube with a flexible wall, resembling the flexibility of the artery, in the presence of double stenoses. The flow pulsatility is also retained in the present investigation. Formulation of the Problem We consider a fully developed two-dimensional axisymmetric flow of an incompressible Newtonian fluid of density ρ in a tube.
The relevant equations of motion in vector form are the continuity, momentum, and mass transport equations, ∇ · V = 0, ρ DV/Dt = −∇p + μ∇²V, and DC/Dt = D_m∇²C, where D/Dt is the material derivative, V = (u, 0, w) with u and w the radial and axial velocity components, respectively, p is the pressure, μ is the constant viscosity, C is the mass concentration, and D_m is the coefficient of mass diffusion. In the cylindrical coordinate system, the corresponding equations (1)-(3) are written in conservative form as equations (4)-(7). The schematic diagram for the double stenoses is given in Figure 1, where r = R(z, t) is the radius of the artery in the stenotic region and R_0 is the radius of the artery in the nonstenotic regions. δ_1 and δ_2 are the critical heights of the first and second stenosis, respectively; l_0 is the inlet segment, l_02 is the distance between the stenoses, l_01 and l_03 are the lengths of the stenoses, and L is the length of the arterial segment under consideration. The equations describing the stenoses are given in algebraic form, with the time-variant parameter a_1(t) given by a_1(t) = 1 + k cos(ωt), where k represents the amplitude parameter, the angular frequency is ω = 2πf_p with f_p the pulse frequency, and d = l_0 + l_01 + l_02. To the best of our knowledge, equation (8) is the first to describe double stenoses without any restriction on the severity of the stenoses, which has not been considered before. Boundary Conditions In the boundary conditions (9)-(12), U is the cross-sectional average velocity of the fluid and C_s is a constant. Solution Procedure The solution procedure involves nondimensionalization, a radial coordinate transformation, and the finite-difference Marker and Cell (MAC) method initially proposed by Harlow and Welch [25]. Sarifuddin et al. [26,27] and Mustapha et al. [15,16] have used the method to solve blood flow problems. Nondimensionalization of the Equations The nondimensional variables and parameters are introduced in (13). Using (13), equations (4)-(7) take their respective nondimensional forms (omitting bars), and the boundary conditions (9)-(12) reduce to their respective dimensionless forms, with the inlet condition imposed at z = 0; here Re is the Reynolds number, Sc is the Schmidt number, and α is the Womersley number. Finite-Difference Method. The solution procedure consists of discretization of the governing equations, combination of the discretized forms of the momentum and continuity equations to obtain the Poisson equation for pressure, the successive overrelaxation (SOR) method, and the pressure and velocity corrections. The schematic computational domain is given in Figure 2. The velocities and pressure are calculated at different locations of the control volume, as indicated in Figure 3. The difference equations are derived at three distinct cells, each corresponding to the continuity, axial momentum, and radial momentum equations. The discretization of the time derivative terms is based on the first-order accurate two-level forward time differencing formula, while the convective terms in the momentum equations are discretized with a hybrid formula consisting of central differencing and a second-order upwinding scheme (cf. Courant et al. [28]). The diffusive terms are discretized using a second-order accurate three-point formula, where n refers to the time level and Δt is the time increment. The length and width of the (i, j) cell of the control volume are represented by Δz and Δx, respectively. The discretized version of the continuity equation (24) is formed at the (i, j) cell.
Here, (z_li, x_lj) and (z_i, x_j) represent the respective coordinates of the center of the cell and the cell faces, as shown in Figure 3, while w_t and w_b stand for the w-velocities at the top and bottom middle positions of the control volume of the continuity equation. The momentum equations (24) and (25) are written in discretized form, where conw^n_i,j, conu^n_i,j, diffw^n_i,j, and diffu^n_i,j are the finite-difference representations of the convective and diffusive terms of the axial and radial momentum equations at the nth time level. The Poisson equation for pressure is derived from equations (31)-(33) and takes its final form in equation (34), where D^(n+1)_i,j represents the discretized form of the divergence of the velocity field at the (i, j) cell, and the expressions for A_i,j, B_i,j, ..., H_i,j, S_i,j are the same as in Mustapha et al. [15,16]. The Poisson equation (34) for pressure is then solved using the successive overrelaxation (SOR) method to obtain the intermediate pressure field. The increment Δx is chosen to be 0.025 along x, while Δz = 0.1 along z. Δt is chosen to be equal to or less than a prescribed stability criterion as depicted in Figure 4, where c is taken to be 0.05 (cf. [25,29]). The number of iterations is limited to 10. The pressure and velocities then go through a correction stage to achieve better accuracy; the process is described in [15,16,26,27]. Once the velocity field has been obtained, the mass concentration is calculated from the discretized version of equation (26) with the relevant boundary conditions (equations (27)-(30)). The values chosen for k, α, and Sc are 0.05, 2, and 3, respectively.
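The coefficients A_i,j, ..., S_i,j of the pressure Poisson equation (34) are not reproduced here, so the following is only a generic sketch, on a uniform Cartesian grid, of how an SOR sweep of the kind described above updates the intermediate pressure field; the paper's axisymmetric, wall-transformed form carries additional metric coefficients, omega_sor denotes the relaxation factor, and the source array stands in for the velocity-divergence term D.

import numpy as np

def sor_pressure(p, source, dz=0.1, dx=0.025, omega_sor=1.5, n_iter=10):
    # Illustrative SOR solve of a 2D Poisson equation for pressure.
    # p[i, j]      : pressure field (boundary values assumed already set)
    # source[i, j] : right-hand side built from the divergence of the provisional velocity field
    for _ in range(n_iter):                     # the paper limits the number of iterations to 10
        for i in range(1, p.shape[0] - 1):
            for j in range(1, p.shape[1] - 1):
                gauss_seidel = (
                    (p[i + 1, j] + p[i - 1, j]) / dz**2
                    + (p[i, j + 1] + p[i, j - 1]) / dx**2
                    - source[i, j]
                ) / (2.0 / dz**2 + 2.0 / dx**2)
                p[i, j] = (1.0 - omega_sor) * p[i, j] + omega_sor * gauss_seidel
    return p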
The curves decrease from their individual maximum at the axis as one moves away from it, and finally they approach a minimum value (zero) on the wall surface. Note that the curves of the axial velocity at (z = 10) and at (z = 15) are coincident. The axial velocity at the critical height of the second stenosis (z = 19) is considerably higher than that of the first stenosis (z = 10). Figure 8(b) shows that the radial velocity has positive values everywhere at (z = 5), (z = 10), and (z = 25), excluding the position on the wall. At (z = 15) and the narrowest point (z = 19), the radial velocity is observed to have all negative values. The nonzero values of the radial velocity near the wall clearly reflect the influence of the radial motion of the arterial wall in the pulsatile case. Figure 9(a) exhibits the distribution of the wall shear stress (wss) along the arterial segment for different Re considering pulsatile as well as parabolic inlet conditions. The results show that the wss for both the parabolic and pulsatile inlet conditions attains its peaks at the critical heights of the stenoses (z = 10, 19). It is observed that separation occurs (negative values of wss) only at the downstream of the second stenosis for the parabolic inlet condition. In the pulsatile case, the separation zone occurs between the two stenoses, with multiple separation regions at the downstream of the second stenosis. Then, wss starts to increase slowly towards the wall surface (reattachment point). The effect of different severities on wss is depicted in Figure 9(b). In the pulsatile case, when the two stenoses have the same severities (δ_1 = δ_2 = 0.2) and (δ_1 = δ_2 = 0.4), flow separation occurs at two specific places at the downstream of the first and the second stenoses with different peak values. In the case of stenoses with different severities (δ_1 = 0.2, δ_2 = 0.4) and (δ_1 = 0.4, δ_2 = 0.2), a larger separated region is formed at the downstream of the more severe stenosis (cf. Johnston and Kilpatrick [12]). A smaller separation region is produced in the case of the pulsatile inlet condition, and the peak wss is much higher than with the parabolic inlet condition. Figure 9(c) shows the effects of the length of the stenoses on wss. The peak wall shear stress decreases with increasing stenosis length but increases with the gap between the stenoses; at this position there is a potential that plaque would rupture, whereas at the low shear stress position atherosclerotic development may be induced. These phenomena of separation and reattachment are due to the adverse pressure in these regions and are believed to be responsible for the malfunctioning of the cardiovascular system having atherosclerotic plaque. Figures 10(a)-10(d) show the instantaneous patterns of streamlines governing the flow of blood through the stenoses in the case of (δ_1 = 0.4, δ_2 = 0.2) for both parabolic and pulsatile inlet conditions at Re = 300 and Re = 500. In the parabolic case, only one recirculation zone developed between the two stenoses, where separation occurs at z = 11 (cf. Figure 9(b)). In the case of the pulsatile inlet, multiple recirculation regions are noted in between the two stenoses (separation point z = 11). Thus, an increase in Re and a consideration of the pulsatile flow increase the number of recirculation regions. The mass concentration increases as the flow gets accelerated towards the throat, leading to the increase of solute concentration.
It is also observed that the concentration at the throat (z = 19) is much higher for pulsatile flow than the parabolic one. The Sherwood number (Figure 12(c)) is defined by Sh_D = 2R_0 c_l/(D_m ΔC), where c_l is the local mass flux to the arterial wall and 2R_0 is the inlet diameter of the artery. It is observed that Sh_D increases with increasing Re, while the Sh_D distribution changes appreciably, specifically at the throat, between the two stenoses, and at the downstream position. The highest mass transfer is experienced upstream, while the minimum value occurs downstream of the stenoses. Note that pulsatile flow increases the Sherwood number and that it is much higher in the case of double stenosis. The iso-concentration lines considering pulsatile as well as parabolic inlet conditions based on (δ_1 = 0.2, δ_2 = 0.4) are displayed in Figures 13(a) and 13(b). The iso-concentration lines for parabolic and pulsatile inlets have different distributions, with multiple recirculation regions near the downstream of the more severe stenosis in the pulsatile case. The general trend of the iso-concentration lines is that they move away from the inlet region towards the upstream of the stenosis and correspondingly impair the mass transport in this region, while they adhere to the outline of the stenosis at both the upstream and downstream ends. At this region of low wss (compare with Figure 9), cholesterol may tend to accumulate and cause more severe stenosis. This observation conforms with Schneiderman et al. [30]. Conclusion The hemodynamics of the pulsatile flow and the transport of mass in an arterial segment having a couple of stenoses have been studied in relation to the distensibility of the vessel wall. Predicted results show that the pulsatile inlet and double stenoses with varying severity affect the flow characteristics significantly, especially the development of the recirculation zone and the peak value of the wall shear stress. It is also predicted that the concentration at the throat (z = 19) is much higher for pulsatile flow than for the parabolic inlet condition. Moreover, the pair of stenoses contributes more to the mass concentration than the case of a single stenosis. The mass flux to the arterial wall (Sherwood number) does increase with increasing values of Re, and here too the mass flux increases with the flow pulsatility and the presence of double stenoses. Downstream, cholesterol may tend to accumulate and cause more severe stenosis. For severe stenoses, the peak value of the wall shear stress is higher in the pulsatile flow case, and the iso-concentration lines show more recirculation regions near the downstream end and their lengths are longer. In conclusion, the results presented agree well with physical observations and provide an insight into the link between atherosclerosis, stenosis, and the pattern of mass transport. Though detailed knowledge of the dynamical variables is possible and provides useful elements, the mechanism of influence of the haemodynamical factors in arterial disease is not clear. The characteristics of the red cells must be taken into consideration by including a shear-dependent viscosity in the diffusion terms in time-dependent flows, highlighting the scope of further work. All these mechanical and biochemical aspects related to the biofluid dynamics are of some importance and demand further investigation.
A great deal of work is needed to establish the rheological parameters for the physiological values and to understand the connection of these issues with biological facts. Data Availability The data on blood flow parameters used for analysis and validation purposes are from previously reported studies and datasets, which have been cited. Disclosure To the best of the authors' knowledge, the content of this work is correct. Each author has contributed in part to the problem formulation, numerical computations and simulations, analysis of results, and manuscript writing. Conflicts of Interest The authors declare that there are no conflicts of interest regarding the publication of this paper.
4,939.2
2020-01-31T00:00:00.000
[ "Engineering", "Physics" ]
Urolithin A attenuates IL-1β-induced inflammatory responses and cartilage degradation via inhibiting the MAPK/NF-κB signaling pathways in rat articular chondrocytes Background Osteoarthritis (OA) is characterized by inflammation and extracellular matrix (ECM) degradation and is one of the most common chronic degenerative joint diseases that causes pain and disability in adults. Urolithin A (UA) has been widely reported for its anti-inflammatory properties in several chronic diseases. However, the effects of UA on OA remain unclear. The aim of the current study was to investigate the anti-inflammatory effects and mechanism of UA in interleukin-1β (IL-1β)-induced chondrocytes. Results No marked UA cytotoxicity was noted, and UA protected cartilage from damage following IL-1β stimulation in micromasses. Moreover, UA promoted the expression of anabolic factors including Sox-9, Collagen II, and Aggrecan while inhibiting the expression of catabolic factors such as matrix metalloproteinases (MMPs) and a disintegrin and metalloproteinase with thrombospondin motifs 4 (ADAMTS-4) in rat chondrocytes. Protective effects of UA were also observed in ex vivo organ culture of articular cartilage. Mechanistically, IL-1β significantly activated and upregulated the expression of p-ERK 1/2, p-JNK, p-P38, and p-P65, while UA protected chondrocytes against IL-1β-induced injury by inhibiting the activation of the mitogen-activated protein kinase (MAPK)/nuclear factor-κB (NF-κB) signaling pathways. Conclusion Our results provide evidence that UA could attenuate IL-1β-induced cell injury in chondrocytes via its anti-inflammatory action. UA may be a promising therapeutic agent in the treatment of OA. Introduction Osteoarthritis (OA) is one of the most common forms of chronic degenerative joint disease and affects tens of millions of people around the world [1]. The main characteristic features observed in OA include progressive loss and destruction of articular cartilage, thickening of the subchondral bone, osteophyte formation, and synovial inflammation. Multiple factors contribute to the initiation and progression of OA, such as aging, heredity, obesity, abnormal metabolism, joint injury, osteoporosis, and joint malformation [2,3]. At the cellular and molecular levels, inflammation and inflammatory mediators play crucial roles in initiating and accelerating OA development [4,5]. A growing body of evidence suggests that interleukin-1β (IL-1β), tumor necrosis factor-alpha (TNF-α), and IL-6 are found in OA cartilage [6]. Among these inflammatory cytokines, the effect of IL-1β has been widely explored because of its vital role in inflammatory responses. The pro-inflammatory cytokine IL-1β is a master regulator of inflammation that has been reported to directly participate in the generation of multiple inflammatory mediators [3]. When chondrocytes are stimulated by IL-1β, they produce metalloproteinases (MMPs), a disintegrin and metalloproteinase with thrombospondin motifs (ADAMTS), and some inflammation-associated proteins including inducible nitric oxide synthase (iNOS) and cyclooxygenase-2 (COX-2), which trigger the alteration of cartilage from the normal homeostatic state toward a catabolic state and eventually lead to extracellular matrix (ECM) degradation [7,8]. Therefore, targeting IL-1β-induced catabolic metabolism and inflammatory responses may be an effective strategy to delay OA progression. Urolithin A (UA) is metabolized by intestinal microbiota from Ellagitannins (ETs) and Ellagic acid (EA) in the gut [9,10].
According to previous studies, ETs and EA may inhibit the inflammatory response. Dietary consumption of EA-rich food has been demonstrated to suppress inflammatory cytokine release in the brains of Alzheimer's disease mice [11]. Similarly, EA protects against cisplatin-induced kidney nephrotoxicity by inhibiting renal inflammation and apoptosis [12]. Nevertheless, EA is poorly absorbed and quickly eliminated, and the biological activity of EA is controversial. Interestingly, recent published studies have described the biological effects of UA, including anti-proliferation in cancer, antiinflammation, anti-oxidant activity, improved lipid metabolism [13][14][15]. UA inhibits the catabolic effect of TNF-α on nucleus pulposus cells and alleviates intervertebral disc degeneration in vivo [16]. Moreover, UA can protect skeletal muscle against acute inflammation in vitro and in vivo [17]. Mechanistically, UA could significantly inhibit the activation of NF-κB induced by IL-1β in colon fibroblasts [18]. Meanwhile, Fu et al. investigated the anti-inflammatory effect of UA in human OA and revealed the underlying mechanism by blockage of PI3K/Akt/NF-κB pathway [19]. Although the potential anti-inflammatory role of UA has been extensively investigated, there is limited knowledge whether UA has other potential therapeutic targets to attenuate the pathogenesis of OA. In this study, we investigated the anti-inflammatory role of UA by attenuating IL-1β-induced degradation of Collagen II and Aggrecan and by reducing the production of inflammatory mediators via the ERK, JNK, P38, and NF-κB pathways in rat chondrocytes. Cell culture Primary chondrocytes were obtained from knee joints cartilage of 2-week-old Sprague Dawley rats. The detailed procedure was performed according to a previously described method [20]. Briefly, cartilage of the knee joint was isolated and cut into pieces and then incubated in 0.5% trypsin-EDTA (containing 0.5 g/L of trypsin (1:250) and 0.2 g/L EDTA•4 Na in 0.85% saline solution) for 30 min and subsequently 0.2% collagenase for 24 h at 37°C. The chondrocytes were collected and cultured in Dulbecco's minimum essential medium: F12 medium containing 10% FBS with humid air with 5% CO 2 at 37°C. Cells were trypsinized with 0.5% trypsin-EDTA and passaged at a ratio of 1:3 when cell density reaches 75%, and the medium was changed every 2 days. Chondrocytes at passage 3 were utilized in the subsequent experiments. Cell viability assay The Cell Counting Kit-8 (CCK8, Dojindo, Japan) was utilized to analyze cell viability. Firstly, chondrocytes were seeded in 96-well plates at a density of 1 × 10 4 cells/well. After 24 h of adhesion, cells were then treated with IL-1β alone or with UA at different concentrations. Cell viability was carried out after cultivating for 1, 3, and 7 days. In brief, 10 μl CCK-8 solution dissolved in 100 μl culture medium was added into each well and then incubated in the dark at 37°C for 1.5 h. The absorbance of the solution was recorded at 450 nm using a plate reader (BioTek, Winooski, VT, USA). Micromass culture All procedures were performed as previously described [21,22]. Briefly, the chondrocytes were suspended in medium with 10% FBS, 0.25% penicillin-streptomycin, and 0.25% L-glutamine, and plated at a density of 2.5 × 10 5 cells/10 μl in 24-well plates. Four hours later, the medium was added into the plate with IL-1β alone or with IL-1β with UA for 2 days. Then the micromasses were stained with Alcian Blue. 
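The CCK-8 readout described above is typically converted to percent viability by normalizing background-corrected absorbance to the untreated control. A minimal sketch of that calculation is given below; the variable names, blank-correction scheme, and example numbers are illustrative assumptions, not values taken from this study.

import numpy as np

def percent_viability(od_treated, od_control, od_blank):
    """Convert CCK-8 absorbance (OD450) readings to percent viability.

    od_treated, od_control : replicate OD450 values for treated and control wells
    od_blank               : OD450 of medium plus CCK-8 reagent with no cells
    """
    treated = np.asarray(od_treated, dtype=float) - od_blank
    control = np.asarray(od_control, dtype=float) - od_blank
    return 100.0 * treated.mean() / control.mean()

# Hypothetical six replicate wells per group
print(percent_viability([0.82, 0.79, 0.85, 0.80, 0.83, 0.81],
                        [0.90, 0.88, 0.92, 0.91, 0.89, 0.93],
                        od_blank=0.10))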
Western blotting analysis Chondrocytes were cultured in a sterile six-well plates at 37°C with 5% CO 2 . After reaching 80% density, the cells were exposed to IL-1β alone or with UA. The total proteins were obtained from stimulated or control chondrocytes using radioimmunoprecipitation assay lysis buffer containing 1% proteinase inhibitor and 1% phosphatase inhibitor cocktail for 30 min on ice at the indicated time points. Protein concentrations were measured using BCA protein assay kits (Boster). Then, 40 μg of protein was separated on 12% sodium dodecyl sulfatepolyacrylamide gels and transferred to polyvinylidene fluoride membranes (Millipore, Burlington, MA, USA), blocked with 5% bovine serum albumin (BSA) in Trisbuffered saline with 0.1% Tween-20 (TBS-T) and incubated with primary antibody (2% BSA in TBS-T) overnight at 4°C. Subsequently, the membrane was washed with TBS-T and incubated with the corresponding secondary antibodies for 2 h at room temperature. Finally, the protein bands were visualized with western ECL Substrate Kits (Yseasen, Shanghai, China) on a Tanon imaging system, and grayscale images were analyzed with ImageJ (National Institutes of Health, Bethesda, MD, USA)/Olympus (Tokyo, Japan) software. Total RNA extraction and quantitative real-time RT-PCR Total RNA was extracted by a total RNA extraction kit (Omega Bio-tek, Norcross, GA, USA) from chondrocytes exposed to IL-1β alone or with UA in accordance with the manufacturer's instructions. RNA purity and concentration were determined by a spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). Complementary DNA (cDNA) was synthesized from total RNA and amplified with SYBR Green Master Mix in an ABI PRISM 7500 PCR Sequence Detection System (Applied Biosystems, Foster City, CA, USA) according to following condition: 30 s of denaturation followed by 40 cycles of 94°C for 5 s and 60°C for 35 s. The melting curve was generated to test for primer dimer formation and false priming for each reaction. Relative expressions of gene-specific products were analyzed using the comparative Ct (2 −ΔΔCt ) method and normalized to the reference gene GAPDH. The sequences of primers constructed were as follows: ADAMTS4: forward (CCGTTCCGCTCCTG TAACACTAAG), reverse (AGGTCGGTTCGGTGGTTG TAGG); MMP9: forward (CTACACGGAGCATGGCAA CGG), reverse (TGGTGCAGGCAGAGTAGGAGTG); Col2a1: forward (ACGCTCAAGTCGCTGAACAACC), reverse (ATCCAGTAGTCTCCGCTCTTCCAC); GAPDH: forward (GACAATTTTGGCATCGTGGA), reverse (ATG-CAGGGATGATGTTCTGG). Immunofluorescence Chondrocytes were plated in 12-well plates. When the density reached 80%, the cells were stimulated with IL-1β alone or with UA. Next, cells were fixed in 4% paraformaldehyde for 15 min at room temperature. Subsequently, the cells were permeabilized in phosphate-buffered saline (PBS) containing 0.3% Triton X-100 for 15 min and then blocked with 5% BSA for 30 min. Cells were then incubated with anti-P65 (1:200 dilution) in a humid chamber overnight at 4°C. The next day, the plates were washed three times with PBS and then incubated with Cy3conjugated goat antirabbit secondary antibody (1:100 dilution) at 37°C for 1 h in the dark. Finally, cells were stained with phalloidin and DAPI. Images were acquired using an inverted fluorescence microscope (Olympus) with identical acquisition settings, and the results were statistically analyzed using ImageJ software. 
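As a small illustration of the comparative Ct (2^-ΔΔCt) calculation described above, the sketch below normalizes a target gene to GAPDH and then to the control group; the gene choice and Ct values are hypothetical placeholders, not data from this study.

import numpy as np

def relative_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Comparative Ct method: fold change = 2 ** (-ddCt).

    dCt      = Ct(target) - Ct(reference gene) for the treated sample
    dCt_ctrl = the same difference for the control sample
    """
    d_ct = np.asarray(ct_target, float) - np.asarray(ct_gapdh, float)
    d_ct_ctrl = np.asarray(ct_target_ctrl, float) - np.asarray(ct_gapdh_ctrl, float)
    dd_ct = d_ct.mean() - d_ct_ctrl.mean()
    return 2.0 ** (-dd_ct)

# Hypothetical triplicate Ct values for an MMP9-like target
fold = relative_expression(ct_target=[24.1, 24.3, 24.0],
                           ct_gapdh=[17.9, 18.0, 18.1],
                           ct_target_ctrl=[26.5, 26.4, 26.6],
                           ct_gapdh_ctrl=[18.0, 17.9, 18.1])
print(f"fold change vs. control: {fold:.2f}")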
Ex vivo organ culture of rat articular cartilage All experimental protocols were approved by the Committee of Ethics of Animal Experiments at Zhongshan Hospital, Fudan University, China. Cartilage explants were obtained from the knee joints of 4-week-old Sprague Dawley rats that were group-housed at 20 ± 5°C (55 ± 5% humidity) on a 12-h light/dark cycle with free access to standard chow and water. The detailed procedure was described in a published protocol [23]. Initially, the explants were cultured in medium containing 10% FBS at 37°C with 5% CO2 for 2 days. Then, the explants were cultured in medium (10% FBS and 0.25% penicillin-streptomycin) containing IL-1β alone or IL-1β with UA for 3 additional days. Next, explants were collected and fixed in 4% paraformaldehyde, sectioned at 6 μm, and stained with hematoxylin and eosin (H&E), Safranine O-Fast Green (S-O Fast Green), or Alcian Blue. The Osteoarthritis Research Society International (OARSI) scoring system was then applied, with double blinding, as described previously to evaluate the destruction of articular cartilage; scoring covered matrix staining, cartilage tissue structure, chondrocyte clusters, and surface integrity [24,25]. Further, Collagen type II and Aggrecan were analyzed by immunohistochemistry, and the percentages of Collagen II and Aggrecan positive cells in each section were quantified by Image Pro Plus. All stained sections were imaged using an upright microscope (Olympus). Statistical analysis The experiments were performed at least three times. All data are presented as mean ± standard deviation (SD). Statistical analyses were performed using GraphPad Prism software (GraphPad Inc., San Diego, CA, USA) and SPSS 18.0 (IBM, Armonk, NY, USA). For differences among treatments, Student's t-tests were used for the comparisons between two groups, and data involving more than two groups were analyzed by one-way analysis of variance followed by Tukey post hoc tests. P values less than 0.05 were considered statistically significant. Cell viability after IL-1β and/or UA treatment First, we examined the potential toxicity of UA on chondrocytes with CCK8 assays. As shown in Fig. 1a, UA had no significant effect on chondrocyte viability and proliferation at concentrations of 1, 5, 7.5, or 15 μM for 1, 3, or 7 days. However, chondrocyte activity decreased by ~50% compared with the control group (P < 0.05) when the concentration reached 30 μM, indicating that a high concentration of UA may inhibit cell activity. Therefore, we set the maximum concentration of UA to 15 μM (1, 7.5, 15 μM) for subsequent experiments. When chondrocytes were treated with IL-1β for 1, 3, or 7 days, as shown in Fig. 1b and c, there were no significant changes in cell viability with increasing concentrations of IL-1β (< 30 ng/ml) with and without UA (< 15 μM). To investigate whether UA protects against cell damage induced by IL-1β, cartilage micromasses were co-incubated with 20 ng/ml IL-1β and various concentrations of UA from 1 to 15 μM for 2 days and then stained with Alcian Blue. UA markedly ameliorated IL-1β-induced degradation of cartilage matrix in a dose-dependent manner (Fig. 1d). These results suggest that no marked UA cytotoxicity occurred in chondrocytes, and UA partially protected against IL-1β-induced cartilage matrix degradation.
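To make the statistical workflow described in the Methods concrete, a minimal sketch using SciPy and statsmodels is given below; the group labels and measurements are invented placeholders, and the original authors used GraphPad Prism and SPSS rather than Python.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical viability measurements (n = 6 per group)
control = np.array([100, 98, 102, 99, 101, 100], float)
il1b    = np.array([72, 70, 75, 68, 74, 71], float)
il1b_ua = np.array([88, 85, 90, 87, 86, 89], float)

# Two groups: Student's t-test
t, p = stats.ttest_ind(control, il1b)
print(f"control vs IL-1b: t = {t:.2f}, p = {p:.4f}")

# More than two groups: one-way ANOVA followed by Tukey post hoc tests
f, p_anova = stats.f_oneway(control, il1b, il1b_ua)
values = np.concatenate([control, il1b, il1b_ua])
groups = ["control"] * 6 + ["IL-1b"] * 6 + ["IL-1b+UA"] * 6
print(f"ANOVA: F = {f:.2f}, p = {p_anova:.4f}")
print(pairwise_tukeyhsd(values, groups, alpha=0.05))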
Fig. 1 Effect of UA on chondrocyte viability. a, b, c Chondrocytes were treated with various concentrations of UA and/or IL-1β and then analyzed by CCK-8 assay (1, 3, and 7 days). d Chondrocytes were treated with 20 ng/ml IL-1β combined with different concentrations of UA (1, 7.5, and 15 μM) for 2 days and then stained with Alcian Blue. Data are presented as mean ± S.D. n = 6, *P < 0.05, **P < 0.01 versus Control. UA inhibited IL-1β-induced ECM catabolism in chondrocytes ECM gene expression levels were detected with RT-qPCR after treatment with various UA concentrations (0, 1, 7.5, and 10 μM) for 48 h. MMP9 and ADAMTS4 are matrix-degrading enzymes, and Collagen II could antagonize this effect and promote ECM anabolism. MMPs (MMP3 and MMP13) are catabolic enzymes of Collagen II and Aggrecan. As shown in Fig. 2g and h, MMP9 and ADAMTS4 mRNA expression markedly increased in a dose-dependent manner. UA obviously suppressed the overproduction of MMP9 and ADAMTS4 mRNA induced by IL-1β stimulation. Meanwhile, UA reversed the downregulated gene expression of Collagen II in the IL-1β-stimulated condition. However, UA did not affect the expression of these genes at the lowest concentration (1 μM). The effect of UA on IL-1β-induced MMP3 and MMP13 production was measured by western blot. UA treatment partially reduced protein expression of MMP3 and MMP13 compared to cells treated with IL-1β alone (Fig. 2a). To evaluate chondrocyte degeneration, we investigated ECM replacement by chondrocytes under IL-1β stimulation with or without UA pretreatment by western blot analysis. Collagen II and Aggrecan are the two main components of cartilage matrix responsible for the anti-compression and shock absorption capabilities of cartilage under mechanical loading. Fig. 3a-c show that IL-1β significantly decreased protein expression of Collagen II (P < 0.05) and Aggrecan (P < 0.01). However, these alterations were reversed by pretreatment with UA, especially at the highest concentration of 15 μM. The key regulator of Collagen II synthesis is Sox-9, and UA could prevent its degradation induced by IL-1β (P < 0.01, Fig. 3a and d). This result was consistent with the RT-qPCR findings. UA suppressed IL-1β-induced expression of iNOS and COX2 in chondrocytes Protein expression levels of iNOS and COX-2 were quantified to examine the extent of IL-1β-induced inflammation and evaluate whether it is attenuated by UA. Chondrocytes were pretreated with different concentrations of UA (1-15 μM) for 2 h and then stimulated with or without IL-1β (20 ng/ml) for 48 h. Western blotting was performed to detect protein expression of the inflammatory mediators iNOS and COX2. As shown in Fig. 2a, IL-1β stimulation significantly increased iNOS and COX2 production. However, UA inhibited the excessive production of these mediators. Our results demonstrate that UA co-treatment significantly (P < 0.05) and dose-dependently decreased the inflammation induced by IL-1β. However, the lowest dose of UA (1 μM) had no protective effects (P > 0.05). Fig. 3 Effect of UA on IL-1β-induced degradation of Sox-9, Collagen II, and Aggrecan. Chondrocytes were treated with IL-1β (20 ng/ml) alone or UA (1, 7.5, and 15 μM) in combination with IL-1β (20 ng/ml) for 24 h. a Protein expression of Sox-9, Collagen II, and Aggrecan were determined by western blot. b, c, d Relative protein expression of Sox-9, Collagen II, and Aggrecan shown as histograms. Data are presented as mean ± S.D. n = 6. *P < 0.05, **P < 0.01, ***P < 0.001 versus the IL-1β group. Effect of UA on IL-1β-induced activation of the MAPK pathway Previous studies have demonstrated that IL-1β could trigger inflammation by activating the mitogen-activated protein kinase (MAPK) pathway [26].
Specifically, MAPK signaling mediates inflammation responses and cartilage degradation in the pathogenesis of OA. To clarify the mechanism of action underlying UA protection, MAPK activity evaluated using western blot analysis. The phosphorylation levels of ERK, JNK, and p38 were significantly upregulated compared to the control group after treatment with IL-1β for 2 h (P < 0.01). Notably, UA could suppress the upregulated phosphorylation of ERK1/2, JNK, and p38 in a concentration-dependent manner (Fig. 4a-d). These results suggest that UA protects chondrocytes against IL-β-induced inflammation injury by inhibiting the phosphorylation of MAPK pathway members. UA inhibited IL-1β-mediated activation of the NF-κB pathway To further explore the anti-inflammatory mechanism of UA, immunofluorescence and western blot analyses of NF-κB p65 were performed to evaluate the effect of UA on the NF-κB pathway. IL-1β significantly up-regulated p65 phosphorylation (P < 0.01). As expected, UA remarkably inhibited IL-1β-induced NF-κB activation in a dose-dependent manner ( Fig. 4a and b). However, it is worth noting that phosphorylated p65 was lower than in the control group and the inhibitory effect of UA did not increase at concentrations > 7.5 μM. Immunofluorescence showed that most p65 was present in the cytoplasm in control cells. However, as shown in Fig. 5a and b, IL-1β treatment significantly increased p65 fluorescence intensity, indicating that NF-κB activation induced its nuclear translocation and subsequent transcription of inflammatory mediators. Moreover, chondrocytes treated with 20 ng/ml IL-1β for 2 h exhibited nearly 80% activated p65 was activated, as demonstrated by an~8fold increase in fluorescence intensity compared to the control group. However, UA pretreatment inhibited p65 translocation into the nucleus (Fig. 5a and b). This observation was consistent with the western blot results. Collectively, these findings suggest that UA protects chondrocytes against IL-β-induced inflammation injury by inhibiting phosphorylation of a member of the NF-κB pathway (p65). UA inhibited damage in cartilage explant culture We used an ex vivo culture model of cartilage explants from nine 4-week-old rats to evaluate effect of UA on cartilage degradation. The cartilage explants were treated with UA (15 μM) with or without IL-1β stimulation (30 ng/ml) for 3 days. The explants were divided into three groups: (1) control, (2) IL-1β stimulated (30 ng/ml), and (3) IL-1β plus (30 ng/ml) UA (15 μM). Histopathological changes in cartilage were evaluated by H&E, S-O Fast green, and Alcian blue staining. As shown in Fig. 6a, h &E and S-O Fast Green staining revealed normal structure of cartilage including smooth and intact surfaces and normal morphology and numbers of well-organized chondrocytes in the control group. However, the IL-1β treated group had apparent morphological changes including rough surfaces (black arrow), clustered and disorganized chondrocytes (black triangle), obvious hypocellularity, and loss of Safranin-O staining compared with the control group. Notably, OARSI scores of the cartilage showed cartilage damage was significantly attenuated by treatment with UA (Fig. 6b). Alcian Blue staining for glycosaminoglycan (GAG) distribution. The control group showed the strongest positive expression of GAG, indicating a sound chondroprotective effect. However, the IL-1β group showed the loss of GAG from the superficial zone to the deep zone of articular cartilage (Fig. 6c). 
Intriguingly, these changes were slightly attenuated by UA. Immunohistochemistry showed that the expression of Collagen II and Aggrecan was reduced by IL-1β (Fig. 6d). Apparently, UA administration could effectively and significantly reverse these pathological changes (Fig. 6e, f), which was consistent with the western blot results. Taken together, these results indicate that both the structure and ECM of cartilage tissues were better preserved in the UA-treated group. Discussion OA is traditionally considered a mechanically induced chronic condition and affects more than 25% of the population over 18 years old. Although the occurrence of OA is closely related to multiple factors, the exact pathogenesis remains unclear [27]. Additionally, the multiple non-surgical regimens used for OA are limited in many respects, including recurrent side effects and variable rates of success. Thus, a safe and effective drug with a defined molecular target is urgently needed to alleviate cartilage degradation. The gut microbiota and its metabolites could affect multiple organs and contribute to disease progression [28,29]. Short-chain fatty acids, the main products of intestinal bacterial fermentation of dietary fiber, have intrigued researchers because of their potential role in the prevention and treatment of metabolic syndrome, bowel disorders, and cancer [30,31]. UA is a metabolite derived from EA and ETs with a lower molecular weight and better bioavailability compared to its precursors; it is thought to play a protective role in chronic disease with a broad spectrum of anti-inflammatory effects. Here, we demonstrated that UA prevented IL-1β-induced damage of ex vivo cartilage explants. Under IL-1β stimulation, UA also attenuated the increased expression of cartilage catabolic enzymes (iNOS, COX2, MMPs) and restored the decreased expression of Sox-9 in rat chondrocytes. Moreover, IL-1β-induced degradation of Collagen II and Aggrecan was attenuated by UA. Finally, we found that the MAPK and NF-κB pathways were involved in the protective effects of UA. Collectively, our results suggested that UA may be a promising therapeutic strategy for OA. Numerous studies have implicated the pro-inflammatory cytokine IL-1β as a vital factor in OA, because it is significantly increased in the synovial fluid of OA patients. Anti-inflammatory treatment plays a key role in alleviating OA symptoms. A recent study demonstrated that Liraglutide (a GLP-1 agonist) ameliorates cartilage degeneration in a rat model of knee osteoarthritis with anti-inflammatory activity [32]. Valproic acid and butyrate were widely reported as latent therapeutic agents for OA due to their ability to suppress IL-1β-induced inflammation and cartilage degradation [33,34]. During disease progression, IL-1β stimulates the expression of matrix metalloproteinases (MMPs) that mediate the degradation of cartilage matrix components and suppress proteoglycan synthesis [35]. Fig. 4 Effect of UA on IL-1β-induced activation of MAPK and NF-κB. Chondrocytes were pretreated with UA (1, 7.5, and 15 μM) for 2 h, followed by co-incubation with 20 ng/ml IL-1β for 30 min. a Protein expression of P-ERK, ERK, P-JNK, JNK, P-P38, P38, P-P65, and P65 were determined by western blot. b, c, d, e Relative protein expression of P-ERK, P-JNK, P-P38, and P-P65 compared to ERK, JNK, P38, and P65 shown as histograms. Data are presented as mean ± S.D. n = 6. *P < 0.05, **P < 0.01, ***P < 0.001 versus the IL-1β group.
Our study further revealed that UA obviously inhibited the levels of MMP3 and MMP13 enhanced by IL-1β treatment. IL-1β also promotes the expression of iNOS and COX-2, which induce the production of nitric oxide (NO) and prostaglandin E2 (PGE2), respectively. NO is a well-known inflammatory mediator that can induce MMP secretion and activation and decrease Collagen II and proteoglycan synthesis. In the present study, we found that IL-1β-induced expression of iNOS and COX2 was also attenuated by UA. Sox-9 is a vital transcription factor that positively regulates Collagen II synthesis and is indispensable for chondrocyte differentiation [36]. Our study further pointed out that UA obviously restored the levels of Sox-9 inhibited by IL-1β treatment. To sum up, our data showed the therapeutic effect of UA by restoring the imbalance between anabolism and catabolism of the cartilage matrix. A recent study also found that UA suppressed the excessive production of NO, PGE2, IL-6, and TNF-α in collagenase-isolated human OA chondrocytes [19]. However, chondrocytes isolated from OA patients have shown various phenotypic changes due to destructive changes in the joint, which does not seem to reflect the natural process of OA [37,38]. In contrast, primary chondrocytes, especially cartilage explants in which chondrocytes remain in contact with the extracellular matrix, are more sensitive to the molecular environment than OA chondrocytes; thus ex vivo experiments based on primary chondrocytes and cartilage explants seem to be a more reliable OA model [38]. In this study, we isolated rat chondrocytes as well as rat articular cartilage explants for experiments, which would provide further evidence that UA may inhibit the IL-1β-induced inflammatory response and preserve the ECM of cartilage explant tissues. Fig. 6 e and f: The percentages of Collagen II and Aggrecan positive cells in each section were quantified by Image Pro Plus. Three sections were randomly selected for quantification, and the original magnifications were 40× and 200× in the overall and partial pictures, respectively. Data are presented as mean ± S.D. n = 6. *P < 0.05, **P < 0.01 versus the IL-1β group. Various intracellular signaling pathways are reported to participate in OA pathogenesis. The MAPK and NF-κB pathways are master regulators of inflammation and catabolism in the process of OA. MAPK, the mitogen-activated protein kinase, is a serine/threonine protein kinase that mediates the inflammatory response. For OA, the active forms of ERK, JNK, and p38 were observed in synovial tissue and cartilage lesions. In addition, several lines of evidence indicate that activation of MAPK induced by IL-1β triggers aggrecanase- and MMP-mediated articular cartilage degradation [36,39]. Notably, therapy with p38 inhibitors could attenuate cartilage degeneration and relieve pain in animal models [40]. The NF-κB pathway includes a family of ubiquitously expressed transcription factors and regulates inflammatory responses [17,41]. Normally, the transcription factor exists in the cytoplasm and is rendered inactive by a constitutive interaction with the inhibitory protein IκB. Once stimulated by IL-1β, NF-κB p65 translocates into the nucleus, where it stimulates the expression of inflammatory mediators such as iNOS, COX-2, and MMPs. Among these factors, iNOS catalyzes the production of NO, which stimulates the secretion of MMPs and represses Collagen II and proteoglycan synthesis to cause ECM degradation [21,39].
Previous studies demonstrated that UA could attenuate lipopolysaccharide-induced inflammation by inhibiting activation of the MAPK and NF-κB pathways and alleviate oxidized low-density lipoprotein-induced endothelial dysfunction by modulating MAPK signaling [42,43]. Our results demonstrated that IL-1β activated ERK, JNK, and p38; upregulated levels of phosphorylated p65; and increased nuclear translocation of p65. In articular cartilage, IL-1β binding to IL-1 receptor results in the recruitment of MyD88, followed by the activation of IRAKs and the E3 ubiquitin ligase TRAF6, and finally activate the MAPK and NF-κB pathway [44]. Furthermore, stimulation of IL-1β induces the accumulation of reactive oxygen species (ROS), known as second messengers during the activation of redox-sensitive transcription factors the MAPK and NF-κB pathway [45]. UA can exert its anti-inflammation and anti-oxidant activity effect through multiple ways. Although its specific target still needs to be explored, our data suggest that UA exerted its beneficial effects by inhibiting MAPK and NF-κB signaling (Fig. 7). Conclusions Our results provide evidence that UA can attenuate IL-1β-induced degradation of Collagen II and aggrecan and reduces the production of inflammatory mediators via the ERK, JNK, P38, and NF-κB pathways in rat chondrocytes. Collectively, these findings suggest that UA may be a promising therapeutic agent in the treatment of OA.
6,125
2020-03-24T00:00:00.000
[ "Medicine", "Environmental Science", "Biology" ]
Stable, Ductile and Strong Ultrafine HT-9 Steels via Large Strain Machining Beyond the current commercial materials, refining the grain size is among the proposed strategies to manufacture resilient materials for industrial applications demanding high resistance to severe environments. Here, large strain machining (LSM) was used to manufacture nanostructured HT-9 steel with enhanced thermal stability, mechanical properties, and ductility. Nanocrystalline HT-9 steels with different aspect rations are achieved. In-situ transmission electron microscopy annealing experiments demonstrated that the nanocrystalline grains have excellent thermal stability up to 700 °C with no additional elemental segregation on the grain boundaries other than the initial carbides, attributing the thermal stability of the LSM materials to the low dislocation densities and strains in the final microstructure. Nano-indentation and micro-tensile testing performed on the LSM material pre- and post-annealing demonstrated the possibility of tuning the material’s strength and ductility. The results expound on the possibility of manufacturing controlled nanocrystalline materials via a scalable and cost-effective method, albeit with additional fundamental understanding of the resultant morphology dependence on the LSM conditions. Introduction Fourth generation nuclear (Gen IV) fission reactor designs are currently explored and developed for attaining ultimate goals of sustainability, efficiency, and safety. These new designs require novel structural materials that can withstand higher radiation doses, temperatures, and mechanical stresses when compared to current light water reactors [1,2]. Therefore, the search for advanced nuclear materials is paramount and a priority to achieve success in Gen IV reactors. Ferritic/Martensitic (F/M) steels are known to be primary candidates as structural and cladding materials for Gen IV reactors given their documentation over a long period of time in research [3]. Some of these steels are the first generation F/M steels (HT-9 with 12% Cr, 1% MoVW) and the second generation modified steels (e.g., Grade 91 with 9% Cr and 1% Mo) [4]. These steels exhibit advantages over austenitic steels in terms of void swelling and physical properties (reduced thermal expansion coefficient and improved thermal conductivity) [4][5][6][7]. While these steels provide excellent resistance to atmospheric corrosion and many organic media, their utilization is however limited to around~560 • C due to thermal creep and associated loss of strength at higher temperatures [1]. This is a key concern, given that the use temperature of fuel cladding materials in the future fleet of reactors is expected to approach 650-700 • C [8]. Another challenge, for example with HT9, is embrittlement (loss of fracture toughness) that occurs due to defect cluster hardening after low doses of irradiation at low temperatures (below 400 • C) [9]. Therefore, there is still a need for an advanced generation of steels for Gen IV reactors [1]. However, these steels have several drawbacks to overcome. ODS steels are processed via powder metallurgy method with mechanical alloying for an extended time (40 h for attritor milling) [25]. The scalability of these steels and the cost of scale up production has been an issue. 
Furthermore, the composition of powders, processing methods, and fabrication parameters including mechanical alloying via ball milling and possible contamination affect the microstructure and, thus, influence the mechanical properties and radiation resistance [26]. The usual bimodal distribution of grains can also create anisotropy in the mechanical properties [27]. A recently developed nanostructured ODS steel, OFRAC [28], was manufactured as a cladding material with improved creep resistance to other steels (e.g., commercial HT-9), but studies are still ongoing regarding their radiation resistance and drawbacks from mechanical alloying could still be an issue. Other steels, manufactured in nanocrystalline form via mechanical alloying and powder metallurgy have shown to possess ultra-strength, thermal stability, and radiation tolerance [20]. The question that arises is whether nanostructured steels can be manufactured using other methods, with enough thermal stability to survive harsh nuclear reactor environments. Severe plastic deformation methods were shown to successfully produce nanostructured metals [29]. The Large Strain Extrusion Machining (LSEM) process [30,31], is a cost-effective method that can produce large thin metal sheet forms directly from a coarse-grained feedstock without the need for elevated temperature processing. Figure 1 shows one configuration of LSEM, where a thin continuous strip is directly "peeled" away from the surface of a bulk metal feedstock by simultaneous cutting and extrusion. This concept, producing thin cross-section metal products via controlled material removal is fundamentally different to conventional processes such as rolling, in that the shape change is accomplished in just a single stage of deformation. It is also important to note that, unlike conventional machining, the thickness of the strip (chip) exiting the cutting tool can be controlled using an additional constraining edge that is placed directly across the cutting edge ( Figure 1). Some unique features in LSEM when compared to most other deformation processes include: (1) ability to impose extreme plastic strains (up to five) in a single deformation step; (2) high rate deformation which enables large adiabatic heating in the deformation zone; and (3) large hydrostatic pressures in the range of 2-4 k (with k being the shear yield stress of the material) [32]. These characteristics are especially beneficial for processing metals that are prone to processability or material failure issues, without extensive need for multiple heat treatments and numerous deformation passes. LSEM was successfully used to manufacture nanocrystalline materials from Al, Mg, W, Ni, Fe-Si, steels, and alloys [30,31,33]. Here, we demonstrate the use of Large Strain Machining (LSM), LSEM without a constraining edge, to manufacture nanostructured HT-9 steel (processed from ultrafine HT-9) with enhanced thermal stability, mechanical properties (strength comparable to tungsten) and decent ductility despite the grain size being the in nanocrystalline and ultrafine regime. It is also shown that post-LSM annealing can result in recrystallization but a formation of stable ultrafine grains. This material is considered to be a new generation of nanostructured steels capable of withstanding severe nuclear reactor environments. Materials and Methods HT-9 discs were purchased from American Elements ® (Los Angelos, CA, USA). The elemental compositions of the discs are in Table 1. 
One disc was heated to 1040 °C with a slow ramp rate and then slowly cooled, and one disc followed the same treatment but was then tempered at 760 °C for ~3 h. LSM (schematic shown in Figure 1) was then performed on the discs with no external heating, and one sample was generated from each disc (sample A is made from the annealed and slow-cooled disc and sample B is made from the tempered disc, see Table 2). The conditions of LSM and the output parameters are presented in Table 2 (LSM input and output parameters for samples A and B). The effective strain imposed on the strips can be determined by idealizing the deformation zone as a single shear plane [34]. The effective strain can then be calculated from the shear strain γ, which can be found from the tool rake angle α and the chip or strip thickness ratio λ = t_c/t_0, where t_c and t_0 are the final and the undeformed chip thicknesses, respectively. The relation of the strain rate to the deformation speed is given by ε̇ ~ εV/Δ, where V and Δ are the deformation speed and the thickness of the deformation zone, respectively [31]. The deformation zone temperature (ΔT) can also be estimated using the shear plane model, which uses specific shear energies and velocities as inputs [35]. The input parameters for the ΔT calculations were based on 420 stainless steel properties [36].
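The display equations for the shear strain and effective strain did not survive extraction. The sketch below therefore uses the single-shear-plane relations commonly quoted in the machining literature for LSEM-type processes (γ = λ/cos α + 1/(λ cos α) - 2 tan α and effective strain ε = γ/√3); these are assumptions drawn from that literature, not formulas copied from this paper, so the authors' exact expressions may differ.

import math

def lsem_strains(t0, tc, alpha_deg, V, delta):
    """Single-shear-plane estimates for an LSM/LSEM pass (assumed standard relations).

    t0        : undeformed (feed) thickness
    tc        : chip/strip thickness
    alpha_deg : tool rake angle in degrees
    V         : deformation (cutting) speed
    delta     : deformation-zone thickness
    Returns (lambda, shear strain, effective strain, strain rate estimate).
    """
    alpha = math.radians(alpha_deg)
    lam = tc / t0                                        # chip thickness ratio
    gamma = lam / math.cos(alpha) + 1.0 / (lam * math.cos(alpha)) - 2.0 * math.tan(alpha)
    eff = gamma / math.sqrt(3.0)                         # von Mises effective strain
    rate = eff * V / delta                               # scaling quoted in the text above
    return lam, gamma, eff, rate

# Illustrative numbers only, not the Table 2 values
print(lsem_strains(t0=0.2e-3, tc=0.4e-3, alpha_deg=5.0, V=0.5, delta=0.05e-3))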
Small 3 mm discs were then punched out from the samples, mechanically polished and then electropolished for electron backscattering diffraction (EBSD) characterization. Transmission Electron Microscopy (TEM) (samples were also prepared via electropolishing. The electropolishing solution was 5% perchloric acid in methanol and the electropolishing was performed at −30 • C. The morphology of the samples was characterized with EBSD (electron energy of 20 keV and step size of 25 nm) and 300 keV field emission TEM. In-situ TEM/annealing was performed within the TEM with a ramp rate of~25 degrees/min. Nanoindentation using a Keysight G200 nanoindenter, in the center of integrated technologies (CINT), equipped with a Berkovich tip was performed on polished specimens to a depth of 1000 nm at a strain rate of 0.05 s −1 . Hardness and modulus measurements were computed using the Oliver-Pharr method, assuming a Poisson ratio of 0.3 [37]. For microtensile testing, small scale tensile specimens were fabricated using a femtosecond laser cutting system and procedure described in previous studies [38][39][40]. Tensile specimens were cut using an output pulse width of 350 fs, wavelength of 1053 nm, repetition rate of 20 kHz, and energy of 10 µJ focused through a 5x objective. The final tensile specimen gauge dimensions were 0.045 mm × 0.07 mm. LSM HT9 samples were fabricated with a gauge length of 0.3 mm. Due to the larger fracture strain of the tempered HT9 sample, the gauge length for this specimen was fixed at 0.15 mm. Two or three tensile specimens were tested for each condition using an in-house designed and fabricated tensile testing system and pulled at an initial strain rate of 2.5 × 10 −3 1/s with a constant displacement rate through the test. The average result of the test is then calculated. Results The microstructure within the HT-9 disks (as is and annealed) prior to LSEM is shown in Figures 2-4. The samples show fine grains with grain boundary carbides in some regions. These carbides were shown previously to be Cr-rich M 23 C 6 carbides [41]. These carbides were also shown to occur in tempered steels [42]. After LSM, the microstructure within the samples consisted of elongated or equiaxed grains with carbides still located in some regions (Figures 2 and 3). Sample B (LSM sample from the tempered HT-9) showed larger carbides (Figure 2f), which is expected since it came from the tempered HT-9 disc. The EBSD (Figure 3a) shows poorly indexed points from sample A, however, due to high grain boundary strains after deformation. Texture and grain size aspect ratio changes (calculated via averaging several TEM figures), before and after LSM are also evident in Figure 3 for both samples due to the deformation process and the formation of new grains during the LSM process. To demonstrate the thermal stability of these samples, regions with low carbide densities were selected and the samples were heated to 700 °C. Figure 4 shows the LSM samples A and B heated to 700 °C. Up to 650 °C, no grain growth occurred. Around 700 °C, some regions started to recrystallize. The figures also show recrystallized grains of the samples at 700 °C after 20 min of annealing. These grains were stable, with no growth after about 20 min of annealing at this temperature. 
Although growth can further occur at much longer time scales, the microstructure stability for 20 min at 700 °C suggests that any further growth can be slow, and that much longer time scales are needed for grain growth at lower temperatures. Comparison with bulk heating (results not shown here) demonstrated similar results. The EBSD maps of these specimens are shown in Figure 3. Grain size measurement was performed on the samples after LSM and post-annealing. The grain size is determined from bright-field TEM micrographs (aspect ratio = minimum diameter/maximum diameter). Both samples showed a decrease in grain size when compared to the discs before LSM (Figure 3c,d). After annealing to 700 °C, the samples showed grain size increases due to recrystallization, with an overall larger grain size for sample B compared to sample A. It is evident that, mainly in the recrystallized regions of the samples after annealing, the strain is released (grain boundaries are better resolved in the EBSD), as shown in Figure 3. Elemental analysis was then performed using EDX mapping in the TEM. The EDX was performed on the tempered disc (sample B) before LSM and after LSM and annealing to 700 °C, as shown in Figure 5. The elemental mapping was performed to investigate changes to the carbide composition or possible segregation of elements to the grain boundaries as a reason for grain size stability in the non-recrystallized grains and the thermal stability of the recrystallized grains in the LSM samples. The carbides were shown prior to LSM and post LSM and annealing to have the same elements (Cr, Mo, Mn, V, and W) and compositions, and no further segregation of other elements on the grain boundaries occurred. It is also clear that no significant changes occurred regarding the density or size of the carbides during annealing. The mechanical properties of the samples were investigated using nanoindentation and microtensile tests. Both were performed at RT. Nanoindentation was performed on both samples A and B. Figure 6 shows displacement vs. hardness and modulus data for samples A and B post LSM and post LSM and annealing. Prior to LSM, the slow-cooled disc had higher hardness (~7.5 GPa) compared to the annealed and tempered disc (~4 GPa). After LSM and prior to annealing, sample A showed a higher hardness of ~8 GPa while sample B hardness was ~6.25 GPa. After annealing, the values for both samples dropped to ~4.5 GPa, which is very similar to the discs' values before LSM.
However, annealing led to residual stress and martensitic microstructure (e.g., Figure 2a) minimization, and grain boundary recovery. The microstructure, however, remained equiaxed with grains in the ultrafine regime. To examine the ductility and possible ductility recovery, the microtensile data for only one sample (sample B before and after annealing) are plotted in Figure 6c and compared to the tempered HT-9 disc before LSM. The LSM sample has nearly 1.75 times the yield strength of the HT-9 disc but with much less elongation. The LSM sample after annealing showed recovery of the elongation and strain hardening, but the yield strength dropped to the value of the tempered HT-9 disc, which is consistent with the hardness results from the nanoindentation. This approach also allows nanocrystalline forms of real reactor materials, with very low martensitic fractions, to be investigated at temperatures relevant to some Gen IV fission reactors. Nanocrystalline forms of model materials, such as Fe, were previously studied [16]. However, the thermal stability in model materials limited the temperature at which the performance of these nanocrystalline forms can be investigated. The nanocrystalline forms of real reactor materials not only allow higher temperature studies, but also offer the advantage of understanding the role of the other microstructural elements on several vital factors affecting the overall performance under severe environments, such as the matrix and grain boundary sink efficiencies, segregation, thermal stability, and defect behavior. Possessing outstanding thermal stability, defect sink density, and mechanical properties, the performance of these materials under reactor-relevant conditions is critical to assess. Discussion The results in this work demonstrate the possibility of obtaining nanocrystalline grains (elongated vs. equiaxed) from commercial HT-9 discs. The discs were used at different conditions (normalized and slow cooled vs. normalized and tempered) and LSM was performed on the discs at different conditions. The work here demonstrates two conditions (one for each disc) where LSM produced continuous strips. Although different parameters were calculated from the two conditions, the effective strain (shown in Table 1) was nearly the same for both. The temperature rises due to adiabatic heat generation during deformation were not very different for both discs.
However, the grain size, hardness and texture were not the same. This is attributed to differences in the discs used for LSM rather than to differences in the LSM conditions. Sample A was taken from a normalized and slow cooled disc of high hardness with martensitic regions (Figure 2a). The increase in hardness for this sample after LSM is ~0.5 GPa, which indicates a small decrease in the grain size. For sample B, the original grain size of the disc prior to LSM was larger due to tempering at 760 °C. The grain size decrease after LSM (Figure 3) is more noticeable in this sample. Therefore, the increase in hardness after LSM is also more noticeable (~2 GPa increase). Since the effective strain, and thus the material flow, is very close for samples A and B, but sample B was formed from a tempered and low hardness disc (compared to the normalized and slow cooled disc used to generate sample A), the higher grain refinement in sample B is attributed to the initial microstructure. The resultant hardness values were the same for samples A and B post-annealing and were also the same as the value of the tempered disc before LSM but with less residual strain, which indicates that the hardness of the LSM samples before and after annealing follows the Hall-Petch effect [43] and is grain size dependent. The microstructures, however, are different from the original discs, as the grains are equiaxed for sample B, or mixed equiaxed and elongated (but of much less elongation) for sample A, compared to the original discs (Figure 3). The aspect ratio of elongated grains can affect the mechanical properties [44,45] and therefore, direct comparison with the discs prior to LSM is not possible based on grain size only. Recrystallization of some regions in the LSM samples was observed both in regions with carbides and in regions with fewer carbides, which indicates that the recrystallization was mainly due to high strains in some regions of the samples. Other regions (Figure 4) did not recrystallize and showed no grain growth. Measurements of local strains in the samples (future work) and correlation with the LSM parameters and process can lead to optimization of the LSM parameters to provide more stable grains. After recrystallization, the grains were stable and the samples showed no further segregation of elements to the grain boundaries other than carbides, which are possibly due to re-precipitation after recrystallization or to grain boundaries intercepting matrix carbides during growth. The recrystallization leads to strain minimization and the release of grain boundary energy, and this provided stability to the grains after further annealing and further supports the possibility of obtaining more stable grains from LSM if the strain were minimized or distributed, which is possible by performing LSM at higher temperatures. To demonstrate the ductility and ductility recovery in LSM samples, sample B was chosen since it is formed of equiaxed grains with a bimodal distribution. The microtensile experiment is less affected by grain texture, aspect ratio, and possible non-uniform morphology in the LSM samples compared to nanoindentation. Table 3 provides a summary of the average tensile properties of all samples tested in this study. The decrease in ductility for the LSM samples is accompanied by a large increase in yield strength, which is expected after the grain size refinement and the increase in strain in the LSM sample.
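As a point of reference for the grain size dependence invoked above, the standard Hall-Petch relation (a textbook form, not written out by the authors, and with no coefficients fitted in this work) is
\sigma_y = \sigma_0 + k_y\, d^{-1/2},
where \sigma_0 is the friction stress, k_y the Hall-Petch coefficient, and d the mean grain diameter. A commonly used, approximate link between indentation hardness and yield strength (the Tabor relation, H \approx 3\,\sigma_y) is one hedged way to connect the nanoindentation values quoted here with the tensile yield strengths discussed below; the factor of 3 is an assumption of that empirical rule, not a number reported by the authors.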
These materials have a grain size in the 100-1000 nm regime, where the deformation mechanism was previously suggested to be governed by grain boundary shearing promoted by the pile-up of dislocations [46]. Deformation in this regime is well described by the core-and-mantle model [47]. In the model, the grain boundaries are believed to be formed of ledges which can initiate dislocation formation in the "mantle" layer near the grain boundaries. Dislocations in the "mantle" form a hardened layer near the boundaries and dominate the plastic flow process, unlike the "core", which undergoes a low work hardening rate. As the grain size decreases in this regime (approaching a few hundred nm, as in the case of the LSM sample), the "mantle" to "core" fraction increases, causing a high yield stress. Although the sample showed some elongation during deformation (Figure 6, Table 3), no strain hardening is observed. Materials subjected to severe plastic deformation (SPD) are known to undergo dynamic recovery due to the rise of local temperature [48,49]. The recovery leads to saturation and/or annihilation of dislocations at the grain boundaries, thus leading to a decrease in the work hardening and to necking at the yield stress. Several ultrafine materials formed via SPD have demonstrated this behavior [46,50]. After annealing, however, the ductility is mostly recovered, and the yield strength decreased when compared with the LSM sample (prior to annealing) but was similar to that of the HT-9 disc. An increase in the work hardening rate was also shown to occur, and the fracture surface (Figure 6d) demonstrated a ductile fracture at 45 degrees to the loading axis. The increase in the hardening rate after annealing is associated with partial recovery of the grain boundaries (Figures 2 and 3), which was previously described to occur during low to moderate temperature annealing of SPD materials due to minimization of strain localization [46]. The grain size increased after annealing, which can explain the drop in the yield strength, since materials in this grain size regime still follow the Hall-Petch relationship [43,46]. After the annealing, the sample is also composed of equiaxed grains with a bimodal grain size distribution (Figure 3). A bimodal distribution leads to enhanced strength and ductility, where small grains increase the hardness and large grains ensure elongation during deformation and thus, ductility [51,52]. The LSM samples (before annealing) have high strength compared to commercial HT-9 (prior to LSM), and material elongation occurred during deformation. After annealing, larger elongation occurred but the hardness decreased. Although the hardness of the discs prior to LSM is similar, the martensitic structure (Figure 2a) of the discs prior to LSM would have a large contribution to their hardness. Even the tempered disc is still expected to have a martensitic microstructure. Such a microstructure is further minimized in the LSM samples after annealing, and the high density of grain boundaries would then make a significant contribution to the hardness of the annealed LSM samples. Moreover, the grain shape was equiaxed and with much less strain. The equiaxed shape of the grains can lead to enhanced radiation resistance, as demonstrated in a previous work on another BCC material [53]. In this work, LSM was used to manufacture nanocrystalline and ultrafine forms of two different HT-9 microstructures. The effective strain was the same in both cases.
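To make the core-and-mantle argument concrete, the short Python sketch below estimates the volume fraction of the boundary "mantle" layer for an idealized spherical grain; the mantle thickness t is an illustrative placeholder (no such value is reported in this work), and the trend simply shows that the mantle fraction grows as the grain diameter shrinks toward a few hundred nanometers.

def mantle_volume_fraction(d_nm, t_nm=20.0):
    """Volume fraction of the boundary 'mantle' shell for a spherical grain
    of diameter d_nm with an assumed mantle thickness t_nm (placeholder)."""
    core = max(d_nm - 2.0 * t_nm, 0.0)
    return 1.0 - (core / d_nm) ** 3

for d in (200.0, 500.0, 1000.0):
    print(f"d = {d:6.0f} nm -> mantle fraction ~ {mantle_volume_fraction(d):.2f}")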
The results (morphology before and after annealing, and hardness before annealing) demonstrated how the initial microstructure can affect the deformation process. Several parameters, however, can be changed during the LSM process, and altering the microstructure can be achieved through optimization of the LSM parameters and the initial microstructure. The LSM process should then offer an easy pathway to create controlled and stable nanocrystalline forms of real reactor materials, with very low martensitic fractions, to be investigated at temperatures relevant to some Gen IV fission reactors. Nanocrystalline forms of model materials, such as Fe, were previously studied [16]. However, the thermal stability of model materials limited the temperature at which the performance of these nanocrystalline forms can be investigated. The nanocrystalline forms of real reactor materials not only allow higher temperature studies, but also offer the advantage of understanding the role of the other microstructural elements on several vital factors affecting the overall performance under severe environments, such as the matrix and grain boundary sink efficiencies, segregation, thermal stability, and defect behavior. Since these materials possess outstanding thermal stability, defect sink density and mechanical properties, their performance under reactor relevant conditions is critical to climbing the technology readiness level (TRL). Investigating the irradiation tolerance of these materials at low temperatures, where commercial HT-9 suffers from embrittlement, and elucidating how grain boundaries accompanied by alloying elements affect their radiation tolerance should be the next step to qualify these materials for use in the nuclear industry. Since Gen IV reactors are to be operated at temperatures in the range of 350-700 °C, the thermal stability of these alloys demonstrated in this work makes them promising candidates for the nuclear industry. Du et al. [20] have shown outstanding irradiation resistance to grain growth and void swelling in nanostructured austenitic steels at temperatures relevant to nuclear applications. Future work on nanostructured HT-9 can possibly demonstrate similar outstanding irradiation resistance. Climbing the TRL further depends on whether the process of making these materials is scalable. Bulk nanostructured materials in the form of foils, plates and bars with controlled dimensions have been produced via LSEM, and thicknesses of up to 2.7 mm (cladding materials are of ~0.6 mm thickness) have been achieved [33]. Moreover, morphologies produced following LSEM principles can be used as benchmarks for other techniques and can be produced by other materials processing techniques, such as friction stir processing [54,55]. The results in this work indicate that further parameter optimization during machining, understanding of the deformation process during the low cost, efficient and high throughput LSM process, and correlation with the disc materials used for LSM can lead to nanostructured and thermally stable steels in which strength and ductility can be optimized. Conclusions We have performed LSM with two different sets of conditions but with similar effective strain on commercial HT-9 material and achieved nanocrystalline and ultrafine grain sizes with different aspect ratios, which depended on the initial state of the HT-9 disc.
The nanostructured grains had excellent thermal stability up to 700 °C, at which point some regions of high strain recrystallized to form grains in the ultrafine regime that were stable during further annealing. EDX mapping performed on one sample (before LSM and after LSM and annealing) demonstrated no additional elemental segregation on the grain boundaries other than the initial carbides (which were present in the HT-9 discs prior to LSM) or the re-precipitated carbides (formed during LSM), indicating that the high thermal stability of the LSM samples is due to the evidently low dislocation densities and strains in the post-LSM materials. The hardness of the LSM samples was higher than that of the initial discs, but with lower ductility. After annealing, the hardness decreased but the ductility and the strain hardening recovered. The stable nanocrystalline (prior to annealing) and ultrafine, elongated or equiaxed (post-annealing) microstructures will permit high temperature irradiation resistance investigations of real reactor nanocrystalline materials at prototypic nuclear application (e.g., fission) conditions. The results demonstrate the ability to control the nanocrystalline microstructure, with additional fundamental understanding of how the resultant morphology depends on the LSM conditions.
7,018.2
2021-09-28T00:00:00.000
[ "Materials Science" ]
Pharmacogenomic biomarker information differences between drug labels in the United States and Hungary: implementation from the medical practitioner's view The pharmacogenomic biomarker availability in Hungarian Summaries of Product Characteristics (SmPCs) was assembled and compared with the information in US Food and Drug Administration (FDA) drug labels of the same active substances (July 2019). The level of action of these biomarkers was assessed from The Pharmacogenomics Knowledgebase database. Of the identified 264 FDA-approved drugs with pharmacogenomic biomarkers in their drug label, 195 are available in Hungary. Of these, 165 drugs include pharmacogenomic data covering 222 biomarkers. Most of them are metabolizing enzymes (46%) and pharmacological targets (41%). The most frequent therapeutic area is oncology (37%), followed by infectious diseases (12%) and psychiatry (9%) (p < 0.00001). The most common biomarkers in Hungarian SmPCs are CYP2D6, CYP2C19, and the estrogen and progesterone hormone receptors (ESR, PGS). Importantly, US labels present more specific pharmacogenomic subheadings, the level of action has a different prominence, and they offer more applicable dose modifications than the Hungarian labels (5% vs 3%). However, for 9 oncology drugs the Hungarian SmPCs are stricter than the FDA labels, with testing obligatory before treatment. Out of the biomarkers available in US drug labels, 62 are missing completely from Hungarian SmPCs (p < 0.00001). Most of these belong to oncology (42%), and for 11% of the missing biomarkers testing is required before treatment. In conclusion, more factual, clear, clinically relevant pharmacogenomic information in Hungarian SmPCs would reinforce the implementation of pharmacogenetics. The underpinning future perspective is to support regulatory stakeholders in enhancing the inclusion of pharmacogenomic biomarkers into Hungarian drug labels and consequently to advance personalized medicine in Hungary. Introduction Pharmacogenomics (PGx) is one of the precision medicine (PM) tools applied to maximize treatment effectiveness while limiting drug toxicity by differentiating responders from nonresponders to medications, based on an individual's genetic constitution [1]. Pharmacogenomic information may be provided in drug labeling to inform healthcare providers about the impact of genotype on response to a drug through description of relevant genomic markers, functional effects of genomic variants, dosing recommendations based on genotype, and other applicable genomic information [2]. This can describe variability in clinical response and drug exposure, risk of adverse events, genotype-specific dosing, mechanisms of drug action, polymorphic drug target and disposition genes, or trial design features [3]. Information on PGx biomarkers and laboratory testing provides the resource for practicing medical doctors to apply personalized medicine in the clinic [4]. In order to implement PGx in the clinical setting, practicing doctors need information on PGx biomarkers or guidelines implementing the use of biomarkers, available laboratory tests as input, and handy implementation tools to be able to generate output in the clinic. The drug labeling for some, but not all, of the products includes specific actions to be taken based on the PGx biomarker information. This information can appear in different sections of the labeling depending on the actions [3].
One would expect regulations for drugs and diagnostics not to differ significantly between countries, given that regulatory authorities evaluate the same scientific data generated in an increasingly globally harmonized context [5]. Despite international regulatory harmonization, implementation of the pharmacogenomic information in official drug labeling shows wide range of geographical variety [6]. The US Food and Drug Administration (FDA) and the European Medicines Agency (EMA) work jointly and in multiple ways on scientific evaluation of drugs to ensure that pharmacogenomic strategies are applied appropriately in all phases of drug development. EMA is responsible for the centralized marketing authorization applications in the European Union and some additional countries. Once granted by the European Commission, the centralized marketing authorization is valid in all European Union Member States, in Hungary as well. However, several drugs have undergone the Hungarian national marketing authorization process previously, therefore the PGx information might be not updated. The ultimate aim and rationale of this study is to: (1) Provide an evaluation of current status of PGx biomarker information present in Hungarian drug labels. (2) Summarize the potential needs of medical practitioners, healthcare providers. (3) Identify the gaps of PGx implementation and potential solutions. Materials and methods All data presented in this work have been collected in July 2019. Consequently, the US FDA information on available pharmacogenomic biomarkers in drug labeling represents the most up-to-date current content as of 26 March 2019 (https://www.fda.gov). The Hungarian Summaries of Product Characteristics (SmPCs) of the same active substance were assessed from the National Institute of Pharmacy and Nutrition database of Hungary (www.ogyei.gov.hu/ gyogyszeradatbazis/). PGx information on the level of action was collected on PharmGKb ® (www.pharmgkb.org) and compared with the same information from the Hungarian SmPCs. Identical data collection was performed in 2017 spring, providing the opportunity to have an overview about the dynamic change of the implementation of PGx information in Hungarian drug labels. Biomarkers in our investigation include but are not limited to germline or somatic gene variants (polymorphisms, mutations), functional deficiencies with a genetic etiology, gene expression differences, and chromosomal abnormalities; specific protein biomarkers that are used to select treatments for patients are also included. The investigation does not include nonhuman genetic biomarkers (e.g., microbial variants that influence sensitivity to antibiotics), biomarkers that are used solely for diagnostic purposes (e.g., for genetic diseases) unless they are linked to drug activity or used to identify a specific subset of patients in whom prescribing information differs, or biomarkers that are related to a drug other than the referenced drug (e.g., influences the effect of the referenced drug as a perpetrator of an interaction with another drug). For drugs that are available in multiple dosage forms, salts, or combinations, a single-representative product is listed. In the case of combination products, the single agent associated with the biomarker is listed unless the agent is only approved as a combination product, in which case all agents are listed. In order to measure the statistical differences, two-sided p values were calculated using Pearson's chi-squared test or Fisher's exact test. 
A p value < 0.05 was considered to indicate a statistically significant result. Statistical analyses were performed using Microsoft® Excel® for Mac® 2011 and IBM® SPSS® Statistics Version 25 for Mac (SPSS Inc., Chicago, IL, USA). Results We identified 264 drugs in the US FDA Table of Pharmacogenomic Biomarkers in Drug Labeling after excluding duplicate active ingredients. Out of these 264 active ingredients, we were able to identify 195 (74%) as available in Hungary through the website of the National Institute of Pharmacy and Nutrition (Table 1). Among the 195 drugs, 145 (75%) have PGx information included in the Hungarian product summary. It is important to note that, while this is a point-in-time snapshot, the number of drugs with PGx information in the drug label has increased by 57% in the US versus 46% in Hungary over the last 26 months. Compared with the US FDA, PGx information is partially present in the drug label of 20 (10%) and completely missing from the drug label of 30 (15%) active ingredients available in Hungary (Table 1, italic and bold, respectively). These drugs without PGx biomarker information in their label belong to diverse therapeutic areas (23% oncology, 23% anesthesiology, 20% infectious diseases, 7% cardiology, 7% inborn errors, 7% rheumatology, 3% dermatology, 3% hematology, 3% psychiatry, and 3% pulmonology). The 69 drugs not available in Hungary are listed in Supplementary Table 1. The distribution of therapeutic areas of drugs with PGx information in their labeling is presented in Fig. 1. The most frequent therapeutic area is oncology (37%), followed by infectious diseases (12%), psychiatry (9%), and neurology (8%) (χ2 p < 0.00001). As one drug's PGx can be affected by more than one specific biomarker, the identified 165 drugs with PGx data (including drugs with partially present data) carry 222 biomarkers in the Hungarian SmPCs, summarized in Table 2. In the Hungarian SmPCs, we identified information on metabolizing enzymes (n = 102, 46%), pharmacological targets (n = 90, 41%), or other features (n = 30, 13%). Pharmacogenomic biomarkers influence drug treatment in several different ways, thus one biomarker can have more than one impact. According to the Hungarian product summaries, the aim of pharmacogenomic biomarker use can be the following: affects efficacy (n = 84), indicates toxicity (n = 67), belongs to the inclusion criteria (n = 67), belongs to the exclusion criteria (n = 24) because of elevated toxicity risk, or affects dosage (n = 18). Moreover, 53 biomarkers (24% of all) are involved in drug-drug interaction management, as dose modification or elevated toxicity risk is connected to the presence of an enzyme inhibitor/inductor irrespective of the pharmacogenomic background. Importantly, eight biomarkers (4%) are factual with respect to dosing and formulate an exact algorithm to manage the gene-drug interaction. Out of the biomarkers available in US drug labels, 62 (22%) are missing from the Hungarian SmPCs (p < 0.00001, Fisher's exact test). Our dynamic update shows that the percentage of missing PGx data in Hungarian drug labels has doubled in the last 26 months as a result of accelerated PGx biomarker implementation in US FDA drug labeling.
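As a hedged illustration of how the proportion comparisons reported above can be computed, the Python sketch below applies Pearson's chi-squared test and Fisher's exact test to a 2 x 2 contingency table using SciPy (the authors used Excel and SPSS); the counts are placeholders chosen for the example, not the study's actual contingency tables.

import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Placeholder 2 x 2 table: rows = Hungarian SmPCs vs US FDA labels, columns =
# biomarkers in a given level-of-action category vs all other biomarkers.
table = np.array([[51, 171],
                  [77, 207]])

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)
print(f"chi-squared p = {p_chi2:.4f}, Fisher exact p = {p_fisher:.4f}")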
Most of the missing pharmacogenomic biomarkers belong to the therapeutic area of oncology (42%), followed by anesthesiology (18%), infectious diseases (13%), and hematology (8%); cardiology, dermatology, gastroenterology, inborn errors of metabolism, psychiatry, pulmonology, and rheumatology represent minor proportions (<4% each). In order to compare the level of action of PGx biomarkers between Hungary and the United States, we extracted the information from the Hungarian SmPCs for US FDA approved drugs available in Hungary and compared it with the level of action available on The Pharmacogenomics Knowledgebase (www.pharmgkb.org) (Table 3). Testing is required for 72 biomarkers (25%) in Hungary, of which 66 (92%) belong to the field of oncology. In the United States, testing is obligatory before treatment for 79 (28%) biomarkers. Four (1%) biomarkers in Hungarian drug labels are ranked into the testing recommended category, compared with six (2%) biomarkers in the United States. PGx information is actionable for 95 (34%) biomarkers in Hungary, compared with 108 (38%) in the United States. Out of the actionable biomarkers, 14 (5%) carry an exact dosing adjustment in the PharmGKB recommendation, but only eight (3%) of them are ranked into the same category in Hungary. The six (3%) remaining biomarkers provide only actionable PGx data without dosing information in the Hungarian drug labels. Fifty-one (18%) biomarkers have informative PGx data in the Hungarian drug labels; however, in the United States 77 (27%) biomarkers are counted into this category (p = 0.009). Even among the US FDA biomarkers, 14 (5%) are missing from PharmGKB, which reflects a generally rather delayed implementation of PGx information. This is the case for 62 (22%) biomarkers in the Hungarian SmPCs (p < 0.00001). Regarding the PGx level of action, out of the 62 biomarkers missing from the Hungarian SmPCs, 7 (11%) belong to the testing required category, 27 (44%) to the actionable PGx category, and 21 (29%) to the informative PGx category according to PharmGKB. In order to implement PGx in everyday medical practice, we need to translate PGx biomarker information to the drug level. In practice, this means that partially missing biomarkers in Hungarian SmPCs belong to 20 drugs, and completely missing biomarkers to 30 drugs, as shown in Table 1. Notably, after checking the level of action, for 7 of these 50 drugs biomarker testing is required before treatment according to PharmGKB. It is of utmost importance that six of these seven drugs are oncology medications. Hungarian SmPCs mention information on lab test availability for 76 biomarkers (34%). However, the product summary never refers to a specific laboratory in the Hungarian drug label. The information on lab test availability is based on the clinics' internal regulations and doctors' daily routine, relying either on commercial tests or on an academic setting. Discussion PM strategies and PGx are becoming more prevalent in research and clinical practice and are an integral part of drug development. Therefore, including appropriate pharmacogenomic information and an accurate description in drug labels to support medical professionals and patients is critical [2,8]. Territorial differences in the drug label content of PGx biomarker information, depending on the responsible approval agencies, do exist. For example, it is well known that the cytochrome P450 pharmacogenetic information included in US FDA drug labels presents significantly more specific pharmacogenetic information than analogous EU SmPCs [9].
Therefore, comparing the labeling of medicines in Hungary versus the United States may identify gaps to be solved. While investigating similarities and differences of PGx information in the United States and Hungarian drug label content, we identified that US labels presented significantly more specific pharmacogenetic subheadings than the analogous Hungarian SmPCs. As 62 PGx biomarkers are missing completely from Hungarian SmPCs, Hungarian drug labels may need to be supplemented in the future with pharmacogenetic biomarker information for these active substances. Our study demonstrates that the most frequent therapeutic area with pharmacogenomic information in the drug label is oncology, both in the United States and in Hungary. This is in line with the EMA statement that PGx information is preferentially present in drug labels of medicines with antineoplastic properties [10]. In the field of oncology, pharmacogenetic biomarkers represent a complex combination of germline and somatic variants [11]. Importantly, somatic mutations in tumor cells are increasingly implicated as biomarkers in targeted therapy, are applied in treatment selection, and are also often associated with treatment efficacy [12]. This is well represented in Hungarian drug labels, since the main aim of pharmacogenomic biomarker use is to tailor treatment efficacy. On the other hand, hereditary variants affect pharmacokinetics and pharmacodynamics, and are more often considered to address adverse drug reactions. Tumor sequencing for somatic mutation detection is applied in Hungarian institutions, and produces matched germline information (Fig. 1 shows the therapeutic areas of drugs with pharmacogenomic information in their labeling in Hungary). However, targeted tumor genome sequencing, to provide precision treatment decisions for patients, more relevantly reflects the local practices. The most commonly tested biomarkers in oncology in Hungary are pharmacological targets, where molecular diagnostics is required for patient selection and personalized genotype-directed therapy; for example, EGFR/KRAS/ALK in non-small cell lung carcinoma, or BRAF and NRAS in melanoma, in agreement with the ESMO guidelines [13,14]. In addition, BRCA1/2 are tested in breast and ovarian cancers, but this is not obligatory. In other tumors there is less consensus. According to our results, US labels scored the level of action of PGx information at the same overall quality as the analogous Hungarian SmPCs, but the prominence is different. Hungarian SmPCs are stricter regarding oncological drugs than US labels. The rigor towards genetic testing before oncology drug treatment in Hungary may be caused by the high cost of these targeted molecules; therefore, confirmation of efficacy is rather obligatory before treatment. However, the proportion of requirement or recommendation for PGx testing is higher in oncology than in other therapeutic areas in the United States [15]. Of note, the FDA offers more applicable information about dose modifications than Hungarian SmPCs. The FDA has recognized genetic differences in drug metabolism where clinically relevant drug-drug interactions or gene-drug interactions trigger dose adjustment or the use of alternative drugs [16]. Considering differences in gene expression and physiological maturation between pediatric and adult populations, extrapolation of adult pharmacogenetic information in FDA approved pediatric drug labels is not always appropriate [17,18]. Ontogeny-associated treatment response differences are specifically important in pediatric oncology drugs [18].
Nonetheless, pharmacogenomic biomarker information is commonly based on adult studies in both Hungarian SmPCs and FDA drug labels. A classification of PGx biomarkers (e.g., metabolizing enzymes, pharmacological targets, and others) is not available in Hungarian data resources. Categorization of biomarkers needs to be implemented in Hungarian SmPCs in order to clarify PGx information and consequently enhance genetic biomarker testing in the daily medical routine. Pharmacogenetics-related drug-labeling updates do not always result in uniform clinical uptake of pharmacogenetic testing. The lack of simultaneous implementation of newly approved drugs linked to companion diagnostic biomarkers into clinical practice has several reasons. Potential factors leading to heterogeneity in the clinical uptake of pharmacogenetic testing include the strength of supportive evidence (1), which may originate from a low contribution of the known genetic variant to the outcome or an incomplete understanding of the genetic variation effect; the consequences of a targeted adverse event or treatment failure (2); the availability of alternative agents or dosing strategies (3); the predictive utility of testing (4); test cost-effectiveness, accessibility, and turnaround time (5); reimbursement issues (6); professional society positions (7); or simple general resistance to the use of genetic tests (8) [19,20]. For example, information on lab test availability is unattached to the Hungarian drug label and must come from a different source in everyday medical work. A crucial solution could be the establishment of a Europe-wide database for PGx laboratory test availability. However, only a limited set of PGx biomarker tests is available in Hungary, provided by three university laboratories (Pécs, Budapest, and Debrecen). All available obligatory tests are reimbursed by the Hungarian State Insurance if the genotyping has been done in a noncommercial laboratory. The genotyping approach and the laboratory contacted depend on the personal practice of the specific doctors. Also, implementation platforms delivering ready-to-apply genetic results in the clinic are missing. In order to take advantage of PGx biomarkers in clinical practice, integration with other personalized medicine approaches is also needed. On the other hand, preemptive pharmacogenomic testing of actionable genetic markers predicting systemic exposure can be the most future-oriented approach to using PGx biomarkers in practice. All of these will unequivocally enhance the rate of uptake of PGx information by medical practitioners. Acceleration is seen in the implementation of PGx information both in the United States and in Hungary, though the regulatory dynamics are different. If regulatory agencies enhance the inclusion of PGx biomarker information in Hungarian drug labels, fewer technical barriers will hinder the implementation of PM. The laboratory and professional requirements for all FDA biomarker testing are certainly available in Hungary. However, the pharmacogenomic knowledge of healthcare professionals and the corresponding medical education in PGx [21], as one of the key factors in implementation, need to be improved as well [22]. Hungarian drug labels do not contain any PGx evidence for the Hungarian population, neither on clinical endpoints nor on pharmacokinetics. In Hungary, regulatory approval and the submission of new drug applications are based on international clinical trial outcomes. However, this can be due to the low number of inhabitants in Hungary (ten million) and the population's genetic heterogeneity.
More focus may be given to the investigation of doses and regimens for special populations before applying for marketing authorization. Consequently, regulators could review dose-exposure-response data with more certainty and better define dose recommendations in the label [23]. For unlicensed drugs, we suggest presenting PGx information in the SmPCs before marketing authorization, as well as for drugs under the renewal or variation process. Limitations of the study include the following. The field of PGx is rapidly advancing; therefore, drug labeling is not static. Updating PGx information is a dynamic process and new markers are constantly being added. This is shown by the 57% increase of FDA drugs with PGx biomarkers in their labeling in the last 26 months, compared with 46% in Hungary. However, the timelines used by the Hungarian authorities to update SmPCs according to FDA drug labels are hard to predict. In this study, FDA listed drugs (n = 264) with pharmacogenomic biomarkers in drug labeling were compared with drugs in the Hungarian National Institute of Pharmacy and Nutrition database with potential pharmacogenomic information in their SmPCs. Some active ingredients in Hungarian SmPCs may exist with pharmacogenomic information, although not mentioned by the FDA. These drugs remained hidden in our study. According to a previous study, pharmacogenetic information is included in patient-targeted sections for a minority of drug labels [24]. Our research focused on the doctor-targeted sections of drug labels, and the rather superficial content of the patient information leaflet was ignored. Original active agents were investigated in the study; differences between original and generic drug labels were neglected. This study was performed in support of regulatory decisions, in order to minimize drug-associated risks in the general Hungarian population and reduce uncertainties about the application of PGx biomarkers for medical practitioners. Conflict of interest The authors declare that they have no conflict of interest.
4,988.2
2019-12-02T00:00:00.000
[ "Biology", "Medicine" ]
Electronic Structure of Ternary Alloys of Group III and Rare Earth Nitrides Electronic structures of ternary alloys of group III (Al, Ga, In) and rare earth (Sc, Y, Lu) nitrides were investigated from first principles. The general gradient approximation (GGA) was employed in predictions of structural parameters, whereas electronic properties of the alloys were studied with the modified Becke–Johnson GGA approach. The evolution of structural parameters in the materials reveals a strong tendency towards flattening of the wurtzite-type atomic layers. The introduction of rare earth (RE) ions into Al- and In-based nitrides leads to narrowing and widening of the band gap, respectively. Al-based materials doped with Y and Lu may also exhibit a strong band gap bowing. An increase of the band gap was obtained for Ga1−xScxN alloys. Relatively small modifications of the electronic structure related to the RE ion content are expected in Ga1−xYxN and Ga1−xLuxN systems. The findings presented in this work may encourage further experimental investigations of electronic structures of mixed group III and RE nitride materials because, except for Sc-doped GaN and AlN systems, these novel semiconductors have not been obtained up to now. Introduction Semiconductor devices based on group III nitrides operate in an exceptionally wide range of energy, e.g., light emitting diodes from the ultraviolet through visible light, up to the infrared region [1][2][3][4]. Solid solutions of group III nitride materials exhibit strong band gap (Eg) bowings [5,6]. Although rare earth (RE) nitrides adopt a rock-salt structure, their relatively narrow band gaps in a range from 0.9 to 1.3 eV [7][8][9][10][11][12] allow one to assume that the introduction of some limited contents of RE ions into group III host systems is a promising realization of band gap engineering and can assist in the search for novel nitride semiconductors. A linear decrease in Eg with an increasing Sc content was experimentally revealed in Ga1−xScxN [13][14][15], Al1−xScxN [16][17][18], and Al1−xYxN [19] alloys. Theoretical investigations followed the experimental research and were focused on Sc-doped GaN [20][21][22] and AlN systems [22]. Calculations based on the density functional theory (DFT) indicated a general tendency in ternary solid solutions of group III and RE nitrides to form rock-salt systems [23]. The wurtzite-type materials are expected to be stable for relatively small (less than 0.5) contents of RE ions, above which the metastable hexagonal structures of two-dimensional atomic layers of the BN type are energetically favorable. These predictions are consistent with the findings of previous experimental studies, which were focused on Sc-doped GaN and AlN materials [13][14][15][16][17][18]. Recent investigations of electronic structures of ternary alloys of rock-salt RE nitrides revealed very strong band gap bowings in such materials, which are related to the RE ionic radii mismatch in particular systems [24]. The rock-salt alloys of RE and group III nitrides exhibit a linear increase in Eg [25], which is also closely connected to the ionic radii of the dopant ions despite an opposite relation of the band gaps in wurtzite AlN, GaN, and InN materials.
This may be explained by the fact that the valence and conduction band regions of the rock-salt alloys are dominated by the contributions coming from RE ions, whereas the contributions of group III ions are located well below and well above the valence band maximum (VBM) and conduction band minimum (CBM) of the material, respectively. In this work, the structural and electronic properties of wurtzite alloys of group III and RE nitrides are predicted from first principles (DFT-based calculations). The lattice parameters of the materials are studied with the general gradient approximation [26], whereas the fully relativistic band structures are obtained with the use of the Tran–Blaha exchange correlation functional (MBJGGA [27]), which was designed for accurate studies of semiconductor materials. The discussion of the dependences of Eg on RE ion contents in the alloys is of particular interest because, except for Sc-doped GaN and AlN systems, these novel nitride semiconductors have not been studied experimentally nor theoretically. The findings presented in this work may encourage further experimental investigations of electronic structures of mixed group III and RE nitride materials and their potential applications. Results and Discussion Lattice parameters of the parent AlN, GaN, and InN materials, calculated in this work, are gathered in Table 1. As one may expect, the GGA approach yielded slightly overestimated volumes of the unit cells, which is a characteristic feature of this exchange-correlation functional. Similar results were published in previous studies of structural parameters of group III nitrides [20][21][22][28]. As presented in Figure 1, the dependencies of the hexagonal lattice parameter a on RE content in the materials reflect the generally bigger ionic radii of RE ions [31], which was also discussed in previous LDA-based studies for similar rock-salt systems [25]. One may notice an almost negligible lattice mismatch in In1−xScxN systems. The lattice parameters in the solid solutions considered here obey the linear Vegard's law for x up to about 0.4, whereas higher RE contents result in a rapid increase in a. This effect is particularly evident in the case of the smallest group III ion, i.e., in Al1−xRExN materials. It is also pronounced in Ga1−xYxN and In1−xYxN because of the relatively big ionic radius of yttrium. The introduction of RE ions in group III nitrides results in a tendency to form flattened hexagonal atomic layers, which was suggested in experimental studies for Ga1−xScxN systems [14]. This effect was supported by the findings of recent DFT-based investigations [23], i.e., the full structural relaxation due to the stress tensor and Hellmann-Feynman forces leads to a complete transition between the wurtzite and hexagonal BN-type structures in materials with RE contents larger than 0.5. The tendency to change the coordination number of ions in mixed nitrides is connected with the various electronic configurations of d-block (RE) and p-block (group III) elements. The GGA-derived rapid increase in a, presented in Figure 1, is a signature of systems close to the complete flattening of the hexagonal atomic layers, which is clearly seen in the c/a ratio plots in Figure 2.
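As a minimal illustration of the Vegard's law behaviour described above, the Python sketch below linearly interpolates the hexagonal lattice parameter a between hypothetical end-member values; the numbers are placeholders, not the GGA results of this work, and the real alloys deviate from this line for RE contents above roughly 0.4.

def vegard_a(x, a_host, a_re):
    """Linear (Vegard) estimate of the lattice parameter a of a
    (III)1-x(RE)xN alloy from the end-member values."""
    return (1.0 - x) * a_host + x * a_re

a_host, a_re = 3.19, 3.55  # placeholder end-member values in Angstrom
for x in (0.0, 0.125, 0.25, 0.375, 0.5):
    print(f"x = {x:5.3f} -> a ~ {vegard_a(x, a_host, a_re):.3f} Angstrom")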
One may further consider that the alloys with x values of less than 0.25 preserve c/a values that are very close to those characteristic of pure AlN, GaN, and InN compounds, whereas x greater than 0.25 results in more significant modifications to the wurtzite-type structures. Nevertheless, except for the Ga1−xRExN systems, the rock-salt ground state is expected to be energetically favorable in solid solutions of RE and group III nitrides for RE contents significantly smaller than 0.5 [23]. Because the available experimental data for high-quality samples were only reported for Sc-doped GaN and AlN [15,18], the issue of structural parameters of hexagonal alloys of REN and group III nitrides requires further experimental investigations. The band gaps of the parent AlN, GaN, and InN materials, calculated here within the MBJGGA approach, are gathered in Table 1. The Eg = 5.12 eV obtained here for AlN is lower than the previous MBJLDA results of full and pseudopotential calculations [27,32], which are also lower than the experimental data (6.12 eV [1]). A similar underestimation of Eg is revealed for GaN. A recent study of the electronic structures of group III nitrides reported that the band gaps from the MBJLDA calculations are noticeably smaller than the MBJGGA ones [28]. One may consider some empirical adjustments in the parametrization of the MBJ potential to improve the MBJGGA results for nitride materials [33]. However, such a task is difficult due to the relatively big set of parent compounds studied in this work. The value of Eg = 0.7 eV for InN is in excellent accordance with the experimental data [3,4]. The use of the original MBJ approach is generally desirable for a consistent discussion of the results obtained here and reported in the literature. The most interesting feature of semiconductor alloys is band gap engineering. As depicted in Figure 3a, the Al1−xYxN and Al1−xLuxN materials may exhibit Eg in a wide range with a noticeable bowing. The results presented here are consistent with the available experimental data for Sc- and Y-doped AlN [18,19], taking into account the abovementioned general underestimation of the MBJGGA-derived Eg. A comparable range of Eg is available in Al1−xGaxN alloys [5], whereas smaller band gaps were reported for Al1−xInxN alloys [5]. Therefore, RE-doped AlN semiconductors may be expected to be promising materials for applications in the ultraviolet range. The dependences of Eg on x in Ga1−xRExN alloys, as depicted in Figure 3b, are expected to be linear for x up to about 0.4, above which some effects connected with structural distortions in the hexagonal atomic layers of the materials are revealed. A relatively small change in Eg is expected in Y- and Lu-doped GaN systems. The only Ga-based materials that were experimentally studied are Ga1−xScxN alloys [13][14][15]. Although the increase in Eg with an increasing Sc content in Ga1−xScxN is surprising in view of the previous experimental reports [13,14], it has already been demonstrated for high-quality thin films of this material deposited on GaN and AlN buffer layers [15]. This was also explained in previous DFT-based studies [22], which employed MBJGGA and hybrid exchange-correlation calculations; namely, the hypothetical wurtzite ScN may exhibit Eg bigger than 4 eV, which is reflected in an increase of Eg in Ga1−xScxN systems.
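For reference, the band gap bowing mentioned above is conventionally parameterized by the quadratic form below; this is the standard textbook convention rather than an equation given by the author, and the bowing parameter b would have to be fitted to the calculated Eg(x) values:
E_g^{A_{1-x}B_xN}(x) = (1-x)\,E_g^{AN} + x\,E_g^{BN} - b\,x\,(1-x),
with b > 0 corresponding to a downward bowing of the band gap with respect to the linear interpolation between the parent compounds.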
As presented in Figure 3c, the introduction of RE ions leads to a strong linear increase of Eg in In1−xRExN systems when compared to that of the InN host material. Comparable band gaps are available in the well-known In1−xGaxN semiconductors [5]. Similarly to the abovementioned case of Ga1−xRExN, values of x bigger than about 0.3 may induce some structural changes, which affects the band gaps in In1−xRExN alloys. However, this effect is less pronounced in In-based systems due to the fact that the band structures of these semiconductors are dominated by the relatively narrow Eg of InN. It is worth recalling that an opposite phenomenon was predicted for the rock-salt REN materials doped with In, i.e., the band gap of such systems increases with increasing In content, as a result of the relatively wide band gap of rock-salt InN [25]. It should also be noted that the influence of various atomic configurations of alloys on the band gap of group III nitride materials is very strong [5]. The investigations of such effects are beyond the scope of this study. The results presented here were obtained with possibly homogeneous models of alloys because the clustering of RE ions is expected to cause a phase segregation in mixed RE and group III materials [17]. A careful analysis of the total and partial contributions to the density of states (DOS) in the vicinity of the VBM in RE-doped InN materials, presented in Figure ??, reveals some common features of semiconducting nitrides. Namely, the valence regions of these materials are mainly formed by the N 2p states and some minor contributions of the p and d states coming from group III and RE ions, respectively. The characteristic electronic structure of nitrides near the VBM is unaffected by the doping, which was also found for the materials in an opposite regime of compositions, i.e., the rock-salt alloys of Al/Ga/In-doped REN systems [25]. The unoccupied d-electron contributions coming from RE ions are located above the CBM region (not shown). The evolution of the band gap in the wurtzite alloys is connected with some chemical pressure related to the relatively big ionic radii of RE elements and the presence of d-type contributions to the total DOS in the vicinity of the CBM of a host material. Conclusions The structural properties of solid solutions of RE and group III nitrides predicted from first principles are rather complex. Similarly to the findings of some experimental studies for Sc-doped GaN, one may observe a flattening of the wurtzite atomic layers, which is directly connected with the presence of the RE ion in the material. The GGA-based results indicate rather small structural modifications for contents of RE lower than 0.25, whereas a very rapid change in the c/a ratio was found for RE contents close to one half. Because this effect is the most pronounced in Y-doped systems, the size of the RE ion may be regarded as an important factor for the abovementioned structural modification. A decreasing band gap as a function of x is expected in Al1−xRExN materials. A strong bowing of Eg was found for Al1−xYxN and Al1−xLuxN. Smaller reductions of Eg were obtained for Ga-based materials, except for Ga1−xScxN, in which Eg increased with an increasing Sc content. Doping with RE ions also seems to be a reasonable strategy for band gap widening in InN. The electronic structure of this family of materials is especially interesting due to the complete lack of any experimental reports on RE-doped InN systems.
The results presented in this work may encourage further experimental investigations of structural and electronic properties of novel nitride semiconductor alloys. Materials and Methods The DFT calculations were performed using the VASP package [34,35]. Projector augmented-wave (PAW [36]) atomic datasets with the Perdew-Burke-Ernzerhof parameterization (GGA [26]) of the exchange-correlation functional were employed. The solid solutions were modeled with 2 × 2 × 2 supercells, i.e., multiplications of the wurtzite primitive cell. Possibly homogeneous atomic configurations were selected. All structural properties, i.e., lattice parameters and atomic positions, were fully relaxed via stress/force optimization. A 500 eV plane-wave energy cutoff and a 6 × 6 × 6 k-point mesh were selected. The band structures and DOS plots were obtained in the fully relativistic mode within the MBJGGA approach [27]. Data Availability Statement: The data presented in this study are available on reasonable request from the corresponding author. Conflicts of Interest: The author declares no conflict of interest.
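As a hedged sketch only (the author does not list the actual input files), the Python snippet below writes minimal VASP INCAR and KPOINTS files reflecting the stated settings; the cutoff, the PBE functional and the 6 x 6 x 6 mesh come from the text, while the remaining tags and tolerances are common choices assumed for illustration.

# Minimal sketch of VASP inputs for the structural relaxation step.
incar_relax = """\
PREC   = Accurate
ENCUT  = 500        ! plane-wave cutoff in eV, as stated in the text
GGA    = PE         ! Perdew-Burke-Ernzerhof functional
IBRION = 2          ! ionic relaxation (conjugate gradient) - assumed
ISIF   = 3          ! relax positions, cell shape and volume
EDIFF  = 1E-6       ! assumed electronic convergence criterion
EDIFFG = -0.01      ! assumed force convergence criterion (eV/Angstrom)
"""
# A subsequent static run would switch to METAGGA = MBJ with LSORBIT = .TRUE.
# for the fully relativistic MBJGGA band structures described above.

kpoints = """\
6x6x6 Gamma-centered mesh
0
Gamma
6 6 6
"""

with open("INCAR", "w") as f:
    f.write(incar_relax)
with open("KPOINTS", "w") as f:
    f.write(kpoints)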
3,330.6
2021-07-23T00:00:00.000
[ "Materials Science" ]
Quinolizidines as Novel SARS-CoV-2 Entry Inhibitors COVID-19, caused by the highly transmissible severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2), has rapidly spread and become a pandemic since its outbreak in 2019. We have previously discovered that aloperine is a new privileged scaffold that can be modified to become a specific antiviral compound with markedly improved potency against different viruses, such as the influenza virus. In this study, we have identified a collection of aloperine derivatives that can inhibit the entry of SARS-CoV-2 into host cells. Compound 5 is the most potent tested aloperine derivative that inhibited the entry of SARS-CoV-2 (D614G variant) spike protein-pseudotyped virus with an IC50 of 0.5 µM. The compound was also active against several other SARS-CoV-2 variants including Delta and Omicron. Results of a confocal microscopy study suggest that compound 5 inhibited the viral entry before fusion to the cell or endosomal membrane. The results are consistent with the notion that aloperine is a privileged scaffold that can be used to develop potent anti-SARS-CoV-2 entry inhibitors. Introduction Since its outbreak in 2019, Coronavirus Disease 2019 (COVID-19) has rapidly spread and become a pandemic [1]. Based on current information from World Health Organization (WHO), there have been more than 500 million confirmed COVID-19 cases, with more than 6 million deaths globally as of June 2022. COVID-19 is caused by the highly transmissible severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) [2][3][4][5]. SARS-CoV-2 may cause severe illnesses to the respiratory system and, in some cases, other organs. Many potential clinical remedies were tested for their efficacy against SARS-CoV-2, including chloroquine, hydroxychloroquine, lopinavir plus Ritonavir (Kaletra), umifenovir (Arbidol), remdesivir (RE), and favipiravir [6]. However, most of the repurposing drugs did not present significant clinical improvement in hospitalized adult COVID-19 patients [7]. Recently, two orally effective drugs Paxlovid and Molnupiravir were approved by the US FDA for emergency use of COVID-19 treatment. Paxlovid has two drug components-one is Nirmatrelvir, a peptidomimetic inhibitor targeting the SARS coronavirus main protease (Mpro) to block viral polyprotein processing during viral replication, and the other is Ritonavir that inhibits the metabolic break-down of Nirmatrelvir to prolong efficacy [8,9]. Molnupiravir is a nucleoside analog, which interferes with viral RNA transcription by targeting viral RNA-dependent RNA polymerase (RdRp) [8]. However, due to the high mutation rate of SARS-CoV-2, new variants continue to emerge that may compromise the effectiveness of current vaccines and drugs. The D614G was the first to replace the original SARS-CoV-2 as globally dominant variant due to its increased receptor binding and better fitness [10]. The WHO has since named multiple variants of concern (VOC) including Alpha, Beta, Gamma, Delta, and the recently circulating Omicron variants. It was reported that Omicron variants contain more mutations than previous variants, especially in the receptor-binding domain of the viral spike protein, which could lead to resistance to neutralizing antibodies. It was reported that Omicron variants are much more infectious compared to previous prevalent SARS-CoV-2 subtypes such as Delta, while partially resistant to the vaccine-induced neutralizing antibodies [11][12][13]. 
Based on this information, it is highly possible that new variants resistant to current vaccines and treatments could emerge in the future. Thus, novel anti-SARS-CoV-2 agents that can inhibit a broad spectrum of SARS-CoV-2 variants are urgently needed. The life cycle of SARS-CoV-2 presents various opportunities for the development of novel anti-SARS-CoV-2 therapeutics. This study was focused on identifying novel small molecules that can effectively inhibit SARS-CoV-2 entry. The virus uses ACE2 as a receptor to enter cells through two routes: endocytosis and direct fusion with the cell membrane [14]. Many viruses, including SARS-CoV-2 and influenza viruses, use endocytosis to enter host cells [15,16]. The endosomal cathepsin B may be responsible for cleavage of the viral spike protein (S), which results in membrane fusion and release of the viral RNA into the cytoplasm. The second route of viral entry, through the cell membrane, requires cellular proteases, such as the transmembrane protease serine 2 (TMPRSS2), which cleaves the S protein into S1 and S2 subunits for membrane fusion [17,18]. Once it enters the host cell cytosol, the viral genomic RNA is directly translated by host ribosomes in the cytoplasm to complete viral replication. Each step of the viral entry process may be a potential target for therapeutic intervention. Our rationale for looking into the anti-SARS-CoV-2 activity of aloperine and its derivatives stems from our prior studies on the antiviral activities of this class of compounds. We have previously discovered that aloperine is a new privileged scaffold that can be modified to become a specific antiviral compound with markedly improved potency against different viruses such as the influenza virus or HIV-1 [19][20][21][22]. Due to their potential of having broad-spectrum antiviral activity, we tested aloperine and a series of previously reported aloperine derivatives against a SARS-CoV-2 pseudovirus containing the coronavirus spike protein of the D614G variant [23]. The pseudoviruses also contain a luciferase gene as a reporter, which can be used to efficiently screen small molecules or antibodies that can block the virus entry. Since SARS-CoV-2 has evolved into multiple variants with significant mutations in the spike proteins, it is important to test compounds against variants, especially the recently circulating Omicron variants. The significance of this study includes the identification of aloperine derivatives with much improved anti-SARS-CoV-2 entry activity through structural modifications and the demonstration of their ability to inhibit pseudotyped viruses carrying SARS-CoV-2 spike proteins from various variants, such as that from the currently circulating Omicron BA.4/BA.5. The results of this study described herein are expected to provide critical information towards further developing this class of natural products into effective therapeutics for the treatment of COVID-19. The aloperine derivatives 1-8 exhibited a range of activity against the D614G spike-pseudotyped virus, from inactive to sub-µM inhibition (Table 1). Compound 1 was moderately active against the D614G spike-pseudotyped virus infection with an IC50 of 4.7 µM. In contrast to compound 1, compound 3 potently inhibited HIV-1 at an IC50 of 0.12 µM without anti-influenza virus activity. Similar to compound 1, compound 3 was moderately active against the D614G spike-pseudotyped virus infection with an IC50 of 3.8 µM.
Compound 7 was previously found to be equally active against both the HIV-1 NL4-3 strain and the influenza A virus PR8, with IC50 values of 0.80 and 0.83 µM, respectively [22]. The potency of compound 7 against the D614G spike-pseudotyped virus was comparable to that of compounds 1 and 3, with an IC50 of 3.7 µM. These results suggest that the aloperine derivatives have distinct structure-activity relationships (SARs) compared with those of their anti-HIV-1 or anti-influenza virus activities. Thus, the anti-SARS-CoV-2 entry activity of aloperine derivatives cannot be predicted from their anti-HIV-1 or anti-influenza virus activity. Table 1 footnotes: the compounds and their anti-HIV and/or anti-influenza virus activities were previously described in part in references [19][20][21][22]. 7 cis denotes that the double bond of the quinolizidine scaffold was reduced and the compound is in the cis-conformation. 8 CC50/IC50 ratio < 5. 9 Not determined. The IC50 values against the D614G spike-pseudotyped virus are presented as the mean ± SD of three tests. ** denotes antiviral activity against the murine leukemia virus envelope (MLV-env) pseudotyped virus. Among the tested aloperine derivatives, compound 5 exhibited the most potent activity against the D614G spike-pseudotyped virus infection, with an IC50 of 0.50 µM (Table 1), approximately 22- and 5-fold more potent than aloperine and chloroquine, respectively. Compound 5 exhibited anti-HIV activity with an IC50 of 0.96 µM but was ineffective against the influenza A virus PR8. In contrast, compound 5 was inactive against a murine leukemia virus envelope (MLV-env) pseudotyped virus in which the SARS-CoV-2 spike protein was replaced with MLV-env. Compound 8 was inactive against both HIV-1 and influenza A virus and was only a weak inhibitor of the D614G spike-pseudotyped virus. The data also support the notion that the SAR of aloperine derivatives against SARS-CoV-2 is distinct from that of their anti-HIV or anti-influenza A virus activities. Compounds 4 and 5 were significantly more potent than other structurally similar aloperine analogs at inhibiting the D614G spike-pseudotyped virus infection. Both compounds exhibited sub-µM potency against the D614G spike-pseudotyped virus infection of 293T-ACE2 cells. Compounds 4 and 5 differ structurally from the less potent analogs in that they possess an amine instead of an amide moiety connecting the terminal aromatic group to the aliphatic linker. Thus, we further synthesized compound 9, which carries the same amine moiety and a fluorine at the para position of the aromatic ring, using a method similar to that used to obtain compound 5 [20]. The 1H and 13C NMR data of compound 9 are included in the Supplementary Materials (see Figures S1 and S2). Compound 9 exhibited anti-D614G spike-pseudotyped virus activity comparable to that of compound 5 (Table 1), suggesting that the amine moiety in the N12 side chain is favored for potent anti-SARS-CoV-2 activity.
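As a side note on how the potency comparisons above are derived, the short Python sketch below illustrates the arithmetic of fold-improvement and mean ± SD summaries from IC50 values. The replicate numbers and the chloroquine IC50 (inferred from the stated ~5-fold difference) are hypothetical and are not the authors' raw data.

# Minimal sketch: fold-potency and mean +/- SD summaries for IC50 data.
# All values are illustrative; they are not the authors' raw measurements.
from statistics import mean, stdev

def fold_improvement(reference_ic50_uM, test_ic50_uM):
    # Lower IC50 means higher potency, so fold-improvement = reference / test.
    return reference_ic50_uM / test_ic50_uM

# Hypothetical triplicate consistent with the reported 0.5 +/- 0.12 uM for compound 5.
cpd5_replicates = [0.43, 0.64, 0.43]
print(f"compound 5 IC50 = {mean(cpd5_replicates):.2f} +/- {stdev(cpd5_replicates):.2f} uM")

# Aloperine IC50 ~11.5 uM (reported); chloroquine ~2.5 uM is inferred from the ~5-fold statement.
print(f"vs aloperine: {fold_improvement(11.5, 0.5):.0f}-fold more potent")       # ~22-23x depending on rounding
print(f"vs chloroquine (assumed ~2.5 uM): {fold_improvement(2.5, 0.5):.0f}-fold more potent")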
Effect of Compound 5 on SARS-CoV-2 Variants The emergence of new SARS-CoV-2 variants such as Delta, Omicron BA.1, BA.2, and BA.4/BA.5 has caused new waves of infection around the world. To determine the effectiveness of the aloperine derivatives against various SARS-CoV-2 variants, we tested the ability of the compounds to block entry of D614G, Delta, Omicron BA.1, Omicron BA.2, and Omicron BA.4/BA.5 pseudotyped viruses with the same assay system described above. The results summarized in Table 2 indicated that the D614G and Delta pseudotyped viruses were approximately equally sensitive to compound 5, whereas all the tested Omicron variants were approximately 1.5-fold less sensitive to the compound. In contrast, the potency of the cathepsin inhibitor E64D increased against the Omicron spike-pseudotyped viruses. On the other hand, the Omicron variants were very resistant to the spike protein mAb DH1047 [24]. These results suggest that, despite a slight decrease in potency, the aloperine derivative 5 remains active against all tested variants, including the currently circulating Omicron variants. Mechanism of Action Study The D614G spike-pseudotyped virus is an indicator virus for SARS-CoV-2 spike protein-mediated cell entry [23]. Inhibition of the D614G spike-pseudotyped virus infection suggests that the aloperine derivatives blocked the virus from entering the cells. Previous reports suggested that SARS-CoV-2 may enter cells through direct fusion with the cell membrane or with the endosomal membrane after endocytosis [14]. To dissect the mechanism of the anti-SARS-CoV-2 entry activity of the aloperine derivatives, we infected 293T-ACE2 cells with the D614G spike-pseudotyped virus in the presence or absence of compound 5 or chloroquine diphosphate for 2 h. The SARS-CoV-2 spike protein was then detected with a fluorescence-conjugated anti-spike protein antibody and visualized under a confocal microscope using a protocol we have described previously [19]. As shown in Figure 1A, the spike protein with green fluorescence was barely visualized in the absence of compound 5, likely due to its low abundance and/or rapid degradation by cellular proteolytic activities after infection. In contrast, the virus was observed as puncta in 293T-ACE2 cells in the presence of compound 5 (Figure 1B), suggesting that the virus was arrested on the cell membrane or in endosomes in the presence of compound 5. The presence of the spike proteins in the 293T-ACE2 cells raised the possibility that compound 5 did not block the binding of the pseudotyped virus to the ACE2 receptor.
In addition, the chloroquine-treated sample showed no accumulation of viral puncta (Figure 1C), suggesting a difference in the mechanisms of action of chloroquine and compound 5. Chloroquine was reported to inhibit SARS-CoV-2 at various steps of the viral life cycle [25]. A class of aloperine derivatives was reported to have moderate anti-SARS-CoV-2 activity [26]. The highlighted compound 8a in that report exhibited an IC50 of 19 µM, while compound 5 described herein had an IC50 of 0.5 µM using a comparable pseudotyped virus assay. Compound 8a was implicated in blocking viral entry through inhibition of cathepsin B, even though there was no direct binding between 8a and cathepsin B [26]. Compound 8a was inactive against cathepsin L, an endosomal protease involved in SARS-CoV-2 entry through endocytosis. To test whether our compounds inhibit cathepsin B or cathepsin L, the enzyme inhibitory activity of compound 5 was determined using cathepsin B and cathepsin L inhibitor assay kits (BPS Bioscience). Compound 5 was completely inactive against cathepsin B and cathepsin L at concentrations as high as 20 µM, whereas the known cathepsin B inhibitor E64 inhibited the enzyme activity by more than 95% at a concentration as low as 0.1 µM (Figure 2). The result strongly suggests that neither cathepsin B nor cathepsin L is a direct target of compound 5. Thus, the molecular mechanism of action of the aloperine derivatives remains to be determined. Figure 2. Aloperine derivatives were inactive against cathepsin B and L. Compound 5 and aloperine were tested for their inhibitory activity against cathepsin B or L using a BPS Bioscience assay kit and the protocol provided by the manufacturer (catalog #79590). The enzyme activity in the absence of compounds (control) was defined as 100%.
Aloperine (Alop) and compound 5 (Cpd 5) were tested at 20 µM. The known cathepsin B inhibitor E64 was tested at 0.1 µM (E64-H) and 0.01 µM (E64-L), respectively. The data represent the average of duplicate experiments. Discussion Although there are two antiviral drugs currently available to treat COVID-19, there is no approved small molecule drug that targets SARS-CoV-2 entry. With the high mutation rate of the virus, variants that are resistant to the current antiviral drugs are likely to emerge in the future. Thus, more drug candidates are urgently needed for antiviral drug development for COVID-19 treatment, especially candidates targeting different steps of the viral replication cycle. Effective small molecule entry inhibitors may have the potential to become a useful addition to current therapy against SARS-CoV-2 infection. The SARS-CoV-2 spike-pseudotyped virus system has provided a convenient method to screen compound libraries for potential hits that block SARS-CoV-2 entry [23,[26][27][28][29]. Many of the positive hits showed only moderate potency. Aloperine is a natural product isolated from Sophora alopecuroides L. and other plant species [30,31]. It has been tested in cell and animal models for its potential therapeutic effects, such as regulation of inflammatory cytokines [32,33]. We have previously shown that aloperine exhibited moderate inhibitory activities against HIV and influenza A viruses [19][20][21][22]. In this study, we demonstrated that aloperine inhibited SARS-CoV-2 entry with a moderate IC50 of 11.5 µM. However, with a simple structural modification at the N12 position, the aloperine derivative compound 5 was transformed into a much more potent compound, with an approximately 22-fold improvement in potency against viral entry. It should be noted that all the tested Omicron variants, including the currently circulating BA.4/BA.5, were sensitive to compound 5. This result is consistent with the notion that compound 5 may not interfere with receptor binding of the spike proteins, as the receptor-binding domain of BA.4/BA.5 is significantly different from that of the D614G variant in primary amino acid sequence and receptor affinity [13]. A recent report showed that an aloperine derivative (8a) could block SARS-CoV-2 entry through inhibition of cathepsin B at an IC50 of 19.1 µM in a similar pseudotyped virus assay [26]. In contrast, compound 5 did not inhibit the activity of cathepsin B or cathepsin L, and was able to inhibit SARS-CoV-2 and its variants at sub-micromolar concentrations. The differences in potency and possible mechanisms of action between the two compounds are likely due to their respective N12 side chains. The N12 side chain (CH2)4NHCH2Ph of compound 5 (Table 1) is longer and possesses higher chemical complexity than that of compound 8a (p-ClPh(CH2)2). We have previously shown that aloperine may function as a privileged scaffold and that N12 side chain modifications result in derivatives with different biological activities, such as specific anti-HIV activity or anti-influenza A virus activity [19][20][21][22]. Therefore, it is possible that a minor change in the N12 side chain of aloperine could result in differences in mechanisms of action. Identification of the direct target(s) of the compounds would provide more definitive molecular detail on how the compounds inhibit SARS-CoV-2 entry.
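To make the normalization behind the cathepsin assay (Figure 2) explicit, the short Python sketch below shows how enzyme activity can be expressed relative to the no-compound control, which is defined as 100%. The readout values are hypothetical placeholders and are not the data underlying the figure.

# Minimal sketch: cathepsin B/L readouts normalized to the untreated control (100%).
# Fluorescence values are hypothetical placeholders, not the authors' data.
def percent_activity(signal, control, blank=0.0):
    # Enzyme activity relative to the no-compound control.
    return 100.0 * (signal - blank) / (control - blank)

control_rfu = 12000.0                      # no-compound control
readings = {
    "compound 5, 20 uM": 11800.0,          # essentially no inhibition
    "aloperine, 20 uM": 11900.0,
    "E64, 0.1 uM": 450.0,                  # >95% inhibition, as seen for the positive control
}
for name, rfu in readings.items():
    activity = percent_activity(rfu, control_rfu)
    print(f"{name}: {activity:.0f}% activity ({100 - activity:.0f}% inhibition)")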
SARS-CoV-2 Pseudovirus Inhibition Assay The anti-SARS-CoV-2 activity of the aloperine derivatives was assessed with the D614G or other above-mentioned variant spike-pseudotyped viruses in 293T-ACE2 cells as a function of the reduction in luciferase (Luc) reporter activity, as previously described by Weissman et al. [23]. The assay system was kindly provided by Dr. David Montefiori at Duke University. Briefly, pseudovirions were produced by FuGENE® 6 (Promega, Madison, WI, USA, Cat. # E2691) transfection of HEK293T cells with a plasmid mixture containing a spike protein plasmid (VRC7480.D614G), a lentiviral backbone plasmid (pCMV ∆8.2), and a firefly Luc reporter gene plasmid (pHR' CMV Luc, which produces luciferase upon successful viral infection) in a 1:17:17 ratio. Pseudovirus in culture medium was collected after an additional 2 days of incubation. The murine leukemia virus envelope protein (MLV-Env) pseudotyped virus was produced in the same assay system except that the spike protein plasmid (VRC7480.D614G) was replaced with pSV-A-MLV-env (NIH AIDS Reagent Program, ARP1065). For virus entry inhibition, the D614G spike-pseudotyped virus and 293T-ACE2 cells were treated with various concentrations of compounds for 3 days. Luminescence was then measured by adding the Promega Bright-Glo luciferase reagent and using a PerkinElmer Victor 2 luminometer. The IC50 was calculated as the compound concentration at which relative luminescence units (RLU) were reduced by 50% compared with virus control wells after subtraction of background RLUs. Immunofluorescence Staining of the SARS-CoV-2 Spike Proteins and Confocal Microscopy 293T-ACE2 cells cultured in 96-well glass-bottom plates were treated with compound 5 (5 µM) and infected with the D614G spike-pseudotyped virus for 2 h. The cells were fixed with 4% formaldehyde in PBS for 15 min. The cells were then treated with a blocking buffer containing 5% FBS and 0.3% Triton X-100 in PBS for 60 min. Immunostaining was carried out by incubating an Alexa Fluor 488-conjugated anti-SARS-CoV-2 spike protein antibody (Thermo Fisher Scientific, Waltham, MA, USA, Cat. # 53-6490-82) with the cells at 4 °C overnight. The samples were then washed three times in PBS before being treated with Prolong® Gold Anti-Fade Reagent with DAPI (Cell Signaling Technology, Danvers, MA, USA, Cat. # 4083S). Confocal images were acquired using a Nikon TE2000-U laser-scanning confocal microscope (Nikon, Tokyo, Japan). Confocal image analysis was performed with NIS-Elements AR 3.0 software (Nikon). Cytotoxicity Assay The CellTiter-Glo® Luminescent Cell Viability Assay (Promega, Cat. # G7570) was used to determine the cytotoxicity of the aloperine derivatives. 293T-ACE2 cells were cultured in the presence of various concentrations of the compounds for 3 days. The cytotoxicity of the compounds was determined by following the protocol provided by the manufacturer. The 50% cytotoxic concentration (CC50) was defined as the concentration that caused a 50% reduction in cell viability.
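As an illustration of the IC50 readout described above, the following Python sketch converts raw luminescence into percent inhibition relative to the virus control after background subtraction and interpolates the 50% crossing on a log-concentration scale. The RLU values are hypothetical, and the published IC50s may have been obtained with a different curve-fitting procedure.

# Minimal sketch: estimating an IC50 from pseudovirus luminescence (RLU) readings.
# All numbers are hypothetical placeholders, not the authors' data.
import numpy as np

background_rlu = 500.0
virus_control_rlu = 90500.0                      # virus + cells, no compound

concentrations_uM = np.array([0.0625, 0.25, 1.0, 4.0, 16.0])
sample_rlu = np.array([82000.0, 62000.0, 28000.0, 9000.0, 2000.0])

# Percent inhibition relative to the virus control after background subtraction.
inhibition = 100.0 * (1.0 - (sample_rlu - background_rlu)
                      / (virus_control_rlu - background_rlu))

# Log-linear interpolation of the concentration giving 50% inhibition.
log_conc = np.log10(concentrations_uM)
ic50_uM = 10 ** np.interp(50.0, inhibition, log_conc)
print(f"estimated IC50 ~ {ic50_uM:.2f} uM")      # ~0.5 uM for these illustrative values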
Conclusions In summary, the results of this study are consistent with the notion that aloperine is a privileged scaffold that can be structurally optimized to have selective antiviral activity. Spike protein-pseudotyped viruses of major SARS-CoV-2 subtypes, such as D614G, Delta, Omicron BA.1, BA.2, and BA.4/BA.5, were included to evaluate the spectrum and potency of compound 5. Our results indicated that compound 5 could inhibit all the tested pseudotyped viruses at sub-µM concentrations. We proposed a model suggesting that compound 5 may inhibit SARS-CoV-2 infection through inhibition of viral entry at the membrane fusion and/or endocytosis pathways after the virus binds to its receptor (Figure 3). We speculate that the viral particles were arrested at a stage before fusion with the cellular and/or endosomal membranes, based on the fluorescent puncta observed in compound 5-treated 293T-ACE2 cells under confocal microscopy (Figure 1). However, the molecular details of how compound 5 arrests viral entry remain to be determined. Nevertheless, the submicromolar anti-SARS-CoV-2 entry activity of compound 5 against all tested variants offers a promising lead for further developing aloperine derivatives as anti-COVID-19 drug candidates.
10,591
2022-08-25T00:00:00.000
[ "Biology", "Chemistry" ]
Neospora caninum infection induced mitochondrial dysfunction in caprine endometrial epithelial cells via downregulating SIRT1 Background Infection of Neospora caninum, an important obligate intracellular protozoan parasite, causes reproductive dysfunctions (e.g. abortions) in ruminants (e.g. cattle, sheep and goats), leading to serious economic losses of livestock worldwide, but the pathogenic mechanisms of N. caninum are poorly understood. Mitochondrial dysfunction has been reported to be closely associated with the pathogenesis of many infectious diseases. However, the effect of N. caninum infection on the mitochondrial function of hosts remains unclear. Methods The effects of N. caninum infection on mitochondrial dysfunction in caprine endometrial epithelial cells (EECs), including intracellular reactive oxygen species (ROS), mitochondrial membrane potential (MMP), adenosine triphosphate (ATP) contents, mitochondrial DNA (mtDNA) copy numbers and ultrastructure of mitochondria, were studied by using JC-1, DCFH-DA, ATP assay kits, quantitative real-time polymerase chain reaction (RT-qPCR) and transmission electron microscopy, respectively, and the regulatory roles of sirtuin 1 (SIRT1) on mitochondrial dysfunction, autophagy and N. caninum propagation in caprine EECs were investigated by using two drugs, namely resveratrol (an activator of SIRT1) and Ex 527 (an inhibitor of SIRT1). Results The current study found that N. caninum infection induced mitochondrial dysfunction of caprine EECs, including accumulation of intracellular ROS, significant reductions of MMP, ATP contents and mtDNA copy numbers, and damaged ultrastructure of mitochondria. Downregulated expression of SIRT1 was also detected in caprine EECs infected with N. caninum. Treatment of caprine EECs with resveratrol and Ex 527 showed that dysregulation of SIRT1 significantly reversed the mitochondrial dysfunction of cells caused by N. caninum infection. Furthermore, using resveratrol and Ex 527, SIRT1 expression was found to be negatively associated with autophagy induced by N. caninum infection in caprine EECs, and the intracellular propagation of N. caninum tachyzoites in caprine EECs was negatively affected by SIRT1 expression. Conclusions These results indicated that N. caninum infection induced mitochondrial dysfunction by downregulating SIRT1, and downregulation of SIRT1 promoted cell autophagy and intracellular proliferation of N. caninum tachyzoites in caprine EECs. The findings suggested a potential role of SIRT1 as a target to develop control strategies against N. caninum infection. Keywords: Neospora caninum, Mitochondrial dysfunction, Caprine endometrial epithelial cells, Sirtuin 1, Autophagy, Propagation of parasite Graphical abstract Supplementary Information The online version contains supplementary material available at 10.1186/s13071-022-05406-4. Neosporosis, caused by the obligate intracellular protozoan parasite Neospora caninum, is one of the main causes of abortion or stillbirths in pregnant cattle [21]. Neospora caninum infection has been reported to be responsible for approximately 12-42% of aborted fetuses in dairy cows and to cause abortion in cattle, with median losses estimated at > US$ 1.298 billion per annum and the highest estimate at US$ 2.380 billion [22,23]. However, due to the poor understanding of the pathogenic mechanisms of N. caninum, no effective drugs or vaccines are currently available against this disease. Neospora caninum infection has been reported to induce accumulation of ROS and reduction of ATP levels, resulting in oxidative damage in cows and gerbils [24,25].
Transcriptome analysis of cerebrovascular endothelial cells found that N. caninum infection induced increased expression of 21 mitochondrial genes that contribute to the functions of Complex I, II, III, IV and V [26]. RNA-seq analysis of bovine trophoblast cells showed that N. caninum infection altered the expression of several oxidoreductases (e.g. SOD2) [27]. However, few studies have examined mitochondrial dysfunction during N. caninum infection. Sirtuin 1 (SIRT1), an NAD+-dependent histone deacetylase, has been reported to be an important regulator of metabolic control and mitochondrial biogenesis in a wide range of physiological processes and diseases (e.g. diabetes mellitus, aging and inflammatory diseases) and has also been identified as a potentially promising therapeutic target to treat autoimmune diseases and reproductive failures [28][29][30][31][32]. Decreased expression of SIRT1 was found to be associated with mitochondrial dysfunction by increasing ROS and DNA damage in both male and female gametes [33]. A protective role of SIRT1 has also been observed in several infectious diseases, including infections with viruses (e.g. respiratory syncytial virus and dengue virus), bacteria (e.g. Pseudomonas aeruginosa and Helicobacter pylori) and protozoan parasites (e.g. Toxoplasma gondii, Trypanosoma cruzi and Cryptosporidium parvum) [34][35][36][37][38][39][40]. In the current study, the mitochondrial dysfunction and its mechanisms associated with SIRT1 during N. caninum infection were investigated by using caprine endometrial epithelial cells (EECs) [41]. Determination of MMP The MMP of intracellular mitochondria was monitored by using the mitochondrial membrane potential assay kit with JC-1 (Beyotime Biotechnology, Shanghai, China) according to the manufacturer's instructions. Briefly, caprine EECs were seeded in 12-well cell culture plates (Shanghai Sangon Biotech, Shanghai, China) and infected with N. caninum tachyzoites at an MOI of 3:1 (parasite:cell) for 48 h. Then, cells were washed with PBS and incubated with JC-1 (1×) in the dark for 30 min at 37 °C. After washing with the JC-1 washing buffer (Beyotime Biotechnology, Shanghai, China), cells were observed under an inverted fluorescence microscope (Leica Microsystems, Wetzlar, Germany) to detect green fluorescence (excitation/emission wavelengths = 490/530 nm) and red fluorescence (excitation/emission wavelengths = 525/590 nm). The relative MMP was expressed as the ratio of red/green fluorescence intensities. Determination of ROS Intracellular ROS production was measured by using 2′,7′-dichlorofluorescin diacetate (DCFH-DA) (Abmole, Shanghai, China). Caprine EECs were seeded in 12-well cell culture plates (Shanghai Sangon Biotech, Shanghai, China) and infected with N. caninum tachyzoites at an MOI of 3:1 (parasite:cell) for 48 h. The cells were washed with PBS and incubated with DCFH-DA (10 μM) in the dark for 20 min at 37 °C. After washing the cells with serum-free DMEM/F12 medium for 5 min, the fluorescence intensity was detected under an inverted fluorescence microscope (Leica Microsystems, Wetzlar, Germany). The intensity of green fluorescence (excitation/emission wavelengths of 488/530 nm) represented the intracellular ROS levels.
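The JC-1 and DCFH-DA readouts described above reduce to simple ratio and fold-change calculations. The Python sketch below (illustrative intensity values only, not the authors' measurements) shows one way the relative MMP (red/green ratio) and the relative ROS level could be computed from mean fluorescence intensities.

# Minimal sketch: relative MMP (JC-1 red/green ratio) and relative ROS (DCFH-DA green
# fluorescence), each normalized to the uninfected control. Values are hypothetical.
def relative_mmp(red_intensity, green_intensity):
    # JC-1 aggregate (red) / monomer (green) ratio; a lower ratio indicates MMP loss.
    return red_intensity / green_intensity

control = {"jc1_red": 1500.0, "jc1_green": 600.0, "dcf_green": 200.0}
infected = {"jc1_red": 800.0, "jc1_green": 900.0, "dcf_green": 520.0}

mmp_ratio = relative_mmp(infected["jc1_red"], infected["jc1_green"]) / \
            relative_mmp(control["jc1_red"], control["jc1_green"])
ros_fold = infected["dcf_green"] / control["dcf_green"]

print(f"relative MMP (infected/control): {mmp_ratio:.2f}")    # < 1 -> depolarization
print(f"ROS fold change (infected/control): {ros_fold:.1f}")  # > 1 -> ROS accumulation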
Measurement of ATP levels Intracellular ATP levels were measured by using an ATP Assay Kit (Beyotime Biotechnology, Shanghai, China) according to the manufacturer's instructions. Briefly, caprine EECs were seeded in six-well cell culture plates (Shanghai Sangon Biotech, Shanghai, China) and infected with N. caninum tachyzoites at an MOI of 3:1 (parasite:cell) for 48 h. The cells were lysed by using an ATP assay lysis solution, and the cell lysate was then incubated with an ATP assay working solution at room temperature for 2 min. The ATP content of the cells was measured by using a multifunctional fluorometric microplate reader (Tecan Austria GmbH, Austria). The ATP standard curve was prepared from known amounts (0.01, 0.03, 0.1, 0.3, 1, 3, 10 and 30 μmol) of ATP. Results were expressed as arbitrary units of luminescence. Reverse transcriptase quantitative polymerase chain reaction (RT-qPCR) Mitochondrial DNA (mtDNA) copy numbers were measured by RT-qPCR using genomic DNA (gDNA) templates extracted with a Blood/Cell/Tissue DNA Extraction Kit (Tiangen, Beijing, China) according to the manufacturer's instructions. To determine the mRNA level of the sirt1 gene during N. caninum infection, caprine EECs were seeded in 12-well cell culture plates (Shanghai Sangon Biotech, Shanghai, China) and infected with N. caninum tachyzoites at an MOI of 3:1 (parasite:cell) for 48 h. The cells were collected for RNA extraction with Trizol reagent (Accurate Biotechnology Co., Ltd., Hunan, China). RNA samples were reverse transcribed to cDNA by using an Evo M-MLV RT Kit with gDNA Clean for RT-qPCR (Accurate Biotechnology Co., Ltd., Hunan, China). RT-qPCR reactions were performed by using 2 × Universal SYBR Green Fast RT-qPCR Mix (ABclonal, Wuhan, China) with the specific primers listed in Additional file 1: Table S1. The 18S rRNA gene was used to normalize the expression level of mtDNA (nd1), and the glyceraldehyde phosphate dehydrogenase (gapdh) gene was used to normalize the expression level of the sirt1 gene. The relative expression of target genes was calculated by using the 2^(−ΔΔCt) method [42]. Analysis of transmission electron microscopy (TEM) Caprine EECs were seeded in six-well cell culture plates (Shanghai Sangon Biotech, Shanghai, China) and infected with N. caninum tachyzoites at an MOI of 3:1 (parasite:cell) for 48 h. The cells (about 10^7 cells per sample) were washed with PBS, digested with trypsin and collected by centrifugation at 1000 rpm for 5 min. Cell pellets were incubated with 2.5% glutaraldehyde overnight at 4 °C and postfixed with 1% osmium tetroxide for 2-3 h. Fixed cells were dehydrated with increasing concentrations of ethanol, infiltrated with resin and embedded. Ultrathin sections were obtained by using an ultramicrotome (Leica Microsystems, Wetzlar, Germany), double stained with 4% uranyl acetate and lead citrate and analyzed by using a transmission electron microscope (Hitachi Ltd., Tokyo, Japan). Drug treatment and propagation of N. caninum To investigate the effect of SIRT1 on the propagation of N. caninum, caprine EECs were incubated with the SIRT1 activator resveratrol (RSV) (Beijing Solarbio Science & Technology Co., Ltd., Beijing, China) or the SIRT1 inhibitor Ex 527 (Beyotime Biotechnology, Shanghai, China) for 1 h and then infected with N. caninum tachyzoites at an MOI of 3:1 (parasite:cell) for 48 h. The numbers of N.
caninum tachyzoites per parasitophorous vacuole were calculated by using inverted optical microscopy (Olympus Co., Tokyo, Japan), and a total of 100 vacuoles were counted. In addition, the cytotoxicities of RSV and Ex 527 were analyzed using a cell counting kit (CCK-8; Zeta Life, California, USA), and both 50 μM RSV and 20 μM Ex 527 showed no significant cytotoxicity toward caprine EECs (Additional file 2: Fig. S1). Statistical analysis Data were reported as means ± standard deviation (SD) of at least three independent experiments, and differences between experimental groups were analyzed using GraphPad Prism 6.07 (GraphPad Software Inc., San Diego, CA, USA). P values were computed using a two-tailed parametric t test. A P value < 0.05 (*P < 0.05; **P < 0.01; ***P < 0.001) compared with the appropriate control group was considered statistically significant. Occurrence of mitochondrial dysfunction in caprine EECs induced by N. caninum infection To evaluate mitochondrial function during N. caninum infection, the ROS levels, MMP, ATP levels and mtDNA copy numbers were determined in caprine EECs infected with N. caninum for 48 h (Fig. 1). Compared to the control group without infection, N. caninum infection induced a significant increase of ROS production in caprine EECs (Fig. 1a, b). The ratios of red/green fluorescence intensities were significantly decreased in infected caprine EECs, indicating a reduction of the relative MMP induced by N. caninum infection (Fig. 1c, d). Significant reductions were also detected for ATP levels (Fig. 1e) and mtDNA copy numbers (Fig. 1f) in infected caprine EECs. Furthermore, mitochondrial ultrastructural changes, e.g. cristae fracture, mitochondrial deformation, swelling, vacuolization and mitochondrial autophagy, were observed by TEM (Fig. 1g). These results showed the occurrence of mitochondrial dysfunction in caprine EECs induced by N. caninum infection. Effect of SIRT1 on mitochondrial dysfunction in caprine EECs induced by N. caninum infection SIRT1 has been reported as an important regulator of mitochondrial biogenesis and turnover [43]. The expression of SIRT1 was investigated in caprine EECs infected with N. caninum for 48 h. Both mRNA (Fig. 2a) and protein (Fig. 2b, c) levels of SIRT1 were found to be significantly decreased in infected caprine EECs. To determine the role of SIRT1 in the mitochondrial dysfunction induced by N. caninum infection, two drugs, namely RSV (a SIRT1 activator) and Ex 527 (a SIRT1 inhibitor), were used to treat caprine EECs for 1 h before infection. After infection with N. caninum for 48 h, 50 μM RSV significantly increased the protein level of SIRT1 in infected cells (Fig. 2b, c), while the opposite result was found by using 20 μM Ex 527 (Fig. 2d, e). Interestingly, treatment with 50 μM RSV significantly reversed the effect of N. caninum infection on ROS levels (Fig. 3a, b), MMP (Fig. 3c, d), ATP levels (Fig. 3e) and mtDNA copy numbers (Fig. 3f) in caprine EECs, while application of 20 μM Ex 527 remarkably aggravated the reductions in MMP (Fig. 3c, d) and ATP levels (Fig. 3e) in caprine EECs. These results indicated that N. caninum infection induced mitochondrial dysfunction by downregulating SIRT1 in caprine EECs.
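As a worked illustration of the 2^(−ΔΔCt) calculation and the two-tailed t test described in the Methods, the Python sketch below computes relative sirt1 expression (normalized to gapdh) for control and infected triplicates. The Ct values are invented for illustration and are not the study's data.

# Minimal sketch: 2^(-ddCt) relative expression of sirt1 (gapdh-normalized) plus a
# two-tailed t test. Ct values are hypothetical triplicates, not the authors' data.
import numpy as np
from scipy import stats

control_ct = [(22.1, 17.0), (22.3, 17.2), (21.9, 16.9)]    # (sirt1 Ct, gapdh Ct), uninfected
infected_ct = [(23.8, 17.1), (24.1, 17.3), (23.6, 17.0)]   # N. caninum-infected

def delta_ct(pairs):
    return np.array([target - reference for target, reference in pairs])

d_control = delta_ct(control_ct)
d_infected = delta_ct(infected_ct)

# ddCt is taken relative to the mean dCt of the control group; expression = 2^(-ddCt).
calibrator = d_control.mean()
rel_control = 2.0 ** -(d_control - calibrator)
rel_infected = 2.0 ** -(d_infected - calibrator)

t_stat, p_value = stats.ttest_ind(rel_control, rel_infected)   # two-tailed by default
print(f"sirt1 relative expression (infected): "
      f"{rel_infected.mean():.2f} +/- {rel_infected.std(ddof=1):.2f} of control")
print(f"two-tailed t test: t = {t_stat:.2f}, P = {p_value:.4f}")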
Effect of SIRT1 on autophagy in caprine EECs induced by N. caninum infection Autophagy in caprine EECs induced by N. caninum infection was previously found by our group [41], and previous studies showed that SIRT1 was related to cell autophagy during inflammation and pathogenic infections [36,44]. To test the effect of SIRT1 on autophagy induced by N. caninum infection, caprine EECs were treated with 50 μM RSV or 20 μM Ex 527 for 1 h and then infected with N. caninum tachyzoites for 48 h; the protein levels of LC3-II (an autophagy marker) and p62 (a marker used to monitor changes in autophagic flux) were determined. RSV treatment significantly reversed the increased expression of LC3-II and the reduction of p62 caused by N. caninum infection (Fig. 4a, b), while Ex 527 treatment significantly increased the expression of LC3-II induced by N. caninum infection, although it had no significant effect on the N. caninum-induced reduction of p62 (Fig. 4c, d). These results suggested that N. caninum infection induced autophagy by downregulating SIRT1. Effect of SIRT1 on propagation of N. caninum in caprine EECs Autophagy induced by N. caninum infection has been reported by our group to promote intracellular propagation of tachyzoites in caprine EECs [41], and downregulation of SIRT1 by N. caninum infection advanced cell autophagy (see above). To test whether SIRT1 had a negative effect on the propagation of N. caninum tachyzoites, caprine EECs were treated with 10-50 μM RSV or 5-20 μM Ex 527 for 1 h and then infected with N. caninum tachyzoites for 48 h. The average numbers of tachyzoites per parasitophorous vacuole were calculated by counting 100 vacuoles. Both RSV and Ex 527 affected the replication of N. caninum tachyzoites in a dose-dependent manner in caprine EECs. Three dosages (10, 25 and 50 μM) of RSV significantly suppressed propagation of N. caninum tachyzoites in caprine EECs (Fig. 5a, b), while 10 and 20 μM Ex 527 significantly promoted replication of tachyzoites in vitro (Fig. 5c, d). These results indicated that downregulation of SIRT1 was beneficial to the propagation of N. caninum tachyzoites in caprine EECs. Discussion Mitochondrial dysfunction has been found to be associated with many pathogenic diseases, leading to adverse outcomes, e.g. progressive cognitive decline and abortion [36,45,46]. Abortion has been reported as the main cause of the economic losses caused by neosporosis in intermediate hosts, especially in cattle and goats [47]. Substantial evidence has suggested that mitochondrial damage is one of the main factors responsible for reproductive dysfunction [48]. Increased metabolism and energy demand during pregnancy result in increased placental mitochondrial activity and ROS generation [49]. Abnormal stimuli or external pathogenic infections also cause accumulation of mtROS and irreversible damage to mitochondria and cells (e.g. trophoblast apoptosis), finally leading to reproductive disorders [48,50]. For example, oxidative damage-induced mitochondrial dysfunction caused by T. gondii contributes to trophoblast apoptosis [51]. In the current study, accumulation of ROS was found in caprine EECs infected with intracellular N. caninum tachyzoites, consistent with in vivo findings in cows and gerbils [25,52]. Notably, significant decreases of ATP contents and mtDNA copy numbers and severe disruption of mitochondrion morphology were observed in N.
N. caninum-infected caprine EECs, suggesting that mitochondrial dysfunction of endothelial cells in the uterus may be associated with the pathogenesis of N. caninum infection. SIRT1 has been identified as heavily implicated in health span and longevity by controlling mitochondrial biogenesis and metabolic processes [53, 54]. Downregulation or deficiency (SIRT1−/−) of SIRT1 elevates ROS production and causes ROS-induced damage to mitochondrial function, enhancing the pathogenesis of disease. For example, activation of SIRT1 using SRT1720 (an activator of SIRT1) attenuated H2O2-induced mitochondrial dysfunction in intestinal epithelial cells by decreasing ROS accumulation, thereby maintaining cell homeostasis [55]. SIRT1−/− bone marrow dendritic cells showed further decreases in MMP, ATP levels and generation of ROS during respiratory syncytial virus infection, leading to inappropriate metabolic processes and enhancement of the pathogenic response [35]. In the current study, the expression level of SIRT1 was decreased by N. caninum infection in caprine EECs. RSV treatment increased the expression of SIRT1 and reversed the mitochondrial dysfunction induced by infection with N. caninum tachyzoites, while Ex 527 further decreased SIRT1 expression and aggravated mitochondrial damage. SIRT1 functions as both a metabolic sensor and a transcriptional regulator with broad cellular functions (metabolic homeostasis, stress response, tumorigenesis and autophagy) [56-59]. Of these, cell autophagy, a dynamic recycling system, has been reported to be one of the common consequences of mitochondrial dysfunction [60]. The interplay between cell autophagy and SIRT1 has been widely studied. For example, activation of SIRT1 using SRT1720 inhibited intracellular survival and colonization of H. pylori in gastric cells by activating autophagic flux [39]. Autophagy has been found to be induced in caprine EECs infected with N. caninum through downregulation of mTOR, and it contributed to N. caninum propagation [41]. Rapamycin (an autophagy inducer) treatment increased parasite loads and reduced survival rates of N. caninum-infected mice [61]. Moreover, N. caninum infection induced mitophagy in a ROS-dependent manner to promote parasite propagation in mice and inhibited inflammatory cytokine production to achieve immune evasion [62]. This evidence suggests that autophagy/mitophagy contributes to N. caninum replication in vitro and in vivo. In the current study, activation of SIRT1 by RSV inhibited the autophagy induced by N. caninum infection and further inhibited propagation of N. caninum in vitro. On the other hand, Ex 527 treatment further increased LC3-II protein expression due to N. caninum infection and promoted N. caninum replication in caprine EECs. These results indicated that N. caninum infection downregulated SIRT1 expression to promote autophagy and thereby affected the propagation of N. caninum in caprine EECs. Of note, results obtained in studies using in vitro models are not always in accordance with results obtained in in vivo models.
Considering that the pathophysiology of abortion is complex, more studies are needed to evaluate SIRT1 as a potential target for the treatment of N. caninum infection. In addition, previous work has described differences in susceptibility among ruminant species. It would therefore be interesting in a future study to evaluate mitochondrial function in a model of bovine epithelial cells infected with N. caninum, since N. caninum provokes a more severe reproductive effect in cattle. More studies using in vivo or ex vivo models are also needed to deepen the understanding of the role of mitochondrial damage in the pathophysiology of abortion. Conclusions To our knowledge, the effect of N. caninum infection on mitochondrial function in caprine EECs was investigated here for the first time. Neospora caninum infection downregulated SIRT1 expression to induce mitochondrial dysfunction, and downregulation of SIRT1 further promoted cell autophagy and intracellular replication of N. caninum tachyzoites in caprine EECs. These findings suggest a potential role of SIRT1 as a target for developing control strategies against N. caninum infection.
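As a concrete illustration of the quantitative comparison used above (mean tachyzoite numbers per parasitophorous vacuole over 100 counted vacuoles, compared between drug-treated and control cultures with a two-tailed t-test and the *, **, *** significance convention), the following Python sketch reproduces the arithmetic. The vacuole counts, Poisson means and variable names are hypothetical placeholders, not data from the study.

```python
# Minimal sketch of the per-vacuole comparison described in the Methods:
# mean tachyzoites per parasitophorous vacuole (100 vacuoles counted per
# condition), compared between control and drug-treated cultures with a
# two-tailed t-test. All counts below are hypothetical, for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.poisson(lam=6.0, size=100)      # hypothetical control counts
rsv_50uM = rng.poisson(lam=4.0, size=100)     # hypothetical 50 uM RSV counts

def summarize(name, counts):
    print(f"{name}: mean = {counts.mean():.2f}, SD = {counts.std(ddof=1):.2f}")

summarize("control", control)
summarize("50 uM RSV", rsv_50uM)

# Two-tailed t-test, as in the paper's statistical analysis section.
t_stat, p_value = stats.ttest_ind(control, rsv_50uM)
stars = "***" if p_value < 0.001 else "**" if p_value < 0.01 else "*" if p_value < 0.05 else "NS"
print(f"t = {t_stat:.2f}, P = {p_value:.4g} ({stars})")
```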
Isolation, Characterization, and Functional Expression of cDNAs Encoding NADH-dependent Methylenetetrahydrofolate Reductase from Higher Plants* Methylenetetrahydrofolate reductase (MTHFR) is the least understood enzyme of folate-mediated one-carbon metabolism in plants. Genomics-based approaches were used to identify one maize and two Arabidopsis cDNAs specifying proteins homologous to MTHFRs from other organisms. These cDNAs encode functional MTHFRs, as evidenced by their ability to complement a yeast met12 met13 mutant, and by the presence of MTHFR activity in extracts of complemented yeast cells. Deduced sequence analysis shows that the plant MTHFR polypeptides are of similar size (66 kDa) and domain structure to other eukaryotic MTHFRs, and lack obvious targeting sequences. Southern analyses and genomic evidence indicate thatArabidopsis has two MTHFR genes and that maize has at least two. A carboxyl-terminal polyhistidine tag was added to oneArabidopsis MTHFR, and used to purify the enzyme 640-fold to apparent homogeneity. Size exclusion chromatography and denaturing gel electrophoresis of the recombinant enzyme indicate that it exists as a dimer of ≈66-kDa subunits. Unlike mammalian MTHFR, the plant enzymes strongly prefer NADH to NADPH, and are not inhibited byS-adenosylmethionine. An NADH-dependent MTHFR reaction could be reversible in plant cytosol, where the NADH/NAD ratio is 10−3. Consistent with this, leaf tissues metabolized [methyl-14C]methyltetrahydrofolate to serine, sugars, and starch. A reversible MTHFR reaction would obviate the need for inhibition by S-adenosylmethionine to prevent excessive conversion of methylene- to methyltetrahydrofolate. Methylenetetrahydrofolate reductase (MTHFR) 1 catalyzes the reduction of 5,10-methylenetetrahydrofolate (CH 2 -THF) to 5-methyltetrahydrofolate (CH 3 -THF), which then serves as a methyl donor for methionine synthesis from homocysteine. The MTHFR proteins and genes of Escherichia coli and mammalian liver have been characterized (1)(2)(3)(4), and MTHFR genes have been identified in Saccharomyces cerevisiae (5) and other organisms. The MTHFR of E. coli (MetF) is a homotetramer of 33-kDa subunits that prefers NADH as reductant (1), whereas mammalian MTHFRs are homodimers of 77-kDa subunits that prefer NADPH and are allosterically inhibited by S-adenosylmethionine (AdoMet) (2,3). Two domains have been identified in mammalian MTHFR polypeptides. The NH 2 -terminal catalytic domain (about 40 kDa) shows 30% sequence identity to E. coli MetF and, like MetF, contains FAD as a noncovalently bound prosthetic group (2). The COOH-terminal domain contains the AdoMet binding site; [methyl-3 H]AdoMet photoaffinity labeling located this site about 50 residues from the junction between the domains (2,3). Yeast and other eukaryotic MTHFRs have a two-domain structure similar to the mammalian enzyme (5,6). The MTHFR reaction in liver is physiologically irreversible, due to a combination of the large standard free energy change for the reduction of CH 2 -THF by NADPH (⌬G 0Ј ϭ -5.2 kcal mol Ϫ1 ) and the high NADPH/NADP ratio in the cytoplasm (7,8). A corollary of this irreversibility is that MTHFR has the potential to deplete the pool of CH 2 -THF, reducing its availability for synthesis of thymidylate and purines (9,10). The AdoMet sensitivity of the liver enzyme functions to check such depletion, leaving CH 2 -THF available for other metabolic demands (9,10). 
Thus, mammalian MTHFR commits one-carbon units to methyl group synthesis and is considered to have a key regulatory role in one-carbon metabolism. In contrast to the detailed information about MTHFR from mammals and E. coli, there are few data on plant MTHFR and no genes have been identified (11, 12), making it the least understood enzyme of folate-mediated one-carbon metabolism in plants. MTHFR activity has been detected in crude extracts of pea tissues using a CH3-THF-menadione oxidoreductase (i.e. reverse direction) assay, and found to be insensitive to methionine (13). The reductant has not been identified. This gap in knowledge has become more pressing with the start of work on plant metabolic engineering, because success in many current projects may depend upon understanding and modifying the mechanisms whereby plants balance the demands for methyl groups and other one-carbon moieties. Such projects include engineering the accumulation of betaines or methylated polyols, modifying lignins, and enhancing the synthesis of pharmaceutical alkaloids (14-16). In this study, we used genomics-based approaches to identify plant MTHFR cDNAs, and expressed them in yeast. The recombinant enzymes were partially characterized, providing a foundation for more detailed study of their catalytic and regulatory properties. We identified cDNAs from plants with the C3 and C4 pathways of photosynthesis (Arabidopsis and maize, respectively) because C3 and C4 species differ in one-carbon metabolism, the former having a large photorespiratory carbon flux through glycine and serine (17). In addition, we developed a sensitive and specific NAD(P)H-CH2-THF oxidoreductase (i.e. forward direction) radioassay that can be used with crude extracts. The results indicate that, in contrast to the mammalian enzymes, the MTHFRs from Arabidopsis and maize use NADH as the reductant, and that AdoMet does not feedback-inhibit their activity. Plant Materials-Arabidopsis plants (ecotype Columbia) were grown in potting soil in a culture room at 26°C under 14-h days (PPFD = 80 μE m⁻² s⁻¹). Maize (cv. Florida 32B) for radiotracer experiments and tobacco (cv. Wisconsin 38) were grown in soil in a greenhouse under natural lighting; the maximum temperature was 33°C. Maize plants (cv. B73) for cDNA library construction were grown in sand in a culture room at 25°C under 12-h days (PPFD = 300-400 μE m⁻² s⁻¹) and irrigated with 0.25× Hoagland's nutrients; roots were harvested at 14 days of age. Spinach (cv. Savoy Hybrid 612) was grown in similar conditions and salinized with 200 mM NaCl. cDNA Generation, Sequencing, and Sequence Analysis-Poly(A)+ mRNA was isolated from maize roots as described (19), and used to construct a Uni-ZAP XR cDNA library according to the manufacturer's protocols (Stratagene). Two amino acid sequences conserved in eukaryotic MTHFRs, FEFFPPKT and AVTWGVFP, were used to design the degenerate PCR primers 5′-CGARTTYTTYCCRCCVAARAC-3′ (forward primer) and 5′-GGAAAACWCCCCAMGTKACAGC-3′ (reverse primer), respectively. These were used to amplify a ≈1,500-base pair product by reverse transcription-PCR.
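The degenerate-primer design step described above, in which short peptide motifs conserved in eukaryotic MTHFRs are reverse-translated into primers written with IUPAC ambiguity codes, can be sketched as follows. This is a generic illustration under standard codon-table assumptions; the published primers were additionally trimmed and optimized by hand, so the output does not reproduce them exactly.

```python
# Sketch: reverse-translate a conserved peptide motif into a fully degenerate
# primer using IUPAC ambiguity codes. Illustrative only -- the primers quoted
# above were further trimmed and hand-optimized, so this output differs.

CODONS = {
    # Standard codon table, truncated to the residues needed for the example.
    "F": ["TTT", "TTC"], "E": ["GAA", "GAG"], "P": ["CCT", "CCC", "CCA", "CCG"],
    "K": ["AAA", "AAG"], "T": ["ACT", "ACC", "ACA", "ACG"],
}

IUPAC = {
    frozenset("A"): "A", frozenset("C"): "C", frozenset("G"): "G", frozenset("T"): "T",
    frozenset("AG"): "R", frozenset("CT"): "Y", frozenset("CG"): "S", frozenset("AT"): "W",
    frozenset("GT"): "K", frozenset("AC"): "M", frozenset("CGT"): "B", frozenset("AGT"): "D",
    frozenset("ACT"): "H", frozenset("ACG"): "V", frozenset("ACGT"): "N",
}

def degenerate_primer(peptide: str) -> str:
    """Return a degenerate DNA primer covering every codon choice for `peptide`."""
    primer = []
    for aa in peptide:
        codons = CODONS[aa]
        for pos in range(3):
            bases = frozenset(c[pos] for c in codons)   # bases seen at this codon position
            primer.append(IUPAC[bases])
    return "".join(primer)

# One of the conserved motifs mentioned in the text:
print(degenerate_primer("FEFFPPKT"))   # -> TTYGARTTYTTYCCNCCNAARACN
```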
The PCR product was cloned into the pGEM T-Easy vector (Promega); sequencing confirmed that it specified a polypeptide homologous to MTHFRs from other organisms. The 1,500-base pair fragment was then used to identify cDNAs from the maize root library. Arabidopsis expressed sequence tags (ESTs), Gen-Bank accession numbers W43486 and W43508 (hereafter termed At-MTHFR-1 and -2, respectively), were obtained from the Arabidopsis Biological Resource Center (Columbus, OH). Both strands of cDNAs were sequenced using the ABI Prism dye terminator cycle sequencing Ready Reaction (PE Applied Biosystems) and an ABI model 373 sequencer. Sequence alignments were made using Clustal W 1.7 (20). Homology searches were made using BLAST programs (21). Maize ESTs were sought in GenBank and the data base maintained by Pioneer Hi-Bred International, Inc (hereafter, Pioneer). cDNA Expression in Yeast-Plant MTHFR coding sequences were amplified from plasmid templates by high fidelity PCR using recombinant Pfu DNA polymerase (Stratagene) and primers that included the first or last six codons plus BamHI and PstI site sequences for cloning into pVT103-U. This plasmid contains the URA3 gene for selection and the ADH1 promoter to drive gene expression (18). For AtMTHFR-2, the forward primer was used to add 5Ј-ATGAAG-3Ј to restore the missing first two codons (see text). The AtMTHFR-1 coding sequence was amplified in unmodified form, and also with a five-residue histidine tag added to the COOH terminus by inserting 5Ј-CATCACCATCACCAT-3Ј before the stop codon. After ligation into pVT103-U, constructs were introduced into E. coli strain DH10B by electroporation. MTHFR constructs were verified by sequencing, and used to transform yeast strain RRY3 as described (5). Enzyme Isolation, Affinity Purification, and Molecular Mass Determination-All operations were at 0 -10°C. Yeast cultures were grown to an A 600 of 1-2, washed, and broken by agitation (5 or 10 ϫ 0.5 min) with glass beads in 100 mM potassium phosphate buffer, pH 6.8 or 7.2, containing 1 mM EDTA, 12-25 M FAD, and 10% (v/v) glycerol (Buffer A) plus 1 mM PMSF (5). Where specified, a protease inhibitor mixture (Sigma P8849) was used (at 3% v/v in the extraction buffer and 1% v/v in the desalting buffer) in place of PMSF. Plant tissues were pulverized in liquid N 2 and extracted with 2 ml per g of buffer A containing 1 mM PMSF. Extracts were cleared by centrifugation (25,000 ϫ g, 30 min), desalted on PD-10 columns (Amersham Pharmacia Biotech) equilibrated in buffer A, and concentrated if necessary in Centricon-30 units (Amicon). Extracts were stored at Ϫ80°C after freezing in liquid N 2 ; this did not affect MTHFR activity. The histidine-tagged AtMTHFR-1 protein was purified by two cycles of affinity chromatography on Ni 2ϩnitrilotriacetic acid (NTA) superflow resin (Qiagen) as described (22), with the following modifications. Buffers contained 10 M FAD; binding was carried out at 40 mM imidazole, washing at 60 mM, and elution at 400 mM for the first cycle and 300 mM for the second. Native molecular mass was estimated using a Waters 626 HPLC system equipped with a Superdex 200 HR 10/30 column (Amersham Pharmacia Biotech); reference proteins were cytochrome c, carbonic anhydrase, bovine serum albumin, and ␤-amylase. SDS-polyacrylamide gel electrophoresis was carried out as described (23). Protein was estimated by Bradford's method (24) using bovine serum albumin as the standard. 
Assays for MTHFR Activity-Assays were made under conditions in which substrates were saturating, and product formation was proportional to enzyme concentration and time. When imidazole and NaCl were present in enzyme preparations, their final concentrations in the assays were kept Յ45 mM to avoid inhibitory effects. CH 3 -THF-menadione oxidoreductase activity was measured by a modification of published methods (25,26). Assays (final volume 100 l, in 1.5-ml screwcap microcentrifuge tubes) contained 100 mM potassium phosphate buffer, pH 6.8 (shown to be the optimal pH), 2 mM EDTA, 180 nmol of sodium ascorbate, 200 nmol of formaldehyde, 2.5 nmol of FAD, 51 nmol (50 nCi) of [methyl-14 C]CH 3 -THF, enzyme extract, and 25 nmol of menadione. Reactions were started by adding a 1 mM menadione solution (in water at 65°C unless otherwise indicated) to the other components at 0°C, incubated at 30°C for 10 or 20 min, and stopped with 50 -65 l of dimedone reagent (26) plus 100 nmol of formaldehyde. After heating at 100°C for 5 min, 1 ml of toluene was added and the tubes were agitated for 2 min and centrifuged (16,000 ϫ g, 2 min). A sample (0.8 ml) of the toluene phase was mixed with 3 ml of scintillation fluid (Beckman Ready Gel) and counted. For assay blanks, enzyme was omitted during incubation and added just before the dimedone reagent. The reaction product was analyzed by TLC on Silica Gel 60 in methanol: acetone:HCl (90:10:4, v/v/v). Product recovery was determined to be 60 Ϯ 3% (mean Ϯ S.E., n ϭ 14) by spiking unlabeled reaction mixtures with [ 14 C]formaldehyde, and experimental data were corrected accordingly. NAD(P)H-CH 2 -THF oxidoreductase activity was measured in reaction mixtures (final volume 20 l, in 2-ml screw-cap microcentrifuge tubes) containing 100 mM potassium phosphate buffer, pH 7.2, 0.3 mM EDTA, 4 mM 2-mercaptoethanol, 42 nmol (0.1 Ci) of [ 14 C]formaldehyde, 20 nmol of THF, 0.5 nmol of FAD, 4 nmol of NAD(P)H, 20 nmol of glucose 6-phosphate, 0.06 units of glucose-6-phosphate dehydrogenase (1 unit ϭ 1 mol of NAD reduced min Ϫ1 at pH 7.2, 24°C), and enzyme preparation. Blank assays contained no NAD(P)H. The buffer, EDTA, [ 14 C]formaldehyde, THF, and 2-mercaptoethanol were mixed and held for 5 min at 24°C in hypoxic conditions (to allow 14 CH 2 -THF to form) before adding other components. Reactions were incubated at 30°C for 20 min, and stopped by adding 1 ml of 100 mM formaldehyde. After standing for 20 min at 24°C (to allow 14 C to exchange out of CH 2 -THF), 0.2 ml of a slurry of AG-50(H ϩ ) resin (1:1 with water) was added to bind 14 CH 3 -THF. The resin was washed with 3 ϫ 1.5 ml of 100 mM formaldehyde, mixed with 1 ml of scintillation fluid, and counted. The counting efficiency was 40%, determined using assays spiked with 14 CH 3 -THF. The identity of the reaction product was verified by reverse-phase HPLC (27). NADP phosphatase activity was measured by incubating extracts with 10 mM NADP in 100 mM potassium phosphate buffer, pH 7.2, at 30°C for 30 min, followed by enzymatic assay of NAD using yeast alcohol dehydrogenase. [methyl-14 C]CH 3 -THF Metabolism-Arabidopsis rosettes (240 Ϯ 30 mg) or sets of three maize leaf discs (11 mm diameter, 70 Ϯ 3 mg/3 discs, cut from a young blade and scarified with eight radial cuts on the abaxial surface) were allowed to absorb 0.5 Ci (9 nmol) of [methyl- 14 C]CH 3 -THF dissolved in 20 l of 8 mM sodium ascorbate, minus or plus 25 mM L-serine. 
Label was fed to rosettes via the severed root, and to discs via the cuts; after uptake, the feeding solution was replaced by water or 25 mM serine. Incubation was in the light (PPFD ϭ 150 E m Ϫ2 s Ϫ1 ) at 28°C for 3.5 h. Tissues were extracted with 80% acetone, and the extract was separated into amino acid, organic acid/phosphate ester, and sugar fractions using AG-50(H ϩ ) and AG-1 (formate) columns (28). Starch in the insoluble residue was hydrolyzed in 1 M HCl (4 h, 100°C), and the [ 14 C]glucose formed was purified by ion exchange as above. Amino acids were separated on cellulose TLC plates in n-butanol:acetic acid:water (6:2:2, v/v/v) and by electrophoresis in 0.6 M HCOOH, 1.5 M CH 3 COOH at 1.8 kV, 4°C, for 20 min; detection was with ninhydrin. Serine and glycine zones were scraped from electrophoresis plates for 14 C assay. Sugars were separated by TLC on cellulose plates in npropanol:ethyl acetate:water (7:1:2, v/v/v) and detected with alkaline KMnO 4 . Samples spiked with [methyl-14 C]CH 3 -THF were included as controls. Southern Analyses-Arabidopsis genomic DNA was isolated from leaves as described (29). One-g samples of the isolated DNA were digested, separated in 0.7% agarose gels, and transferred to supported nitrocellulose membrane (NitroPure, MSI) as described by Sambrook et al. (30). The blots were hybridized overnight at 58°C in 5ϫ SSC, 5ϫ Denhardt's solution, 1% SDS, 1 mM EDTA, and 100 g ml Ϫ1 sonicated salmon sperm DNA, and washed at low stringency (1ϫ SSC, 0.1% SDS, 37°C) (30). The probe was the full-length AtMTHFR-1 cDNA. Maize genomic DNA was prepared from 3-day-old seedlings as described (31); 6.5-g samples were digested, separated in 0.8% agarose gels, and transferred to Duralon-UV membrane (Stratagene). Hybridization was at 42°C in 6ϫ SSC, 5ϫ Denhardt's solution, 0.5% SDS, 50% formamide, Identical residues are shaded in black, similar residues in gray. Dashes are gaps introduced to maximize alignment. Asterisks mark residues that interact with the FAD prosthetic group in E. coli (6). The bar indicates the hydrophilic bridge region between the domains (3). The triangle shows the position of an artifactual 12-residue insert in the Arabidopsis protein (AAC23420) predicted from genomic sequence. The arrow near the NH 2 terminus of the human sequence marks an alternative start site (4). At-1 and At-2, AtMTHFR-1 and -2; Zm-1, ZmMTHFR-1; Hs, human MTHFR (CAB41971); Sc, S. cerevisiae Met13 (P53128); Ec, E. coli MetF (P00394). Because the AtMTHFR-2 cDNA lacked the first six nucleotides of the coding sequence, the first two residues were deduced from the genomic sequence. and 100 g ml Ϫ1 salmon sperm DNA. Washing was at low stringency (0.1ϫ SSC, 0.1% SDS, 25°C). The probe was the full-length Zm-MTHFR-1 cDNA. Probes were labeled with 32 P by the random primer method. Radioactive bands were detected by autoradiography. Genomics-based Cloning of MTHFR cDNAs from Arabidopsis and Maize-For Arabidopsis, the strategy was based on a sequence from chromosome II whose conceptual translation product (unknown protein, GenBank accession no. AAC23420) is homologous to eukaryotic MTHFRs. BLAST searches using the deduced cDNA corresponding to AAC23420 detected 15 Arabidopsis ESTs of two types, one essentially identical to the AAC23420 nucleotide sequence, the other differing by Ϸ15%. 
Sequencing a nearly full-length representative of each type (both from hypocotyl libraries) confirmed that they encode polypeptides that are 86% identical to each other and Ϸ43% identical to human and yeast MTHFRs (Fig. 1). The deduced proteins are designated AtMTHFR-1 (592 residues, 66.3 kDa) and AtMTHFR-2 (594 residues, 66.8 kDa). AtMTHFR-2 is identical to the AAC23420 conceptual translation product except that the latter has a 12-residue insert (Fig. 1, triangle) attributable to an error made by the gene-prediction algorithm. For maize, a homology-based PCR strategy was adopted. Two amino acid sequences conserved in eukaryotic MTHFRs were used to design degenerate PCR primers, which amplified a Ϸ1500-base pair fragment from a root cDNA template. Screening a root library with this fragment yielded 10 apparently full-length cDNAs with the same sequence. They encode a 593-residue (66.4 kDa) protein (ZmMTHFR-1) that is 77% identical to AtMTHFR-1 (Fig. 1). Twelve maize MTHFR ESTs were found in GenBank and Pioneer data bases, all encoding ZmMTHFR-1. Fig. 1 shows that the deduced plant proteins are homologous to human and yeast MTHFRs throughout their entire length, and appear to lack targeting sequences (e.g. chloroplast or mitochondrial transit peptides). In the NH 2 -terminal catalytic domain, of the 19 residues shown to interact with the FAD cofactor in the E. coli enzyme (Fig. 1, asterisks), 17 are identical or conservatively replaced in the plant sequences. Complementation of a Yeast met12 met13 Mutant and Detection of CH 3 -THF-Menadione Oxidoreductase Activity-The three plant MTHFR cDNAs were subcloned into the expression vector pVT103-U and introduced into yeast strain RRY3, a met12 met13 double disruptant that totally lacks MTHFR activity and is a methionine auxotroph (5). All three constructs yielded methionine-independent transformants at high frequency; growth of the transformants on plates was comparable to that of the wild-type strain DAY4 ( Fig. 2A). No complementation was observed with the vector alone ( Fig. 2A), and retransformation of RRY3 with rescued plasmid containing the AtMTHFR-1 cDNA restored methionine prototrophy, showing that the complementation is due to the encoded plant protein. CH 3 -THF-menadione oxidoreductase activity was readily detected in desalted extracts of the complemented strains but not, as expected, in RRY3 (Fig. 2, panels B and C). To authenticate the observed activity, reactions were allowed to proceed to near completion, and the labeled product was verified by TLC (Fig. 2C). Addition of a five-residue histidine tag to the carboxyl terminus of the AtMTHFR-1 polypeptide had no impact on complementation (results not shown) and little effect on enzyme activity (Fig. 2B). The specific activities of yeast extracts containing plant MTHFRs (Fig. 2B) are at least 150-fold greater than those of wild type yeast (5) and up to 50-fold greater than those of liver (25,26), indicating that recombinant MTHFR proteins are expressed at a high level in yeast. Affinity Purification of Histidine-tagged MTHFR and Molec-ular Mass Determinations-The histidine-tagged AtMTHFR-1 enzyme was purified 640-fold by two cycles of affinity chromatography on nickel-NTA resin ( Table I). The specific activity of the purified enzyme assayed just after isolation (6.9 mol min Ϫ1 mg Ϫ1 protein at 30°C) falls between the values reported for human MTHFR and E. coli MetF (1, 26). The purified enzyme was found to be unstable, losing about half its activity during 3 h on ice. 
To investigate the mass and integrity of MTHFR subunits, the purified protein was analyzed by denaturing gel electrophoresis (Fig. 3). A 64-kDa band was evident, consistent with the size of the deduced polypeptide, and no bands of lower molecular mass. This demonstrates that the plant MTHFR protein isolated from yeast is not cleaved at the junction between the domains, a site that is particularly protease-sensitive in mammalian MTHFR and at which cleavage results in loss of AdoMet inhibition (2,3). In the purification experiment documented in Table I (2), and RRY3 transformed with pVT103-U alone (3) or containing AtMTHFR-1 (4), AtMTHFR-2 (5), or ZmMTHFR-1 (6) were plated on synthetic medium with or without methionine. B, CH 3 -THF-menadione oxidoreductase activities in desalted extracts of RRY3 or RRY3 expressing MTHFR cDNAs; -ht, histidine-tagged. Other abbreviations are as in Fig. 1. Data are means ϮS.E. (n Ն3). C, progress curves of menadione-dependent 14 CH 3 -THF oxidation catalyzed by extracts (60 g of protein) of RRY3 (E) or RRY3 expressing AtMTHFR-1 (ⅷ). Assays contained 26.6 nmol of (6R,6S) 14 CH 3 -THF. The inset is an autoradiograph of a TLC separation of the reaction product (P) and of a [ 14 C]formaldemethone standard (S). Product formation at 20 min was 91% of the theoretical maximum (assuming half the 14 CH 3 -THF to be in the biologically active 6S form). work we used PMSF. The molecular mass of the native At-MTHFR-1 enzyme was estimated by size exclusion chromatography (results not shown). The protein migrated as a symmetrical peak with an apparent molecular mass of 141 kDa, which is consistent with a dimer of 66-kDa subunits. Pyridine Nucleotide Preference-NAD(P)H-CH 2 -THF oxidoreductase activity cannot be measured spectrophotometrically in crude extracts due to the presence of NAD(P)H oxidase (26), and spectrophotometric assays are in any case fairly insensitive. We therefore developed a radiometric assay to study the pyridine nucleotide specificity of MTHFRs using desalted crude extracts or small amounts of purified enzyme. In this assay, 14 CH 2 -THF (prepared from THF and excess H 14 CHO) is incubated with enzyme, NAD(P)H, and an NAD(P)H recycling system (to prevent any NAD(P) formed from supporting 14 CH 2 -THF oxidation by CH 2 -THF dehydrogenases). Label remaining in 14 CH 2 -THF is then exchanged out into an excess of unlabeled HCHO and the 14 CH 3 -THF formed is bound to a cation exchange resin, which is washed and counted. The assay was validated by comparing extracts of RRY3 (MTHFR-deficient) and RRY3 expressing AtMTHFR-1. No activity was detected in RRY3; product formation with the AtMTHFR-1 extract was dependent on pyridine nucleotide and THF, and slightly promoted by FAD (Fig. 4A). The reaction product was confirmed to be 14 CH 3 -THF by reverse-phase HPLC (Fig. 4B). Using this assay, the three recombinant plant MTHFRs were found to strongly prefer NADH; the activities with 200 M NADPH were Ͻ2% of those with 200 M NADH, which was a saturating concentration (Table II). Recombinant human enzyme (HsMTHFR) was tested as a control and shown to be NADPH-dependent (Table II), as it is when extracted from liver (2). The NADH-CH 2 -THF oxidoreductase/CH 3 -THF-menadione oxidoreductase activity ratio for the plant enzymes was 0.9 Ϯ 0.1, similar to the corresponding ratio for mammalian MTHFR (25,32). 
Sensitivity to S-Adenosylmethionine and S-Methylmethionine-Recombinant plant MTHFR activity in desalted extracts was tested for inhibition by high concentrations (1-2 mM) of AdoMet using both NADH-CH 2 -THF oxidoreductase and CH 3 -THF-menadione oxidoreductase assays. Extracts were preincubated at 24 -30°C with AdoMet (or buffer for controls) before assays, because onset of AdoMet inhibition is slow (25,26). Recombinant human enzyme (HsMTHFR) was used as a positive control to check that expression in yeast did not desensitize it to AdoMet. In both assays, the activity of the human enzyme was strongly inhibited by AdoMet, whereas that of ZmMTHFR-1 was unaffected, AtMTHFR-1 was stimulated by 10 -20%, and AtMTHFR-2 was stimulated by 50 -70% (Tables II and III). The effect of S-methylmethionine (SMM) was also tested, because SMM is a major plant metabolite whose levels can exceed those of AdoMet (33). Physiological concentrations of SMM (2-5 mM) had no effect on either CH 3 -THF-menadione oxidoreductase (Table III) or NADH-CH 2 -THF oxidoreductase activities (results not shown). Methionine (5 mM) or S-adenosylhomocysteine (2 mM) were also found to have no effect (results not shown). NADH Preference and S-Adenosylmethionine Insensitivity of Purified AtMTHFR-1-To confirm that the pyridine nucleotide specificity and AdoMet response of the purified recombinant protein are the same as those observed in desalted extracts, the histidine-tagged form of AtMTHFR-1 was tested (Table IV). The instability of the purified enzyme resulted in significant loss of activity during preincubation with AdoMet or buffer alone. The results with purified enzyme nonetheless mirrored those with extracts: the enzyme strongly preferred NADH and was not inhibited by AdoMet. As for crude extracts, there was an apparent stimulation by AdoMet. However, in this case it was shown to be due principally to slower loss of activity during preincubation when AdoMet was present, i.e. to a stabilizing effect of AdoMet. S-Adenosylmethionine-insensitive NADH-CH 2 -THF Oxidoreductase Activity in Plant Extracts-To rule out the possibility that the NADH-preference and AdoMet-insensitivity of the recombinant plant enzymes are artifacts of the yeast expression system, enzymes extracted from Arabidopsis, maize and two other plants were tested (Table V). In root and leaf extracts of all species, the MTHFR activity showed a strong preference for NADH and was not inhibited by AdoMet; the activities of the extracts were up to Ϸ50-fold greater than those in liver. That the ratios of NADPH-to NADH-dependent activities were higher for plant extracts than for recombinant enzymes is attributable to conversion of NADP(H) to NAD(H) by phosphatases in the plant extracts. NADP phosphatase activities The starting material was 1.5 g (wet weight) of cells from a 0.5-liter culture. Proteins were extracted and desalted in buffers containing a proteinase inhibitor mixture (see "Experimental Procedures"). In cycle 1, enzyme was bound at pH 7.5 to nickel-NTA resin, using 50 mM sodium phosphate buffer containing 300 mM NaCl and 40 mM imidazole; the imidazole concentration was raised to 60 mM for washing, and to 400 mM for elution. After diluting the imidazole concentration to 40 mM, the process was repeated for cycle 2 except that elution was with 300 mM imidazole. Activity was measured at 30°C using the CH 3 -THF-menadione oxidoreductase assay, with 20% methanol as the solvent for menadione. One milliunit equals oxidation of 1 nmol of CH 3 -THF min Ϫ1 . 
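The purification bookkeeping referred to above and in Table I (specific activity in milliunits per mg, fold purification relative to the starting extract, and percent yield) follows directly from total activity and total protein at each step. In the sketch below the step values are hypothetical placeholders, chosen only so that the final step lands near the ≈640-fold purification and ≈6.9 μmol min⁻¹ mg⁻¹ specific activity quoted in the text.

```python
# Sketch of purification-table arithmetic (specific activity, fold purification,
# yield). Step values are hypothetical; only the final specific activity and
# fold were chosen to match the values quoted in the text.
# 1 milliunit (mU) = 1 nmol CH3-THF oxidized per minute.

steps = [
    # (step name, total protein in mg, total activity in mU)
    ("Desalted extract", 50.00, 540.0),
    ("Ni-NTA cycle 1",    0.20, 350.0),
    ("Ni-NTA cycle 2",    0.04, 276.0),
]

crude_specific = steps[0][2] / steps[0][1]   # mU per mg in the starting extract
crude_total = steps[0][2]

print(f"{'Step':<18}{'SA (mU/mg)':>12}{'Fold':>8}{'Yield %':>9}")
for name, protein_mg, activity_mU in steps:
    specific = activity_mU / protein_mg
    fold = specific / crude_specific
    yield_pct = 100.0 * activity_mU / crude_total
    print(f"{name:<18}{specific:>12.1f}{fold:>8.0f}{yield_pct:>9.0f}")
```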
Step Table I. The positions of molecular mass markers (kDa) are indicated. (estimated using an NADP concentration of 10 mM) in Arabidopsis and maize tissue extracts were 14 -34 nmol min Ϫ1 mg Ϫ1 protein, which would allow significant NADH formation during the oxidoreductase assays. Yeast contained no detectable NADP phosphatase activity (Ͻ0.3 nmol min Ϫ1 mg Ϫ1 protein). Metabolism of [methyl-14 C]CH 3 -THF-MTHFRs in yeast and mammals are cytosolic enzymes (9), and the lack of NH 2terminal transit sequences (Fig. 1) indicates that plant MTH-FRs are likewise cytosolic. If they are, the very low NADH/NAD ratios that prevail in plant cytosol (10 Ϫ3 ) (34) might allow the MTHFR reaction to proceed in the reverse direction. An explor-atory test of this possibility was made by supplying a tracer quantity of [methyl-14 C]CH 3 -THF to illuminated leaf tissue and analyzing labeled metabolites (Fig. 5, panels A and B). In both Arabidopsis and maize, 14 C was readily metabolized to serine, sugars, and starch. A simple explanation for this labeling pattern is that 14 CH 3 -THF is oxidized to 14 CH 2 -THF, allowing 14 C to enter serine via the action of glycine hydroxymethyltransferase (11,12). From serine, label is expected to flow to photosynthetic end products (17,35). Consistent with this explanation, when a large dose of serine was given together with 14 CH 3 -THF, label was trapped in the serine pool (Fig. 5, panels C and D). That the trapping effect was less marked in the C 3 plant Arabidopsis may be explained by its high capacity to metabolize serine; measurements showed that Ϸ60% of the serine supplied to Arabidopsis was metabolized during the experiment. Southern Analyses-Southern analyses were carried out in order to estimate the number of MTHFR genes in Arabidopsis and maize (Fig. 6). For Arabidopsis (Fig. 6, panel A), the sizes and intensities of hybridizing bands indicated two genes, corresponding to the AtMTHFR-1 and -2 cDNAs with respect to the predicted restriction sites. For maize (Fig. 6, panel B), the banding pattern indicated at least two MTHFR genes. Taken with the evidence from the data bases, the Southern analyses show that the cDNAs that we have identified represent both MTHFR genes of Arabidopsis, and what appears to be the most strongly and widely expressed MTHFR gene of maize. DISCUSSION The identification of cDNAs encoding MTHFR completes the set of plant genes required for the synthesis of methyl groups from serine and formate (12). This opens the way for systematic application of reverse genetics to investigate folate-mediated one-carbon metabolism in plants. It will also permit comprehensive studies of the expression of one-carbon metabolism FIG. 4. Characteristics of the NAD(P)H-CH 2 -THF oxidoreductase radioassay. A, effects of omitting assay components. Complete reactions contained extract (7.5 g of protein) of RRY3 expressing AtMTHFR-1 (left panel) or RRY3 alone (right panel) and were otherwise as described under "Experimental Procedures" except that 0.2 Ci of H 14 CHO was used. B, reverse-phase HPLC separation of reactions containing extract of RRY3 expressing AtMTHFR-1 (30 g of protein) minus (left frame) or plus (right frame) NADH. Reactions were incubated at 30°C for 45 min to ensure that they went to completion. The peak position of CH 3 -THF (retention time Ϸ8.5 min) is shown with a horizontal line. The 14 C activity in the CH 3 -THF peak represents 86% of the maximum theoretical yield. The large peak at Ϸ4 min is H 14 CHO. 
TABLE IV Pyridine nucleotide preference and S-adenosylmethionine sensitivity of the affinity-purified histidine-tagged form of AtMTHFR-1 Histidine-tagged AtMTHFR-1 enzyme was purified as described in Table I and assayed for NADPH-CH 2 -THF, NADH-CH 2 -THF, and CH 3 -THF-menadione oxidoreductase activities, preincubating without (control) or with AdoMet as described in Tables II and III genes. The finding that a histidine-tagged form of AtMTHFR-1 can be expressed at a high level in yeast and readily purified will facilitate detailed analysis of the properties of this and other MTHFRs. Plant MTHFR proteins resemble those of other eukaryotes in having a catalytic domain homologous to the E. coli enzyme, and a long (Ϸ270-residue) COOH-terminal extension. Like their mammalian and yeast counterparts, plant MTHFRs appear to be cytosolic proteins inasmuch as they lack obvious targeting sequences. Despite these overall structural similarities, the plant enzymes have the opposite pyridine nucleotide preference to mammalian MTHFR, and are not inhibited by AdoMet. Because of the far-reaching implications of these conclusions for the regulation of plant one-carbon metabolism, it is important to examine the evidence for them. The conclusion that plant MTHFRs are NADH-dependent rests (i) on the properties of three different recombinant enzymes from Arabidopsis and maize (with control experiments in which recombinant human MTHFR expressed in the same system proved to be NADPH-dependent), and (ii) on data for enzymes isolated directly from these and two other plant species. Taken together, this evidence rules out the possibility that the NADH-dependence of the plant enzymes is an artifact of expression in yeast. The same can be concluded for the AdoMet response of the plant enzymes, because neither enzymes from plant sources nor recombinant plant MTHFRs were inhibited by AdoMet, whereas the recombinant human enzyme was inhibited. Moreover, the demonstration that recombinant plant MTHFR has intact subunits excludes the possibility that proteolytic cleavage between the catalytic and COOH-terminal domains causes the AdoMet insensitivity. This interdomain cleavage is the most likely origin of artifactual AdoMet insensitivity (2). The lack of inhibition of plant MTHFRs by AdoMet seems most likely to be due to absence of an AdoMet binding site. Photoaffinity labeling data (3) locate the binding site in mammalian MTHFR some 50 residues from the junction between the domains (3), so it may be significant that the human and plant sequences diverge substantially in this region (Fig. 1). About 80 residues from the junction, the human enzyme has a seven-residue insertion that is absent from plant and yeast MTHFRs. However, our preliminary data indicate that the yeast Met13 enzyme is NADPH-dependent and inhibited by AdoMet, 2 suggesting that the insert does not relate to AdoMet binding. If plant MTHFRs do not bind AdoMet, the overall sequence conservation between the mammalian, yeast and plant COOH-terminal domains would suggest that these have other functions that remain to be discovered. It is also possible that plant MTHFRs bind AdoMet but are not inhibited by it. The moderate stabilizing or stimulatory effects of AdoMet on the activities of Arabidopsis MTHFRs are consistent with such a possibility, and merit further investigation. That plant MTHFRs use NADH rather than NADPH as reductant suggests that the MTHFR reaction is reversible under physiological conditions. 
The equilibrium constant (K eq ) for the reductive reaction has been determined (8) to be 4.5 ϫ 10 10 . 2 Roje and Raymond, unpublished results. At a pH of 7.6 ([H ϩ ] ϭ 2.5 ϫ 10 Ϫ8 M), the cytosolic NADH and NAD concentrations in illuminated spinach leaves have been estimated as 7 ϫ 10 Ϫ7 and 6 ϫ 10 Ϫ4 M, respectively (34). Using these values in Equation 1 gives a value of 1.3 for the CH 3 -THF/CH 2 -THF ratio at equilibrium. A value so close to unity connotes a freely reversible reaction in the cytosol (⌬G Ϸ 0). A physiologically reversible MTHFR reaction could account for the absence of allosteric inhibition by AdoMet in the plant enzymes, since a reversible reaction could maintain an adequate pool of CH 2 -THF for thymidylate and purine synthesis, without need of a feedback signal from methyl metabolism. Similar considerations may apply to E. coli MTHFR, which is also NADH-dependent and AdoMet-insensitive, as the NADH/ NAD ratio is very low in aerobically grown E. coli cells (36). Note that for ready interconversion of CH 3 -THF and CH 2 -THF to occur, the thermodynamic reversibility of Equation 1 must be accompanied by kinetic reversibility. Thus, the forward and reverse rates of the MTHFR reaction in vivo would need to be at least as great as those for other reactions forming and consuming CH 3 -THF and CH 2 -THF, otherwise the calculated ratio of Ϸ1 would probably not hold. Because the MTHFR activities measured in plant extracts (5-25 nmol min Ϫ1 mg Ϫ1 protein) are similar to or higher than those reported for methionine synthase, cytosolic glycine hydroxymethyl transferase and CH 2 -THF dehydrogenase (37)(38)(39)(40), this condition may be met. Moreover, indirect evidence indicates that the CH 2 -THF level in illuminated leaves may be approximately the same as the CH 3 -THF level (37). The exploratory radiotracer tests that we made for in vivo reversibility of the MTHFR reaction establish that leaves readily metabolize the methyl group of CH 3 -THF to serine, and thence to carbohydrates. This result is consistent with conversion of 14 CH 3 -THF to 14 CH 2 -THF through the action of MTHFR, but not proof of it. Plants lack glycine N-methyltransferase and sarcosine dehydrogenase (11,37), whose sequential action in animal tissues provides a route to convert the methyl group of AdoMet, via formaldehyde, to CH 2 -THF (9). However, while there are no reports that it occurs in plants, oxidative demethylation of 14 CH 3 -THF, or of methylated products derived from it, could potentially generate [ 14 C]formaldehyde and hence 14 CH 2 -THF and [ 14 C]serine. Other caveats are that the (necessarily) large dose of 14 CH 3 -THF used may have perturbed one-carbon metabolism, and that the monoglutamyl form supplied may not have acted as a faithful tracer for endogenous polyglutamylated forms. The direct conversion of 14 CH 3 -THF to 14 CH 2 -THF via MTHFR nonetheless remains the simplest explanation of our 14 C-tracer results. Based on the thermodynamic considerations outlined above, together with the 14 CH 3 -THF metabolism data, we suggest that the MTHFR reaction is reversible in plants. Support for this comes from early work by Clandinin and Cossins (41), who showed that germinating peas converted supplied 14 CH 3 -THF to 5-and 10-[ 14 C]formyl-THF.
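Equation 1 itself is not reproduced above, but assuming it is the usual equilibrium expression for CH2-THF + NADH + H+ ⇌ CH3-THF + NAD+, i.e. [CH3-THF]/[CH2-THF] = Keq × [NADH][H+]/[NAD+], the cytosolic values quoted in the text give back the ratio of ≈1.3. A short numerical check:

```python
# Worked check of the equilibrium ratio quoted in the text, assuming Equation 1
# is the equilibrium expression for CH2-THF + NADH + H+ <=> CH3-THF + NAD+:
#   [CH3-THF]/[CH2-THF] = Keq * [NADH] * [H+] / [NAD+]
# Input values are those given in the text for illuminated spinach leaf cytosol.

K_eq = 4.5e10      # equilibrium constant for the reductive reaction (ref. 8)
H_plus = 2.5e-8    # mol/L, i.e. pH 7.6
NADH = 7e-7        # mol/L, cytosolic NADH
NAD = 6e-4         # mol/L, cytosolic NAD+

ratio = K_eq * NADH * H_plus / NAD
print(f"[CH3-THF]/[CH2-THF] at equilibrium ~= {ratio:.2f}")   # ~1.3
```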
Genomic heterogeneity of breast tumor pathogenesis. Pathological grade is a useful prognostic factor for stratifying breast cancer patients into favorable (low-grade, well-differentiated tumors) and less favorable (high-grade, poorly-differentiated tumors) outcome groups. Under the current system of tumor grading, however, a large proportion of tumors are characterized as intermediate-grade, making determination of optimal treatments difficult. In an effort to increase objectivity in the pathological assessment of tumor grade, differences in chromosomal alterations and gene expression patterns have been characterized in low-grade, intermediate-grade, and high-grade disease. In this review, we outline molecular data supporting a linear model of progression from low-grade to high-grade carcinomas, as well as contradicting genetic data suggesting that low-grade and high-grade tumors develop independently. While debate regarding specific pathways of development continues, molecular data suggest that intermediate-grade tumors do not comprise an independent disease subtype, but represent clinical and molecular hybrids between low-grade and high-grade tumors. Finally, we discuss the clinical implications associated with different pathways of development, including a new clinical test to assign grade and guide treatment options. Introduction Breast cancer is the most common cancer in women worldwide, and in the United States (US) is estimated to account for ∼26% of all new female cancer cases and 15% of all cancer deaths among women. 1 Incidence of breast cancer in the US has risen by approximately 1.2% per year since 1930, 2 such that one in eight American women now are expected to develop breast cancer during her lifetime. Research attempting to understand the molecular nature of breast cancer and its progression will have a tremendous impact on costs associated with disease, and importantly, on the nature of diagnosis, treatment, and prevention. Pathological assessment of breast cancer is currently based on criteria such as tumor size, lymph node and hormone receptor status, and epidermal growth factor receptor 2 (HER2) expression, but pathology alone does not accurately predict outcomes, even for patients with similar tumor characteristics. Recent studies suggest that, despite use of identical treatment modalities in patients with similar pathological characteristics, clinical outcomes can be highly variable. 3 Differences in response to treatments such as Tamoxifen and Herceptin ® likely reflect heterogeneity in pathological factors such as estrogen receptor (ER) and HER2 status. Breast carcinomas are heterogeneous at the molecular level, with at least five disease categories identified through differential patterns of gene expression. [4][5][6] This extensive clinical, pathological, and molecular heterogeneity complicates diagnosis, prognosis, and treatment of patients with breast cancer. In this review, we examine use of the Nottingham histological score in assigning grade to breast carcinomas and the clinical utility of the Nottingham score in determining patient risk and outcome. We outline our understanding of how genomic alterations contribute to histological characteristics that define tumor grade and the importance of molecular changes in shaping tumor growth and differentiation in patients with breast cancer. 
An important focus of this review is the ongoing debate over development of high-grade and low-grade breast disease, specifically on whether low-grade and high-grade breast carcinomas represent separate and distinct diseases. We present molecular evidence supporting a linear model of progression from low-grade to high-grade carcinomas, as well as contradicting genetic data suggesting that low-grade and high-grade tumors develop independently. While debate regarding specific pathways of development continues, molecular data suggest that intermediategrade tumors do not comprise an independent disease subtype, but represent clinical and molecular hybrids between low-grade and high-grade tumors. Finally, we discuss the clinical implications of different pathways of development, including a new clinical test to assign grade and guide treatment options. nottingham Histological score The Nottingham combined histological grading system, based on classification parameters developed by Bloom and Richardson 7 as modified by Elston and Ellis, 8 is currently the most widely used method for assessing breast tumor grade. The Nottingham score uses three components, tubule formation, nuclear pleomorphism, and mitotic count, each of which are scored independently using criteria described in Table 1. 9 Scores for the three components are then combined and the cumulative score serves as the classifier: low-grade (well-differentiated) tumors have a cumulative score of 3, 4, or 5; intermediategrade (moderately-differentiated) tumors score 6 or 7; and high-grade (poorly-differentiated) tumors have cumulative scores of 8 or 9. The Nottingham grading system has clinical utility in determining patient risk and outcome-patients with low-grade carcinomas have ∼95% five-year survival compared to just 50% in patients with highgrade disease. 8,10 Although the prognostic power of the Nottingham score has prompted the College of American Pathologists to suggest using grade during staging, 11 grade has not yet been incorporated as a component of tumor staging. 12 Use of grade is impaired by 1) the inherent subjectivity associated with its assessment-concordance between pathologists ranges from 50%-85%, 13 and 2) the large number (30%-60%) of tumors classified as intermediategrade (moderately-differentiated). These tumors have features of both low-grade and high-grade tumors, making it difficult to assess risk and determine the most appropriate treatment option for patients. 14 Genomic Discrimination of Lowand High-Grade Breast carcinomas Breast cancer progression can be defined by a non-obligatory sequence of histological changes from normal epithelium through atypical hyperplasia, in situ carcinoma, and finally invasive malignancy. 15 The hypothesis of dedifferentiation posits that breast cancers evolve from well-differentiated to poorly-differentiated tumors following a linear model. The progressive sequence of dedifferentiation is: well-differentiated (grade 1) → moderately-differentiated (grade 2) → poorly-differentiated (grade 3). Support for a link between histological progression and tumor growth comes mainly from clinical studies, which have identified correlations between histological grade and tumor size, 16 or observed that impalpable carcinomas detected by mammography tend to be well-differentiated. 
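As a concrete illustration of how the three Nottingham components described above combine into a histological grade (each component scored 1-3, summed to a 3-9 cumulative score, with cut-offs at 3-5, 6-7 and 8-9), here is a minimal sketch; the function and its names simply restate the published scheme and are not part of any clinical software.

```python
# Minimal sketch of the Nottingham combined histological grade described above.
# Each component (tubule formation, nuclear pleomorphism, mitotic count) is
# scored 1-3 by the pathologist; the cumulative score maps to grade 1-3.
# Illustrative only; not a substitute for pathological assessment.

def nottingham_grade(tubule: int, pleomorphism: int, mitotic: int) -> tuple[int, str]:
    for score in (tubule, pleomorphism, mitotic):
        if score not in (1, 2, 3):
            raise ValueError("each component score must be 1, 2 or 3")
    total = tubule + pleomorphism + mitotic            # cumulative score, 3-9
    if total <= 5:
        return 1, "low grade (well-differentiated)"
    if total <= 7:
        return 2, "intermediate grade (moderately-differentiated)"
    return 3, "high grade (poorly-differentiated)"

print(nottingham_grade(1, 2, 1))   # (1, 'low grade (well-differentiated)')
print(nottingham_grade(3, 2, 2))   # (2, 'intermediate grade (moderately-differentiated)')
print(nottingham_grade(3, 3, 3))   # (3, 'high grade (poorly-differentiated)')
```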
17 In contrast, observations that recurrent carcinomas tend to exhibit the same level of cellular differentiation, and hence the same histological grade, as the original primary tumor 18 have led to the hypothesis that low-grade and high-grade carcinomas reflect different disease entities. Although certain DNA copy number changes defined by comparative genomic hybridization (CGH) correlate with the degree of histological differentiation, 19 several molecular studies suggest that the majority of low-grade (well-differentiated) tumors do not progress to high-grade (poorly-differentiated) carcinomas. For example, Roylance et al 20 observed distinct genomic differences between grade I and grade III breast tumors; in particular, loss of chromosome 16q was significantly more frequent in grade I (65%) compared to grade III (16%) tumors. Likewise, Buerger et al 21 observed frequent loss of chromosome 16q in well-differentiated invasive breast carcinomas and concluded that sequential progression from low-grade to high-grade is unlikely because chromosomal alterations at 16q were not maintained in higher-grade tumors. Since the initial models of disease progression were published, a number of studies examining levels and patterns of genomic variation in breast carcinomas have supported the hypothesis that low-grade and high-grade tumors represent separate genetic diseases, based largely on observations that the frequency of alterations at chromosome 16q was significantly higher in low-grade tumors. An allelic imbalance (AI) analysis using three microsatellite markers on chromosome 16q detected a significantly higher frequency of AI events in low-grade (grade 1) compared to high-grade (grade 3) tumors for two of the three markers. 22 Likewise, microsatellite-based data from our own group showed significantly higher levels of AI at chromosome 16q11-q22 in low-grade compared to high-grade breast carcinomas. 23 In addition, low-grade tumors contained larger alterations across the 23 Mb region of chromosome 16 compared to high-grade tumors. Only proximal markers (D16S409 and D16S2624) on 16q had a higher frequency of AI in grade 1 versus grade 3 tumors, suggesting that changes in the 16q11-q22 region are critical in the development of low-grade disease (Fig. 1). Similarly, an assessment of copy number status across chromosome 16q by CGH, AI, and fluorescence in situ hybridization (FISH) demonstrated that low-grade and high-grade disease were associated with different types of chromosomal alterations in the 16q region. 24 Physical loss of large portions of chromosome 16q was associated with low-grade disease, while small regions of loss of heterozygosity (LOH) were characteristic of high-grade tumors. Further, the timing of alterations at 16q appeared to differ between tumor grades, with physical loss of 16q being an early and critical event in the development of low-grade breast tumors, while smaller alterations of 16q occurred late in the development of high-grade carcinomas. 24 Together, these studies suggest that low-grade and high-grade invasive carcinomas are characterized by distinct patterns of chromosome 16q alteration rather than by a simple linear progression. Higher levels of chromosomal alterations have been detected via CGH in high-grade compared to low-grade DCIS, with loss of 16q found almost exclusively in low-grade lesions. 25,26 AI analysis of 100 pure DCIS specimens (with no detectable invasive component) recently found significantly higher levels of AI in the high-grade compared to low-grade lesions: AI at chromosome 16q characterized low-grade lesions, while alterations at 6q25-q27, 8q24, 9p21, 13q14, and 17p13.1 were frequent in high-grade disease.
27 Similar patterns of chromosomal changes in in situ and invasive disease suggest that low-grade and high-grade invasive breast tumors evolve directly from low-grade and high-grade DCIS, respectively (Fig. 2). Components of the Nottingham score The Nottingham score uses three components, tubule formation, nuclear pleomorphism, and mitotic count, to assign histological grade, but the mechanisms by which genomic changes in breast carcinomas specifically contribute to these underlying components are unknown. When patterns of AI were compared between tumors with favorable (score = 1) and unfavorable (score = 3) scores for each component, significantly higher levels of AI were observed in samples with unfavorable (high) scores for all components. 28 Tumors with reduced tubule formation (score = 3) showed higher levels of AI at chromosomal regions 11q23 and 13q12, those with high levels of nuclear atypia had frequent alterations at 9p21, 11q23, 13q14, 17p13, and 17q12, and carcinomas with high mitotic counts were commonly altered at 1p36, 11q23, and 13q14. Only region 16q11-q22 was altered more frequently in samples with low nuclear atypia. Alterations at 11q23 are common in breast tumors showing reduced tubule formation, high nuclear atypia, and high mitotic counts, suggesting that this is an early genetic change in the development of poorly-differentiated breast tumors; however, alterations at other chromosomal regions in poorly-differentiated tumors may specifically influence cell structure, nuclear morphology, and cellular proliferation. Genomic heterogeneity and breast cancer The identification of genomic signatures for low-grade and high-grade breast disease provides new insights into the heterogeneity of breast cancer. [Fig. 1 legend: bars indicate regions of chromosome 16q alteration reported in refs. 20, 23 (checked bar) and 24 (striped bar); candidate genes located in the region are shown, including RBL2 (retinoblastoma-like 2), AKTIP (AKT-interacting protein), MMP (matrix metalloproteinase) and CDH (cadherin) genes, FBXL8, E2F4, CTCF, TERF2, and HAS3; note that mutations in CDH1 have been associated with invasive lobular carcinoma, but not with low-grade or high-grade invasive ductal carcinoma.] Under current models of disease progression, low-grade and high-grade breast carcinomas develop independently along different genetic pathways; thus, consideration of breast disease without regard to tumor grade may mask molecular (or environmental) factors specific to one grade. 20 For example, grade 1 DCIS has been shown to exhibit a significantly lower overall frequency of chromosomal changes than low-grade (well-differentiated) invasive carcinomas, but no individual chromosomal regions effectively differentiate low-grade in situ from invasive disease. In contrast, high-grade (poorly-differentiated) invasive tumors did not show significantly higher levels of AI than grade 3 DCIS, but AI events at specific chromosomal regions (1p36 and 11q23) were significantly more frequent in high-grade invasive tumors compared to high-grade DCIS. 29 Lower levels of AI in low-grade in situ lesions compared to low-grade invasive carcinomas may reflect the protracted time-to-progression associated with low-grade DCIS. Likewise, increased levels of AI at 1p36 and 11q23 in high-grade carcinomas suggest that these chromosomal regions may harbor genes associated with invasiveness.
Therefore, consideration of histological grade when analyzing genetic data has the potential to identify molecular changes associated with invasion and to define molecular signatures of aggressive behavior for low-grade and high-grade disease. Molecular evidence for a Biological continuum Stratification of low-grade and high-grade breast carcinomas into separate molecular diseases is based on the high frequency of alterations observed for chromosome 16 in low-grade tumors and a low frequency of 16q alterations in high-grade tumors. To localize genes involved in low-grade IDCA, and to refine regions of chromosome 16 that may be important to the development of high-grade disease, Roylance et al characterized 40 low-grade (grade 1) and 17 high-grade (grade 3) IDCA using CGH with nearly contiguous coverage of chromosome 16q. 30 The majority of low-grade tumors showed large deletions of 16q, while high-grade tumors were more frequently characterized by multiple, small chromosomal alterations, including copy number gains in this region. Because many regions demonstrated both loss and gain of certain sections, chromosome 16q may be inherently unstable, and many of these regions may contain secondary, rather than causative, alterations. Based on these data, and the identification of copy number gains not previously detected, Roylance et al 30 suggest that loss of chromosome 16q is an early event in the development of low-grade tumors, and postulate that high-grade carcinomas evolve from low-grade tumors by the accumulation of subsequent chromosomal alterations, such as small breaks and amplifications. These observations question the role of chromosome 16q deletions as the key to defining low- and high-grade genetic pathways of development. The Nottingham grading system, used to assign histological grade to invasive carcinomas, does not adequately describe internal variation in the degree of differentiation within tumors. Although most pathologists rely on nuclear grade, either alone or in combination with central necrosis, to classify DCIS, one recent attempt to quantify histological diversity in 120 pure DCIS lesions found that ∼46% of cases showed localized variability in histological grade. Nearly one-third of lesions with internal grade differences demonstrated further diversity for a panel of immunohistochemistry markers including ER, GATA-binding protein 3 (GATA3), and HER2. 31 The authors concluded that higher-grade DCIS gradually evolve from lower-grade in situ lesions by random accumulation of genetic mutations. These studies hypothesize that low-grade and high-grade breast carcinomas are not necessarily unique genetic diseases. Under this model, cells with the most aggressive/poorly-differentiated characteristics tend to become the dominant cell type during progression from low-grade to high-grade carcinomas. 32 Patterns of genetic changes usually do not differ significantly between intermediate-grade and high-grade carcinomas, supporting the idea that intermediate-grade invasive breast tumors develop from either grade 2 or grade 3 DCIS. Further studies of genomic alterations in breast tumors of different histological grades have shown that although genetic changes were more frequent in grade 3 tumors, alterations of one specific chromosomal region (16q) were significantly less frequent (P < 0.01) in high-grade (26%) compared to intermediate-grade (54%) tumors.
33 Thus it appears that intermediate-grade carcinomas may represent a mixture of histological characteristics and may develop along two independent genetic pathways, one characterized by loss of chromosome 16q, few genomic alterations, and high rates of diploidy, while the other pathway is characterized by high homology with high-grade tumors. Molecular Classification of Intermediate-Grade Tumors In our ongoing studies of intermediate-grade breast carcinomas, we observed that clinicopathological characteristics and overall levels of genomic alterations in grade 2 tumors were generally intermediate compared to low-grade and high-grade disease. 23 Specifically, 47% of the intermediate-grade tumors showed patterns of genomic alterations similar to high-grade tumors, while 11% had a low-grade signature where AI was detected only at chromosome 16q. Of note, 24% of cases showed genetic features representing a mixture of low-grade and high-grade disease, while 18% had a unique genomic profile not observed in either high- or low-grade tumors. These data suggest that intermediate-grade carcinomas should not be classified as a discrete disease type, but represent a blend of low-grade and high-grade diseases. Gene expression analysis has been widely used to identify genetic profiles associated with different stages of breast cancer development. Using laser microdissection to isolate pure populations of tumor cells and prevent cross-contamination from stroma or co-occurring lesions, histological grade, rather than pathological stage, was found to correlate with significantly different patterns of gene expression. 34 A subset of samples showed gene expression signatures that were distinctly grade-1-like or grade-3-like; most intermediate-grade tumors exhibited a mixed low- and high-grade gene expression profile. Similarly, an expression signature comprising only five genes (barren homologue-Drosophila [BRRN1], hypothetical protein FLJ11029, chromosome 6 open reading frame 173 [C6orf173], serine/threonine-protein kinase 6 [STK6], and maternal embryonic leucine zipper kinase [MELK]) has been shown to discriminate low-grade from high-grade tumors with ∼95% accuracy. 35 Recognizing the inherent subjectivity in assigning histological grade and the need to better characterize intermediate-grade tumors, researchers have begun to analyze combined gene expression data sets from primary breast tumor samples derived from multiple sources. These approaches have led to the development of a gene expression grade index (GGI), based on 97 genes, which summarizes molecular differences between low-grade and high-grade breast tumors. 14 Similar to earlier results, 34 the GGI partitions intermediate-grade carcinomas into low-grade and high-grade clusters, with un-clustered cases representing a mixture of the two grades. Clinical Implications Molecular data (DNA and RNA) suggest that intermediate-grade invasive breast cancer is not a discrete disease, but represents a blend between low-grade and high-grade tumors. However, whether poorly-differentiated tumors arise from well-differentiated carcinomas, or whether low-grade and high-grade tumors develop along independent genetic pathways, remains unclear. Although multiple studies have identified significant differences in gene expression between low-grade and high-grade disease, 14,34,35 gene and protein expression profiles are transient, reflecting biological conditions in the tumor at the time of excision, rather than an evolutionary history of tumor development.
In contrast, chromosomal changes can be very useful for modeling disease progression. Continuing improvements in technologies to measure chromosomal alterations, such as copy number changes assessed by large-scale single-nucleotide polymorphism (SNP) arrays, 36 may provide the tools necessary to determine the role of chromosome 16q in the development of low-grade tumors and to further examine the development of low-grade as well as high-grade breast carcinomas. Determining relationships among tumors of different histological grades has important clinical implications for estimating risk and defining treatment options in patients with breast disease. For example, atypical ductal hyperplasia (ADH) specimens typically share a gene expression profile with grade 1 disease and tend to cluster with low-grade DCIS and well-differentiated invasive cancer. 34 Thus, ADH may represent a precursor lesion, specifically to low-grade breast cancer. Should low-grade disease be genetically distinct from high-grade, patients diagnosed with ADH could be considered lower-risk, reflecting the less aggressive phenotype of low-grade disease. Similarly, under a model of tumor progression from low-grade to high-grade through histological de-differentiation, identification of molecular changes that promote progression may provide molecular targets for the development of therapeutics to block the progression from low-grade to high-grade (aggressive) tumors. Development of molecular signatures that closely correlate with histological differentiation may improve the assessment of tumor grade. At present, debate continues within the pathology community over the best way to assign histological grade. Some studies suggest that a two-tiered grading system comprising nuclear pleomorphism and mitotic counts is superior to the current tripartite system that includes tubule formation. 9,37 In contrast, other research suggests that a composite score based on a 7-point scale (range 3-9) is more accurate than the current system, which converts the cumulative scores from tubule formation, nuclear pleomorphism, and mitotic counts into a 3-grade system. 38 For example, while tumors with a composite score of 6 or 7 would both be classified as intermediate-grade, those with a score of 7 have a prognosis similar to high-grade tumors. It is possible that tumors with a score of 6 correspond to the G2a tumor group and those with a score of 7 to the G2b tumor group defined by Ivshina et al, 35 suggesting that tumors with scores of 6 and 7 should be considered separately when making treatment decisions. Finally, molecular profiles such as the OncotypeDX™ (Genomic Health, Redwood City, CA) and MammaPrint® (Agendia, Amsterdam, The Netherlands) are now being used more frequently as clinical tools to determine treatment for certain groups of patients. For example, the OncotypeDX™ can be used to make decisions about chemotherapy after surgery for women with node-negative, ER-positive breast cancer. In 2008, Ipsogen (http://www.ipsogen.com/) developed the MapQuant Dx™ Genomic Grade test based on the GGI discussed above. 14 This test is being marketed as the first microarray-based diagnostic test to measure tumor grade. With the reported ability to classify 80% of intermediate-grade tumors as either low-grade or high-grade, the MapQuant assay may be useful in guiding treatment options, possibly sparing patients with grade 1 or grade 1-like tumors unnecessary treatments, while identifying patients who would benefit from chemotherapy.
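For readers less familiar with the grading conventions discussed above, the short Python sketch below makes the two mappings concrete: the three Nottingham components are each scored 1-3, their sum (3-9) is conventionally collapsed into three grades, and the alternative proposal simply keeps the composite score so that scores of 6 and 7 remain distinguishable. The cut-offs follow the standard Nottingham convention; the example values are illustrative and are not taken from any of the cited studies.

# Minimal sketch of Nottingham grading: each component (tubule formation,
# nuclear pleomorphism, mitotic count) is scored 1-3 and the composite
# score (range 3-9) is conventionally collapsed into three grades.
def nottingham_grade(tubule: int, pleomorphism: int, mitotic: int) -> tuple:
    """Return (composite score, collapsed grade) for component scores of 1-3."""
    for score in (tubule, pleomorphism, mitotic):
        if score not in (1, 2, 3):
            raise ValueError("each component score must be 1, 2, or 3")
    composite = tubule + pleomorphism + mitotic      # 3..9
    if composite <= 5:
        grade = 1                                    # well-differentiated
    elif composite <= 7:
        grade = 2                                    # intermediate-grade
    else:
        grade = 3                                    # poorly-differentiated
    return composite, grade

# Composite scores of 6 and 7 both collapse to grade 2, even though score-7
# tumors reportedly behave more like high-grade disease (see text).
print(nottingham_grade(2, 2, 2))   # (6, 2)
print(nottingham_grade(2, 2, 3))   # (7, 2)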
32 Summary Molecular characterization of breast tumors at both the DNA and RNA levels suggests that intermediate-grade carcinomas do not represent an independent disease subtype, but instead share clinical and molecular features of low-grade and high-grade tumors. Meanwhile, debate continues as to whether poorly-differentiated (high-grade) tumors evolve from well-differentiated (low-grade) tumors or whether low-grade and high-grade carcinomas represent discrete diseases that develop along separate genetic pathways. While efforts continue to improve our understanding of the biological factors influencing the development of low-, intermediate-, and high-grade tumors, clinical uses of molecular assays are providing new ways to assign histological grade and guide treatments for patients with breast cancer.
5,054
2009-01-01T00:00:00.000
[ "Biology" ]
Partially Fluorinated Sulfonated Poly(ether amide) Fuel Cell Membranes: Influence of Chemical Structure on Membrane Properties A series of fluorinated sulfonated poly(ether amide)s (SPAs) were synthesized for proton exchange membrane fuel cell applications. A polycondensation reaction of 4,4'-oxydianiline, 2-sulfoterephthalic acid monosodium salt, and tetrafluorophenylene dicarboxylic acids (terephthalic and isophthalic) or fluoroaliphatic dicarboxylic acids produced SPAs with sulfonation degrees of 80–90%. Controlling the feed ratio of the sulfonated and unsulfonated dicarboxylic acid monomers afforded random SPAs with ion exchange capacities between 1.7 and 2.2 meq/g and good solubility in polar aprotic solvents. Their structures were characterized using NMR and FT-IR spectroscopies. Tough, flexible, and transparent films were obtained by solution casting from dimethylsulfoxide. Most SPA membranes with a 90% sulfonation degree showed high proton conductivity (>100 mS/cm) at 80 °C and 100% relative humidity. Among them, two outstanding ionomers (ODA-STA-TPA-90 and ODA-STA-IPA-90) showed proton conductivity comparable to that of Nafion 117 between 40 and 80 °C. The influence of chemical structure on the membrane properties was systematically investigated by comparing the fluorinated polymers to their hydrogenated counterparts. The results suggest that the incorporation of fluorinated moieties in the polymer backbone of the membrane reduces water absorption. High molecular weight, and the resulting physical entanglement of the polymer chains, played a more important role in improving stability in water, however. Introduction Environmental damage and related energy problems caused by burning fossil fuels have shifted research priorities to the discovery of alternative energy sources. Fuel cells can efficiently convert the chemical energy stored in fuels directly into electrical power without negative impacts on the environment, and this technology has been recognized as one of the most promising alternative and sustainable energy sources for automotive, portable, and stationary applications [1-6]. Many factors affect the performance of fuel cells. Among the different types of fuel cells, proton exchange membrane fuel cells (PEMFCs) are considered the most promising because of their high power density. Polymer electrolyte membranes (PEMs), a key component of PEMFCs, are the proton conductors in fuel cells and separate the hydrogen fuel from the oxidant. The harsh environments in which fuel cells operate, which include wide ranges of temperature, humidity, and pressure as well as the presence of oxidative species, set strict standards for effective PEM materials, including robust thermal, mechanical, and chemical stability as well as high proton conductivity.
During the last 40 years, perfluorosulfonic acid polymers such as Nafion have been used predominantly as PEMs in PEMFCs [7,8]. The perfluorinated structure provides good thermal and chemical stability and high proton conductivity. Nevertheless, inherent drawbacks such as difficult synthesis, high cost, and moderate operating temperature owing to a low glass transition temperature have limited the practical application of these membranes in fuel cells. Thus, the development of low-cost, high-performance PEMs has been the focus of considerable research throughout the past decade [9-11]. A variety of advanced sulfonated polymers have been investigated as proton-conducting membranes, including polystyrene copolymers [12-14], poly(ether sulfone)s [15-18], poly(ether ketone)s [19-22], polyimides [23-25], and polybenzimidazoles [26,27]. Although these newer materials display good proton conductivity under certain conditions, none is close to being an ideal candidate for practical applications in PEMFCs. In hydrocarbon-based PEMs, a high degree of sulfonation is generally required to achieve acceptable proton conductivity. A suitable amount of water sorption is also necessary for proton transport [28-30], but the resulting high ion exchange capacity (IEC) leads to considerable swelling of the membrane under hydrated conditions, which severely deteriorates its thermal and mechanical properties as well as overall fuel cell performance. Sulfonated aromatic polyamides have been suggested as promising PEM materials because of their favorable mechanical properties and film-forming abilities [31-33]. Despite these potential advantages, however, their application in fuel cell membranes is hampered by their instability toward hydrolysis and heat. Introducing flexible linkages, separating the sulfonic acid group from the diamine parts, and using aliphatic diamines could improve the stability of these materials, but further research into more stable membranes remains highly desirable [34]. Fluorine-containing polymers and materials display good thermal and chemical stability and high hydrophobicity, properties that make them useful in a wide variety of applications [35,36]. We have previously reported our discovery that water adsorption in various sulfonated poly(ether amide)s (SPAs) can be reduced by incorporating bulky aromatic diamine structures [37]. SPAs with an IEC of less than 1.80 mequiv/g showed good stability in water and high proton conductivity comparable to that of Nafion. A higher sulfonation level in SPAs would improve proton conductivity further. Unfortunately, SPAs with IEC > 1.80 were found to have limited stability in water, causing dissolution of the membranes. These results led us to investigate ionic polymers with different structures (e.g., fluorinated vs. hydrogenated groups) and the structure-related properties of the resulting membranes. Herein we report our systematic comparison of the synthesis and properties of various aromatic poly(ether amide)s and their fluorinated analogs. We focus specifically on the influence of the chemical structure and molecular weight of the ionomers on membrane properties, including water sorption, stability, and conductivity.
Polymer Synthesis and NMR Characterization The preparation of the various SPAs containing different fluorine segments and their hydrogen-containing counterparts is shown in Scheme 1. Using our previously reported conditions [37], we reacted a diamine monomer (4,4'-oxydianiline, ODA) with a sulfonated dicarboxylic acid (2-sulfoterephthalic acid monosodium salt, STA) and unsulfonated dicarboxylic acids (the box in Scheme 1 gives structural details) in a one-pot polycondensation in the presence of pyridine and triphenylphosphite to produce a series of SPAs in high yields. To obtain a variety of polymer structures for comparison, we used both aryl and aliphatic dicarboxylic acids in the polycondensation. Incorporating aliphatic moieties in the polymer main chains provides flexibility and enhanced solubility. We obtained SPAs in sodium salt form that possessed good solubility in dimethylsulfoxide (DMSO), and casting from a DMSO solution was used to obtain flexible, transparent, yellowish-brown membranes. The prepared SPAs were characterized using 1H and 19F NMR spectroscopies. As shown in Figure 1, both the fluorine- and hydrogen-containing SPAs showed two resonances at 10.5 and 11.3 ppm, which were assigned to the two amide hydrogens of STA owing to the electron-withdrawing effect of the sulfonate group. For ODA-STA-TPA-90 and ODA-STA-TPA-80 (TPA = terephthalic acid), a resonance appeared at 10.4 ppm that was attributed to the amide hydrogen in the TPA unit. This peak was more intense in the spectrum of ODA-STA-TPA-80 than in that of ODA-STA-TPA-90 because of its slightly higher TPA concentration. Based on the integral ratio of the amide hydrogen resonances from the STA and TPA parts in the polymers, the calculated degrees of sulfonation were 89% and 78%, respectively, which matched well with the values expected from the sulfonated monomer feed ratio. Interestingly, the peak for the amide hydrogens of the tetrafluoroterephthalic acid (TFTPA) unit appeared at 8.64 ppm, a shift of nearly 2 ppm upfield from that of TPA. When fluoroalkyl dicarboxylic acids (i.e., hexafluoroglutaric acid [HFGA], perfluorosebacic acid [PFSEA], perfluorosuberic acid [PFSUA], shown in Scheme 1) were incorporated into the condensation polymerization, however, their amide hydrogens showed a deshielded resonance at 11.3 ppm versus 9.8 ppm in the hydrogenated alkyl counterparts (Figure 2). Properties of Sulfonated Poly(ether amide)s Observations of the influence of chemical structure on the reactivity of the polymerization revealed the following: the terephthalic acid monomers (TPA and TFTPA) showed enhanced reactivity compared with the isophthalic acid monomers (IPA and TFIPA), for both fluorinated and unfluorinated dicarboxylic acids. Polymerization with TPA was stopped after 0.5 h owing to the high viscosity of the reaction medium. Compared with their unfluorinated hydrocarbon counterparts, the fluorinated dicarboxylic acids showed lower reactivities and reaction rates; therefore, a longer reaction time (i.e., 24 h) was required for the polycondensation of the fluorinated dicarboxylic acids (TFTPA, TFIPA, HFGA, PFSEA, PFSUA). High yields (>97%) were obtained in all the reactions, and off-white fibrous polymers were obtained after two precipitations from methanol.
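As a side note on the NMR-based sulfonation estimate described above, the relation between the amide NH integrals and the degree of sulfonation can be written in one line. The short Python sketch below assumes that the sulfonated (STA) and unsulfonated (TPA) repeat units contribute the same number of amide protons, so their integral ratio maps directly onto the molar ratio; the numerical inputs are illustrative only.

# Sketch: degree of sulfonation (DS) from 1H NMR amide-NH integrals,
# assuming STA and TPA units contribute equal numbers of amide protons.
def degree_of_sulfonation(integral_sta_nh: float, integral_tpa_nh: float) -> float:
    """Fraction of sulfonated diacid units estimated from relative NH integrals."""
    return integral_sta_nh / (integral_sta_nh + integral_tpa_nh)

# Illustrative integrals chosen to echo the values quoted in the text.
print(f"{degree_of_sulfonation(0.89, 0.11):.0%}")   # 89%, cf. ODA-STA-TPA-90
print(f"{degree_of_sulfonation(0.78, 0.22):.0%}")   # 78%, cf. ODA-STA-TPA-80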
A polymer must have a high molecular weight to maintain good mechanical properties and stability in fuel cell membrane applications. The intrinsic viscosities of the SPAs were measured in a 0.1 M sodium iodide solution in DMSO at 30 °C. At a 90% degree of sulfonation, all polymers showed higher viscosity than they did at 80%. Owing to the lower reactivity of the fluorinated dicarboxylic acids in the polycondensation reaction, the fluorinated SPAs had lower molecular weights than their hydrogenated counterparts. Thus, their viscosity values were typically about half or less of those of the hydrogenated SPAs (Table 1). The lower reactivity of the isophthalic acid monomers also resulted in lower viscosity values than those of the terephthalic acid derivatives (Table 1, entries 1 and 2 vs. 3 and 4). Flexible and transparent membranes were easily cast from a DMSO solution using the sodium salt forms of the SPAs. After the membranes were acidified in 1 M sulfuric acid solution and rinsed with deionized water, their water adsorption properties were measured. IECs were measured by titration and compared with calculated values based on the molar feed ratio of STA. As shown in Table 1, the experimental results correspond well with the calculated values. For membranes with similar structures, the higher the IEC, the greater the observed water uptake. Thus, ionomers with a 90% degree of sulfonation had higher water uptake values than those with a degree of sulfonation of 80%. Furthermore, the water uptake of the fluorinated membranes was slightly lower than that of their hydrogenated counterparts. For example, the water uptake of ODA-STA-TPA-90 is 65%, which is about 10% higher than that of ODA-STA-TFTPA-90. This lower water uptake of the fluorinated ionomers is also evident when we compare the lambda values (number of water molecules per SO3H group) of the membranes (see Table 1). Overall, these results suggest that the presence of fluorinated segments reduces the water adsorption of the membranes, possibly owing to their relatively lower IECs and/or greater hydrophobicity. The results of hydrolytic stability studies of the SPA membranes are shown in Table 2. The membranes displayed low hydrolytic stability, and some of them dissolved in less than 1 h under the testing conditions (i.e., deionized water at 80 °C). To investigate whether the poor stability in water is a result of degradation of the amide bonds in the polymer chains or of dissolution of the polymers due to the high sulfonation degree, we dissolved ODA-STA-TFIPA-90 and ODA-STA-TFIPA-80 (in -SO3H form) in hot water and compared their viscosity values (in DMSO) and 1H NMR spectra with those of the untreated polymers. The change in viscosity after dissolving in water was negligible. The treated ionomer polymers also showed almost identical 1H NMR spectra, with unaffected NH resonances of the amide bonds. These results indicate that the poor stability of the ionomers in water is not caused by acidic hydrolysis of the amide bonds of the polymer chain, at least under these conditions, but rather by simple dissolution of the highly sulfonated polymers (IEC > 1.8). Despite their lower water uptake, the fluorinated SPA membranes were surprisingly less stable than the corresponding hydrogenated membranes, possibly because of their lower molecular weights and reduced physical entanglement.
Proton Conductivity Table 3 and Figure 4 show the proton conductivity data of the SPA membranes at different temperatures under 100% relative humidity. The proton conductivities of these ionomers are strongly dependent on IEC and temperature: membranes with higher IECs tend to have higher proton conductivity, and proton conductivity increases with temperature. Most of the SPA membranes displayed lower proton conductivities than Nafion 117 at low temperatures, but ODA-STA-TPA-90 and ODA-STA-IPA-90 exhibited proton conductivity comparable to that of Nafion 117 at temperatures between 40 and 80 °C. Among the examined SPA membranes, ODA-STA-GA-90 provided the highest conductivity, 165.6 mS/cm at 80 °C, a value higher than that of Nafion. Unfortunately, its conductivity dropped dramatically with decreasing temperature. The typically strong temperature dependence of the proton conductivity of hydrocarbon-based PEMs can be attributed to the lower acidity of the sulfonic acid group and the less pronounced phase-separated morphology of the randomly sulfonated ionomers compared with those of Nafion. Nevertheless, all SPA membranes prepared in this study exhibited sufficiently high proton conductivity (measured at 100% relative humidity). Thermal Properties SPA membranes in acid form were subjected to thermogravimetric analysis to evaluate their thermal decomposition properties. A slight weight loss in the membranes below 150 °C was due to the evaporation of water and other solvent residues. For the aromatic SPAs, the next weight decrease occurred around 250 °C, which was ascribed to desulfonation of the ionomers (Figure 5(a)). The thermogravimetric analysis showed that most of the aromatic SPA membranes retained approximately 70% of their weight even at temperatures up to 530 °C. In the case of the aliphatic SPAs, decomposition of the aliphatic chains started around 180 °C (Figure 5(b)). Owing to the higher bond dissociation energy of the C-F bond compared with the C-H bond, the fluorinated aliphatic SPAs showed better thermal stability than their non-fluorinated counterparts. Synthesis of Sulfonated Poly(ether amide)s In a nitrogen-filled glove box, ODA (0.40 g, 2.0 mmol), different ratios of STA and a hydrogenated or fluorinated dicarboxylic acid (2.0 mmol in total), lithium chloride (0.41 g, 9.6 mmol), calcium chloride (0.60 g, 5.4 mmol), TPP (1.68 g, 5.40 mmol), and NMP (10 mL) were added in sequence to a 25 mL vial, which was capped with a Teflon-lined septum. The vial was removed from the glove box, and pyridine (2.6 mL) was added via syringe. The vial was placed in a 115 °C oil bath and stirred for the time given in Table 1. After cooling to room temperature, the reaction solution was added dropwise to methanol to precipitate the polymer. After filtration, the recovered polymer was dissolved in DMSO (9 mL) and reprecipitated by adding methanol (80 mL). The polymer was dried at 80 °C overnight under vacuum.
Membrane Preparation of Sulfonated Poly(ether amide)s The sulfonated poly(ether amide) in sodium salt form (-SO3Na) (0.60-0.70 g) was dissolved in DMSO (approximately 5 mL) with gentle heating, and the resulting clear solution was cast on a clean glass plate. After drying overnight at 50 °C with a positive air flow, the membrane was further dried at 80 °C for 6 h under reduced pressure. The membrane was peeled off from the glass plate by immersing it in deionized water and rinsed with water to remove any residual solvent. Acidification was conducted by immersing the membrane in 1 M H2SO4 solution for 3 days, during which period the solution was changed daily. The membrane was then soaked in deionized water for 12 h to remove any excess acid. Measurements and Characterization 1H and 19F NMR spectra of the polymers in sodium salt form were obtained at room temperature using a Varian NMR spectrometer operating at 400 MHz (1H) and 376 MHz (19F). The NMR samples were prepared by dissolving the polymer in DMSO-d6. Chemical shifts were referenced to the residual DMSO peak (2.50 ppm) for 1H NMR and to CFCl3 for 19F NMR. The intrinsic viscosities of the sulfonated poly(ether amide)s in sodium salt form were measured using an Ubbelohde viscometer in a 0.1 M NaI/DMSO solution at 30 °C. Thermogravimetric analysis (TGA) of the sulfonated polymers was performed under a nitrogen flow at a rate of 40 mL/min. The TGA data were collected between 30 °C and 560 °C at a heating rate of 10 °C/min using a Netzsch STA 440 TGA/DSC instrument. The calculated IECs of the sulfonated polymers were estimated from the molar feed ratio of unsulfonated dicarboxylic acid to STA. The experimental IECs of the membranes were determined by a standard titration method. The dried membrane in acid form was equilibrated with 60 mL of 2 M NaCl solution for 3 days to exchange protons with sodium cations. The protons released from the membrane into the solution were titrated with a 0.025 M NaOH solution using phenolphthalein solution as the indicator. The degree of sulfonation was estimated from the molar feed ratio of unsulfonated dicarboxylic acid to STA and confirmed by 1H NMR spectra. For the water uptake (WU) measurements, membranes in acid form were immersed in deionized water at room temperature for 24 h. The membrane was removed from the water and, after surface water was wiped off with a tissue, it was weighed on a microbalance to give Wwet. The membrane was then dried at 80 °C under reduced pressure for 12 h and weighed to give Wdry. The water uptake was calculated as (Wwet − Wdry)/Wdry × 100%. The hydrolytic stability of the SPA membranes in sulfonic acid form was tested by soaking the films in deionized water at 80 °C. The hydrolytic stability was evaluated by measuring the elapsed time for the membrane to dissolve completely into the water. The SPA membranes in acid form were immersed in deionized water for at least 24 h before the measurement of proton conductivity. The proton conductivity was measured using a four-electrode method with a BT-512 membrane conductivity test system (BekkTech LLC). Proton conductivity measurements were conducted by changing the temperature from 40 to 80 °C under 100% relative humidity. The conductivity was calculated as σ = L/(R × W × T), where L is the distance between the two inner platinum wires (0.425 cm), R is the measured resistance of the membrane, and W and T are the width and the thickness of the membrane in centimeters, respectively.
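The characterization relations defined in this section (titration-based IEC, water uptake, the lambda value, and the four-probe conductivity) can be collected in a few lines of Python. The sketch below is a minimal illustration: the conductivity expression follows the formula given above, the lambda relation reflects our reading of "number of water molecules per SO3H group" (using 18.02 g/mol for water), and the example inputs are illustrative rather than measured values.

# Sketch of the membrane characterization relations described above.
M_WATER = 18.02  # g/mol

def iec_from_titration(v_naoh_ml: float, c_naoh_m: float, dry_mass_g: float) -> float:
    """Ion exchange capacity in meq/g from titration of the released protons."""
    return v_naoh_ml * c_naoh_m / dry_mass_g          # mL x mol/L = mmol (= meq)

def water_uptake(w_wet_g: float, w_dry_g: float) -> float:
    """Water uptake as % of the dry membrane mass."""
    return (w_wet_g - w_dry_g) / w_dry_g * 100.0

def waters_per_sulfonic_group(wu_percent: float, iec_meq_per_g: float) -> float:
    """Lambda: moles of absorbed water per mole of -SO3H groups."""
    return (wu_percent / 100.0) / M_WATER / (iec_meq_per_g / 1000.0)

def conductivity_mS_per_cm(r_ohm: float, width_cm: float, thickness_cm: float,
                           l_cm: float = 0.425) -> float:
    """In-plane proton conductivity, sigma = L / (R x W x T), reported in mS/cm."""
    return l_cm / (r_ohm * width_cm * thickness_cm) * 1000.0

# Illustrative numbers only (not values from Table 1 or Table 3):
print(round(waters_per_sulfonic_group(65.0, 2.0), 1))        # about 18 waters per -SO3H
print(round(conductivity_mS_per_cm(850.0, 1.0, 0.005), 1))   # about 100 mS/cm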
Conclusion We have prepared a series of SPA polymers through the polycondensation of ODA with STA and various dicarboxylic acid monomers (fluorinated, unfluorinated, aromatic, and aliphatic). The structures of the polymers were thoroughly characterized and confirmed using NMR spectroscopy. Systematic comparisons were made both between the fluorinated structures and their hydrogenated counterparts and between the aryl and aliphatic segments of the sulfonated polymers. Although the proton conductivity of the random sulfonated polymers showed a strong dependence on temperature, it was sufficiently high in all of the polymers at 80 °C. The incorporation of fluorinated segments led to reduced water adsorption (presumably because of the increased hydrophobicity of the fluorinated groups) and enhanced thermal stability. However, the reduced reactivity of the fluorinated dicarboxylic acids in the polycondensations also led to lower molecular weights in the fluorinated SPA polymers, as indicated by their lower viscosities. Table 1. Polymerization conditions and properties of sulfonated poly(ether amide) membranes. (a) All the reactions were carried out with 0.40 g of 4,4'-oxydianiline (ODA) under standard conditions (see experimental section for details). (b) Measured with the -SO3−Na+ form of the ionomer (0.5 mg/mL) in a 0.1 M NaI/DMSO solution using an Ubbelohde viscometer at 30 °C. (c) Water uptake (%) = (Wwet − Wdry)/Wdry × 100, where Wwet and Wdry are the weights of the wet and dry membranes, respectively. (d) Measured by titration. (e) Calculated from the molar feed ratio of STA and dicarboxylic acid. (Table 2, footnote a) The elapsed time for a membrane to dissolve completely in deionized water at 80 °C.
4,243.4
2011-01-07T00:00:00.000
[ "Materials Science", "Chemistry" ]
Near-field distribution and propagation of scattering resonances in Vogel spiral arrays of dielectric nanopillars In this work, we employ scanning near-field optical microscopy, full-vector finite difference time domain numerical simulations and fractional Fourier transformation to investigate the near-field and propagation behavior of the electromagnetic energy scattered at 1.56 μm by dielectric arrays of silicon nitride nanopillars with chiral α1-Vogel spiral geometry. In particular, we experimentally study the spatial evolution of the scattered radiation and demonstrate near-field coupling between adjacent nanopillars along the parastichy arms. Moreover, by measuring the spatial distribution of the scattered radiation at different heights from the array plane, we demonstrate a characteristic rotation of the scattered field pattern consistent with a net transfer of orbital angular momentum in the Fresnel zone, within a few micrometers from the plane of the array. Our experimental results agree with the simulations we performed and may be of interest for nanophotonics applications. Introduction The generation of complex light beams carrying orbital angular momentum (OAM) to the far field using optical nanostructures is attracting considerable attention in singular optics for potential applications to super-resolution imaging, optical communications, quantum optics and information security. Most research efforts have so far focused on the engineering of near-field coupling in metallic nanostructures with tailored optical resonances [1] or array geometries [2-4] in relation to the generation of optical vortices and structured light with well-defined OAM in the far field. Among the different approaches investigated so far, Vogel spiral arrays of Au nano-cylinders and dielectric nanopillars are interesting owing to their largely tuneable chiral and geometrical properties, described by correlation functions in between those of disordered random systems and quasi-periodic crystals [5]. Vogel spiral arrays are a broad class of deterministic aperiodic structures that have been investigated for centuries by mathematicians, botanists and theoretical biologists [6] in relation to the outstanding geometrical problems of phyllotaxis [7-10], which is concerned with the understanding of the arrangement of leaves, bracts and florets on plant stems. Vogel spirals exhibit circularly symmetric scattering rings in Fourier space entirely controlled by simple generation rules that induce structural correlations in between those of amorphous and random systems [5]. Vogel spiral arrays of n particles are defined in polar coordinates (r, θ) by the simple relations [6, 11-13] r = a0√n and θ = nξ, where n = 1, 2, ... is an integer index, a0 is a constant scaling factor and ξ is the divergence angle, which is an irrational number. This angle gives the constant aperture between successive particles in the spiral array. The most commonly studied Vogel spiral is known as the golden angle (GA) spiral, generated with a divergence angle equal to the GA, ξ = α (∼137.51°). The GA α is related to the famous Fibonacci golden number φ = (1 + √5)/2 ≈ 1.618 by the relation α = 360°/φ². Rational approximations to the GA can be obtained by the formula α = 360° × (1 + p/q)⁻¹, where p and q (with q < p) are consecutive Fibonacci numbers. Since α is an irrational number, the GA spiral lacks both translational and rotational symmetry.
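The generation rule quoted above is simple enough to state in a few lines of code. The Python sketch below builds the point pattern of the α1 spiral studied in this work; the pillar count (1965) and scaling factor (a0 = 564 nm) are taken from the fabrication details given later in the text, while everything else follows directly from r = a0√n, θ = nξ with ξ = 137.3°.

# Sketch: point pattern of an alpha-1 Vogel spiral from the polar-coordinate
# rule r_n = a0*sqrt(n), theta_n = n*xi quoted above. a0 = 564 nm and 1965
# pillars match the fabricated array; xi = 137.3 deg defines the alpha-1 spiral.
import numpy as np

def vogel_spiral(n_points: int, a0_nm: float, divergence_deg: float) -> np.ndarray:
    """Return an (n_points, 2) array of (x, y) pillar centers in nm."""
    n = np.arange(1, n_points + 1)
    xi = np.deg2rad(divergence_deg)
    r = a0_nm * np.sqrt(n)
    theta = n * xi
    return np.column_stack((r * np.cos(theta), r * np.sin(theta)))

pts = vogel_spiral(n_points=1965, a0_nm=564.0, divergence_deg=137.3)
print(pts.shape)                                          # (1965, 2)
print(round(float(np.hypot(*pts[-1])) / 1e3, 1), "um")    # ~25 um outer radius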
Accordingly, its spatial Fourier spectrum does not exhibit well-defined Bragg peaks, as for standard photonic crystals and quasi-crystals, but rather features a diffuse circular ring whose position varies with wavelength, particle spacing and array geometry, as determined by the angle ξ. The structure of a GA spiral can be decomposed into clockwise (CW) and counterclockwise (CCW) families of out-spiraling arms of particles, known as parastichies, which stretch out from the center of the array. The number of spiral arms in the parastichies is given by consecutive Fibonacci numbers. Previous studies have mostly focused on three types of aperiodic spirals, including the GA-spiral and two other structures obtained by the following choices of divergence angles: 137.3° (i.e. the α1-spiral) and 137.6° (i.e. the α2-spiral or β4-spiral) [6-8, 13]. The α1- and β4-spirals are called 'nearly golden spirals'; their parastichies are considerably fewer, and they exhibit a more disordered spatial structure compared to the GA-spiral, resulting in far richer spectra of localized optical modes. Vogel spiral arrays of resonant Au nanoparticles have been shown to give rise to polarization-insensitive light diffraction across a broad spectral range and to support a broad spectrum of distinctive scattering resonances carrying OAM [2], which have been exploited in a number of optical device structures [2-4, 14, 15]. Controlled generation and manipulation of OAM states with large values of azimuthal numbers have recently been analytically modeled [3] and demonstrated experimentally using various types of Vogel arrays of metal nanoparticles [4], providing opportunities for the generation of complex OAM spectra using planar nanoparticle arrays. More recently, the interaction of optical beams with two-dimensional nanostructures of designed chirality has become a topic of great interest from both a fundamental and a technological standpoint owing to the possibility of directly manipulating chiral effects in light-matter coupling by near-field engineering [16]. In this study, using scanning near-field optical microscopy combined with light scattering simulations based on full-vector finite difference time domain (FDTD) and efficient fractional Fourier transform (FRFT) methods, we investigate the spatial evolution of characteristic scattering resonances in arrays of dielectric nanopillars with chiral α1 Vogel spiral geometry. In particular, different from most of the previous works, which focused on metallic nanostructures, here we consider optically transparent, high-refractive-index silicon nitride nanopillar structures and their near-field coupling behavior, and we experimentally show a net transfer of OAM by wave diffraction in spiral geometry within a few micrometers from the plane of the array. We specifically focused our study on the optical behavior of the α1 Vogel spiral because it is an example of a deterministic chiral structure with an almost constant pair-correlation function, similar to the case of random gases [5]. As a result, the α1 Vogel spiral features a characteristic interplay between structural disorder and the well-defined CCW chirality that could potentially lead to novel wave diffraction and self-imaging phenomena, as addressed in this paper for the first time. Experimental methods The spiral nanopillar array fabrication process begins by depositing a 650 nm thick silicon-rich nitride (SRN) layer by radio frequency magnetron sputtering onto a silicon dioxide (SiO2) substrate [17].
Electron beam lithography is used to define the spiral geometry in a poly(methyl methacrylate) resist [18]. A 40 nm thick chrome (Cr) mask layer was deposited via electron beam evaporation, followed by a lift-off process in heated acetone. Reactive ion etching is used to transfer the pattern to the SRN, resulting in 350 nm tall SRN pillars and leaving a 300 nm SRN film beneath. The Cr mask layer is then removed by a wet chemical etchant. The α1-spiral array is 50 µm in diameter and consists of 1965 pillars, each with an individual pillar diameter of 520 nm. A scaling factor (a0) of 564 nm was used, resulting in an average nearest-neighbor center-to-center spacing of 900 nm. This value is calculated by finding the distance from a particle to every other particle in the array; the minimum of those values gives the distance to the nearest neighbor. By repeating this for each particle in the array and averaging the results, the average nearest-neighbor separation is obtained. A scanning electron microscopy (SEM) micrograph of the fabricated α1-spiral is shown in figure 1(a), showing that the structure is characterized by 21 parastichy arms. Optical field measurements were performed with a commercial scanning near-field optical microscope (SNOM) (TwinSNOM, Omicron) with shear-force feedback, equipped with uncoated near-field probes, in transmission geometry. The light from a diode laser (λ = 1556 nm) is focused on the back side of the sample with a 20× objective (NA = 0.4) and the transmitted light is collected through the near-field probe, as shown in figure 1(b). We performed experiments with two different laser spot dimensions. The first configuration, used for near-field measurements, maintains an optimal focus on the sample surface, resulting in a laser spot with a diameter of a few microns. In this configuration, approximately ten pillars are illuminated at a time. A second configuration, used for far-field measurements, utilizes a defocused laser beam to illuminate the entire spiral pillar array at once. The latter configuration contributes to the enhancement of interference effects in the far field, since each pillar acts as a scattering center and collective effects are more pronounced. Results and discussion Using the SNOM setup discussed above, we have investigated experimentally the spatial distribution of the optical intensity scattered by the fabricated α1 spiral at 1.56 µm. Figure 2(a) displays the overall electric field intensity distribution of the 50 µm diameter α1 spiral, obtained by connecting different near-field scans collected on 20 × 20 µm² areas. From this image, we extract a typical spatial resolution of 250 nm. Figure 2(b) shows the spiral geometry extracted from the reassembled topography image, which is acquired simultaneously with the optical data. By comparing the reassembled optical image and the topography, near-field coupling is observed unambiguously between nanopillars along the parastichy arms. To investigate more precisely where the transmitted optical intensity is concentrated, figure 3 magnifies, both for the optical image and for the topography, the 6 × 6 µm² region highlighted by the white squares in figure 2. A direct comparison of the optical distribution with the topographic image reveals that the transmitted signal originates from the top of the pillars forming the parastichy arms and from the substrate region between two parastichy arms, highlighted by the corresponding white arrows in figure 3.
However, the highest field intensities are recorded near the top of the pillars forming the parastichy arms. To better understand the near-field coupling effects within the parastichies, we carried out three-dimensional (3D) FDTD simulations using a commercial software package (Lumerical Solutions). The exact 3D geometry of the device was considered in the simulation. However, owing to computational memory limitations, the spiral array was limited to the first 500 pillars. The device is placed on a SiO2 substrate and excited from the bottom (through the substrate) by a plane wave at 1550 nm to match the experimental conditions as closely as possible. Perfectly matched layer boundary conditions are used to terminate the simulation domain. To reduce the presence of the pump beam in the far-field plots, the source contribution was subtracted from the simulation immediately above the array, leaving only the contribution of the scattered light. Field monitors were placed in the device to measure the electric field intensity at the top and bottom of the nanopillars, as well as at various planes above the array, in order to study the evolution of the field structure as it propagates in space away from the pillar plane. The results of the numerical simulations are summarized in figure 4. Figure 4(a) shows the distribution of the electric field intensity sampled in the top plane of the pillars. Figure 4(b) displays a magnified area (3.6 × 3.6 µm²) encircled by the orange box in figure 4(a) and approximately corresponding to the central part of the region investigated in figure 3. The electric field intensities at both the top detector and the bottom detector are plotted in figure 4(b). As observed experimentally in the SNOM measurements, the field intensity measured by the top detector is concentrated between two adjacent pillars forming the parastichies, while in the bottom detector the signal mostly originates from the substrate plane. In figures 4(c) and (d), we report the same field distributions as in figures 4(a) and (b), convolved with a Gaussian function with a full-width at half-maximum (FWHM) of 250 nm, in order to take into account the finite spatial resolution of our experimental setup. It is evident that the main effect of the Gaussian convolution is to blur the images. After convolution, it is no longer possible to tell from the top detector that the signal is localized in the regions between two adjacent pillars. Nevertheless, it is still clear that the optical energy flows along the parastichies. In contrast, when analyzing the data collected by the bottom detector, we observe that the electric field intensity has a maximum on the substrate. It is also remarkable that, as a consequence of the Gaussian convolution, the electric field intensity maxima drop from 27 to 4 in the top detector and from 5 to 2 in the bottom detector, respectively. It follows that the comparison between the experimental and the calculated electric field intensity distributions is quantitatively very good, as is clear by comparing figure 3(a) with figure 4(d). Here, the signal collected by the near-field probe on the parastichies should be associated with the top detector of the simulation, and the signal collected when the near-field probe reaches the substrate should be compared with the bottom detector.
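The convolution step used above to mimic the finite probe resolution can be reproduced with a standard Gaussian blur. In the Python sketch below only the 250 nm FWHM comes from the text; the grid spacing and the random test map are assumptions made purely for illustration.

# Sketch: blurring a simulated |E|^2 map with a Gaussian of 250 nm FWHM to
# mimic the finite SNOM resolution. FWHM = 2*sqrt(2*ln 2)*sigma.
import numpy as np
from scipy.ndimage import gaussian_filter

FWHM_NM = 250.0
GRID_STEP_NM = 25.0                     # assumed monitor pixel size
SIGMA_PX = FWHM_NM / (2.0 * np.sqrt(2.0 * np.log(2.0))) / GRID_STEP_NM

rng = np.random.default_rng(0)
intensity = rng.random((400, 400))      # placeholder for the simulated map
blurred = gaussian_filter(intensity, sigma=SIGMA_PX)

# The blur strongly suppresses sharp hot spots, consistent with the reported
# drop in peak intensity (e.g. from 27 to 4) after convolution.
print(intensity.max(), blurred.max())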
In the following, we address the wave diffraction properties of the sample by studying the intensity distribution of the field scattered by the SRN α1 spiral at different heights from the pillar array. Figures 5(a) and (b) show the numerical FDTD calculations of the electric field intensity calculated with a field monitor placed 1 and 4 µm above the α1 spiral, respectively. In figures 5(c) and (d), we plot the experimentally measured scattered fields, collected within a 30 × 30 µm² region and probed at constant heights of approximately 2 and 6 µm away from the sample surface, respectively. We can observe that at a distance of 1-2 µm from the sample surface, the spatial distribution of the electric field is characterized by the 21 curved arms associated with the parastichies, which follow a CCW orientation, as observed in both the measurements and the calculations. However, by increasing the distance from the surface, the 21 curved arms are still unambiguously discernible, but now appear with a CW orientation, demonstrating a remarkable inversion in the optical intensity pattern along its propagation direction in free space. This interesting inversion of the propagating field pattern is observed both in the numerical calculations and in the experimental data. However, minor differences between the experiment and the calculations result both from the uncertainty in the experimental determination of the height of the measuring plane and from the fact that the uncoated probe collects signal at slightly different heights. In fact, the SNOM setup used lacks calibration for determining the sample-to-probe distance when the probe is not in feedback. To further explore the spatial evolution of the scattered field pattern as it propagates away from the pillar array, we have carried out additional FDTD calculations as a function of the propagation distance and created a video combining all the frames, which is available as supplementary material (available from stacks.iop.org/NJP/15/085023/mmedia). The supplementary video illustrates the evolution of the scattered field intensity as it propagates from the top plane of the pillars up to 8 µm above the array plane in approximately 0.1 µm vertical steps. Through the entire progression of vertical planes, we can observe unambiguously that the overall field pattern continuously passes from a CCW rotation to a CW rotation. The parastichy field components begin with a CCW orientation, exactly following the geometry of the arms in the near-field zone. However, as the plane of observation reaches approximately 3 µm, the parastichy field pattern evolves into a mixture of both CCW and CW arms, slowly transitioning into purely CW arms when propagating between the 4 and 8 µm observation planes. The distinct rotation of the scattered field intensity highlights the very rich dynamics of interacting scattered wavelets that transfer net orbital angular momentum to the overall radiation field in the intermediate Fresnel zone. To the best of our knowledge, the data in figure 5 provide the first evidence of net OAM transfer occurring within a few micrometers from the object in Vogel spiral arrays of dielectric nanopillars. In fact, while previous studies have focused on the rich spectrum of azimuthal OAM values transferred to the far-field radiation zone by Vogel spirals [3, 4, 19], the complex wave diffraction effects that develop in the intermediate Fresnel zone remain to be explored.
In order to investigate the free propagation of the scattered field intensity over a larger range of distances, we have resorted to the method of fractional Fourier transformation. This approach provides an equivalent formulation of paraxial wave propagation and Fresnel scalar diffraction theory [20], and considers light propagation as a process of continual fractional transformation of increasing order. However, this approach neglects material dispersion, and the spiral array is modeled by circular apertures within the scalar approximation. The FRFT is a well-known generalization in fractional calculus of the familiar Fourier transform operation, and it has successfully been applied to the study of quadratic phase systems, imaging systems and diffraction problems in general [20-23]. Given a function f(u), under the same conditions in which the standard Fourier transform exists, we can define the ath order FRFT f_a(u), with a being a real number, in several equivalent ways [20, 24]. The most direct definition of the FRFT is given in terms of the linear integral transform f_a(u) = ∫ K_a(u, u′) f(u′) du′, with kernel defined by [20] K_a(u, u′) ≡ A_α exp[iπ(u² cot α − 2uu′ csc α + u′² cot α)] for a ≠ 2n, K_a(u, u′) = δ(u − u′) when a = 4n, and K_a(u, u′) = δ(u + u′) when a = 4n ± 2, with n an integer, α = aπ/2 and A_α a normalization factor. The ath order fractional transform defined above is sometimes called the αth order transform, and it coincides with the standard (i.e. integer) Fourier transform for a = 1 (α = π/2). More information on alternative definitions, generalizations and the many properties of FRFTs can be found in [20, 24]. The FRFT of a function can be thought of as the Fourier transform raised to the ath power, where a need not be an integer. Moreover, being a particular type of linear canonical transformation, the FRFT maps a function to any intermediate domain between time and frequency and can be interpreted as a rotation in the time-frequency domain [20]. In optics, the main advantage of the FRFT method compared with numerical FDTD simulation lies in its superior computational efficiency, which enables a more detailed investigation of the qualitative behavior of the intensity propagation over an extended propagation range. In this paper, we computed the two-dimensional FRFT according to [25] (http://nalag.cs.kuleuven.be/research/software/FRFT/frft22d.m) and studied the propagation of the diffracted field up to 50 µm above the spiral plane. We additionally prepared a high-resolution movie that illustrates the free-field propagation of the intensity scattered by the Vogel spiral, and we have included this as supplementary material (available from stacks.iop.org/NJP/15/085023/mmedia). In figure 6, we show a few representative movie frames that display the FRFT-calculated intensity at different distances from the spiral plane, as specified in the caption. We can clearly appreciate in figures 6(c) and (g) the cases in which the diffracted field patterns are rotated with respect to the geometry of the parastichy arms, as previously discussed in relation to figure 5. However, the emergence of this rotation phenomenon from the FRFT scalar field simulations demonstrates its very robust and general nature, which follows from the coherent interactions of (singly) diffracted wavelets in the central and more disordered area of the spiral array, as evident in figures 6(c), (f) and (g).
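As an illustration of the FRFT definition given above (and not of the efficient two-dimensional routine of [25] actually used in the calculations), the Python sketch below evaluates the one-dimensional integral by brute-force quadrature, taking α = aπ/2 and the usual normalization A_α = sqrt(1 − i cot α); these two conventions are assumptions consistent with the standard formulation rather than details spelled out in the text.

# Sketch: direct O(N^2) quadrature of the 1D FRFT integral
# f_a(u) = int K_a(u, u') f(u') du', with the chirp kernel quoted above.
import numpy as np

def frft_direct(f: np.ndarray, u: np.ndarray, a: float) -> np.ndarray:
    """Fractional Fourier transform of samples f(u) on a uniform grid u."""
    alpha = a * np.pi / 2.0
    if np.isclose(np.sin(alpha), 0.0):            # degenerate orders a = 0, 2, 4, ...
        return f.copy() if np.isclose(np.cos(alpha), 1.0) else f[::-1].copy()
    du = u[1] - u[0]
    amp = np.sqrt(1.0 - 1j / np.tan(alpha))       # A_alpha (assumed normalization)
    uu, up = np.meshgrid(u, u, indexing="ij")     # output u (rows), input u' (cols)
    kernel = amp * np.exp(1j * np.pi * (uu**2 / np.tan(alpha)
                                        - 2.0 * uu * up / np.sin(alpha)
                                        + up**2 / np.tan(alpha)))
    return kernel @ f * du

# A Gaussian exp(-pi*u^2) is, up to a phase factor, invariant under the FRFT
# of any order, so the magnitude of its transform should stay close to the input.
u = np.linspace(-6.0, 6.0, 513)
g = np.exp(-np.pi * u**2)
print(np.max(np.abs(np.abs(frft_direct(g, u, 0.5)) - g)))   # small residual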
In particular, we can appreciate from the field evolution in the near-field region (figure 6(c)) that the central area of the spiral couples radiation into directions that are orthogonal to the surrounding parastichy arms. These secondary lines of scattered radiation spatially define a complementary set of parastichy arms that are responsible for the inversion of the intensity pattern at short distances from the object plane. As propagation unfolds, the diffracted wavelets coherently reinforce each other along the distinctively rotating ring-like structures observed in figures 6(f) and (g), which gradually transition at larger distances into the characteristic circularly symmetric far-field patterns of Vogel spirals (figure 6(i)) [2, 5]. However, depending on the propagation distance, we have observed (see accompanying movie) a characteristic oscillation between the formation of ring-like structures and the inversion of the field intensity patterns compared with the image of the spiral geometry. We believe that this effect unveils a distinctive self-imaging property of Vogel spirals, which is particularly evident in figures 6(e) and (g). We recall that full image reconstruction will happen at a distance L from a coherently illuminated array with discrete spatial frequencies located in reciprocal space at rings of radii ρ given by ρ² = 1/λ² − (m/L)², where m is an integer such that 0 ≤ m ≤ L/λ [26]. Self-imaging effects have been extensively investigated in the context of periodic structures (e.g. the Talbot effect) as well as in quasi-periodic Penrose arrays, where it has recently been shown that they can focus light into sub-wavelength spots in the far field without contributions from evanescent fields [27-29]. However, to the best of our knowledge, self-imaging effects in aperiodic media with an isotropic Fourier space that lacks diffraction peaks have never been demonstrated before, and they will be the subject of a more extended follow-up paper. The results in figure 6 and the corresponding movie indicate that the unique propagation behavior of diffracted waves from the α1 spiral results from the coherent interplay of two well-separated spatial regions of very dissimilar structural order: (i) the disordered central region of the spiral, which diffracts wavelets in all directions, and (ii) the surrounding region with the well-defined chirality of the parastichy arms. In this latter region, the radiated optical power from each excited particle defines an inverted set of orthogonal parastichy arms and 'inverts' the spatial pattern of the self-images of the array. We believe that the unique interplay between aperiodic order and chiral structures such as the investigated α1 spiral can provide novel opportunities for the manipulation of sub-wavelength optical fields and disclose richer scenarios for the engineering of focusing and self-imaging phenomena in nanophotonics.
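The self-imaging condition quoted above can be evaluated directly. In the short Python sketch below the wavelength matches the experiment (1.56 µm), while the reconstruction distance L = 50 µm is an assumed illustrative value rather than one taken from the paper.

# Sketch: spatial-frequency ring radii for the self-imaging condition
# rho_m^2 = 1/lambda^2 - (m/L)^2, with integer m in the range 0 <= m <= L/lambda.
import numpy as np

wavelength_um = 1.56
L_um = 50.0                                            # assumed example distance

m_max = int(L_um / wavelength_um)                      # 32 rings for these values
m = np.arange(0, m_max + 1)
rho = np.sqrt(1.0 / wavelength_um**2 - (m / L_um)**2)  # radii in 1/um

print(m_max, np.round(rho[:3], 4))                     # first few ring radii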
Moreover, by measuring and computing the spatial structure of the scattered field at different heights from the array plane, we have demonstrated experimentally a net orbital angular momentum transfer to the radiation field occurring within a few micrometers from the array plane, in agreement with FRFT diffraction calculations in the paraxial regime. We believe that the unique features of structurally disordered and chiral Vogel spirals provide interesting new opportunities for a number of engineering applications in singular optics, secure communication, imaging and optical sensing.
5,353.6
2013-08-22T00:00:00.000
[ "Physics" ]
Determinants of Insurance Sector Development in Nigeria: The insurance market in Nigeria, like those of other developing African countries, has remained small, less pervasive, and underdeveloped, as evidenced by abysmally low density and penetration rates. This casts doubt on insurance sector development in Nigeria and raises the question of whether the issues are related to macroeconomic, demographic, and institutional dynamics affecting the sector. The determinants of insurance sector development in Nigeria for the period 1987 to 2020 are examined within a multiple regression framework through ARDL bounds cointegration testing. The Error Correction Model (ECM) results show that the speed of adjustment to the equilibrium level following a short-term distortion had negative coefficients of −0.02725 (p = 0.000 < 0.01) and −1.08206 (p = 0.014 < 0.05) for non-life insurance density and penetration, respectively. According to the long-run estimates, non-life insurance demand is positively and significantly influenced by trade openness, real interest rates, population growth, and financial development. Non-life insurance premiums are reduced by inflation and the age of the population. This study recommends that GDP per capita be grown further through quick investment and social spending, greater exports, and a decrease in unemployment, while interest rates and inflation levels should be monitored through the monetary policy activities of the apex financial institution. INTRODUCTION Large investors in the insurance industry provide risk management services to various economic sectors, making it an important component of the financial sector (Gaganis, Hasan & Pasiouras, 2019). Insurance companies are important financial intermediaries that perform critical risk underwriting, financing, and management for individuals and companies. Besides, these institutions help to channel long-term resources and domestic savings through their financial intermediation process (Olayungbo, 2015; Guerineau & Sawadogo, 2015). Life and non-life insurance activities that encourage long-term savings, investment, and growth could drive the insurance market. Despite the insurance sector's perceived role in business survival and economic growth, several factors can improve or plague its development. Extant studies have identified several factors, which may be classified into macroeconomic, demographic, sociocultural, and institutional factors (for instance, interest rate, dependency ratio, and economic freedom), as determinants of insurance sector development. Nigeria and other developing African countries have extremely low levels of insurance penetration, despite the low costs of insurance products (Alhassan & Biekpe, 2016). The insurance market in Nigeria has remained underdeveloped. The market activities contribute minimally to the economy's growth due to the lack of adequate reforms and strict regulations (Sawadogo, Guerineau & Ouedraogo, 2018). Non-life insurance market activities mostly dominate Africa's insurance markets, and Nigeria has the largest market players (Alhassan & Biekpe, 2016a). Non-life insurance penetration of 0.18% in Nigeria is one of the lowest in the world, according to insurance statistics. As a result, Nigeria's insurance sector is still in its infancy, and growth in the sector should be given the utmost importance. The insurance industry in Nigeria is still developing. An investigation into the possible impact of economic variables on the insurance market is therefore necessary.
There are limited studies on the growth of non-life insurance markets in African economies. This study's overarching goal is to investigate the factors that influence the growth of Nigeria's insurance market. Because non-life insurance is more common in developing countries like Nigeria, this study focuses on this area. While empirical studies have examined the role of the insurance sector in economic expansion in Nigeria, there is little empirical evidence on what drives the development of the sector. In light of the foregoing, a comprehensive market approach (considering both demand and supply) is needed to examine the factors influencing insurance sector development in Nigeria, as this topic has received little attention in the country's academic literature. LITERATURE REVIEW Conceptual Issues The expansion of the insurance industry necessitates an increase in the market's density and penetration. Increased per capita insurance premium spending leads to an increase in the sector's density (Brokešová & Vachálková, 2016). Another factor that influences how quickly the industry grows is the number of insurance companies. Insurance penetration reflects direct premiums written each year in relation to output. Life, non-life, and total insurance companies can contribute to the expansion of the insurance market. Non-life insurance companies are essential to any financial system because they promote long-term savings and large-scale reinvestment in public-private projects (Satrovic & Muslija, 2018). Premiums adjusted for population, insurance penetration, insurance density, and net written premiums are four measures of the insurance sector's development (Din, Regupathi, Abu-Bakar, Lim, & Ahmed, 2020). Using insurance premium penetration and density (the amount of money spent on insurance per person) as indicators of the development of the insurance sector is therefore appropriate. This study examines the growth of the insurance industry using both non-life insurance penetration (premiums in relation to GDP) and density (non-life insurance premiums paid per person). The Life Cycle Theory According to the life-cycle hypothesis proposed by Ando and Modigliani (1963), households aim to maximize the expected utility of their consumption over the course of their lifetime. The life-cycle hypothesis of Ando and Modigliani (1963) was espoused by Yaari (1965) to explain the need for insurance arising from an individual's uncertain lifespan. According to this theory, a person's savings habits show that he or she is trying to spread out his or her consumption over the course of a lifetime, from working life through to retirement. A person's utility function is increased by purchasing insurance to provide for his or her dependents in the event of his or her death (Beck & Webb, 2003). The life cycle model considers an individual's wealth, estimated lifetime income, interest rate level, insurance policy fees (administrative costs), and the assumed subjective discount for current and future consumption (Satrovic & Muslija, 2018). According to the life cycle model's underlying principles, the insurance sector's growth could increase as life expectancy rises. Based on this hypothesis, insurance demand is inversely related to age dependency. As the number of dependents in a household grows, fewer people are able to save for the future because resources are absorbed by immediate needs (Zerriaa & Noubbigh, 2016).
Empirical Review A well-developed financial sector has also been shown to boost people's confidence in taking out insurance policies (Alhassan & Biekpe, 2016a; Mishra, 2014; Sen & Madheswaran, 2013; Zerriaa & Noubbigh, 2016). These studies agree that development in the insurance industry is influenced by changes in the financial, social, and macroeconomic environments. Many studies have shown that a combination of favorable economic conditions, a well-educated populace, high national income and financial development, and the strict enforcement of property rights has the potential to help the insurance sector thrive. The factors identified above can influence the insurance industry, but how much depends on environmental, population, and other societal factors. Thus, empirical studies have focused on differences in culture (Chui & Kwok, 2009), religion (Feyen, Lester & Rocha, 2013), globalisation (Lee & Chiu, 2016), interest rates (Lee & Chang, 2015), the perception of health status (Al-Wang, Lee, Lin, & Tsai, 2018), and health expenditure (Alhassan & Biekpe, 2016a) as factors that affect insurance sector development. Brokešová et al. (2014) studied the factors that influenced insurance sector development in four Central European transition economies over the period 1995 to 2010. Adopting a panel regression approach, they showed that insurance market development in transition economies differs from the experience of advanced economies. Factors such as the elderly-to-dependents ratio, inflation, social security, urbanization, and criminality have an effect on the growth of the insurance sector in Central European economies. Zyka and Myftaraj (2014) looked at how the Albanian insurance industry grew over the period 1999 to 2009. Economic growth, population growth, urbanization, and paid insurance claims have a positive effect on the overall insurance premium. A rise in insurance premiums results from an increase in demand, which has an effect on the culture of insurance use. Non-life insurance consumption in 16 CSEE countries was studied by Petkovski and Kjosevski (2014) for the period from 1992 to 2011. The long-term results of the cointegration test and Dynamic Ordinary Least Squares (DOLS) estimator showed that non-life insurance consumption is positively influenced by the number of passenger cars per 1,000 people, as well as by GDP per capita. An error correction model was used by Kjosevski and Petkova (2015) in a study of non-life insurance consumption in 14 countries in Central and Southeast Europe spanning 1995 to 2010. The findings reveal a long-term impact of household size and car ownership on non-life insurance consumption, while the rule of law and EU membership have short-term impacts. Poposki et al. (2015) examined the elements that influenced the penetration of non-life insurance for eight SEE countries from 1995 to 2011. For four Central European countries, Brokešová and Vachálková (2016) studied the macroeconomic environment's influence on the development of the insurance industry from 1995 to 2013.
Macroeconomic conditions have an enormous impact on insurance industry development in the transition countries, according to the results of the pooled OLS model. Abbas and Ning (2016), using the OLS estimator for the period 1991 to 2010, found that GDP per capita has a negative impact on insurance premiums in Tanzania. Inflation and interest rates also negatively impact Tanzania's insurance industry, while there was evidence to suggest that GDP growth has a positive impact on the industry's development. Over the period from 2000 to 2011, Trinh, Sgro and Nguyen (2016) examined the factors that determine non-life insurance expenditures for 36 developed and 31 developing countries. Using several estimators, the results showed that income, bank development, economic freedom, urbanization, legal systems, and culture drive non-life insurance expenditures, and that their impact varies across countries. Akhter and Khan (2017) focused on the macroeconomic factors that influence Takaful (Islamic insurance) and conventional insurance in 14 countries in the ASEAN and Middle East regions from 2005 to 2014. Urbanization, financial development, and income levels affect insurance demand positively, according to fixed and random effects regression models. Takaful demand in all regions was found to be positively affected by inflation, while dependency and education ratios had a negative impact. An analysis of the influence of economic factors on insurance development in Western Balkan countries was carried out by Buric et al., who analysed the data with Autoregressive Distributed Lag (ARDL) regression. Trade openness, urbanization, income, financial development, and economic growth were found to have positive and significant effects on the development of the insurance industry, whereas insurance demand was negatively correlated with inflation. Gaganis et al. (2019) examined the relationship between insurance sector regulation and development in 44 developed and developing countries from 2000 to 2008. Feasible Generalized Least Squares estimator results showed that inflation, the dependency ratio, and life expectancy have a negative impact on the development of the insurance sector, while GDP per capita and the growth of banks have a positive impact; government expenditure has no effect on the insurance industry. An investigation of the drivers of insurance demand in Ethiopia from 2001 to 2016 was carried out by Meko, Lemie, and Worku (2019). Age dependency, urbanization, the real interest rate, inflation, and life expectancy have a positive and significant effect on insurance demand in Ethiopia, while GDP per capita and the price of insurance have no effect. Insurance consumption in South-Asian insurance markets was examined by Sanjeewa, Hongbing, and Hashmi (2019) from 1996 to 2017. The results showed that demographic factors are more important than financial factors in explaining insurance consumption. Furthermore, it was reported that urbanisation, private health expenditure, income, dependency, and life expectancy reduce insurance demand, whereas financial development and education affect insurance consumption positively. Research Gap Most studies have identified the factors influencing insurance sector development in developed countries. However, there are limited studies from other developing and/or emerging countries, and the subject is relatively underexplored in Nigeria, where economic freedom appears less solid.
Therefore, the importance of identifying the country-specific determinants of insurance sector development that could help policymakers take responsive actions cannot be overemphasised. As a result of the wide range of factors influencing insurance demand, which differ from country to country, insurance consumption varies among countries. Probably as a result of the insurance industry's small size in comparison to the banking industry, there have not been many studies in Nigeria looking at its development. Thus, this study focuses on the drivers of development of Nigeria's insurance industry. Besides, this study employs two alternative measures of insurance sector development, namely insurance sector density and penetration, to better understand the subject of discussion. In empirical studies on the development of Nigeria's insurance sector, the role of institutional factors, such as banking sector development and economic freedom, has been overlooked. METHODOLOGY Model Specification This study adopts the model from the study of Brokešová et al. (2014) by incorporating foreign direct investment, the real interest rate, and an additional institutional variable (an index of economic freedom) as plausible determinants of insurance sector development. There are two ways in which this study differs from Brokešová et al. (2014). Firstly, it focuses on both demand and supply, i.e. density and penetration, in insurance sector development. Second, the variables incorporated are important because increasing inflows of foreign direct investment without macroeconomic disturbances, market-entry restrictions, and trade barriers could help to accumulate more insurance assets, thus enhancing insurance sector development. This study examines the determinants of insurance sector development using variables such as income, trade openness, the interest rate, inflation, education, the dependency ratio, population growth, life expectancy, urbanization, financial development, and economic freedom. The functional model is specified as: insurance sector development (non-life insurance density or penetration) = f(income, trade openness, interest rate, inflation, education, dependency ratio, population growth, life expectancy, urbanization, financial development, economic freedom). In econometric form, the functional model is restated as Yt = α0 + βXt + et, where Yt is the measure of insurance sector development and Xt is the vector of its determinants. The ARDL bounds testing model following Pesaran et al. (2001) is specified as: ΔYt = α0 + α1T + Σ δ1i ΔYt-i + Σ δ2i ΔXt-i + β1Yt-1 + β2Xt-1 + et, where ∆ refers to the first-difference operator, α0 is the equation's drift component, and T refers to the time trend. Yt is the dependent variable and Xt is the vector of Yt's determinants, the δ's are the short-run coefficients to be calculated, the β's are the long-run multipliers, and et represents the error term, which is assumed to be independently and identically distributed. Following the establishment of the ARDL model and the cointegration of the variables using the bounds testing approach, it is necessary to estimate the short-run relationships of the variables using an error-correction model in the generalised form specified by Pesaran and Shin (1999) and Pesaran et al. (2001). To incorporate the variables of the study into the ARDL framework, the model is specified as ΔYt = α0 + Σ δ1i ΔYt-i + Σ δ2i ΔXt-i + λECMt-1 + et, where λ is the error-correction coefficient on ECMt-1; it should be negatively signed, implying that the variables in the model are restored back to equilibrium levels following any short-run deviations. Research Design and Data Information The ex-post facto research design is used to unravel the factors that determine Nigeria's insurance sector development. Based on existing facts and data, this design is appropriate. The data used in this study span the years 1987 to 2020. The start period, 1987, is a year after Nigeria's financial sector had been liberalized and the Structural Adjustment Program (SAP) had been implemented.
The emergence of SAP led to significant improvement in the insurance industry's activities and created a wave of macroeconomic, demographic, and institutional dynamics that may negatively affect insurance businesses. Definition of Variables, Measurements, and Data Information The data series are gleaned from the Fraser Institute and the World Bank's databases. Table 1 presents the variable descriptions, data sources, measurements, and the supporting literature. A study with a small sample size can use the ARDL model rather than Johansen's cointegration model (Pesaran et al., 2001). The reliability of the results was tested by applying a model diagnostic procedure, which includes tests for normality of the series and model misspecification error, serial correlation and heteroscedasticity, and model instability and structural changes. Table 2 summarizes the descriptive statistics for the variables used in the study. The sample means, standard deviations, and minimum and maximum values are all included in these statistics. Results for the Stationarity of Variables Unit Root Test It is critical to ensure that all time-series data are stationary, with constant mean and variance over time, before estimating a regression model. The test determines whether or not the model's variables have a unit root (their stationarity properties). When non-stationary data are used in a regression, spurious results become imminent (Wang & Hafner, 2018). The test is also used to examine the order of integration, I(d), of each variable, since this indicates the correct regression model to estimate. Accordingly, the augmented Dickey-Fuller unit root test (ADF-URT) is used to confirm the variables' stationarity. Table 3 shows the findings from the unit root test. The results of the ADF unit-root test reveal that the variables are stationary at I(0) or I(1). Except for PEN, GY, and FRE, all of the variables are level stationary. This shows that the null hypothesis of non-stationarity is rejected at the respective significance levels. This finding meets the requirement for estimating the ARDL framework, which ensures the establishment of long-term linkages between variables. The bounds testing technique for cointegration requires that all variables be I(0) or I(1), implying that the variables are mutually integrated. ARDL Bounds Testing Approach for Co-integrating Relationship To determine the existence of long-run equilibrium relationships among the variables, this study employs the ARDL framework, which was developed by Pesaran and Shin (1999) and extended by Pesaran et al. (2001). The lower and upper bound critical values of the ARDL bounds test are used to test the null hypothesis that the underlying variables have no long-term association. When the estimated F-statistic exceeds the upper bound critical value, the null hypothesis of no cointegration is rejected; otherwise, it is accepted. Results of the ARDL Estimates This study generates long-run and short-run coefficients for the two separate models for comparative analysis using Stata 13 software. The results of the estimation provide answers to the study's hypotheses and identify the demographic, macroeconomic, and institutional elements that impact the insurance sector. Table 5 summarizes the findings. Because Stata could not generate a matrix with too many rows or columns, or fit a model with too many variables for the lag-length technique, the study dropped life expectancy, which is directly linked to the demand for life insurance.
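The estimation workflow just described (unit-root screening, ARDL lag selection, bounds testing, and error-correction estimation, carried out in the study with Stata 13) can be sketched equivalently in Python. The sketch below is illustrative only: it assumes statsmodels 0.13 or later (which provides the ARDL/UECM classes and a bounds_test method), and the file and column names (nigeria_insurance.csv, density, gdp_pc, inflation, trade_open, fin_dev) are hypothetical placeholders rather than the study's actual series.

import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.ardl import UECM, ardl_select_order

# Hypothetical annual data, 1987-2020; file and column names are placeholders.
df = pd.read_csv("nigeria_insurance.csv", index_col="year")
y = df["density"]                                      # non-life insurance density
X = df[["gdp_pc", "inflation", "trade_open", "fin_dev"]]

# 1) ADF unit-root screening at level, to sort the series into I(0) and I(1).
for name in ["density", "gdp_pc", "inflation", "trade_open", "fin_dev"]:
    stat, pval, *_ = adfuller(df[name].dropna(), regression="c", autolag="AIC")
    print(f"{name}: ADF statistic = {stat:.3f}, p-value = {pval:.3f}")

# 2) Select ARDL lag orders by information criterion.
sel = ardl_select_order(y, maxlag=2, exog=X, maxorder=2, trend="c", ic="aic")

# 3) Re-estimate in unrestricted error-correction (UECM) form and run the
#    bounds test: an F-statistic above the upper critical bound rejects the
#    null hypothesis of no long-run (cointegrating) relationship.
uecm_res = UECM.from_ardl(sel.model).fit()
print(uecm_res.bounds_test(case=3))   # case 3: unrestricted constant, no trend
print(uecm_res.summary())             # the lagged-level coefficient plays the ECM/ADJ role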
Notes: ***, **, and * imply that the null hypothesis is rejected at the 1%, 5%, and 10% levels of significance, respectively. Standard errors are denoted by ( ), while p-values are denoted by [ ]. Values in exponential notation appear for several variables in model one. ARDL Long-Run Regression Estimates Estimates for Non-Life Insurance Density Panel A of Table 5 presents the ARDL long-run regression estimates. The findings highlight the impact of macroeconomic, institutional, and demographic factors on insurance sector development using two separate measures: non-life insurance density and penetration, which represent demand and supply, respectively. First, financial development and insurance sector density are positively linked, according to the results of model 1. Similarly, economic freedom shows a positive correlation with non-life insurance density, as expected, and this relationship is significant at the 1% level. A larger population has a positive influence on the density of non-life insurance; in model 1, this relationship is significant at the 1% level. Similarly, urbanization is associated with a higher density of non-life insurance. Non-life insurance is more prevalent where the level of education is higher; in model 1, this relationship is statistically significant at the 1% level. With a negative coefficient, the age dependency ratio shows an adverse effect on non-life insurance density; age dependency is significant at the 1% level in model 1. As expected, there is a positive and significant correlation between non-life insurance density and the real interest rate; non-life insurance density is positively correlated with the real interest rate at the 10% level of significance in model 1. Non-life insurance density is also significantly associated with inflation; the association is significant at the 5% level in model 1. Although it is not statistically significant, the income growth coefficient was found to be positive; this positive association is as expected but is not significant at any level. Non-life insurance density was positively associated with trade openness; model 1 shows a relationship between trade openness and non-life insurance sector density that is significant at the 1% level. Insurance density for non-life business is positively associated with urbanization, but the correlation is not significant at any level. Estimates for Non-Life Insurance Penetration The results from model 2 reveal the relationship between insurance sector penetration and its determinants, as shown in Table 5. In model 2, the growth rate of income is positively but non-significantly related to the penetration of non-life insurance. As expected, trade openness was positively correlated with non-life insurance penetration; the direct relationship between insurance sector penetration and trade openness is significant at the 5% level in model 2. The relationship between the real interest rate and insurance sector penetration is also significant at the 5% level. Inflation has a negative impact. Non-life insurance penetration is inversely related to the level of education; the relationship is significant at the 5% level. Although there is a positive relationship between age dependency and non-life insurance penetration, age dependency is a non-significant determinant of insurance sector penetration.
Non-life insurance penetration increases as the population increases. Non-life insurance penetration is adversely related to urbanisation, although the relationship between urbanisation and non-life insurance demand is non-significant. Financial development has a favorable link with insurance sector penetration, which is statistically significant at the 10% level of significance. Economic freedom is positively connected to non-life insurance penetration, as one would assume, although it is insignificant in explaining non-life insurance penetration. ARDL Short-Run Regression Estimates Estimates for Non-Life Insurance Density The adjustment (ADJ) coefficient indicates how quickly model 1 returns to equilibrium following a short-term distortion. The coefficient is negative, as expected, with a value of -0.02725, and is significant at the 1% level of significance in model 1. This value shows that model 1's speed of adjustment is quite slow. The cointegration relationship between insurance sector density and its determinants is confirmed by the negatively signed ADJ coefficient. Panel B of Table 5 also reports the ARDL short-run regression estimates for non-life insurance density. The short-term behavior of the variables is depicted by the regression parameters of the one-period lagged variables in model 1. In the short run, the one-period lagged values of the growth rate of income, the real interest rate, urbanisation, and economic freedom are positively but non-significantly related to non-life insurance density, while other variables, such as the level of education and population growth, are significantly related to non-life insurance density with a negative sign. Non-life insurance density shows a negative, but not statistically significant, link with financial development in the short term. Estimates for Non-Life Insurance Penetration The ARDL short-run results in Panel B of Table 5 also indicate the adjustment (ADJ) coefficient for model 2, which displays the speed with which model 2 returns to equilibrium after a short-term shock. With a value of -1.08206, the coefficient is negative as expected and significant at the 5% level of significance. Since the ADJ coefficient is negatively signed, a long-run relationship between non-life insurance penetration and the plausible determinants is established. The ARDL short-run regression estimates for non-life insurance penetration, shown in Panel B of Table 5, present the regression parameters of the one-period lagged explanatory factors as well as the short-term behavior of non-life insurance penetration. In the short run, the one-period lagged values of the growth rate of income, age dependency, population growth, and urbanisation are positively but non-significantly related to non-life insurance penetration, whereas the level of education is positively and significantly linked to non-life insurance penetration. Non-life insurance penetration is negatively but insignificantly connected to real interest rates, financial development, and economic freedom. Results of Model Diagnostics Tests The model diagnostic and stability tests used in the study validate the regression results. The assumptions of no serial correlation and homoscedasticity are tested. Table 5 shows that the Breusch-Godfrey LM serial correlation test found no evidence of higher-order serial correlation in the error term.
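The serial-correlation check reported above, together with the heteroscedasticity, specification, and stability tests discussed next, can be reproduced with standard statsmodels routines. The following is a minimal sketch run on synthetic data; in practice the study's fitted regression would replace the placeholder model, and the statistics will of course differ.

import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import (
    acorr_breusch_godfrey,   # Breusch-Godfrey LM test for serial correlation
    het_white,               # White test for heteroscedasticity
    linear_reset,            # Ramsey RESET specification test
    breaks_cusumolsresid,    # CUSUM-of-squares type stability test on OLS residuals
)

# Synthetic stand-in for the study's regression (34 annual observations).
rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(34, 3)))
y = X @ np.array([1.0, 0.5, -0.3, 0.2]) + rng.normal(size=34)
res = sm.OLS(y, X).fit()

bg_stat, bg_pval, _, _ = acorr_breusch_godfrey(res, nlags=2)
w_stat, w_pval, _, _ = het_white(res.resid, res.model.exog)
reset_res = linear_reset(res, power=2, use_f=True)
cusum_stat, cusum_pval, _ = breaks_cusumolsresid(res.resid)

print(f"Breusch-Godfrey p = {bg_pval:.3f}  (> 0.10 suggests no serial correlation)")
print(f"White test p      = {w_pval:.3f}  (> 0.10 suggests homoskedastic errors)")
print(f"Ramsey RESET p    = {reset_res.pvalue:.3f}")
print(f"CUSUMSQ-type p    = {cusum_pval:.3f}")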
For models 1 and 2, the White heteroscedasticity test revealed p-values of 0.42 > 0.1 and 0.41 > 0.1, respectively, indicating homoskedastic errors. The Ramsey RESET test shows that the models are correctly specified, with p-values of 0.3923 and 0.5451 for models 1 and 2, respectively, which are non-significant at the 10% level of significance. The test statistic value and the p-value at a 10% level of significance are used to determine whether the null hypothesis is rejected in these diagnostic tests. Brown, Durbin, and Evans (1975) proposed the cumulative sum (CUSUM) and cumulative sum of squares (CUSUMSQ) tests to determine the structural stability of the long-run estimates. According to the results presented in the appendix, the plots of the CUSUM and CUSUMSQ statistics remain within the critical value limits at the 5% significance level. As a result, the null hypothesis of stability of the regression coefficients cannot be rejected. According to the diagnostics and stability tests, the regression results are reliable. Discussion of Findings Non-life insurance penetration and density were positively but non-significantly linked with income growth. Kjosevski (2012) and Nkotsoe (2018) found that the development of insurance in developing countries is positively affected by an increase in income. Non-life insurance penetration and density increase with greater trade openness; increasing openness to global trade raises insurers' profits through growth in insurance assets. Petkovski and Kjosevski (2014), Chitayo (2017), and Zewge (2018) support this conclusion. Real interest rates had a positive impact on the density and penetration of non-life insurance. When real interest rates rise, households are more likely to purchase non-life insurance products, and an increase in the real interest rate helps to increase insurers' investment returns and profitability. Meko et al. (2019) reported a similar finding. Inflation negatively impacts non-life insurance density and penetration; however, only density is significantly affected by inflation. The demand for and supply of insurance products, as well as their expected returns, are impacted by inflationary pressures. This study's findings agree with Beck and Webb (2003). Turning to the demographic factors, the level of education is negatively and significantly related to non-life insurance density but has a positively significant relationship with non-life insurance penetration. This result is ambiguous: a high level of education could make an individual family-dependent for a long period, and this can affect the demand for insurance products. Moreover, highly educated persons with an increasing desire for higher returns on investment may hold riskier assets rather than insurance products. The positive result is evidence that highly educated individuals are aware of the benefits associated with insurance products; their risk-averse attitude will make them consider insurance products as risk-mitigating tools. This finding is similar to the outcome of the study of Zerriaa et al. (2017) and the assertions of the life-cycle hypothesis. The age dependency ratio has a negatively significant effect on insurance density, but a non-significant positive effect on penetration.
Households with a high proportion of young people possibly have to save more to meet the emerging daily consumption and future needs of the family, thus reducing the possibility of insurance consumption. This finding supports the outcome of the studies of Chui and Kwok (2008) and Guerineau and Sawadogo (2015). Insurance consumption is expected to increase with a higher working population and a higher proportion of elderly dependents. A considerable and favorable impact of population on non-life insurance density has emerged, but there is no significant impact of population expansion on the penetration of non-life insurance. As the population increases, there is a greater need for insurance to offset the escalating costs of property damage. Non-life insurance density has a positive relationship with urbanization, whereas non-life insurance penetration has a negative relationship with urbanization. Insurance goods could be more widely available to the population if there is a high level of urbanization; this would reduce households' reliance on informal insurance agreements. Non-life insurance density and penetration tend to improve with higher financial development. In a bank-based financial system like Nigeria's, the presence of well-developed and functioning banks may increase consumer confidence in insurance companies and other non-bank financial institutions. The findings of Alhassan and Biekpe (2016b), Zerriaa and Noubbigh (2016), and Zerriaa et al. (2017) support this conclusion. Economic freedom has a positive impact on non-life insurance density and penetration, but its effect on penetration is not significant. Consequently, the removal of entry restrictions into the insurance market tends to increase the market's competitiveness. CONCLUSION & RECOMMENDATIONS This study looked at the elements that influence the development of Nigeria's insurance sector from 1987 to 2020. Factors such as trade openness, real interest rates, population growth, and financial development influence Nigeria's demand for non-life insurance services positively and significantly, whereas the inflation rate, the level of education, and age dependency have a negatively significant effect on the demand for Nigeria's non-life insurance services. Real interest rates, trade openness, education levels, and the country's financial development determine the availability of non-life insurance services in Nigeria. Besides, the measures adopted in capturing insurance sector development in Nigeria matter; the determinants are responsive to such measures. In terms of policy measures, Nigeria should increase GDP per capita by increasing investment and social spending, exporting more, and decreasing unemployment, as suggested by the findings of this study. Inflation should be checked and monitored through the monetary policy actions of the apex banking institution, as it tends to discourage potential and returning customers who cannot pay for highly priced insurance products. It is critical to continue to open up the economy to global trading activities so that more enterprises engaged in import and export operations can benefit from non-life insurance to cover their goods, services, and human capital against unforeseen future losses or damage.
It is equally important to formulate policies that ensure strict compliance with the removal of market-entry restrictions and heavy regulatory requirements, so as to make the insurance market more competitive and enhance efficient service delivery.
6,995.8
2022-03-24T00:00:00.000
[ "Economics" ]
Review of the Relationship between Reactive Oxygen Species (ROS) and Elastin-Derived Peptides (EDPs) Reactive oxygen species (ROS) are central elements of a number of physiological processes such as differentiation and intracellular signaling, as well as pathological processes, e.g., inflammation or apoptosis. ROS are involved in the growth and proliferation of stem cells, cell communication, cell aging, all types of inflammation, cancer development and proliferation, and type 2 diabetes. Elastin-derived peptides (EDPs) are detected in all these conditions and, according to the current state of knowledge, the role of the extracellular matrix (ECM) protein is crucial. It is believed that EDPs are a result of the aforementioned pathological conditions and are generated during degradation of the ECM. However, as shown in the literature, the production of EDPs can be induced not only by, inter alia, chemical, enzymatic, and physical factors but also directly by ROS. No comprehensive study of the impact of ROS on EDPs and of EDPs on ROS production has been conducted to date; therefore, the aim of this paper is to summarize the current state of knowledge of the relationship between ROS and the ECM, with a special focus on the involvement of EDPs in the processes mentioned above. Depending on the type of cells, tissue, or organism, the relationships between ROS and ECM/EDPs may differ completely. Introduction Reactive oxygen species (ROS) are highly reactive chemical molecules formed during various biological processes [1]. ROS can be an effect of physiological and pathological processes leading to rearrangement of the cell structure, an increase or a decrease in cell metabolism, and even cell death [2]. The involvement of ROS in the development of inflammation, neurodegenerative diseases, and cancer is also well documented [3,4]. Moreover, it is currently believed that ROS are a key element in stem cell aging and autophagy [5]. The elastin protein is widely distributed in the organism [6]. However, during various physiological and pathological processes, elastin is degraded to elastin-derived peptides (EDPs) [7,8]. To date, a number of papers have reported that κ-elastin, EDPs, or the VGVAPG peptide (a signaling sequence from elastin) affect the ROS level in cell culture models in vitro or in organisms [9][10][11][12]. It has been described that, similar to the ROS level, the level of EDPs increases during aging and in various pathological processes [13,14]. Therefore, EDPs are recognized as hallmarks of aging and are called matrikines, i.e., matrix fragments with the ability to regulate cell physiology [8,15]. Moreover, the influence of ROS on EDP formation, as well as the influence of EDPs on ROS formation, has now been demonstrated [9,16]. Therefore, the aim of this paper is to summarize the current state of knowledge of the interaction between ROS and EDP levels in biological systems. Role of ROS in the Origin of Elastin-Derived Peptides Elastin occurs naturally in skin, arteries, lung, and other tissues [6]. This protein is characterized by high tolerance to mechanical damage, giving certain tissues additional resistance. It also serves as a matrix for cells, especially in the nervous system [17,18]. The tropoelastin molecule is the building block of elastin; it makes this protein nearly insoluble in water and is responsible for its unique properties [19,20]. It has been observed that the half-life of elastin is over 70 years [21]. However, an intensified degradation process of this protein is observed during aging [22].
Hence, it is difficult to study the direct effects of elastin on the entire organism, especially on ROS production. However, in the 1950s, the first in vitro hydrolysis of elastin was performed with the use of potassium hydroxide (KOH), resulting in κ-elastin production. Subsequently, the degradation of elastin by oxalic acid (C2H2O4), which yields an α-elastin molecule, was developed [23]. Thus, the next discoveries in this field were focused on characterization of the obtained molecules as products of elastin degradation, called elastin-derived peptides (EDPs). It has been observed that peptides with the Gly-x-x-Pro-Gly (GxxPG) amino acid sequence are repeated many times in the EDP structure, which defines the conformation of elastin [24]. The most common sequence found in the EDP group is the Val-Gly-Val-Ala-Pro-Gly (VGVAPG) hexapeptide [25]. The literature highlights three main causes (types) of elastin degradation: chemical, enzymatic, and ROS-dependent (Figure 1). The first type can be achieved with the use of potassium hydroxide (KOH) or oxalic acid (C2H2O4), which are not present in the organism but are used only to prepare EDPs in vitro given their physicochemical properties. The second type of elastin-degrading factor is represented by various proteinases, which act both in vitro and in vivo. To date, three groups of proteolytic enzymes acting as elastases have been described [26]. Serine proteases released from neutrophils or macrophages, e.g., proteinase 3, neutrophil elastase, and cathepsin G, act as chymotrypsin-like proteins and are able to degrade the whole elastin molecule or pre-degraded protein [27]. Another group comprises cysteine proteases such as cathepsin K (CatK), L, S, and V, which naturally occur in lysosomes [28]. The third group of proteases capable of hydrolyzing elastin are the metalloproteinases (MMPs), which require metal ions for their hydrolyzing activity, e.g., MMP-2, -7, -9, and -12 [26]. The last type of elastin degradation is the ROS-dependent mechanism. To date, it has been described that tropoelastin is sensitive to ROS [29]. It has been shown that ROS generated by ultraviolet A (UV-A) and hematoporphyrin rapidly degrade tropoelastin within 5 min [29]. Treatment of tropoelastin with copper sulfate/ascorbic acid resulted in degradation of tropoelastin, producing fragments of molecular weights 45, 30, and 10 kDa within 30 min. The degradation of tropoelastin was partially blocked by the addition of mannitol, which is a well-described radical scavenger [29,30]. ROS generation induced by the xanthine-xanthine oxidase system caused degradation of tropoelastin within 6 h. The degradation was blocked by the antioxidant enzyme catalase (CAT), a ROS-scavenging enzyme, proving the engagement of free radicals in the elastin degradation process. ROS generated by copper-ascorbate seem to be unique in that they cleave relatively specific sites of the tropoelastin molecule. Thus, ROS may play a degradative role in elastin metabolism, which may cause elastolytic changes or deposition of fragmented elastic fibers in photoaged skin or age-related elastolytic disorders [29]. Moreover, a positive correlation between metal ion-induced ROS and an increased level of elastin derivatives has been shown by Umeda et al., indicating that ROS can directly influence elastin degradation [31].
However, the cited authors have also shown a weakening of such an effect during solarization of elastin derivatives, suggesting the ability of ROS to degrade the whole elastin molecule, but not every derivative. Moreover, elastin is a major part of the extracellular matrix (ECM) of connective tissues, and its properties give these tissues resistance to stretching and injuries. Interestingly, as shown by Yao et al., superoxide dismutase 3 (SOD3) gene knock-out in mice contributed to an increase in ECM fragmentation after treatment with cigarette smoke, which is considered to be a source of exogenous ROS [32]. This indicates again that the ECM, whose major component is elastin, can be successfully degraded by ROS. The studies cited above show that an increased ROS level in the organism, including during ischemia and inflammation, may accelerate elastin degradation and the production of higher amounts of EDPs. This also proves that the highly reactive ROS are able to damage the highly mechanically resistant and insoluble elastin. In the following part of this review, the correlation between EDP internalization and intracellular ROS generation will be discussed. Interestingly, solar elastosis of skin elastin is a complex process combining UV, ROS, and enzymatic degradation [33,34]. To date, UV radiation has been described to reduce desmosine cross-links in elastin [35]. Besides direct destruction of the elastin structure, UV irradiation also causes the formation of large amounts of ROS [36]. Moreover, data suggest a role of intracellular elastin degradation by CatK in the formation of solar elastosis [33]. Induction of CatK expression in fibroblasts was observed both in vitro and in vivo after exposure to longwave UVA. Moreover, UV irradiation has been shown to induce the expression of MMPs. Irradiation of skin was found to produce an 11.9-fold increase in the expression of human macrophage elastase (MMP-12) mRNA within 16 h of UV exposure [34]. Research reports also show that the increase in ROS generation during inflammation and bacterial and viral infections can accelerate elastin degradation [37]. Impact of EDPs on ROS Production As described above, ROS can take part in elastin degradation directly, resulting in increased amounts of EDPs in the organism. To date, EDPs have been well described to cause biological effects mainly through activation of a cell receptor naturally present in the cell membrane [18,38]. EDPs bind to the 67-kDa elastin-binding protein (EBP), which is a catalytically inactive form of beta-galactosidase produced by alternative splicing of the GLB1 gene (Figure 2) [39,40]. The presence of EDPs in the extracellular matrix can have an impact on many metabolic parameters in cells, e.g., proliferation, metabolic activity, expression of proteolytic enzymes such as MMPs, and even cell death [41][42][43]. Moreover, the ability of these peptides to induce ROS production has also been described in the literature [9,44,45]. The increase in the level of ROS in the cell is believed to be caused by at least two main processes, i.e., Ca2+ influx and disruption of the expression and/or activity of antioxidant enzymes mediated through the peroxisome proliferator-activated receptor gamma (PPARγ) pathway (Figure 2) [9].
A number of studies have reported that tropoelastin, κ-elastin, EDPs, and the VGVAPG peptide increase Ca2+ influx in human monocytes, fibroblasts, human umbilical venous endothelial cells (HUVEC), different glioma cell lines (C6, CB74, CB109, and CB191), and smooth muscle cells from pig aorta or mouse astrocytes [10,[46][47][48][49]. It is well known that different Ca2+ signaling pathways can increase the level of cellular ROS [50]. To date, depending on the model, EDPs have been shown to decrease or increase the cellular ROS level [51]. However, the vast majority of studies show that EDPs induce ROS production in human fibroblasts and neuroblastoma (SH-SY5Y) cells as well as in murine monocytes and astrocytes [9,12,44,45,52]. In addition to the impact on ROS, various authors have shown that EDPs affect the expression and/or activity of antioxidant enzymes. As reported by Gmiński et al., EDPs enhance the activities of superoxide dismutase (SOD), CAT, and glutathione peroxidase (GPx) and increase lipid peroxidation in human fibroblasts [53]. Similarly, Szychowski et al. have described that the VGVAPG peptide increases GPx activity and slightly (but not statistically significantly) influences the activity of SOD and CAT [9]. Szychowski et al. have also described PPARγ as a key receptor involved in the VGVAPG peptide's mechanism of action in mouse astrocytes and the human SH-SY5Y neuroblastoma cell line [9,43]. Furthermore, it is generally accepted that PPARγ, together with ROS, is crucial in the inflammation process [54,55], which is in line with data on the proinflammatory mechanism of action of EDPs. To date, EDPs have been reported to increase inflammatory markers in various cell types such as human malignant melanoma (M3Da) cells, human monocytes, and human ligamentum flavum cells [56][57][58]. Moreover, EDPs have also been shown to be chemotactic agents for monocytes, which are responsible for the development of inflammation [58][59][60]. Interestingly, the VGVAPG peptide does not activate the inflammatory process in mouse astrocytes, probably due to the special role of the nervous system [61]. PPARγ is also involved in the control of the expression of MMPs [62]. To date, it has been well described that EDPs increase the expression of various MMPs and the degradation of ECM [63][64][65]. It has been shown that the VGVAPG peptide upregulates the mRNA expression of membrane-type matrix metalloprotease-1 (MT1-MMP) and MMP-2 in human endothelial cells [63]. Similarly, Ntayi et al. (2004) described that cell culture plates coated with the VGVAPG peptide were characterized by increased expression and activation of MMP-2 and MT1-MMP in two melanoma (M1Dor and M3Da) cell lines [66]. Furthermore, the VGVAPG peptide added to the culture medium upregulated MMP-2, MT1-MMP, and TIMP-2 mRNA expression and activity in the human fibrosarcoma (HT-1080) cell line [64,65]. Furthermore, in human glioblastoma multiforme cell lines CB74, CB109, and CB191 and the rat astrocytoma cell line C6 exposed to the (VGVAPG)3 peptide, the mRNA expression of MMP-2 increased with very low stimulation of MMP-9 [49]. The authors correlate the high expression of MMP-2 mRNA with increasing degradation of ECM and production of EDPs. All these mechanisms, i.e., the increase in ROS production, Ca2+ influx, PPARγ signaling pathway activation, and the increase in the expression and/or activity of MMPs, lead to accelerated elastin degradation in the ECM and production of increased numbers of EDPs (Figure 2).
Conclusions and Perspectives To date, the effect of EDPs has been shown to be dependent on the type of cells or model organisms. However, it is generally accepted that EDPs can indirectly (by affecting the expression and/or activity of antioxidant enzymes such as CAT or SOD) and/or directly affect the level of ROS. Moreover, increased EDP levels have also been detected in various pathological conditions associated with a high level of ROS (e.g., cancers, arteriosclerosis, obesity). Given the literature data presented here, it can be assumed that there is a positive feedback between ROS and EDPs, and the presence of one component increases the amount of the other (Figure 3). This phenomenon fits into the free radical theory of aging. Moreover, due to the inseparability of EDPs and ROS, cellular aging can only be slowed down by reduction of the amount of ROS in the organism. A healthy lifestyle, an antioxidant-rich diet, and avoidance of high sun exposure may be helpful in slowing down aging. Unfortunately, we have no influence on the spontaneous elastin degradation process during aging. Therefore, future medicine should focus antiaging therapies on inhibition of elastin degradation, removal of EDPs from the organism, and acceleration of the reconstruction of elastin in tissues.
3,227
2021-09-18T00:00:00.000
[ "Medicine", "Environmental Science", "Biology" ]
Design and Fabrication of 3.5 GHz Band-Pass Film Bulk Acoustic Resonator Filter With the development of wireless communication, increasing signal processing presents higher requirements for radio frequency (RF) systems. Piezoelectric acoustic filters, as important elements of an RF front-end, have been widely used in 5G-generation systems. In this work, we propose a Sc0.2Al0.8N-based film bulk acoustic wave resonator (FBAR) for use in the design of radio frequency filters for the 5G mid-band spectrum with a passband from 3.4 to 3.6 GHz. With the excellent piezoelectric properties of Sc0.2Al0.8N, the FBAR shows a large Keff2 of 13.1%, which can meet the requirement for passband width. Based on the resonant characteristics of Sc0.2Al0.8N FBAR devices, we demonstrate and fabricate different ladder-type FBAR filters of second, third, and fourth order. The test results show that the out-of-band rejection improves and the insertion loss decreases slightly as the filter order increases, although the frequency of the passband is lower than predicted due to fabrication deviations. A passband from 3.27 to 3.47 GHz is achieved, with a 200 MHz bandwidth and insertion loss lower than 2 dB. This work provides a potential approach using ScAlN-based FBAR technology to meet the band-pass filter requirements of 5G mid-band frequencies. Introduction Wireless communication has overcome the limitations of time and distance on our communication and allows us to transfer information quickly [1][2][3][4]. In particular, fifth-generation (5G) systems have introduced new services to support higher data transmission rates of wireless communication [5][6][7]. The proliferation of 5G communications has led to a gradual increase in data transmission bands, with frequencies covered ranging from 2.4 GHz to 5 GHz [8,9]. Higher frequencies and wider frequency bands have been pursued to improve radio frequency (RF) systems, especially the filtering action of RF filters. As important elements in 5G data transmission, filters based on surface acoustic wave (SAW) resonators are difficult to use at frequencies higher than 3 GHz because of performance degradation at high frequencies [10][11][12]. The IHP SAW can be used in higher frequency bands, but it requires more complex processing of the multi-layered structure, such as film transfer and bonding processes [13][14][15]. On the contrary, a bulk acoustic wave (BAW) filter seems a better choice for 5G communication. Most commercially available BAW filters are constructed with film bulk acoustic resonators (FBARs), in which an air cavity is created between the bottom electrode and the carrier wafer. FBARs can obtain a better effective electromechanical coupling coefficient (Keff2) and provide a higher quality factor (Q) [16,17]. FBARs are preferred for higher-frequency applications due to characteristics such as good selectivity and high power-handling capability.
For FBARs and FBAR-based filters, which are capable of processing the high frequencies of the 5G system, higher-frequency operation requires a reduction in the piezoelectric layer thickness [18][19][20], while commercially available AlN-based FBARs are capable of providing a high longitudinal sound velocity v (11,354 m/s) and low acoustic and dielectric losses. However, studies have attempted to scale the frequency range of resonators. New materials, such as scandium (Sc)-doped aluminum nitride (ScxAl1-xN), have been used in attempts to increase the piezoelectric coefficients. ScxAl1-xN offers a way to create reconfigurable filters by utilizing its tuning and polarization-switching properties [21]. As the Sc element ratio in ScxAl1-xN increases, there is a significant increase in the piezoelectric coefficient e33 and the piezoelectric modulus d33 [22]. ScxAl1-xN also performs well in terms of thermal stability [23]. The results of these studies show that at temperatures up to 1000 °C, the ScxAl1-xN wurtzite structure is stable, and little element inter-diffusion happens at the ScxAl1-xN/Mo interface [24]. According to reports, increasing the scandium doping in aluminum nitride to 40% can boost the piezoelectric coefficient d33 by about five times [25]. Giribaldi et al. demonstrated the high applicability of Sc0.3Al0.7N for microacoustic technologies in the sub-6 GHz band [26]. Moreira et al. enhanced the Keff2 of an FBAR to 12.07% by doping 15% scandium into aluminum nitride [27]. Ding R et al. produced an FBAR-based filter with a center frequency of 3.38 GHz and a 160 MHz 3 dB bandwidth; the insertion loss of the filter has a minimum of 1.5 dB [28]. Yang Q designed a high-selectivity FBAR filter for the 3.4-3.6 GHz range with an insertion loss of −2.05 dB [29]. In this work, we report the use of a Sc0.2Al0.8N-based FBAR to design band-pass filters for the 5G mid-band frequencies of 3.4 to 3.6 GHz. Using a high-quality c-axis-oriented Sc0.2Al0.8N film, we verified that the 20 at.% Sc doping concentration can achieve a high Keff2 of 13.1%, which constitutes an improvement over Moreira's devices. With different-order circuit designs for ladder-type filters, the results show that the out-of-band rejection and the insertion loss can be adjusted to different specific requirements. Compared with the filters of Ding R and Yang Q et al., in which the in-band insertion loss was −1.5 dB and −2.05 dB, the FBAR-based filter in this study has a lower insertion loss, at 1.28 dB. With fabricated series and parallel FBARs, filters with a passband from 3.27 to 3.47 GHz are achieved. The proposed Sc0.2Al0.8N-based FBAR filters show potential for 5G mid-band applications with further optimized fabrication controls and updated designs.
Design and Fabrication We chose piezoelectric film bulk acoustic devices to construct a 5G mid-band (3.4-3.6 GHz) filter with a center frequency of 3.5 GHz. The designed piezoelectric film bulk acoustic resonator is illustrated in Figure 1. The FBAR consists of a sandwich structure with a piezoelectric layer between the top electrode and the bottom electrode (TE and BE, respectively). The electric field between the two electrodes excites the bulk acoustic wave. As shown in Figure 1b, an air cavity is created between the bottom electrode and the substrate to trap the acoustic wave between the electrodes. Figure 1c shows the working principle of the filter based on FBARs. Each resonator in this filter has two resonant frequencies. One is the series resonant frequency fs, at which the impedance is very low (Zmin), and the other is the parallel resonant or anti-resonant frequency fp, at which the impedance is very high (Zmax). The parallel resonator in the filter is tuned to work at a slightly lower frequency than the series resonator by adding a mass-loading layer on the top electrode. When fp2, the parallel resonant frequency of the parallel resonators, is equal to or slightly lower than fs1, the series resonant frequency of the series resonators, a passband is formed between the frequencies near fs2 and fp1. As shown in Figure 1c, at the frequency point f1, the parallel FBAR can be regarded as being in a short-circuit state, and the signal cannot be passed to the output port; hence, f1 is the left transmission zero of the filter. At the frequency point f2, the impedance of the series FBAR is small enough, while the impedance of the parallel FBAR is very large; the circuit behaves as a channel, and the signals are essentially transmitted to the output port. At the frequency point f3, the series FBAR can be regarded as being in a disconnected state, and the signal cannot reach the output port; therefore, f3 is the right transmission zero of the filter. For the design of the 5G mid-band (3.4-3.6 GHz) filter with a center frequency of 3.5 GHz, we used the Mason model [30][31][32] to simulate the transmission characteristics of the filters. The effective electromechanical coupling coefficient (Keff2) of the FBAR, calculated by Equation (1), should reach a value of about 12% [33,34], which is suitable for the passband width of a 5G mid-band (3.4-3.6 GHz) filter. According to the requirements of the FBARs, Sc0.2Al0.8N piezoelectric thin film was chosen as the functional piezoelectric material, and the designed thickness of each layer is shown in Table 1.
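Equation (1), which is not reproduced in this excerpt, relates Keff2 to the series and parallel resonant frequencies. The sketch below therefore uses the commonly quoted tangent form and its quadratic approximation as an assumption about that definition, not as the authors' exact expression; the example frequencies are hypothetical.

import math

def keff2_exact(fs_hz, fp_hz):
    # Commonly used definition: Keff2 = (pi/2)(fs/fp) / tan[(pi/2)(fs/fp)]
    r = fs_hz / fp_hz
    return (math.pi / 2) * r / math.tan((math.pi / 2) * r)

def keff2_approx(fs_hz, fp_hz):
    # Quadratic approximation, valid when fp - fs is small: (pi^2/4)(fp - fs)/fp
    return (math.pi ** 2 / 4) * (fp_hz - fs_hz) / fp_hz

# Hypothetical series/parallel resonances of a 3.5 GHz-class resonator.
fs, fp = 3.40e9, 3.59e9
print(f"Keff2 (tangent form):  {keff2_exact(fs, fp) * 100:.1f}%")   # ~12%
print(f"Keff2 (approximation): {keff2_approx(fs, fp) * 100:.1f}%")  # ~13%

A coupling coefficient in this range is what makes the roughly 200 MHz passband at 3.5 GHz targeted by the design achievable.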
The FBAR devices were manufactured on an eight-inch wafer, as shown in Figure 2. First, high-resistivity silicon was etched to form the separation walls; these are used to accurately define the cavity and prevent over-etching from damaging the devices. Next, the cavity was filled with SiO2, and the excess sacrificial layer was removed using chemical mechanical polishing. Then, an AlN seed layer was deposited as a buffer layer, and the Mo bottom electrode was dual-deposited and patterned. Subsequently, a 500 nm thick Sc0.2Al0.8N film was reactively sputtered. Then, a 100 nm thick top Mo electrode and a 37 nm thick Mo mass-loading layer were deposited and patterned above the structure. Subsequently, 1 µm thick Al was deposited by magnetron sputtering and patterned to define the probing pads. Finally, the cavity filled with SiO2 was opened by using inductively coupled plasma (ICP) etching, and the cavity was then released with hydrofluoric acid vapor.
Results and Discussion

Sc0.2Al0.8N piezoelectric thin films were prepared by magnetron sputtering (SPTS Sigma fxP system, Newport, UK) using a ScAl alloy target with 20 at.% Sc doping. A sputter power of 6 kW and a bias power of 160 W were used for film deposition, at a substrate temperature of 200 °C and with N2 and Ar flow rates of 60 sccm and 20 sccm, respectively. The surface morphology of the as-deposited film was observed by scanning electron microscopy (SEM), as shown in Figure 3a; a small amount of Sc precipitated on the surface of the as-deposited piezoelectric films. The Sc concentration in the marked area of Figure 3a was measured as 21.8 at.% by energy-dispersive X-ray spectroscopy. The morphology of the Sc0.2Al0.8N piezoelectric thin film, measured by atomic force microscopy, is shown in Figure 3b; the corresponding Rq surface roughness is 9.8 nm. The X-ray diffraction (XRD) results for the Sc0.2Al0.8N piezoelectric thin films are shown in Figure 3c; the diffraction angle of the Sc0.2Al0.8N (002) peak is 2θ = 36°, corresponding to the (002) orientation. The inset shows the rocking curve of the Sc0.2Al0.8N piezoelectric film; its full width at half maximum (FWHM) is 1.73°, indicating that the piezoelectric film has a good c-axis orientation.
Figure 4a shows the optical view of the fabricated FBAR. The structure in the middle of the figure is the resonant region, which is the main operating region of the FBAR. On each side of the resonant region are Mo anchors connected to the test ports. Release holes are retained around the resonant region; the space below the resonant region is released through these holes to form the cavity. A cross-sectional view of the resonant region is shown in Figure 4b. The different material layers can be clearly identified in the figure. The piezoelectric stack consists of Mo/Sc0.2Al0.8N/Mo with thicknesses of 122 nm/708 nm/186 nm, respectively, and a 27 nm thick AlN seed layer lies under the bottom Mo layer. The SEM inspection shows that the thicknesses of the electrodes and the piezoelectric layer differ from the design; these variations are likely errors introduced during processing. The impedance responses of the series and parallel resonators, measured with a Keysight network analyzer (N5222B) connected to a Cascade Microtech GSG probe station, are shown in Figure 5.
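From such one-port measurements, the resonator quality factor is usually extracted from the group delay of S11. The sketch below shows one widely used recipe; the exact expression may differ in detail from the Equation (2) cited in the next paragraph.

```python
import numpy as np

def bode_q(freq, s11):
    """Quality factor from the measured reflection coefficient S11(f).

    Uses the widely used BAW expression (an assumption here, possibly differing
    from the paper's Eq. (2)):
        Q(f) = 2*pi*f * tau_gd(f) * |S11| / (1 - |S11|^2),
    where tau_gd = -d(arg S11)/d(omega) is the group delay.
    """
    phase = np.unwrap(np.angle(s11))
    omega = 2 * np.pi * freq
    tau_gd = -np.gradient(phase, omega)
    mag = np.abs(s11)
    return omega * tau_gd * mag / (1.0 - mag ** 2)
```

Evaluating this curve at the series and parallel resonant frequencies gives the Q_s and Q_p values typically quoted for FBARs.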
Table 2 summarizes the relevant measured and extracted parameters of the series and parallel resonators used in the filter. The quality factor (Q) of the FBARs can be calculated from Equation (2) [33,35], where τ(f) is the group delay of S11. Based on the Sc0.2Al0.8N film, the fabricated resonators reach a K²_eff of 13.1%. However, the resonant frequencies of both the series and parallel resonators are lower than the designed values shown in Figure 1b, which can be attributed to the thickness deviations of the deposited electrodes and piezoelectric layers.

Figure 6a is a schematic of the ladder-type circuit of the FBAR filter, consisting of series resonators and parallel resonators. To achieve the passband transmission characteristics, an additional Mo mass-loading layer is added to the parallel resonators to make their resonant frequencies lower than those of the series resonators. Figure 6b-d shows optical views of fabricated FBAR filters of different orders: Figure 6b is a second-order ladder-type FBAR filter, which includes two series resonators and two parallel resonators, while Figure 6c and Figure 6d show third- and fourth-order ladder-type FBAR filters, respectively.

The measured transmission responses (S21) of the FBAR filters with different orders are shown in Figure 7.
Figure 7a illustrates how the out-of-band rejection gradually strengthens as the order of the filter increases. However, the insertion loss becomes worse and the in-band ripple becomes more severe, as shown in Figure 7b. When the order of the filter is two, the out-of-band rejection is around −15 dB and the minimum insertion loss is around −1 dB. For the fourth-order filter, the low-frequency out-of-band rejection is below −30 dB; although the high-frequency out-of-band rejection weakens as the frequency increases, it remains below −20 dB. The minimum insertion loss is around −1.5 dB, and the ripples are more pronounced in the fourth-order filter. Moreover, consistent with the resonant performance of the fabricated FBARs, the passbands of the filters span 3.27 to 3.47 GHz, with insertion losses of less than 2 dB over a 200 MHz passband. This passband shift can also be attributed to the electrodes and piezoelectric layers being thicker than the designed values [36,37]. To reduce the in-band ripple, two capacitors and two inductors were added to the circuit of the filters, as depicted in Figure 7c. The capacitances are 0.06 pF and 0.03 pF, and the inductances are 1 nH and 0.4 nH, respectively. The out-of-band rejection of the filters changes only slightly, but the in-band ripple is reduced. The in-band insertion loss of the fourth-order FBAR-based filter increased slightly, from −1.28 dB to −1.39 dB, mainly because the additional electrical elements introduce extra loss while slightly reducing the in-band ripple. Figure 7d shows that the in-band ripple is reduced in all filters. As the filter order increases, out-of-band rejection improves but inevitably leads to increased insertion loss [38].
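The order-versus-loss trade-off discussed above can be reproduced qualitatively at the circuit level. The following sketch cascades ABCD matrices of simple Butterworth-Van Dyke (BVD) resonator models into a ladder and evaluates S21; all component values (static capacitances, motional Q, resonant frequencies) are illustrative assumptions, not parameters of the fabricated devices.

```python
import numpy as np

Z0 = 50.0  # reference impedance [ohm]

def bvd_impedance(f, fs, k2, C0, Qm=1000.0):
    """BVD impedance of one FBAR (all values illustrative).

    fs: series resonance; k2: effective coupling (sets fp); C0: plate capacitance.
    """
    fp = fs / np.sqrt(1.0 - k2)            # fp from fs and coupling (assumption)
    Cm = C0 * (fp**2 / fs**2 - 1.0)        # motional capacitance
    Lm = 1.0 / ((2*np.pi*fs)**2 * Cm)      # motional inductance
    Rm = 2*np.pi*fs*Lm / Qm                # motional resistance
    w = 2*np.pi*f
    Zm = Rm + 1j*w*Lm + 1.0/(1j*w*Cm)      # motional branch
    Zc0 = 1.0/(1j*w*C0)                    # static branch
    return Zm*Zc0 / (Zm + Zc0)

def series_abcd(Z):
    return np.array([[1.0, Z], [0.0, 1.0]], dtype=complex)

def shunt_abcd(Z):
    return np.array([[1.0, 0.0], [1.0/Z, 1.0]], dtype=complex)

def ladder_s21(f, n_stages):
    """Cascade n_stages of (series FBAR, shunt FBAR) sections and return S21."""
    s21 = np.empty_like(f, dtype=complex)
    for i, fi in enumerate(f):
        abcd = np.eye(2, dtype=complex)
        for _ in range(n_stages):
            abcd = abcd @ series_abcd(bvd_impedance(fi, 3.50e9, 0.12, 1.0e-12))
            # mass-loaded shunt resonator: its fp sits near the series fs
            abcd = abcd @ shunt_abcd(bvd_impedance(fi, 3.28e9, 0.12, 1.2e-12))
        A, B, C, D = abcd.ravel()
        s21[i] = 2.0 / (A + B/Z0 + C*Z0 + D)
    return s21

f = np.linspace(3.0e9, 4.0e9, 2001)
for n in (1, 2):  # 1 stage = 2 resonators ("2nd order"), 2 stages = 4 resonators
    s21_db = 20 * np.log10(np.abs(ladder_s21(f, n)))
    print(f"{2*n} resonators: peak S21 = {s21_db.max():.2f} dB, "
          f"S21 at 3.0 GHz = {s21_db[0]:.2f} dB")
```

With more sections the simulated stopband deepens while the peak in-band transmission drops, mirroring the trend reported for the fabricated second- to fourth-order filters.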
Conclusions

In this study, we designed ScAlN-based FBAR filters with a passband of 3.4 to 3.6 GHz for the 5G mid-band and fabricated ladder-type filters of different orders. A 20 at.% Sc doping concentration was chosen to meet the K²_eff requirement of 12% for the FBARs, and the K²_eff of the fabricated Sc0.2Al0.8N-based FBARs reaches 13.1%. Ladder-type filters of various orders were fabricated, and the S21 results show that the out-of-band rejection gradually strengthened as the order of the filter increased. However, the insertion loss and in-band ripple also increased with filter order.

Figure 1. Structures of FBAR and characteristics of filters. (a) Schematic drawing of a typical FBAR. (b) Cross-sectional view of the FBAR. (c) Working principle of the filter based on FBARs in the circuit model.

Figure 2. Fabrication of the Sc0.2Al0.8N-based FBARs. (a) The film structure of the FBAR. (b) Forming separation walls. (c) SiO2 deposition and chemical mechanical polishing. (d) Seed layer deposition. (e) Bottom Mo and Sc0.2Al0.8N layers were dual-deposited and then patterned. (f) Top Mo was dual-deposited and then patterned. (g) Al was deposited and then patterned. (h) Opening the release window and release.
Figure 4. Characteristics of fabricated FBARs. (a) SEM image of the top view of the fabricated FBAR resonator. (b) SEM image of the cross-sectional view of the fabricated FBAR resonator.

Figure 5. Measured impedance response of series and parallel resonators.

Figure 7. Measured transmission responses (S21) of the fabricated FBAR filters. (a) The S21 responses of the filters with different orders. (b) The enlarged view of the passband in (a). (c) The S21 responses of the filters in (a) with the external circuit compensation. (d) The enlarged view of the passband in (c).

Table 1. The parameters of the FBAR.

Table 2. The fabricated resonators used for the FBAR filter.
6,020
2024-04-25T00:00:00.000
[ "Engineering", "Physics" ]
The black box problem of AI in oncology

The rapidly increasing amount and complexity of data in healthcare, the pace of published research, drug development, biomarker discovery, and clinical trial enrolment in oncology render AI an approach of choice in the development of machine-assisted methods for data analysis and machine-assisted decision making. Machine learning algorithms, and artificial neural networks in particular, drive recent successes of AI in oncology. The performance of AI-driven methods continues to improve with respect to both speed and precision, giving AI great potential to improve clinical practice. But the acceptance and a lasting breakthrough of AI in clinical practice is hampered by the black box problem. The black box problem refers to limits in the interpretability of results and to limits in explanatory functionality. Addressing the black box problem has become a major focus of research [1]. This talk describes recent attempts at addressing the black box problem in AI, offers a discussion of the suitability of those attempts for applications to oncology, and provides some future directions.

Introduction

Artificial Intelligence (AI) is a broad interdisciplinary field whose success is largely driven by advances in artificial neural networks (ANNs). Deep learning methods such as convolutional neural networks, generative adversarial networks, and graph neural networks are particularly influential ANNs that have driven AI to breakthroughs in numerous application domains. Results produced by many traditional machine-assisted non-parametric methods such as regression, decision trees, K-Nearest Neighbour, and rule-based learning were found to be uncompetitive with those obtained by modern AI methods. AI is rapidly replacing traditional methods and has become one of the most important technologies that transform oncology in a wide range of applications such as:

• Diagnosis: Cancer imaging and detection, cancer recognition, image analysis, image segmentation, pathologic diagnosis, genotype-phenotype correlation, mutation detection and identification.
• Prognosis and Prediction: Toxicity of treatment, outcome prediction and survivability, cancer risk prediction.
• Decision support: Biomarker discovery, patient profiling, cancer management, risk modelling and prediction.
• Treatment: Optimal dose identification and energy deposition modelling in radiotherapy, patient journey optimization.

ANNs are increasingly applied in oncology for the purpose of assisting clinicians and patients in decision making processes. A long-standing problem with ANNs, the black-box problem, inhibits wider adoption of AI in such decision making processes. It is important for clinicians and patients to understand why a given machine response was made in order to make founded and informed decisions. There is significant risk associated with methods that would require humans to blindly trust the result of a machine. Without interpretation facilities, the suitability of ANNs for many decision support applications is limited [2]. ANNs are imperfect systems that can make errors, but due to the black-box problem we cannot understand why a particular error was made. An understanding of the factors that led to the error is crucial for the design and development of ANNs that avoid making subsequent errors of the same or similar nature. The black-box problem is particularly unhelpful in oncology since many processes in oncology require certification or are subject to regulatory requirements.
It is imperative to have transparency and interpretability for AI solutions to gain regulatory acceptance [3]. The black-box problem is not new; relevant AI methods have their foundation in [4]. Technological limitations and knowledge gaps prevented the development of approaches that tackle the black-box problem until recently. The black-box problem is a hard problem that describes two inabilities:

1. The inability to explain what the values inside the model actually represent, and
2. The inability to explain the reasons that led the model to produce a given output.

These problems are closely linked but differ profoundly. The first problem concerns the understanding of how a given model works, whereas the second problem concerns the understanding of why a given model produced a particular result. Research and methods that address the first problem are called "Explainable AI", whereas "Explanatory AI" or "Interpretable AI" addresses the latter problem.

Related work

There is a distinction among models that are interpretable by design (such as regression, decision trees, K-Nearest Neighbour, rule-based learning) and black box models (e.g. Support Vector Machines, Artificial Neural Networks), which need to be explained by means of external augmentation techniques. An alternative categorization of these models is into transparent models and ad-hoc explainability. Ad-hoc techniques are developed to augment models which are not readily interpretable by design. Ad-hoc explainable techniques can be classified into:

• Text explanations: Techniques that deal with explainability by learning to generate text explanations that help explain the results from a given model [6].
• Visual explanation: Techniques for ad-hoc explainability that visualise the black box model's behaviour [7,8]. These visualisation approaches aid human interpretation by visualizing complex interactions among variables involved in the model.
• Local explanation: Techniques that create explanations representing smaller solution subspaces which are relevant for the whole model. These techniques aim at obtaining discernability characteristics to explain certain parts of the whole model [9].
• Explanations by simplification: A second model is developed based on a trained black box model. The second model aims at reducing the complexity of the black box model to help simplify the understanding of the functioning of the original black box model [10].
• Feature importance/relevance: Techniques for the computation of a relevance score or importance score for the input variables. These methods can reveal the sensitivity of an input variable on the output of a black box model [5].

Plug-in methods are developed to work with a variety of AI methods. For example:

• LIME (Local Interpretable Model-Agnostic Explanations): Generates locally linear models around the predictions of a black box model to explain its functioning [9]. The method is a variant of explanations by simplification as well as of local explanations.
• G-REX: Extracts rules from some AI methods [11]. This was further enhanced in [12] to explain complex AI models in a human-interpretable form.

Limitations of these approaches are that (i) AI expertise is needed to operate the methods, (ii) correlations between features are ignored, which can lead to unrealistic explanations, and (iii) the methods are very sensitive to data variations: small value changes can lead to radically different explanations.
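As an illustration of the local-surrogate idea behind LIME (and of why such explanations can be sensitive to perturbation choices), the following self-contained sketch fits a proximity-weighted linear model to a black box's predictions around one instance. The synthetic data, the random-forest "black box", and the kernel width are all illustrative assumptions; the real LIME package adds refinements (feature discretization, interpretable representations) not shown here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Toy "black box": a random forest trained on synthetic tabular data.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(int)
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def local_surrogate(x0, predict_proba, n_samples=2000, kernel_width=0.75):
    """LIME-style local explanation: fit a weighted linear model to the
    black box's predictions on random perturbations around x0."""
    Z = x0 + rng.normal(scale=0.5, size=(n_samples, x0.size))  # perturb x0
    p = predict_proba(Z)[:, 1]                                  # black-box outputs
    d = np.linalg.norm(Z - x0, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)                   # proximity kernel
    lin = Ridge(alpha=1.0).fit(Z - x0, p, sample_weight=w)
    return lin.coef_                                            # local feature weights

x0 = X[0]
print("local feature weights near x0:",
      np.round(local_surrogate(x0, black_box.predict_proba), 3))
```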
• SHAP (SHapley Additive exPlanations): An approach to feature relevance identification [5]. The method calculates additive feature importance scores for predictions, with a set of desirable properties that the black box model lacked.
• Other approaches that tackle the contribution of features to predictions, as SHAP does, are (i) coalitional Game Theory [13], (ii) local gradients [14], and (iii) the automatic STRucture IDentification method (ASTRID), which inspects which input attributes are exploited by a classifier [15]. Limitations affecting SHAP and its derivatives are that (i) the method is computationally expensive, which makes it impractical in the presence of a large number of instances, and (ii) the techniques often ignore feature dependence and correlation (with the exception of the algorithm called TreeSHAP).
• Methods that obtain visual explanations are (i) Sensitivity Analysis methods (data-based, Monte-Carlo, and cluster-based methods) and a novel input importance measure [8,16], and (ii) a modular ensemble technique which uses a dimension reduction technique and prototyping methods to discover correlations and the importance of input features [7].

There have been several other attempts to improve explanations by means of modular ensemble methods. One of the earliest studies proposed to create a second, less complex model from a set of randomly selected samples of the data [17]. The Simplified Tree Ensemble Learner (STEL) is a more effective and recent approach for simplification [10]. The approach is similar to [18] in that both suggest the creation of two models, where one model is in charge of interpretation and the other of prediction, by means of Expectation-Maximization. DeepSHAP, too, is an ensemble model which stacks multiple classifier systems in addition to Deep Learning models [19].

There are studies that introduce explanations for deep learning models such as Deep Multi-Layer Perceptrons (MLP), Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN). The corresponding approaches target explainability by using external augmentation techniques for either local explanations, feature importance detection, or both.

• MLP: Relatively little has been done to address explainability in MLPs. DeepRED, a computationally expensive approach, uses a decompositional approach to rule extraction by engaging the decision tree method at the neuron level [20]. An approach by [21] uses model simplification through a distillation method called Interpretable Mimic Learning; the algorithm uses gradient boosting trees and hence is also computationally expensive. DeepLIFT is an approach to compute feature importance scores in an MLP by computing an interestingness measure known as LIFT, which is widely used in Association Rule Mining [22]. The approach is computationally efficient, but it cannot detect relationships between inputs and its explanatory capabilities are limited.
• CNN: Due to the popularity of CNNs, numerous attempts have been made to address explainability. For example, Deconvnet uses the feature map from a selected CNN layer to reconstruct the maximum activations [23]. These reconstructions can reveal some insight about the most influential parts of an input image. A subsequent work demonstrated how a saliency map can be generated by iteratively occluding different regions of an input image [23]. The approach is computationally very expensive but can significantly enhance the explainability of a CNN.
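A minimal sketch of this occlusion idea is given below; the patch size, stride, baseline value, and the stand-in scoring function are all illustrative assumptions rather than any published configuration.

```python
import numpy as np

def occlusion_saliency(predict, image, patch=8, stride=8, baseline=0.0):
    """Occlusion-based saliency map (idea only, hyper-parameters illustrative).

    Slides a grey patch over the image and records how much the model's score
    drops; large drops mark regions the prediction depends on.
    """
    h, w = image.shape
    base_score = predict(image)
    sal = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            drop = base_score - predict(occluded)
            sal[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    return sal / np.maximum(counts, 1)

# Stand-in "model": scores the brightness of the image centre (purely illustrative).
predict = lambda img: float(img[24:40, 24:40].mean())
image = np.random.default_rng(0).random((64, 64))
saliency = occlusion_saliency(predict, image)
print("most influential pixel:", np.unravel_index(saliency.argmax(), saliency.shape))
```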
A different approach is to compute a loss for each filter in the first convolutional layer [1].
• RNN: RNNs are commonly used when processing temporal information (i.e. time series of data, data sequences). An approach to extract a specific propagation rule of the RNN uses the Long Short Term Memory (LSTM) and analyses the Gated Recurrent Units of the LSTM [24]. RETAIN (REverse Time AttentIoN) is an approach to detect influential past patterns by means of a two-level neural attention model [25]. These approaches introduce limited explainability but provide a good framework for future improvements.

There is thus a wide spectrum of approaches to achieving explanatory AI. Research is still in very early stages. Current approaches suffer from constraints which prevent a wider adoption in oncology. Many existing methods are either not scalable, not robust, or, most commonly, do not offer legible results. For example, decision trees provide legible explanations of results, but the corresponding algorithms are computationally inefficient and are not robust to noise and outliers. SHAP, on the other hand, is computationally efficient but does not provide legible explanations; it requires human experts to translate the numerical associations it finds.

The way ahead

Current research shows that explanatory AI is possible, but further research is needed to address a gap in knowledge on how to obtain legible interpretations of results produced by a given AI model for oncologists and cancer patients. Key requirements for explanatory AI in oncology are to obtain:

1. Legible, human-interpretable explanations.
2. Explanations that are suitable for the target audience: explanations should not require an AI expert for their interpretation, and should be suitable for interpretation by an oncologist, or by cancer patients, or both.
3. Explanations that provide meaningful information on either the logic involved, or the role of input attributes and their values, or both.
4. If used for automated or semi-automated decision-making, the methods should also explain "legal or similarly relevant effects" on individuals.

These requirements address questions of ethics, accountability, safety and liability. Points 3 and 4 are legally required, e.g. by the EU General Data Protection Regulation. In addition to these requirements, the following capabilities are needed to achieve full acceptance of AI in oncology:

5. Produce explanations in natural language.
6. Explanations need to be informative for a given user. Obvious explanations or explanations that are already known by a user should be avoided or suppressed.
7. Incorporation of user feedback mechanisms for obtaining cooperative, argumentative AI.

Explanatory AI for oncology is within reach, but research is needed. Collaborative research with domain experts will make AI an accepted tool and allow AI to make important contributions to advances in oncology. Some early work has been conducted in this area. A research team at the University of Wollongong developed machine learning ensemble methods that combine an explanatory subsystem with a given black box method [7]. They demonstrated their work as part of a proof-of-concept study and showed that the approach can introduce an effective explanatory subsystem to AI, which in turn can significantly enhance the precision of results.
They demonstrated this on breast cancer survivability prediction for a large set of patients (the SEER dataset), where accuracy improved from 63.27% to 86.96% while offering previously unknown insights into factors that influence survivability [7]. Challenges of translating machine-interpretable results into human-interpretable results remain, though.

Conclusions

Significant advances in addressing the black-box problem in AI have been made in recent years. Challenges remain to render AI a valuable, permitted, and more widely accepted tool in oncology. Continued collaborative research engagements between oncologists, radiologists, and machine learning experts will allow AI to accelerate and drive advances in oncology.

Acknowledgements

The author acknowledges the funding received from the EIS seed grant (University of Wollongong) to carry out some of the presented work in this paper.
3,046.4
2020-10-01T00:00:00.000
[ "Medicine", "Computer Science" ]
SUSY QCD corrections to the polarization and spin correlations of top quarks produced in e+e- collisions

We compute the supersymmetric QCD corrections to the polarization and the spin correlations of top quarks produced above threshold in e+e- collisions, taking into account arbitrary longitudinal polarization of the initial beams.

Introduction

A future linear e+e− collider will be an excellent tool to search for and investigate extensions of the Standard Model (SM) of particle physics [1]. One particularly attractive extension of the SM is Supersymmetry (SUSY) [2], which solves several conceptual problems of the SM. Apart from their direct production, also virtual effects of SUSY particles may lead to observable deviations from the SM expectations. In particular, top quark pair production at a linear collider may be a sensitive probe of such effects. Very high energy scales are involved in the production and decay of top quarks. Moreover, since they decay very quickly, the spin of top quarks is not affected by hadronization effects and becomes an additional observable to probe top quark interactions. At a future linear e+e− collider, the electron (and possibly also the positron) beam may have a substantial longitudinal polarization, which will be an asset to study top quark spin phenomena. We therefore study in this paper the impact of virtual effects of SUSY particles on spin properties of tt̄ pairs in e+e− collisions. We restrict ourselves here to the SUSY QCD sector of the Minimal Supersymmetric Standard Model (MSSM). SUSY QCD corrections to the (spin-summed) differential cross section for e+e− → tt̄ have already been studied quite some time ago [3], and we extend these results by keeping the full information on the tt̄ spin state. The full MSSM corrections to the spin-summed differential cross section have been calculated in [4]. In section 2 we define the spin observables that we calculate in this paper and also discuss how they can be measured. Section 3 gives analytic results for these observables, and section 4 contains numerical results for specific choices of the SUSY QCD parameters. In section 5 we present our conclusions.

Spin observables

We consider the reaction e−(λ−) + e+(λ+) → t + t̄, (1) where λ− (λ+) denotes the longitudinal polarization of the electron (positron) beam¹. Within the Standard Model, spin effects of top quarks in reaction (1) have been analysed first in ref. [5]. QCD corrections to the production of top quark pairs, including the full information about their spins, can be found in refs. [6,7]. Fully analytic results for the top quark polarization [8] and a specific spin correlation [9] to order αs are also available. The top quark polarization is defined as two times the expectation value of the top quark spin operator S_t. The operator S_t acts on the tensor product of the t and t̄ spin spaces and is given by S_t = (σ/2) ⊗ 1, where the first (second) factor in the tensor product refers to the t (t̄) spin space. (The spin operator of the top antiquark is defined by S_t̄ = 1 ⊗ (σ/2).) The expectation value is taken with respect to the spin degrees of freedom of the tt̄ sample described by a spin density matrix R, i.e. P_t = 2⟨S_t⟩ = 2 Tr(R S_t)/Tr(R). (2) For details on the definition and computation of R, see e.g. [6]. The polarization of the top antiquark P_t̄ is defined by replacing S_t by S_t̄ in (2). For top quark pairs produced by CP invariant interactions, we have P_t̄ = P_t.
The spin correlations between t and t̄ can be calculated by using the matrix C_ij. Using arbitrary spin quantization axes â and b̂ for the t and t̄ spins, the spin correlation with respect to these axes is given by the projection c(â, b̂). The directions â, b̂ can be chosen arbitrarily. Different choices will yield different values for the spin correlation c(â, b̂). The spin properties of the top quarks and antiquarks can be measured by analysing the angular distributions of the t and t̄ decay products. For example, if both t and t̄ decay semileptonically, t → b ℓ+ νℓ, t̄ → b̄ ℓ′− ν̄ℓ′, the following double differential lepton angular distribution is sensitive to the tt̄ spin state, with σ being the cross section for the channel under consideration. In Eq. (5), θ+ (θ−) denotes the angle between the direction of flight of the lepton ℓ+ (ℓ′−) in the t (t̄) rest frame and the chosen spin quantization axis â (b̂). The coefficients B1,2 and C are related to the mean (averaged over the scattering angle) t (t̄) polarization and spin correlation projected onto the directions â and b̂. Using the double pole approximation [10] for the t and t̄ propagators, one obtains for the so-called factorizable contributions [11,12] B1 = κ+ P̄t · â, where the overline indicates the average over the scattering angle and y is the cosine of the top quark scattering angle. In (6), κ± is the spin analysing power of the charged lepton ℓ±. At leading order, κ± = +1. QCD corrections to this result are at the per mille level [13]. SUSY QCD corrections to the spin analysing power κ± are exactly zero [14].

Analytic results

We now turn towards the calculation of the SUSY QCD corrections to the polarization and spin correlations of top quark pairs produced in e+e− collisions. These corrections directly determine the SUSY QCD corrections to the double lepton distribution (5) within the double pole approximation, since the corrections to the LO result κ± = +1 are exactly zero and the non-factorizable contributions due to SUSY particles also vanish within that approximation. The amplitude for reaction (1) including SUSY QCD corrections may be written in terms of the electroweak couplings g^e_V = −1/2 + 2 sin²ϑW and g^e_A = −1/2, with ϑW denoting the weak mixing angle. The function χ involves the mass m_Z of the Z boson. We neglect the Z width, since we work at lowest order in the electroweak coupling and the c.m. energy is far above m_Z. The hadronic currents have a form factor decomposition. In (11), V⁰_γ = Q_t, where Q_t denotes the electric charge of the top quark in units of e = √(4πα). Scalar and pseudoscalar couplings proportional to (k_t + k_t̄)^µ and (k_t + k_t̄)^µ γ5 have been neglected in (8), since they induce contributions proportional to the electron mass. In addition, CP violating form factors proportional to (k_t − k_t̄)^µ γ5 are possible in SUSY QCD through a complex phase in the squark mass matrices. In [15] it has been shown that the dependence of the cross section on these phases is weak and that CP odd asymmetries are typically of the order of 10⁻³. We therefore set these phases to zero in the following. To make this paper self-contained, we list the form factors V¹_γ,Z, A¹_γ,Z and S_γ,Z in the appendix. We have performed an analytic comparison to the corresponding results in [3] and found complete agreement.
The electroweak couplings that enter the Born results, and the quantities in terms of which the SUSY QCD contributions may be written, are defined accordingly. It is convenient to write the results in terms of the electron and top quark directions p̂ and k̂ defined in the c.m. system, the cosine of the scattering angle y = p̂ · k̂, the scaled top quark mass r = 2m_t/√s, and the top quark velocity β = √(1 − r²). The differential cross section including the SUSY QCD corrections follows, and dσ₀/dy is obtained by setting the SUSY QCD contributions to zero. We further introduce a vector perpendicular to k̂ in the production plane, k̂⊥ = p̂ − y k̂, and a vector normal to this plane, n̂ = p̂ × k̂. The top quark polarization including the SUSY QCD corrections is equal to the top antiquark polarization. The Born results P⁰_t and C⁰_ij are obtained from (19) and (20) accordingly. For fully polarized electrons (or positrons) a so-called 'optimal spin basis' can be constructed. This is an axis d̂ with respect to which the t and t̄ spins are 100% correlated at the tree level in the Standard Model for any velocity and scattering angle [16]. One gets a solution d̂ with x ∈ [−1, 1] only if either P+ = 0 or P− = 0. For P+ = 0, which can be realized with left-handed electrons (λ− = −1), the explicit form of d̂ follows; for right-handed electrons, the optimal basis is obtained by an analogous replacement. Note that at threshold d̂ → p̂ for β → 0, i.e. the optimal basis at threshold is defined by the direction of the beam, while in the high-energy limit d̂ → k̂ for β → 1, i.e. the optimal basis coincides with the helicity basis. By analytically evaluating d̂_i C¹_ij d̂_j we find that the virtual SUSY QCD corrections to the tt̄ spin correlations in the optimal basis are exactly zero.

Numerical results

In this section we present numerical results for the SUSY QCD corrections to the top quark polarization and tt̄ spin correlations. We also include a discussion of the corrections to the differential cross section and compare our results to the literature. We take into account the effects of mixing of the chiral components of the top squark. The stop mass matrix can be expressed in terms of MSSM parameters, where M_Q̃, M_Ũ are the soft SUSY-breaking parameters for the squark doublet q̃_L (q = t, b) and the top squark singlet t̃_R, respectively. Further, A_t is the stop soft SUSY-breaking trilinear coupling, and µ is the SUSY-preserving bilinear Higgs coupling. The ratio of the two Higgs vacuum expectation values is given by tan β, and we use the abbreviation s_W = sin θ_W. The squared physical masses of the stops are the eigenvalues of the above matrix. In order to simplify the discussion, we set tan β = 1 for all following results. Further, we assume that the sbottom mass matrix is diagonal with degenerate mass eigenvalues. Neglecting m_b in the sbottom mass matrix, this leads to M_Q̃ = m_b̃, and the stop mass matrix simplifies under the above assumptions, with M_LR = A_t − µ. The stop mass eigenstates are obtained from the chiral states by a rotation. Maximal mixing (θ_t̃ = π/4 and M_LR ≠ 0) corresponds to M²_Ũ = m²_b̃. The latter relation will also be assumed for M_LR = 0, leading to the stop mass eigenvalues². Note that we use here the same set of assumptions on the squark mass matrices as we did in our study of the SUSY QCD corrections in the decay of polarized top quarks [14].
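The explicit eigenvalue formula announced above is missing from this extract. Under the stated simplifications the diagonal entries of the stop mass matrix become degenerate and its eigenvalues follow immediately; the block below is our reconstruction consistent with the text, not a quotation from the paper.

```latex
% Reconstruction (not a quotation) of the simplified stop mass matrix and its
% eigenvalues under the stated assumptions: tan(beta) = 1 (so the D-terms vanish),
% M_Qtilde = M_Utilde = m_sbottom, and maximal mixing theta_stop = pi/4.
\[
\mathcal{M}^2_{\tilde t} =
\begin{pmatrix}
  m_{\tilde b}^2 + m_t^2 & m_t\, M_{LR} \\
  m_t\, M_{LR}           & m_{\tilde b}^2 + m_t^2
\end{pmatrix},
\qquad
m_{\tilde t_{1,2}}^2 = m_{\tilde b}^2 + m_t^2 \mp m_t\, M_{LR},
\qquad
\theta_{\tilde t} = \frac{\pi}{4}.
\]
% For the numerical inputs quoted in the next paragraph (m_t = 174 GeV,
% m_sbottom = 100 GeV, M_LR = 200 GeV) this gives stop masses of about
% 74 GeV and 274 GeV (which of the two is the "light" stop depends on the
% sign of M_LR, cf. the footnote below).
```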
Further, we use sin²θ_W = 0.2236 and α_s = 0.11, and we set the top mass to m_t = 174 GeV and the sbottom mass that enters Eq. (27) to m_b̃ = 100 GeV. We have compared our results for σ and dσ/dy with [3] and found agreement with their Fig. 3 (no mixing case), while we disagree with the results depicted in their Fig. 4 (σ and forward-backward asymmetry with stop mixing). We have also compared our results including the mixing with [4,18] and find complete agreement.

² Note that by fixing θ_t̃ = π/4 the light stop can be either t̃1 or t̃2 depending on the sign of M_LR.

We now turn towards the discussion of the SUSY QCD corrections to the tt̄ spin properties. In Fig. 3 we investigate the expectation value of the top spin operator as a function of the centre-of-mass energy. We have computed the average projected polarization defined in Eq. (7) for three choices of the quantization axis â, namely for â = k̂ (flight direction of the top), for â = p̂ (electron beam direction), and for â = n̂ (normal to the event plane). These quantities are shown in three different plots, where thin curves correspond to the tree level results and the thick curves are the relative corrections in percent. The corrections are shown for the case of mixing (θ_q̃ = π/4 and M_LR = 200 GeV) and a gluino mass of m_g̃ = 150 GeV. For the polarizations of the initial beams we choose λ+ = 0 and consider the three cases λ− = −1, 0, +1. The projection of the top quark polarization onto n̂ vanishes at tree level, and thus we only show the contribution from SUSY QCD absorptive parts in percent. In all cases SUSY QCD effects change the tree level results by less than 1% and vanish at threshold. In Fig. 4 we show the averaged spin correlations â_i C_ij b̂_j for the choices â = b̂ = k̂ (helicity correlation), â = b̂ = p̂ (beamline correlation), and â = p̂, b̂ = k̂, for the same choice of parameters as in Fig. 3. Again the SUSY QCD corrections are tiny. Fig. 5 shows the correlations for the choices â = k̂, b̂ = n̂ and â = p̂, b̂ = n̂. The first of these two choices of spin quantization axes leads to SUSY QCD effects slightly larger than 1% around c.m. energies of 700 GeV and for a fully polarized electron beam.

Conclusions

In this paper we have derived analytic expressions for the SUSY QCD corrections to the polarization and spin correlations of tt̄ pairs produced in e+e− annihilation with longitudinally polarized beams.

Figure 4: Same as Fig. 3, but for the quantities k̂_i C_ij k̂_j (top), p̂_i C_ij p̂_j (middle), and p̂_i C_ij k̂_j (bottom).

We find (with C_F = (N_C² − 1)/(2 N_C) = 4/3): (α_s/π) C_F [g^t_A cos²θ_t̃ − Q_t sin²ϑ_W] C_11,24 + [g^t_A sin²θ_t̃ − Q_t sin²ϑ_W] C_22. (A.7) In the above expressions, the one-loop integrals C^ij_0, ..., C^ij_24 are defined by the decomposition of Passarino and Veltman [19], with (k_V = k_t + k_t̄): (A.9) The quantities δZ_R,L denote the one-loop renormalization constants for the chiral components of the top quark field in the on-shell renormalization scheme. They are given explicitly by
3,472
2003-01-17T00:00:00.000
[ "Physics" ]
Are Mendelian randomization investigations immune from bias due to reverse causation? Mendelian randomization uses genetic variants as instrumental variables to make causal inferences about the effect of a risk factor on an outcome [1, 2]. If a genetic variant satisfies the instrumental variable assumptions for the given risk factor and outcome [3], then an association between the genetic variant and the outcome implies the risk factor affects the outcome in some individuals at some point in the life-course [4]. Combining the instrumental variable assumptions with further assumptions and precise specification of the outcome (including specifying a time period for the outcome) allows valid testing of a more specific causal hypothesis and/or valid estimation of global or local, and point or period average causal effects [5]. Two motivations for Mendelian randomization are primarily stated: avoiding bias from unmeasured confounding and avoiding bias from reverse causation [6]. Reverse causation occurs when the outcome variable at an earlier timepoint, or a proximal precursor of the outcome (such as pre-clinical disease), has a causal effect on the risk factor which can bias estimates of the effect of the risk factor on the outcome. Though it can often be viewed as a specific form of confounding (when pre-clinical disease is a shared cause of the risk factor and outcome leading to violation of exchangeability conditions [7]), reverse causation has been treated as distinct from other forms of confounding in the motivation for Mendelian randomization [6, 8]. (We underscore that reverse causation does not imply that time flows backwards or somehow that future measurements influence the past, but that even if the outcome is measured at a later timepoint to the risk factor, either the outcome at an earlier timepoint or a precursor of the outcome may have influenced the measured value of the risk factor.) An individual’s genetic code is fixed at conception. This implies that associations between genetic variants and subsequent outcomes are less vulnerable to bias from many sources of confounding and reverse causation. For example, environmental or lifestyle factors that occur post-conception cannot be a cause of the genetic variants and therefore cannot be a shared cause of the variants and outcome. Further protection from confounding comes from the random allocation of genetic variants during meiosis and from random mating within the population (although completely random mating is not plausible, mating is often plausibly random with respect to the genetic variants included in Mendelian randomization analyses) – meaning that genetic variants are often independent of confounding factors other than ancestry [9, 10]. It has also often been stated that the fixed nature of the genetic code provides complete immunity to bias from reverse causation in Mendelian randomization studies because genetic variants must precede the outcome in time. For example, Davey Smith and Ebrahim [8] wrote about “the lack of possibility of reverse causation as an influence on exposure–outcome associations in both Mendelian randomization and randomized controlled trial settings” and remarked “the instrument will not be influenced by the development of the outcome (i.e., there will be no reverse causation)”. Here, we demonstrate how reverse causation can lead to bias in Mendelian randomization analyses. 
For each scenario, we show that even though the variant-outcome associations may not suffer from reverse causation, reverse causation between the risk factor and outcome, either in individuals or across generations, can result in bias in Mendelian randomization analyses. That is, even though the outcome may not cause the genetic variant (and thus the variant-outcome association may not seem to suffer from reverse causation), the type of reverse causation that affects traditional analyses may still indeed bias estimates from Mendelian randomization studies (when a Mendelian randomization analysis is undertaken to estimate a causal parameter) and invalidate causal conclusions (when a Mendelian randomization analysis is undertaken to test a causal hypothesis) [11,12]. In the former case, bias relates to a specified average causal effect estimate; in the latter case, bias relates to the test statistic for a causal hypothesis.

Scenario 1. Genetic association with the risk factor is not primary

The first mechanism we consider is that a genetic variant is associated with the risk factor via a primary effect of the variant on the outcome variable or on a precursor of the outcome (Fig. 1). By primary, we mean that the risk factor occurs upstream of the outcome in all directed causal paths from the genetic variant to the outcome; that is, all directed causal pathways from the genetic variant to the outcome at a specified follow-up time pass via the risk factor at preceding times. In the opposite scenario, the genetic association with the risk factor is not primary if the effect of the genetic variant on the risk factor is mediated (at least in part) by the outcome. As an example, testosterone has been hypothesized as a possible causal risk factor for polycystic ovary syndrome (PCOS). Genetic variants that predict testosterone concentration in women have been shown to be associated with risk of PCOS [13]. However, one of the symptoms of PCOS is increased testosterone. Therefore, it may not be that elevated testosterone leads to increased risk of PCOS, but that increased predisposition to PCOS leads to elevated testosterone levels. Genetic variants identified as instruments for testosterone may not affect testosterone directly, but rather via their association with PCOS. The variants may affect risk of PCOS directly (Fig. 1a) or indirectly via an alternative risk factor for PCOS or pre-clinical PCOS (Fig. 1b). The genetic variants are still primary in the causal chain, but reverse causation between the putative risk factor and outcome means that the variants influence the risk factor secondarily. In this case, an association between the genetic variants and outcome can be present without a causal effect of the risk factor on the outcome. As a further example, genetic variants associated with aspirin treatment were used in a Mendelian randomization analysis to assess the effect of aspirin use on risk of lung cancer [14]. However, the genetic predictors of aspirin use are all also associated with risk of coronary heart disease [15].
It is likely that the genetic associations with aspirin use arise because individuals with coronary heart disease, or with high levels of risk factors for coronary heart disease, are preferentially prescribed aspirin. As coronary heart disease and lung cancer are competing outcomes, the reported protective effect of aspirin on lung cancer risk in the Mendelian randomization analysis may be due to the genetic associations with aspirin being secondary to effects acting via coronary heart disease and/or risk factors for coronary heart disease. This could lead to alternative pathways from the genetic variants to the outcome not via the risk factor. Genetic associations will broadly be weaker when the path from the gene to the trait is less direct. However, as sample sizes for genetic discovery increase, it is increasingly likely that some genetic associations with risk factors are secondary to their association with another variable. The chances of finding such a variant also increase when reverse causation between the risk factor and outcome is stronger. In other words, if Mendelian randomization is being used specifically because of concerns about reverse causation in a traditional observational analysis, the risk of bias due to reverse causation via this mechanism in Mendelian randomization will also be higher. In this scenario, not only are effect estimates expected to be biased, but tests of causal null hypotheses are also not valid.

Fig. 1 Diagrams illustrating relationships between a genetic variant (G), risk factor (X), and an outcome (Y), where the effect of the genetic variant on the risk factor is (a) through its effect on the outcome previous to the risk factor (Y0) and (b) through a confounder (C), a common cause of risk factor and outcome. Unmeasured confounding is represented by U. In both diagrams, the effect of interest is the effect of X on Y.

Scenario 2. Feedback mechanism

Secondly, Mendelian randomization studies with genetic variants that have direct effects only on the risk factor (i.e. they do not directly affect the outcome) can still suffer from bias due to reverse causation. For instance, if the risk factor influences the outcome and the outcome influences the risk factor at a later time-point (Figure 2a), then genetic associations with the risk factor will be distorted, and Mendelian randomization estimates may be misleading. As an example, genetic variants that predict obesity have been shown to associate with income in women [16]. However, income affects many lifestyle factors, including obesity, leading to a feedback loop. A similar story can be told for cigarette smoking and obesity: genetic predictors of obesity associate with increased smoking prevalence (perhaps smokers seeking to reduce weight) [17], but genetic predictors of cigarette smoking associate with decreased weight (as cigarette smoking is an appetite suppressant) [18]. Depending on the strength and direction of the reverse causal effect and the prevalence of the outcome, genetic associations with the measured value of the risk factor can be over- or underestimated due to reverse causation [19]. However, some tests of causation will be valid regardless of the presence of this type of reverse causation [5]. For instance, this type of reverse causation will not affect the validity of a test of the sharp causal null (that there is a causal effect in at least one person at one point in time) of the risk factor on the outcome, assuming the instrumental variable assumptions hold (Fig. 2b).
2b). This is because an association between the genetic variant and outcome still reflects the existence of pathways that go through the risk factor first, even though effect estimation cannot as readily tease apart the feedback loops. Feedback scenarios can also occur for reasons other than reverse causation. A different feedback scenario is that individuals with high levels of a risk factor will preferentially take medication to lower the risk factor. For example, individuals with high levels of cholesterol are more likely to take cholesterol-lowering medication, and similarly for blood pressure. The reverse is true for factors that are beneficial for health outcomes. For example, pregnant women with low iron status are more likely to take iron supplements [20]. In extreme cases where intervention on the risk factor is common and substantial, it may even be that medication or supplementation completely attenuates or even reverses genetic associations with the risk factor. This is particularly important in the example of iron and pregnancy, as the risk factor of interest is not maternal iron levels in general, but maternal iron levels during the critical period of pregnancy. Scenario 3. Cross-generational effects Finally, even though they are fixed at the start of an individual's life, genetic variants are inherited from an individual's parents. Hence when considering effects that may span across generations, an individual's genetic variants are no longer primary in the causal chain. Therefore, when trying to estimate the effect of a risk factor for individuals in one generation, the outcome in the parental generation could influence the outcome or confounders of the outcome in the target generation directly, leading to a pathway from the genetic variants of an individual in the target generation to their outcome that is not via their risk factor (Fig. 3). Fig. 3: Mendelian randomization estimates will typically be non-null, but biased. When the risk factor does not cause the outcome in either generation (panel b), Mendelian randomization estimates will not be biased and will provide a valid test of the sharp causal null hypothesis. Shared causes of the parent's exposure and outcome, and their effects on the child's exposure and outcome that are not relevant to the bias under study, are omitted for clarity. While this scenario stretches the common understanding of reverse causation somewhat, this is still an example of the outcome influencing a downstream variable, even if the outcome in this case is in the previous generation, and so we believe it is worth discussing while addressing the topic of reverse causation. For instance, the same genetic variants that predispose an individual to increased alcohol consumption also predispose at least one of the individual's parents to increased alcohol consumption. Outcomes in the offspring generation may be driven by the outcomes caused by the parents' alcohol consumption, rather than by the offspring's alcohol consumption directly. Hence there may be causal effects of alcohol even amongst individuals who themselves do not drink. Additionally, increased parental predisposition to drinking alcohol may affect offspring alcohol consumption, distorting Mendelian randomization estimates. As a further example, genetic variants associated with body mass index may be associated with outcomes not only due to the effect of obesity in the individuals observed, but also due to obesity and its consequences in the parent generation.
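As with Scenario 1, a toy simulation can make the cross-generational mechanism concrete. Everything below is an illustrative assumption, including the simplified inheritance (direct transmission of the parental genotype), the effect sizes and the use of a simple Wald ratio; it is not an analysis from the paper. The offspring's risk factor has no effect on the offspring's outcome, yet the offspring Wald ratio is non-null because the shared variant drives the parental risk factor, which drives the offspring outcome.

import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Parent generation: genotype -> risk factor -> outcome
g_parent = rng.binomial(2, 0.3, n)                 # parental allele count
x_parent = 0.5 * g_parent + rng.normal(size=n)     # e.g. parental alcohol consumption
y_parent = 0.4 * x_parent + rng.normal(size=n)     # parental outcome

# Offspring genotype: simplified here as direct transmission of the parental genotype
g_child = g_parent
x_child = 0.5 * g_child + rng.normal(size=n)       # offspring risk factor
# Offspring outcome is driven ONLY by the parental outcome, not by x_child
y_child = 0.6 * y_parent + rng.normal(size=n)

# Wald ratio in the offspring generation (beta_GY / beta_GX)
beta_gx = np.cov(g_child, x_child)[0, 1] / np.var(g_child)
beta_gy = np.cov(g_child, y_child)[0, 1] / np.var(g_child)
print("Offspring Wald ratio (0 expected if only offspring effects mattered):", beta_gy / beta_gx)
# The estimate is clearly non-zero, illustrating bias from cross-generational effects.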
From the perspective of aetiology, this is not always such a serious problem: even if the offspring outcomes are driven by the risk factor and its consequences in the parents, it is still the risk factor that is causal for the outcome. However, from the perspective of intervention, changing the risk factor in the offspring may not lead to the consequences for offspring outcomes that are predicted by straightforward interpretation of a Mendelian randomization estimate. Hence Mendelian randomization investigations with cross-generational effects are able to assess the causal relevance of the risk factor in a broad sense, in that they can test the sharp causal null hypothesis, rejection of which implies that the risk factor affects the outcome in at least one generation. However, the pathway by which the risk factor influences the outcome may be driven by the effect of the risk factor in a previous generation. Discussion and conclusion In this short manuscript, we have discussed three ways in which Mendelian randomization analyses may be susceptible to bias due to reverse causation. Although in some cases a causal hypothesis can still be validly tested, in other cases causal inferences of all types from the approach may be unreliable. Several methodological researchers have already cautioned against interpretation of causal effect estimates from Mendelian randomization as the expected impact of intervening on the risk factor in a clinical setting, or even advised against presenting causal effect estimates at all [4,11,21]. This manuscript provides further reasons for caution not only in the interpretation of effect estimates, but also in the validity of causal null hypothesis testing. It is important to appreciate context when interpreting findings from a Mendelian randomization analysis, and to be aware that the estimated causal effect of the risk factor (which typically gets interpreted as the impact of a lifelong change in the trajectory of a risk factor) may not be achievable by a practical intervention on the risk factor in the target population. Drawing directed acyclic graphs, carefully defining the risk factor and outcome (in a way that acknowledges time), and thinking closely about how the genetic variant influences the trajectory of the risk factor will help analysts to precisely define the causal effect of interest, and hence detect the possibility for findings to be influenced by reverse causation. There are several approaches that can be taken by investigators to mitigate or identify bias due to reverse causation. Some of this guidance follows best practices for Mendelian randomization studies more broadly [12]. Overall, where possible, Mendelian randomization analyses should be performed using genetic variants for which the mechanism of association of the variants with the risk factor is both primary and well-understood. As a consequence of this, investigators should prioritize Mendelian randomization analyses for risk factors that have proximal genetic variants. When the mechanism linking genetic variants and risk factors is unclear or distant, inferences from Mendelian randomization generally carry less evidential weight. As for advice more specific to the scenarios considered here: first, statistical methods have been developed to help distinguish whether genetic variants primarily influence the risk factor or another variable (as per Scenario 1).
The MR-Steiger method measures the proportion of variance explained by a genetic variant in the risk factor and in the outcome [22] and can be used to flag, for removal from the analysis, variants that are more strongly linked to the outcome than to the risk factor. This method is not guaranteed to identify Scenario 1, and is sensitive to measurement error. Secondly, simulations can be used to explore the extent of bias due to feedback mechanisms (as per Scenario 2), although this relies on strong assumptions about the temporal nature and magnitude of the feedback [19]. Thirdly, statistical methods have been developed to consider cross-generational effects (as per Scenario 3) when data are available on parents and offspring [23,24]. If such data are not available, researchers should interpret a Mendelian randomization investigation with caution when it is plausible that causal effects may span across generations. Scenarios 2 and 3 further underscore the general recommendation to view Mendelian randomization as primarily testing a causal null hypothesis rather than estimating a causal effect [12]. In conclusion, while it is fair to say that Mendelian randomization investigations offer some protection against biases that can be conceptualized as reverse causation, it is not reasonable to claim that Mendelian randomization investigations are totally immune from the phenomenon. Researchers should consider carefully whether their findings could be explained by genetic variants having a primary association with the outcome, and how previous versions of an outcome (within an individual or across generations) can impact the stated risk factor. Compliance with ethical standards Conflict of interest/competing interests The authors have no conflicts of interest to declare that are relevant to the content of this article. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
5,066.6
2021-02-21T00:00:00.000
[ "Medicine", "Biology" ]
The Influence of Disorder in Multifilament Yarns on the Bond Performance in Textile Reinforced Concrete In this paper we analyze the performance of a bond layer between the multi-filament yarn and the cementitious matrix. The performance of the bond layer is a central issue in the development of textile-reinforced concrete. The changes in the microstructure during the loading result in distinct failure mechanisms on the micro, meso and macro scales. The paper provides a brief review of these effects and describes a modeling strategy capable of reflecting the failure process. Using the model of the bond layer we illuminate the correspondence between the disorder in the microstructure of the yarn and the bonding behavior at the meso- and macro level. Particular interest is paid to the influence of irregularities in the micro-structure (relative differences in filament lengths, varying bond quality, bond-free length) for different levels of local bond quality between the filament surface and the matrix. Introduction Textile reinforced concrete (TRC) has emerged in the last decade as a new composite material combining the textile reinforcement with the cementitious matrix. Its appealing feature is the possibility to produce filigree high-performance structural elements that are not prone to corrosion, as is the case for steel reinforced concrete. In contrast to other composite materials, in TRC both the matrix and the reinforcement exhibit a high degree of heterogeneity of their material structure at similar scales of resolution. As a consequence, the fundamental failure mechanisms in the yarns, in the matrix and in the bond layer interact with each other and can result in several macroscopically different failure modes. The development of a consistent material model for textile reinforced concrete requires the formulation and calibration of several sub-models on several scales of resolution. Each of these models represents the material structure at the corresponding scale (Fig. 1) with a focus on specific damage and failure mechanisms. The following correspondence between the scales, the observable components of the material structure and their interactions is specified: micro level - filament, matrix, bond filament-matrix; meso level - yarn, matrix, bond yarn-matrix; macro level - textile, matrix, bond textile-matrix. While models at the micro level are able to capture the fundamental failure and damage mechanisms of the material components (e.g. filament rupture and debonding from the matrix), their computational costs limit their application to small size representative unit cells of the material structure. On the other hand, macro level models provide sufficient performance at the expense of a limited range of applicability. Generally, all the scales must be included in the assessment of the material performance. The chain of models at each scale may be coupled (1) conceptually, by clearly defining the correspondence between the material models at each level, or (2) adaptively, within a single multi-scale computation, to balance accuracy and performance in an optimal way [1,2]. Due to the complex structure of textile reinforced concrete at several levels (filament - yarn - textile - matrix) it is effective to develop a set of conceptually related sub-models for each structural level covering the selected phenomena of the material behavior. The homogenized effective material properties obtained at a lower level can be verified and validated using experiments and models at higher level(s).
The present paper is focused on the role of disorder in the bond layer between the yarn and the matrix. In Sec. 2 we review the elementary effects occurring in the bond layer during loading. After that, in Sec. 3, the model capturing some of these effects is introduced. Then, in Sec. 4, the calibration for a particular combination of yarn and matrix is performed, and finally, in Sec. 5, a parametric study shows the interaction effects between two failure mechanisms, namely the debonding of filaments from the matrix and the rupture of the filaments, with an included disorder in the bundle. The bundle of filaments constituting the yarn exhibits nonlinear behavior due to disorder in the filament structure. The delayed activation of individual filaments leads to a gradual growth of stiffness at the beginning of the loading process, the friction between filaments influences the maximum stiffness reached during loading, and both these effects influence the rate of failure after reaching the maximum force. Both in the filaments and in the yarn we may also observe the statistical size effect leading to reduced strength with increasing length [3]. The fine grained concrete matrix exhibits the evolution of microcracks in the fracture process zone that gradually close up to the macro crack. However, for the purpose of the present study, focused solely on the role of heterogeneity in the yarn, this influence can be disregarded. The interaction between the reinforcement and the matrix can be seen on the micro- and meso-scales shown in Fig. 2. The fine scale interaction between the filament and the matrix includes the phases of bonding, debonding and friction. The interaction at the level of the yarn and matrix includes the same phases, but each of these phases includes fine scale interaction modes between the filaments and the matrix. Due to the complex structure of the failure process zone, the yarn-matrix bonding behavior cannot be captured without analyzing the interaction effects in micro-mechanical terms, as is done in this paper. Another interaction effect occurs upon cracking of the matrix and the evolution of crack bridges leading to the tension stiffening effect in the overall response. This interaction is studied using meso-scale models, and goes beyond the scope of the present study [4]. The same holds for the interaction at the level of textile structures embedded in the matrix, which can be addressed by a macromechanical treatment [5,6]. Model of the bond layer In this model the interface layer between the yarn and the matrix is regarded as a set of laminas interacting with the matrix through the given bond law. The laminas represent groups of filaments with the same characteristics, and are coupled with the matrix using zero thickness interface elements [7]. The disorder in the filament bundle is taken into account using one of three distributions of filament properties: (1) the distribution of the bond quality, diminishing from the outside to the inside of the yarn; (2) the distribution of the bond-free length, increasing from the outside to the inside of the yarn; and (3) the distribution of the delayed activation of filaments within the bond-free length.
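As a purely illustrative sketch (not code from the paper; the lamina count, profile shapes and parameter values are assumptions), the three distributions just listed can be generated as simple monotone profiles over the lamina index, with bond quality decreasing and bond-free length and activation strain increasing from the outside to the inside of the yarn:

import numpy as np

n_laminas = 10                              # assumed number of filament groups (laminas)
r = np.linspace(0.0, 1.0, n_laminas)        # 0 = outermost lamina, 1 = innermost lamina

# (1) bond quality: 100 % at the outside, reduced towards the inside
bond_quality_linear    = 1.0 - 0.8 * r      # assumed 20 % residual quality at the core
bond_quality_quadratic = 1.0 - 0.8 * r**2
bond_quality_cubic     = 1.0 - 0.8 * r**3

# (2) bond-free length: increasing from the outside to the inside (assumed maximum 5 mm)
bond_free_length = 5.0 * r

# (3) delayed activation: activation strain increasing with the bond-free length (assumed max 0.2 %)
activation_strain = 0.002 * r

for name, q in [("linear", bond_quality_linear),
                ("quadratic", bond_quality_quadratic),
                ("cubic", bond_quality_cubic)]:
    print(name, np.round(q, 2))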
These distributions do not represent the disorder in the filament bundle directly. The bundle geometry is assumed in the form of a parallel set of filaments. The effect of disorder is reflected indirectly in terms of the mentioned distribution functions inducing an inhomogeneous stress transfer throughout the bond layer, which is assumed to occur in a similar way in the heterogeneous material structure. The model can serve the purpose of capturing the influence of the variations in the bond performance on the macroscopically observable failure process, so that these variations may be quantified in a calibration procedure. The calibration of the model is performed using both the load-displacement curve and the curve representing the instantaneous fraction of the broken filaments during the loading process. The latter is obtained experimentally by optical recording of the light transmission through the unbroken filaments [8]. Using this model and the experimental data we are able to derive the effective bond law of the bond layer between the whole yarn (filament bundle) and the matrix, which can be used at the higher modeling levels. Identification of the characteristic parameters for the bond layer For the selected combination of the yarn and the matrix, the parameters characterizing the tensile behavior of the yarn and the parameters of the bond between the filament surface and the matrix can be determined from the preliminary numerical and experimental study. In particular, the characteristics of the yarn and of the filaments can be derived from tensile tests on yarn. The applied stochastic modeling of the multi-filament bundle also allowed us to obtain statistical distributions of strength and stiffness along the yarn, as described thoroughly in [9,10]. The local bonding between the filament surface and the matrix has been characterized by a bond model with parameters calibrated using the single filament pull-out experiment [11]. The sought material characteristics are the distributions of the bond quality, bond-free length and activation strain across the filament bundle in the bond layer. The calibration procedure [12,13] is based on the experimental data shown in Fig. 3. Here, the left diagram shows the load-displacement curves and the right diagram shows the diminishing fraction of unbroken filaments during the pull-out test for four selected specimens. Before presenting the calibrated results, we first show the qualitative influence of the variations in the bond quality across the yarn cross section. Three examples of assumed bond quality distributions are shown in Fig. 4, with the maximum achievable shear flow (100 %) at the outside of the yarn and linear, quadratic and cubic reduction in the internal layers. The influence of a linear, a quadratic and a cubic bond quality distribution (Fig. 4) on the pull-out curve and on the progression of filament rupture is shown in Fig. 5. While the linear and the quadratic interpolation functions result in a sharp kink in the pull-out curve at the onset of filament rupture, the cubic distribution leads to a curve with a higher deformation capacity and is able to qualitatively reproduce the pull-out behavior and the progression of filament breaks measured in the experiment. While the form of the bond quality distribution influences the post-peak slope of the pull-out curve, the maximum pull-out force primarily depends on the tensile strength of the filaments. This correspondence is documented in Fig.
6. Higher tensile strength of the filaments results in a higher pull-out force. Furthermore, higher filament strength leads to a higher frictional force at the end of the pull-out test because a greater number of filaments are pulled out prior to their rupture. The effect of filament stiffness and strength and of the local bonding stiffness on the initial slope of the pull-out curve is not significant. On the other hand, their effect on the fraction of broken filaments is much higher. In other words, the initial stiffness in the pull-out test cannot be reproduced solely by reducing the bond quality and the tensile strength of the filaments. As a consequence, the reduced pull-out stiffness is explained by the existence of a free length inside the specimen between the macroscopic boundary of the matrix and the first contact of the filaments with the matrix inside the specimen. Similarly to the bond quality, we assume that the bond-free length increases from the outside of the yarn cross section to the inside, which is illustrated in Fig. 7. The influence of this free length on the initial stiffness is demonstrated in Fig. 8. The initial stiffness and the maximum pull-out force decrease with increasing bond-free length. Due to the shorter embedding length of the filaments, the number of filaments being pulled out (i.e. debonded) increases and results in a higher frictional force at the end of the pull-out test. The filaments in the bundle exhibit a waviness which is illustrated in Fig. 9. Within the internal free length the filaments have the possibility to straighten before they get activated. The delayed activation of the individual filaments is modeled by an activation strain, which has to be reached before a filament takes up force. The activation strain increases with increasing free length. In Fig. 10, the influence of linear distributions of the activation strain with different maxima is shown. The increasing delay of the activation results in a further reduction of the initial stiffness and of the maximum pull-out force. In contrast to the parameters described above, it does not influence the number of broken filaments. Using the parameters described above, a calibration of the model is possible, as exemplified in Fig. 11. The calibrated distribution of the bond quality across the yarn cross section provides the basis for further modeling on the meso and macro level. Parametric study of the bond performance with disorder in the yarn In the previous section the characteristics of the material structure in the bond layer were introduced and their influence on the bond performance was shown. In the following we will study the influence of the local bond quality on the overall bond performance. As already specified, the bond behavior is described by a bilinear bond law (Fig. 12) including the phases of adhesive bond, debonding and friction. The local bond quality can be modified by changing the maximum bonding stress t_max, the frictional stress t_fr and their ratio t_max/t_fr. The influence of the maximum bond stress t_max and the ratio of maximum bond stress and frictional stress t_max/t_fr for an embedding length of 30 mm is shown in Fig. 13. It is obvious that the effect of the maximum shear stress on the maximum pull-out force is negligible. This is a result of the long embedding length of 30 mm accumulating a high amount of the frictional stress. Therefore the maximum pull-out force depends essentially only on the frictional stress.
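The bilinear bond law mentioned above can be sketched as a simple piecewise function of slip. The following is an illustrative reconstruction, not the calibrated law from the study; the stiffness, slip limits and stress values are assumed for demonstration only.

import numpy as np

def bilinear_bond_law(slip, t_max=1.0, t_fr=0.5, s1=0.05, s2=0.20):
    """Shear stress (N/mm^2) transferred at a filament surface as a function of slip (mm).

    Phase 1 (adhesive bond): linear increase up to t_max at slip s1.
    Phase 2 (debonding):     linear decrease from t_max to t_fr between s1 and s2.
    Phase 3 (friction):      constant residual stress t_fr beyond s2.
    All parameter values here are assumptions for illustration.
    """
    slip = np.asarray(slip, dtype=float)
    tau = np.where(slip <= s1, t_max * slip / s1,
          np.where(slip <= s2, t_max + (t_fr - t_max) * (slip - s1) / (s2 - s1),
                   t_fr))
    return tau

print(bilinear_bond_law([0.0, 0.025, 0.05, 0.125, 0.3]))
# -> [0.  0.5  1.  0.75  0.5]

Varying t_fr relative to t_max in such a function is the knob turned in the parametric study that follows.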
The dependency of the maximum pull-out force on the frictional stress t_fr is shown in Fig. 14. For a constant maximum bond stress t_max, the maximum pull-out force and the associated displacement decrease with increasing frictional stress t_fr. This effect is rather surprising. It means that the improvement of the bond performance of the filament surface results in a reduction of the resulting bond performance of the bundle. In order to illuminate this effect, the shear flow at maximum pull-out force along the filament is displayed for each lamina in Fig. 15. The laminas in front represent the filaments inside the bundle with a low bond performance, while the rear laminas represent the outer filaments with a high bond performance. A constant shear flow indicates that up to this length the filaments have debonded. The left diagram shows the shear flow distribution across the bond layer for a low level of frictional stress t_fr. The length activated for the stress transfer between the filaments and the matrix is much longer than in the right diagram, showing the shear flow distribution for a higher frictional stress. The longer stress transfer length results in a lower strain of the filaments, and leads to filament rupture at larger control displacements. The reduction of the maximum pull-out force with increasing local bond strength is explained using Fig. 16. The two diagrams show the accumulative pull-out response (thick curve) and the pull-out curves for each lamina separately. For the lower level of frictional stress (t_fr = 0.5 N/mm²) more filaments can be activated simultaneously (left diagram). At the maximum pull-out force 95 % of the filaments are active, and at the end of the loading only 15 % of the filaments get broken while all the rest remain intact. On the other hand, for yarn with a higher level of friction fewer filaments are active at the maximum pull-out force (some of them are already broken). At the end of the loading all the filaments are broken. Thus, even though the inner filaments were able to transfer a higher amount of force to the matrix, the resulting pull-out force was reduced, due to the non-uniformity of the transfer. This qualitative comparison demonstrates the role of disorder represented by the varying bond-free length, delayed activation and spatial variations in the bond quality. The improvement of the local bond performance is counterproductive and results in an earlier failure of the outer filaments with higher bond performance. Due to the increased pull-out stiffness, the pull-out force reaches its maximum at a smaller control displacement. At this displacement most of the inner filaments with lower bond performance have not been activated and cannot contribute to the total pull-out force. Conclusions In this paper, a modeling strategy for supporting the development of textile reinforced concrete is presented. It is based on the assumption that there is no ultimate model able to capture all the aspects of the material behavior. Therefore the models currently being developed in the framework of the collaborative research center are classified and evaluated with respect to the failure mechanisms being captured. It is important that they have a defined validity and clearly specified interfaces. They are applied together in order to study the material response at various scales of material resolution.
The modeling of the bond layer demonstrated that we face a failure process zone with a complex interaction of elementary effects. The parametric study emphasized the role of disorder in this interaction and exemplified that it can reverse the expected correlation between input parameters and material response. The final message of this paper can be put as follows: in the design of cementitious composites reinforced with multi-filament yarns, the issue of disorder must be carefully analyzed. Only with a good knowledge of the phenomena in the microstructure is it possible to balance the performance of the individual components of the material structure to obtain optimum performance of the composite. Fig. 2: Correspondence between scales, components and effects used in this model. Fig. 4: Possible functions representing the decrease in the bond quality.
3,904.6
2004-01-05T00:00:00.000
[ "Materials Science", "Engineering" ]
Diethyl [hydroxy(2-nitrophenyl)methyl]phosphonate In the title molecule, C11H16NO6P, the nitro group is twisted out of the mean plane of the benzene ring at 29.91 (3)°. The two ethyl groups are disordered between two orientations in the ratios 0.784 (7)/0.216 (7) and 0.733 (6)/0.267 (6). Intermolecular O—H⋯O hydrogen bonds link the molecules into centrosymmetric dimers. Comment Phosphonates, especially enantiomerically pure forms, are particularly important in connection with their remarkable biological activities. They have been used as enzyme inhibitors, antibacterial agents, anti-HIV agents, botryticides, and haptens for catalytic antibodies (Allen et al., 1978; Hirschmann et al., 1994). In this regard, the preparation of various optically active phosphonates with a diversity of structures is highly desirable for drug discovery and medicinal chemistry. The title compound (I) was obtained in the reaction of diphenylphosphite with an aromatic aldehyde in the presence of triethylamine. After 15 minutes, triethylamine (0.1 ml) was added, and the reaction mixture was stirred for 2 h at 0°C. The resulting solution was washed with saturated NaHCO3 solution, extracted with dichloromethane and dried over MgSO4. The solution was filtered and purified by column chromatography on silica gel, using ethyl acetate and petroleum as eluant, to afford the title compound. Crystals of (I) suitable for X-ray data collection were obtained by slow evaporation of a chloroform and methanol solution in a ratio of 100:1 at 293 K. Special details Geometry. All e.s.d.'s (except the e.s.d. in the dihedral angle between two l.s. planes) are estimated using the full covariance matrix. The cell e.s.d.'s are taken into account individually in the estimation of e.s.d.'s in distances, angles and torsion angles; correlations between e.s.d.'s in cell parameters are only used when they are defined by crystal symmetry. An approximate (isotropic) treatment of cell e.s.d.'s is used for estimating e.s.d.'s involving l.s. planes. Refinement. Refinement of F² against ALL reflections. The weighted R-factor wR and goodness of fit S are based on F²; conventional R-factors R are based on F, with F set to zero for negative F². The threshold expression of F² > 2σ(F²) is used only for calculating R-factors(gt) etc. and is not relevant to the choice of reflections for refinement. R-factors based on F² are statistically about twice as large as those based on F, and R-factors based on ALL data will be even larger.
620.2
2007-12-06T00:00:00.000
[ "Chemistry" ]
The Influence of Dynamic Replacement Method on the Adjacent Soil The purpose of this paper is to report on the field tests for the formation of a single DR (dynamic replacement) column and its influence on the surrounding weak soil deposit. The influence of the column formation has been assessed with piezocone and dilatometer measurements, as well as by changes in the strength and deformation parameters obtained from the field tests. These measurements were carried out during and after the column formation and at varying distances from the column. The tests carried out have shown that soil close to the column became weaker during the column formation. Subsequently, the soil stiffness and strength were found to increase over time. The weaker the soil was in its natural state, the more significant the strengthening effect became. That indicates that changes occurring around a DR column are complex. The measurements suggest that the changes in soil structure have a tendency to be dependent on the distance from the column, the elapsed time and the type and the initial condition of the soil. Introduction In the dynamic replacement method, stiff granular columns are rammed into natural soil to improve its strength and stiffness characteristics (Fig. 1). Previous studies on the DR method have been based on the results from different types of field tests (Table 1). They were sometimes additionally accompanied by laboratory tests [1,2]. All tests focused on the influence of a group of columns on the adjacent soil. The procedures were performed before and after soil strengthening. In a few cases they were performed during column formation [3,4]. The findings show that the stiffness and strength of the column are greater than in the surrounding soil. The values of the cone resistance q_c from CPTs which penetrated columns have been about 20-30 times [3], 13-30 times [5], 12-57 times [4] and up to 150 times [6] higher than those measured in the natural weak soil. Similarly, pressuremeter tests have recorded maximum pressure increasing approximately 10-13 times [7] and 12 times [6]. Blow counts recorded by dynamic probing are found to typically increase by at least 3 times [8]. The strengthening effect of the soil adjacent to the formed columns is subtle. It appears to be dependent upon the type of soil and the time period between the column formation and testing. The improvement effect occurred in soils which were compacted during column formation, i.e., in the peat deposits, and was found both under the columns and around them. Lo et al. [6] indicated that the cone resistance in the CPT increased 7 times and the maximum pressure in the pressuremeter test (PMT) 3 times. Gunaratne et al. [4] and Stinnette et al. [9] obtained similar results. Cohesive soils have experienced various trends of changes during and after column formation. Certain studies detected a decrease in soil strength and deformation parameters [5], while others observed no change either under the columns [10] or adjacent to them [8]. In contrast, some authors observed soil strengthening. Hamidi et al. [7] recorded an increase in limit pressure (by about 100%) and an increase in Menard modulus (over 350%). Both increases were recorded inside a DR column, while only small changes were noticed between blows. Dumas et al. [10] observed that pressuremeter (PMT), standard penetration test (SPT) and deep cone penetration test (DCPT) parameters measured after strengthening were 1.4-2.0 times higher than the initial values.
Han [1] has presented an increase of q_c and of the number of blows (from SPT) which varies from 1.5 to 4 times. The available research has not considered the influence of a single DR column on the surrounding soil. This paper seeks to address this aspect of column behavior. It also gives some insights into the influences of time and distance. The process of DR column formation is complicated and affected by many factors. Therefore, to measure and analyze some of those factors first, the formation of a single DR column was chosen as the topic of the research. Further research is planned to analyze a group of columns (which will allow other factors to be considered). Currently researchers aim for numerical modeling of DR column formation; examples of such attempts are described in [11][12][13]. However, to calibrate numerical models we first need to perform a wide range of well documented in situ tests. Such tests are the focus of this article. Many studies are performed at different sites, but as they are mainly based on a few in situ results [14], they do not form a good enough basis for numerical modeling. Test Field and Research Program CPTU, DMT and boreholes were carried out to determine soil characteristics on the test field. The soil profile consisted of four layers [15]. The first layer (layer I), up to 1.5 m below ground level (b.g.l.), comprised silty sands and sandy silts. The second layer (layer II), between 1.5 and 2.5 m b.g.l., was silt (10% sand, 84% silt, 6% clay). The third layer (layer III), up to 4.8 m b.g.l., was built of silt/silty sand (20% sand, 74% silt, 6% clay). The fourth layer (layer IV) consisted of fine and medium sand (Figs. 2, 3). The water table was found during drilling at a depth of 4.8-5.3 m b.g.l. and rose to 3 m b.g.l. The DR column was formed using a freely dropped, 10 t barrel-like pounder from heights of up to 15 m. The pounder diameter was 1.0 m in the middle section, whereas at the bottom and the top it was 0.8 m. A mixture of fine gravel with coarse sand and rubble (0-200 mm fraction in 1:1 proportion) was used as backfill material. The uniformity coefficient of the material was greater than 25 and the coefficient of gradation was less than 1. Approximately 18 m³ of the mentioned material was used to form the column. The column was formed by dropping the rammer onto the soil 36 times from different heights (2-15 m). The column formation was divided into three stages (10 + 15 + 11 drops). During the test, ground heave was measured at points located 2, 3, 4 and 6 m from the column axis. The total volume of heave was roughly 7.5 m³. The maximal uplift was noted 2 m from the column (0.15-0.32 m). At the point located 6 m from the column axis, the uplift was between 0 and 0.03 m. CPTU and DMT tests were carried out at various time intervals, at different distances from the column and at different depths. The first series of measurements was carried out before the stone column formation and consisted of four CPTU tests conducted at 2, 3, 4 and 6 m from the column axis and of three DMT tests performed at 2, 3 and 6 m from the column axis. Further series of tests were conducted at points located on the circumference of a circle passing through the points from the initial tests. This was designed to ensure that future testing would not be unduly affected by the previous tests. The field tests were conducted after formation of 1/3 and 2/3 of the column and after completion of the column to the full depth. Tests were performed 1, 8 and 30 days after construction.
CPTU and DMT were performed using a Hyson 200 kN static probe produced by the Dutch company A.P. van den Berg Machinefabriek. The piezocones had a base surface area of 10 cm², a friction sleeve surface of 150 cm², an apex angle of 60° and a filter installed directly behind the cone tip (u_2). The soundings were made with a constant penetration velocity of 20 mm/s [15]. The following parameters were recorded continuously during the tests: cone resistance (q_c), sleeve friction (f_s), and excess pore water pressure (u_2). They were standardized and normalized [16][17][18] to the following values: corrected cone resistance q_t, friction ratio R_f, excess pore water pressure parameter B_q and normalized effective cone resistance Q_t. The soil type was determined in two stages. During the first stage, the Harder-von Bloh procedure [19] was applied to divide the soil into layers and localize them using the classification system proposed by the Department of Geotechnics at Poznan University of Life Sciences [20]. The second stage consisted in grouping the soil types by applying the Hegazi-Mayne procedure [21] and determining the soil type using Robertson's diagram [22]; this second stage was applied to confirm the soil type indicated in the first stage. The strength parameters, i.e., the effective friction angle (φ′) and the effective cohesion intercept (c′), were determined on the basis of Senneset and Janbu's procedure [23], whereas the undrained shear strength was determined on the basis of Lunne et al. [16]. The latter, as well as Mayne [24,25], also served to indicate deformation parameters, i.e., the constrained modulus. Dilatometric tests were performed with a flat plate dilatometer. Pressure values were recorded at 0.2 m intervals at increasing depth [15]. Based on these readings, the following parameters were indicated: the non-dimensional material index (I_D), the non-dimensional lateral stress index (K_D) and the dilatometric modulus (E_D). With these parameters it was possible to estimate the soil type and its mechanical parameters (φ′, M), applying the procedures prepared by Marchetti [26]. Results The outcome of the tests could not be analyzed in the first superficial layer (layer I) due to the detrimental effect of heavy machinery and weather conditions. Layer IV consisted of sands, for which the compaction mechanism caused by the use of high energy impact was already recognized (e.g., [27]). Detailed parameters of layers II and III, which have been subjected to extensive analysis, are shown in Table 2 (x denotes parameters not possible to define with the procedure used). During the column formation a significant (up to 50%) decrease of cone resistance q_c was noted in the closest vicinity of the column (i.e., up to 3 m from the column axis) in both layers (Fig. 4). The values of the cone resistance increased with time and exceeded the initial values in the weaker layer II by approximately 70-100% and by 30% in layer III (except at 2 m from the column). Further from the column (at 4 m), the changes were only local, and at 6 m from the center they were not visible (Fig. 5). After the column formation the highest q_c increases were measured in layer II at a distance of 4 m from the center (approximately 60%). The soil friction angle and cohesion in layer II at a distance of 2 and 3 m from the column increased after the column completion by 35 and 100%, respectively.
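For reference, the piezocone and dilatometer readings described above can be converted to the derived indices with the commonly used normalizations. The sketch below is illustrative only: it uses textbook-style formulas and placeholder input values, not data or code from the study.

# Sketch of standard CPTU/DMT parameter normalization (assumed textbook forms).
def cptu_indices(qt, fs, u2, u0, sigma_v0, sigma_v0_eff):
    """qt, fs, u2, u0, sigma_v0 and sigma_v0_eff in kPa."""
    Rf = 100.0 * fs / qt                 # friction ratio [%]
    Bq = (u2 - u0) / (qt - sigma_v0)     # excess pore water pressure parameter
    Qt = (qt - sigma_v0) / sigma_v0_eff  # normalized effective cone resistance
    return Rf, Bq, Qt

def dmt_indices(p0, p1, u0, sigma_v0_eff):
    """p0 and p1 are the corrected dilatometer pressures in kPa."""
    ID = (p1 - p0) / (p0 - u0)           # material index
    KD = (p0 - u0) / sigma_v0_eff        # lateral (horizontal) stress index
    ED = 34.7 * (p1 - p0)                # dilatometer modulus
    return ID, KD, ED

# Example with placeholder values for a silty layer at roughly 2 m depth.
print(cptu_indices(qt=1500.0, fs=40.0, u2=60.0, u0=0.0, sigma_v0=38.0, sigma_v0_eff=38.0))
print(dmt_indices(p0=180.0, p1=420.0, u0=0.0, sigma_v0_eff=38.0))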
At all the points in layer III the parameters φ′ and c′ dropped by 10-50% during tamping and then increased after the column formation, finally reaching values similar to the initial ones. Similar changes were observed in the soil undrained shear strength (S_u). An increase of 90 and 48% was noted in layer II, 3 and 6 m away from the column axis, respectively. The undrained shear strength in layer III decreased at points located 3 and 4 m away from the column axis (by a maximum of 20%) and increased at other points (by a maximum of 33%). The constrained modulus in layer II, 3 m from the column axis, increased by 90%, whereas it did not change at other points. The value of the constrained modulus dropped in layer III (by 10-15%) during the column formation process. After the construction, an increase was observed only at points located 6 m away from the column axis. Pressure P_0 in layers II and III was increasing (up to 40%) already during the column formation (2, 3 m from the column) or after the formation process had been completed (6 m from the column) (Fig. 6). The maximum P_0 values were recorded in layer II 1 day after construction (the values were up to 150% higher than the initial values at a distance of 6 m from the column center). The values gradually decreased over time, reaching the initial values (layer II, 6 m from the column center) or exceeding them (by 30-120%) 30 days post-construction. Pressure P_1 measured at 2 and 3 m from the column decreased during column formation in both layer II and III. The pressures subsequently increased during the last stage of the column formation or 1 day later (Fig. 7). This effect was more pronounced in layer II (up to 300%) than in layer III. Based on DMT testing, an increase in the friction angle (up to 10%) and the constrained modulus (up to 50%) was noted already during the column formation (2 m from the column axis) or 1 day after construction (3, 6 m). Those values did not change afterwards. Discussion The completed DMT and CPTU tests indicated that the surrounding soil softens during the dynamic replacement process. The range of the impacted zone may vary. The zone radius can be estimated as up to 2.5 times the diameter of the top of the column. The extent of the softening effect depends on the soil condition. For cohesive soils characterized by higher stiffness (layer III), the softening is considerably greater than for weaker soils (layer II). These findings are similar to those of Yee and Chua [5]. However, these authors have not determined the radius of the impacted zone. The dynamic replacement construction is successful in improving the surrounding soil. Soil parameters after strengthening vary over time and are dependent on the initial soil condition as well as on the distance from the column. In this paper the authors examined cohesive silty soils, in which the increase of parameters was measured for 30 days. After that period, the values of the mechanical parameters of layer II were higher (even by 100%) than before the ground improvement. However, soils in layer III returned to the initial state. The extent of the strengthening zone was similar to that of the softening zone. Generally, the changes measured in q_c and in φ′, c′, S_u and M were similar to those described by Dumas et al. [10] and addressed earlier in this paper. The dilatometer results are similar to those presented by Dumas et al. [10] and Bates and Merifield [3].
However, the cited authors did not examine the course of changes of the mentioned parameters over time after the construction of the ground improvement. The parameter changes during the 30 days post-construction can be explained by a consolidation process induced by the high energy impacts. During the pounder drops, the soil in the close vicinity of the column is subjected to large deformations. That can lead to internal cracking in some regions. These regions can act as preferential filtration paths which shorten the consolidation time. Conclusion This paper presents a summary of the results of unique (due to their wide scope) field tests on a DR column and the surrounding soil. It has been shown that changes occurring in the soil surrounding a DR column are complex. They depend on the distance from the column, the elapsed time, and the type and the initial condition of the soil. During the strengthening process, the soil softens in the close vicinity of the column, but then the soil parameters increase over time. On the basis of the test results, it is possible to conclude that the less stiff the natural in situ soil is, the more significant the improvement becomes. The soil softening is less evident at greater distances from the column; however, the properties of the surrounding soil are still improved with time. These conclusions are true for the particular technique of DR column formation that is presented in this paper and the particular soil conditions investigated (approximately 15% sand, 79% silt, 6% clay). Consideration of the pre-strengthening soil parameters in the design could underestimate the soil-column interaction. This may happen even when a sophisticated constitutive model of the soil is used in the design [28]. The present increases in soil mechanical parameters have been identified for the ground surrounding a single DR column only and would be expected to increase further if more columns were constructed. Nonetheless, if the acceptance tests are carried out too early, it might appear that the DR treatment has not met its design criterion.
3,785.6
2017-07-24T00:00:00.000
[ "Geology" ]
Research Article Regulator-Based Risk Statistics with Scenario Analysis Introduction Research on risk is a hot topic in both engineering and theoretical research, and risk models have attracted considerable attention. The research on engineering risk involves two problems: choosing an appropriate risk model and allocating the risk to individual production lines. This has led to further research on risk statistics. In the seminal papers, risk models were introduced via an axiomatic system; see Artzner et al. [1,2], Föllmer and Schied [3], and Frittelli and Rosazza Gianin [4]. However, as pointed out by Cont et al. [5], these axioms fail to take into account some key features encountered in the practice of risk management. In fact, sometimes, when measuring the risk, it is only relevant to consider the losses, not the gains. For this reason, we are able to derive the risk based on losses, not gains. Next, from the statistical point of view of Kou et al. [6], the behavior of a random variable can be characterized by its samples. At the same time, one can also incorporate scenario analysis into this framework; see Antolín-Díaz et al. [7]. Therefore, a natural question is how to treat regulator-based risk with scenario analysis. It is worth mentioning that the issue of risk measures with scenario analysis has already been studied by Delbaen [8]. It has also been extensively studied in the last decade; for example, see Kou et al. [6], Ahmed et al. [9], Assa and Morales [10], Hassler et al. [11], Sun et al. [12], Tian and Jiang [13], Tian and Suo [14], and the references therein. However, as pointed out by Deng and Sun [7], people sometimes only pay attention to the losses caused by the risk. Thus, it makes special sense to derive risk statistics for such risk, especially engineering risk. In the present paper, we derive convex and coherent regulator-based risk statistics in engineering, and dual representations for them. Finally, the relationship between regulator-based risk statistics and the convex risk statistics introduced by Tian and Suo [14] is also given to illustrate the regulator-based risk statistics. The remainder of this paper is organized as follows: in Section 2, we briefly introduce some preliminaries. The main results on regulator-based risk statistics are stated in Section 3, and their proofs are postponed to Section 4. Finally, in Section 5, we derive the relationship between regulator-based risk statistics and the convex risk statistics introduced by Tian and Suo [14]. Preliminaries In this section, we briefly introduce the preliminaries that are used throughout this paper. Let N ≥ 1 be a fixed positive integer. Denote by X a set of random losses, and by X^N the product space X_1 × · · · × X_N, where X_i = X for 1 ≤ i ≤ N. Any element of X^N is said to be a portfolio of random losses. In practice, the behavior of the N-dimensional random vector M = (X_1, . . . , X_N) under different scenarios is represented by different sets of data observed or generated under those scenarios, because specifying accurate models for M is usually very difficult. Some detailed notations can be found in Kou et al. [6]. Here, we suppose that there always exist m scenarios. Specifically, suppose that the behavior of M is represented by a collection of data M = (X_1, . . . , X_N) ∈ R^N, which can be a data set based on historical observations, hypothetical samples simulated according to a model, or a mixture of observations and simulated samples.
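Before moving to the formal definitions, a loose numerical sketch may help fix ideas. Everything below is an illustrative assumption: the data layout, the truncation of gains, and the particular mean/worst-case mixture are choices made for the sketch only, whereas the paper works with an abstract axiomatic functional.

import numpy as np

# Hypothetical data: m = 4 scenarios (rows), N = 3 production lines (columns).
# Negative entries represent losses; positive entries represent gains.
M = np.array([
    [-2.0,  1.0, -0.5],
    [ 0.5, -3.0, -1.0],
    [-1.0,  0.0,  2.0],
    [-4.0, -2.0, -0.5],
])

def loss_based_risk(M, alpha=0.5):
    """A simple sample-based risk statistic that depends only on losses.

    Gains are truncated away (M ∧ 0), and the statistic mixes the average loss
    with the worst-case loss across scenarios; alpha is an assumed weight.
    """
    losses = -np.minimum(M, 0.0)          # loss magnitudes per scenario and line
    total_loss = losses.sum(axis=1)       # aggregate loss per scenario
    return alpha * total_loss.mean() + (1 - alpha) * total_loss.max()

print(loss_based_risk(M))

Under the sign convention assumed here (negative entries are losses), this toy statistic depends only on the negative part of the data, increases with the losses, and is convex and positively homogeneous.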
Regulator-Based Risk Statistics In this section, we state the main results on regulator-based risk statistics in engineering. Firstly, we state the properties related to regulator-based risk statistics. Definition 1. A risk statistic ρ on R^N is said to be a convex regulator-based risk statistic if it satisfies properties (A.1)-(A.4). Moreover, a convex regulator-based risk statistic ρ is said to be a coherent regulator-based risk statistic if it additionally satisfies (A.5) Positive homogeneity: for any α ≥ 0 and M ∈ R^N, ρ(αM) = αρ(M). Remark 1. The main objective of this section is to derive the macromodels for measuring engineering risk through the properties introduced above. In fact, the properties in Definition 1 can also be called the axioms related to risk statistics, and among all the current research on risk models through axioms, the dual representation is the most widely used. Next, we derive the dual representations of regulator-based risk statistics; the proofs are given in the next section. Theorem 1 states that a convex regulator-based risk statistic ρ admits a dual representation (2) in terms of a penalty function α, and the function α for which (2) holds can be chosen as the minimal penalty function α_min. Remark 2. The dual representation result in Theorem 1 depends only on the negative part of M due to the loss-dependence property (A.3). In Theorem 2, let N = 1; then the representation result reduces to the one-dimensional case, which coincides with the representation results of Cont et al. [5]. Proofs of Main Results In this section, we derive the proofs of the main results in Section 3. Proof of Theorem 1. Let f(X) = ρ(−X); then f is an increasing convex function. According to Cheridito and Li [15], f admits a dual representation, and using the loss-dependence property of ρ, this representation depends only on the negative part of M. Now, let α be any penalty function for ρ. Then, taking the supremum over R^N for M = (X_1, . . . , X_N) gives rise to the stated representation. □ Proof of Theorem 2. If ρ is a coherent regulator-based risk statistic, then from the proof of Theorem 1 and the positive homogeneity of ρ, for any Q ∈ R^N and λ > 0, we have α_min(Q) = λα_min(Q). Hence, α_min can take only the values 0 and +∞. This completes the proof of Theorem 2. □ Regulator-Based Version of Convex Risk Statistics In this section, we derive a new version of regulator-based risk statistics in engineering. It is worth noting that this version can be related to the convex risk statistics introduced by Tian and Suo [14]. For any convex risk statistic ρ on R^N defined in Tian and Suo [14], we can define a new risk statistic ρ̃ by ρ̃(M) := ρ(M∧0) for any M ∈ R^N. Obviously, ρ̃ is a convex regulator-based risk statistic as defined in Section 3. We call ρ̃ the regulator-based version of ρ. We can prove that a convex regulator-based risk statistic is a regulator-based version of some convex risk statistic if and only if it satisfies the following property. Project-loss additivity: for any M ∈ R^N and a ∈ R with M ≤ 0 and a ≥ 0. On the one hand, if ρ̃(M) = ρ(M∧0) for a certain convex risk statistic ρ on R^N, then the property follows for any M ∈ R^N with M ≤ 0 and a ≥ 0, where the second equality is due to the project-additivity property of ρ. Let us now suppose that a convex regulator-based risk statistic ρ̃ satisfies the project-loss additivity property. Define, for any M := (X_1, . . . , X_N) ∈ R^N, a statistic ρ(M), where a_M is any upper bound of each X_i. Using the project-loss additivity property of ρ̃, we know that ρ is well defined. Next, we need to show that ρ is a convex risk statistic with ρ̃(M) = ρ(M∧0). To this end, consider any M := (X_1, . . . , X_N) ∈ R^N and a ∈ R. Next, let M_1, M_2 ∈ R^N, taking a_{M_1} and a_{M_2} to be upper bounds of each X^1_i and X^2_i; then ρ is monotone.
Finally, for any M_1, M_2 ∈ R^N and 0 ≤ t ≤ 1, convexity of ρ follows. Conclusions In fact, risks in engineering are not the same as financial risks. In the study of financial risk, people are concerned not only with the loss caused by the risk but, more importantly, with the high return hidden behind the risk. As for engineering risk, however, people only pay attention to the loss it brings. Thus, we derive a new class of risk statistics for engineering, named regulator-based risk statistics. Yet, we do not conduct a theoretical analysis of engineering risk as in Hassler et al. [11]. Our results provide macromodels for project managers who deal with the measurement of regulator-based risk in engineering projects. Data Availability No data or code were generated or used during the study. Conflicts of Interest The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
2,052.4
2021-03-31T00:00:00.000
[ "Engineering", "Mathematics" ]
A comparative study on the in vivo degradation of poly(L-lactide) based composite implants for bone fracture fixation Composites of nano-hydroxyapatite (n-HAP) surface-grafted with poly(L-lactide) (PLLA) (g-HAP) showed improved interface compatibility and mechanical properties for bone fracture fixation. In this paper, the in vivo degradation of n-HAP/PLLA and g-HAP/PLLA composite implants was investigated. The mechanical properties, molecular weight, thermal properties as well as crystallinity of the implants were measured. The bending strength of the n- and g-HAP/PLLA composites showed a marked reduction from initial values of 102 and 114 MPa to 33 and 24 MPa at 36 weeks, respectively. In contrast, the bending strength of PLLA was maintained at 80 MPa at 36 weeks, compared with an initial value of 107 MPa. The impact strength increased over time, especially for the composites. Significant differences in the molecular weight were seen among all the materials, and g-HAP/PLLA showed the fastest rate of decrease. Environmental scanning electron microscope (ESEM) results demonstrated that an apparently porous morphology full of pores and hollows was formed in the composites. The results indicated that the in vivo degradation of PLLA could be accelerated by the g-HAP nanoparticles. It implied that g-HAP/PLLA composites might be a candidate for human non-load-bearing bone fracture fixation, which needs high initial strength and a fast degradation rate. Previous in vitro studies investigated samples containing hydroxyapatite (HA) or tricalcium phosphate (TCP) fillers after immersion in simulated body fluid (SBF) for 12 weeks. Niemelä et al. 8 reported that the degradation of the β-TCP/PLA composite was slower than that of PLA. Araújo et al. 9 observed that clay mineral incorporation in the PLA matrix enhanced the polymer thermal stability. Other authors, in contrast, have observed an increase of the degradation rate in the presence of HA, TCP or other fillers, attributed to the particle/matrix interface and the hydrophilicity of the fillers. Delabarde et al. 10 and Jiang et al. 4 reported that incorporation of HA into HA/PLA (or HA/PLGA) composites could accelerate degradation at the matrix/particle interfaces. Addition of β-TCP 11 and soluble calcium phosphate (CaP) glass 12 was also found to accelerate the degradation of PLA. Besides, montmorillonite, nanoclay and titanium dioxide (TiO2) nanoparticles have also been shown to decrease the thermal stability and accelerate the in vitro degradation of the PLA matrix [13][14][15][16]. In addition to the above in vitro studies, Furukawa et al. 5 evaluated the in vivo degradation of PLA-based composite rods and found that the addition of HA led to a faster rate of degradation. As a novel modification method, n-HAP surface-grafted with PLLA (g-HAP) attracted researchers' attention, and Li et al. 17 found that the g-HAP particles slowed down the thermal degradation of the PLA polymer matrix. Based on our previous study, in the present work we focused our research on the comparative in vivo degradation of g-HAP/PLLA and n-HAP/PLLA composites. Results Mechanical properties. The mechanical property changes of the implants over time after surgery are shown in Fig. 1. The initial bending strength of g-HAP/PLLA composites (114 ± 3 MPa) was a little higher than that of PLLA (107 ± 4 MPa), while the initial bending strength of n-HAP/PLLA composites (102 ± 3 MPa) was slightly lower than that of PLLA. The bending strength of the n- and g-HAP/PLLA composites decreased gradually after surgery, as shown in Fig. 1a.
The bending strength of the n-HAP/PLLA composites decreased slightly up to 20 weeks after surgery and subsequently decreased remarkably. They maintained 81.6% of their initial values at 20 weeks and 43.8% at 28 weeks. The bending strength of the g-HAP/PLLA composites decreased constantly post-surgery and maintained 51.0% of their initial values at 20 weeks and 34.0% at 28 weeks. At 36 weeks the g-HAP/PLLA composites maintained only 21.4% of their initial bending strength, while the n-HAP/PLLA composites maintained 31.8%. In contrast, PLLA showed only a small reduction compared with the two composites and maintained 74.7% of its initial bending strength even at 36 weeks. There were significant differences among the three materials, as listed in Table 1. The bending modulus retention of the materials was similar to the bending strength retention, as shown in Fig. 1b. The n-HAP/PLLA composites maintained 81.1% of their initial values at 20 weeks and 44.4% at 28 weeks. The bending modulus of the g-HAP/PLLA composites maintained 53.7% of their initial values at 20 weeks and 42.7% at 28 weeks. At 36 weeks the g-HAP/PLLA composites maintained only 26.8% of their initial bending modulus, while the n-HAP/PLLA composites maintained 34.5%. Correspondingly, PLLA maintained 81.6% of its initial bending modulus even at 36 weeks. Details of the statistical analysis are given in Table 2. Interestingly, the impact strength behaved completely differently from the bending strength and modulus during in vivo degradation (Fig. 1c). There was a slight increase in the impact strength of the n- and g-HAP/PLLA composites 4 weeks and 12 weeks after surgery. While the n-HAP/PLLA composites showed only a slight increase to 195% and 211.9% of their initial impact strength at 20 and 28 weeks, the g-HAP/PLLA composites increased remarkably to 283.5% and 269.3% of their initial impact strength. Moreover, the impact strength of g-HAP/PLLA was always higher than that of the n-HAP/PLLA composites at any time interval prior to 28 weeks. The impact strength decreased at 36 weeks for both composites, especially the g-HAP/PLLA composites. Conversely, PLLA showed no obvious change compared with the two composites and maintained 103.8% of its initial impact strength even at 36 weeks. Details of the statistical analysis are given in Table 3. As shown in Fig. 1d, the viscoelasticity of PLLA and the n- and g-HAP/PLLA composites was evaluated at 37 °C. The viscoelasticity slightly increased at 4 weeks and then decreased gradually to 43.9% and 38.8% of the initial values at 36 weeks for the n- and g-HAP/PLLA composites, respectively. However, PLLA showed no obvious change, retaining 105.5% of its initial value at 36 weeks. Molecular weight change. Fig. 2 shows the changes in molecular weight of PLLA in all the implants at all time intervals. The molecular weight of the g-HAP/PLLA composites at 4, 12, 20, 28 and 36 weeks after implantation was 89.7, 64.8, 54.1, 45.5 and 29.4% of the initial value, respectively, while that of the n-HAP/PLLA composites was 97.1, 78.4, 64.5, 40.7 and 33.4%, respectively. In comparison, the molecular weight of PLLA at 4, 12, 20, 28 and 36 weeks after implantation was 97.8, 94.4, 89.8, 77.2 and 65.7% of the initial value. Thus, the g-HAP/PLLA composites exhibited a significantly greater decrease in molecular weight than the n-HAP/PLLA composites, and both composites decreased at a significantly faster rate than the unfilled PLLA samples.
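The percentages quoted throughout these results are simple retentions relative to the pre-implantation values. A minimal sketch of that calculation is given below; the initial bending strengths are taken from the text, while the function and variable names are ours and purely illustrative.

```python
# Sketch: percent retention of a property relative to its pre-implantation value.
# Initial bending strengths (MPa) are taken from the Results section above.
initial_bending_strength = {"PLLA": 107.0, "n-HAP/PLLA": 102.0, "g-HAP/PLLA": 114.0}

def retention(value_at_t, initial_value):
    """Percent of the initial property value retained at a given time point."""
    return 100.0 * value_at_t / initial_value

# Example: the reported g-HAP/PLLA bending strength of 24.40 MPa at 36 weeks
print(round(retention(24.40, initial_bending_strength["g-HAP/PLLA"]), 1))  # -> 21.4 (%)
```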
Thermal and crystalline properties. The thermal and crystalline properties of the samples before and throughout the in vivo degradation period are shown in Fig. 3 and Table 4. The glass transition temperature (Tg) and melting temperature (Tm) of the PLLA matrix in both composites were observed to decrease, and consequently the crystallinity was found to increase, with in vivo degradation. Before implantation, the initial Tg and Tm of the n- and g-HAP/PLLA composites were around 58.9, 164.7 °C and 58.3, 163. °C, respectively. A significant decrease in the Tg and Tm of the n- and g-HAP/PLLA composites by approximately 7.5, 2.5 °C and 9.6, 3.9 °C, respectively, was observed after in vivo degradation for 36 weeks. Table 2. Statistical analysis of the data shown in Fig. 1b: change in bending modulus of n- and g-HAP/PLLA and PLLA samples with time. a With a significant difference at P < 0.02. b With a significant difference at P < 0.01. c With a significant difference at P < 0.005. d With a significant difference at P < 0.05. e n.s. = not significant. Table 3. Statistical analysis of the data shown in Fig. 1c: change in impact strength of n- and g-HAP/PLLA and PLLA samples with time. a With a significant difference at P < 0.02. b With a significant difference at P < 0.01. c With a significant difference at P < 0.005. d With a significant difference at P < 0.05. e n.s. = not significant. However, there was almost no change in the Tg and Tm of the pure PLLA samples. As shown in Table 4, the initial crystallinity values of the n- and g-HAP/PLLA composites were a little higher than that of pure PLLA. Both composites showed similar patterns of increasing crystallinity until 20 weeks after implantation, and pure PLLA showed increasing crystallinity until 28 weeks. However, the g-HAP/PLLA composites exhibited the highest values among all the materials, and pure PLLA always showed the lowest crystallinity values at any time interval. Surface and fracture ESEM morphology. No apparent macroscopic changes were observed on the surface of the materials removed from the surrounding tissues over time after implantation. The surface ESEM morphology of PLLA and the n- and g-HAP/PLLA composites at different time intervals before and after implantation is shown in Fig. 4. More surface roughness was noted on the surface of all materials over time, and some small pores appeared on the materials at 36 weeks, especially for the composites. The fracture ESEM micrographs of the samples are shown in Fig. 5. For pure PLLA samples, there were parallel fracture lines in the direction of stress (Fig. 5a), which might be due to the deformation of the matrix formed by external force. The fracture morphology of the n- and g-HAP/PLLA composites was rougher than that of PLLA (Fig. 5b,c). It can be observed that the n- and g-HAP particles were dispersedly distributed in the PLLA matrix. The n- and g-HAP particles significantly changed the impact fracture morphology of the PLLA matrix, and the large fault lines were replaced by a multiple-fracture morphology. The morphological changes were far more marked for the n- and g-HAP/PLLA composites than for PLLA after comparable implantation times. With in vivo degradation, the n-HAP/PLLA composites showed an apparently porous surface morphology full of pores and hollows up to 28 weeks (Fig. 5b1-5), and the porous surface morphology became more obvious at 36 weeks (Fig. 5b-6). At 20 weeks, the fracture of the g-HAP/PLLA composites showed visible cracks and wrinkles (Fig. 5c-4). Notable sags, gaps, and pores were apparent at 28 and 36 weeks (Fig. 5c5-6).
However, no pores were detected on the PLLA fracture surface, which retained a relatively smooth structure after 36 weeks of in vivo degradation (Fig. 5a2-6). As shown in Fig. 6, the microscopic changes were also clearly observed with ESEM under high magnification. There were no obvious changes in the pure PLLA materials at any time interval. In the composites, however, many pores were formed as the HAP particles increasingly disappeared from the matrix over time. Pores appeared in the g-HAP/PLLA composites a little earlier than in the n-HAP/PLLA composites. Some sags and gaps were observed in the g-HAP/PLLA composites at 12 weeks after implantation and pores then became more obvious over time, whereas pores appeared only from 28 weeks after implantation in the n-HAP/PLLA composites. As seen in Fig. 7, energy-dispersive X-ray spectrometry (EDX) analysis was performed on the pores formed in the g-HAP/PLLA composites at 20, 28 and 36 weeks, which are marked with red arrows in Fig. 6. The EDX analysis of the area pointed to by the red arrow in Fig. 6c-4 indicated that the g-HAP particles had disappeared from the pore and left the matrix. More interestingly, the EDX results of the areas pointed to by the red arrows in Fig. 6c-5,c-6 showed that a fiber-like morphology containing Ca and P elements was formed in the pores. Discussion An ideal absorbable device for bone fracture fixation should have a high initial strength, an appropriate modulus, and retain strength as long as the healing fracture needs support 18 . In this paper, to improve the interface adhesion between PLLA and the nanoparticle fillers and the mechanical properties of the PLLA-based composites, we prepared g-HAP particles by grafting polymerization of L-lactide onto the surface of n-HAP, and g-HAP/PLLA composites, as described in our previous studies 6,19 . The g-HAP particles could be more uniformly dispersed either in chloroform or in the PLLA matrix and showed improved adhesion with the PLLA matrix. Consequently, the g-HAP/PLLA composites exhibited improved mechanical properties due to the reinforcing and toughening effects in the composites. This was because the grafted PLLA molecules acted as tie molecules between the fillers and the PLLA matrix, and the g-HAP particles acted as heterogeneous nucleating agents in the crystallization of the PLLA matrix. Thus, the initial values of bending strength, modulus, impact strength, and crystallinity of the g-HAP/PLLA composites were a little higher than those of the n-HAP/PLLA composites or pure PLLA. High initial strength of the fixation device is necessary in order to cope with external and muscular loads after reduction of the fracture. Even though the bending strength of the g-HAP/PLLA composites prepared in the present study is lower than that produced by a forging process 5 or by a self-reinforcing technique (SR-PLLA, BIONX, Finland), it will be suitable for the fixation of human non-load-bearing bone fractures, such as cancellous bone fracture fixation 18 . However, it is important to clarify the influence of grafted HAP nanoparticles on the in vivo degradation behavior of the composite implants, as the degradation rate is a critical factor affecting bone fracture healing. The degradation mechanism of biodegradable polymers is chemical degradation via hydrolysis, and it is generally regarded that chain-end cleavage results in mass loss, while random scission dominates the reduction in molecular weight 20,21 . Therefore, the uptake of water is considered to be particularly important for the degradation of the material.
In our study, the molecular weight of the composites decreased faster than that of pure PLLA from the early period. This is possibly because the body fluid could diffuse more easily into the composites than into pure PLLA, as no chemical bonding existed between the particles and the PLLA matrix in the composites. Therefore, it can be deduced that the composites displayed faster degradation than pure PLLA, in good accordance with the literature 5 . A general difficulty in composite science is the development of good adhesion between matrix and reinforcement. If the adhesion is insufficient, the composite has poor strength and fatigue properties 22 . In this study, fluids could diffuse rapidly along the interface of the n-HAP/PLLA composite due to poor adhesion and disrupt the interface, which leads to rapid strength loss of the composite. It has been reported that hydrolytic chain cleavage proceeds preferentially in the amorphous regions, hence leading to an increase in polymer crystallinity 16 . In the present study, the g-HAP/PLLA composites demonstrated a significantly faster decrease in molecular weight than n-HAP/PLLA. Moreover, ESEM results showed that pores appeared more rapidly in the g-HAP/PLLA composites than in the n-HAP/PLLA composites. This might be because degradation occurred earlier and predominantly in the amorphous regions of the PLLA molecules grafted onto the HAP particles, which took up water from the body fluid, as the distribution and adhesion of the g-HAP nanoparticles in the PLLA matrix were improved, as shown in our previous study 19 . The good distribution of the g-HAP nanoparticles facilitated the invasion of body fluid into the interior of the g-HAP/PLLA composites along the interface between the g-HAP nanoparticles and the PLLA matrix. Subsequently, the degradation of the amorphous regions of the PLLA matrix in the g-HAP/PLLA composites occurred prior to that of the crystalline regions. As the polymer chains in the amorphous regions degraded, the number of amorphous regions decreased and the proportion of crystalline to amorphous regions increased 15,23 , in agreement with the WAXD results (Supplementary Figure 1). Thus, the crystallinity of the composites increased gradually over time until 20 weeks after implantation. Afterwards, the crystalline regions became the dominant degradation regions, resulting in a decrease in crystallinity. For these reasons, although the crystallinity of the n-HAP/PLLA composites also increased over time until 20 weeks and body fluid also invaded the interior of the n-HAP/PLLA composites along the interface between the n-HAP particles and the PLLA matrix, the degradation of the n-HAP/PLLA composites was a little slower than that of g-HAP/PLLA. However, water penetration into the pure PLLA samples was more difficult, so the degradation of PLLA was slower than that of the composites; its molecular weight decreased more slowly and its crystallinity increased until 28 weeks after implantation. The degradation of pure PLLA was also not apparent by ESEM. An interesting and important question is what strength retention time is necessary and safe for absorbable fixation materials in vivo. The healing of cancellous bone fractures through trabecular bone growth is a much faster process (4-6 weeks) compared to the healing of cortical bone fractures (12-24 weeks) 24 . Corresponding to the faster decrease in molecular weight, the bending strength and modulus of the g-HAP/PLLA composites decreased faster than those of the n-HAP/PLLA composites with in vivo degradation.
The bending strength of the g-HAP/PLLA composites maintained 51.0% of the initial value (58.14 MPa) at 20 weeks and 21.4% (24.40 MPa) at 36 weeks. This was higher than the strength of cancellous bone (10-20 MPa) 25 . Meanwhile, the impact strength increased more obviously for the g-HAP/PLLA composites than for the n-HAP/PLLA composites. This might be because the small molecules produced by the earlier degradation of the PLLA molecules grafted onto the n-HAP particles acted as a plasticizer and hence improved the toughness of the composites. Small molecules produced by the slower degradation of the PLLA matrix in the n-HAP/PLLA composites also acted as a plasticizer and improved the toughness of the n-HAP/PLLA composites to some extent. However, the increase in impact strength of the n-HAP/PLLA composites was lower than that of g-HAP/PLLA. The impact strength of the g-HAP/PLLA composite decreased at 28 weeks post-surgery and exhibited an abrupt decline at 36 weeks post-surgery due to a faster in vivo degradation rate than that of the n-HAP/PLLA composite. These results were in accordance with the changes in tension-compression behavior, molecular weight, Tg, Tm, and ESEM morphology at 36 weeks post-surgery. In addition, the torsion test is also an important measure of mechanical properties for bone fixation implants, and it has been investigated for PLA-based composites 26 . In this study, torsion was also evaluated; the torsion values first increased for the composites at 4 weeks because of absorbed body fluid and then decreased gradually with in vivo degradation. From the in vivo degradation results, we can conclude that the interface between the g-HAP particles and the PLLA matrix was more susceptible to erosion by the body fluid. Although this report is based on the mechanical, molecular weight, and ESEM morphology data obtained from implants placed in muscle tissue, we believe that these results are in agreement with the behavior of the same implants in bone tissue, because several studies have found that the strength retention of absorbable rods is practically the same in subcutaneous tissue as in bone tissue 5,18,27 . According to the present study, g-HAP/PLLA implants seem to be suitable for the treatment of cancellous bone fractures, where the fixation requires high initial mechanical properties and a fast degradation rate. Conclusions The in vivo degradation of the n- and g-HAP/PLLA composites was evaluated in terms of mechanical properties, molecular weight, crystallinity, thermal behavior, and ESEM morphology. The g-HAP/PLLA composites showed the fastest degradation rate among all the materials, and n-HAP/PLLA also exhibited a faster degradation rate than pure PLLA in terms of molecular weight decrease, mechanical property changes, and micromorphological matrix erosion. This indicated that g-HAP/PLLA composite implants are more suitable for bone fixation requiring rapid resorption. The results obtained from this in vivo study encourage the clinical use of the g-HAP/PLLA composites in the fixation of human non-load-bearing bone fractures, which require high initial strength and a fast degradation rate. Further long-term systematic studies of the degradation of the g-HAP/PLLA materials are also needed. Methods Materials. PLLA with a molecular weight of 50,000 was prepared by ring-opening polymerization of L-lactide in the presence of stannous octoate (Sn(Oct)2) as catalyst, according to our previous study 6 .
The preparation of the hydroxyapatite nanoparticles (n-HAP) and the PLLA surface-grafted hydroxyapatite nanoparticles (g-HAP) has been described in our previous papers 19 . In brief, n-HAP was synthesized according to the reaction shown in Equation 1: It was an acicular crystal about 100 nm in length and 20-40 nm in width, with an atomic ratio Ca/P ≈ 1.67. Then, L-lactide was ring-opening polymerized onto the surface of the n-HAP particles in the presence of stannous octoate (Sn(Oct)2) as catalyst to obtain g-HAP according to Equation 2: The amount of grafted polymer on the surface of g-HAP was determined by thermal gravimetric analysis to be about 5.0 wt%. Preparation of n-HAP/PLLA and g-HAP/PLLA composites. The n-HAP/PLLA and g-HAP/PLLA composites were prepared as follows. Pre-weighed dried n-HAP or g-HAP powders were uniformly suspended in a 20-fold (by weight) amount of chloroform with the aid of magnetic stirring and ultrasonic treatment. The suspension was then added to a 10% (w/v) PLLA/chloroform solution to achieve an n- or g-HAP content of 10 wt% in the composite. The mixture was precipitated in an excess of ethanol, and the composite was dried in a vacuum oven at 40-50 °C for 24 h to remove the residual solvent. For preparing mechanical and in vivo test specimens, all composites were blended in a torque rheometer at 190 °C for 5 min and laminated into sheets with a thickness of about 2 mm by hot-press molding at 190 °C and 15 MPa; the samples were then annealed at 115 °C for 1 h. Rectangular bars having effective dimensions of 30 mm × 5 mm × 2 mm were cut from a 2-mm-thick plate. Unfilled PLLA materials (100% PLLA) were made by the same method as a control group. Intramuscular implantation. All the materials used in this study were sterilized with ethylene oxide at 55 °C for 4 h and implanted intramuscularly for in vivo degradation assessment. Rearing of the New Zealand White rabbits and all experiments using them were carried out at the Institute of Surgery, China-Japan Union Hospital, Jilin University. The guidelines for animal experimentation of Jilin University, which accord with international standards on animal welfare, were carefully observed, and the experiments were approved by the Animal Research Committee of Jilin University. The rabbits were anesthetized by an intravenous injection of Nembutal (50 mg/kg body weight) and local administration of 0.5% (w/v) lidocaine. The operations were carried out under standard aseptic conditions. The implants were embedded into the dorsal muscle of the rabbits. Four parallel samples per rabbit were used for each material. After surgery, the rabbits were kept in cages and maintained on a regular diet. All the rabbits were given a daily injection of penicillin for one week and were sacrificed by Nembutal overdose at 4, 12, 20, 28 and 36 weeks post-surgery. The implants were taken out, and all samples before and after surgery were measured for mechanical properties, surface and fracture morphology, molecular weight, and thermal properties. Surface and fracture morphology. The surfaces and impact fractures of the PLLA, n-HAP/PLLA, and g-HAP/PLLA composites were observed by an environmental scanning electron microscope (ESEM, XL30 FEG, Philips) connected to an energy-dispersive X-ray spectrometer (XL30W/TMP, Philips). X-ray intensities for calcium and phosphorus were analyzed across the fracture surfaces. Molecular weight change.
The viscosity-average molecular weight (Mv) of PLLA in the composites before and after implantation for 4, 12, 20, 28 and 36 weeks was determined from the intrinsic viscosity in chloroform at 25 °C using the following Equation 3: To remove the nanoparticles from the n-HAP/PLLA and g-HAP/PLLA composites, the samples were first dissolved thoroughly in chloroform and the solutions were centrifuged at a speed of 10,000 rpm. The supernatant was collected and the centrifugation process was repeated twice. Ethanol was then added to the supernatant to precipitate the PLLA. Finally, the PLLA was dried under vacuum and its intrinsic viscosity was measured. Thermal properties and crystallinity measurement. The thermal properties of the PLLA, n-HAP/PLLA, and g-HAP/PLLA composites were measured by differential scanning calorimetry (DSC-7, Perkin-Elmer) at a heating rate of 10 °C·min−1 from 20 to 200 °C. The crystallinity of the PLLA in the composites was calculated from the following Equation 4, where ΔH_Tm indicates the melting enthalpy (J/g) calculated from the fusion peak in the DSC curve, and the value 93.7 J/g is the melting enthalpy of a theoretically completely crystalline PLLA polymer 28 . Statistical analysis. All quantitative data were analyzed with Origin 8.0 (OriginLab Corporation, USA) and expressed as the mean ± standard deviation. Statistical comparisons were carried out using one-way analysis of variance (ANOVA, Origin 8.0). A value of P < 0.05 was considered to be statistically significant.
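The bodies of Equations 3 and 4 are not reproduced in this excerpt, so the Mark-Houwink constants used for Equation 3 are not guessed here. For Equation 4, a common form of the DSC crystallinity calculation that is consistent with the 93.7 J/g reference enthalpy quoted above is sketched below; normalizing by the PLLA weight fraction (0.9 for the 10 wt% HAP composites) is our assumption, since the excerpt does not state how the filler content was handled.

```python
# Sketch of a DSC crystallinity estimate for PLLA, assuming the common form
# Xc (%) = 100 * dH_m / (93.7 * w_PLLA). The weight-fraction correction is an
# assumption for the 10 wt% HAP composites, not stated in the excerpt.
REFERENCE_ENTHALPY = 93.7  # J/g, melting enthalpy of 100% crystalline PLLA (ref. 28)

def pla_crystallinity(delta_h_m, w_plla=1.0):
    """delta_h_m: measured melting enthalpy (J/g); w_plla: PLLA weight fraction."""
    return 100.0 * delta_h_m / (REFERENCE_ENTHALPY * w_plla)

# Hypothetical example enthalpy, for illustration only:
print(round(pla_crystallinity(45.0), 1))               # neat PLLA sample
print(round(pla_crystallinity(45.0, w_plla=0.9), 1))   # 10 wt% HAP composite
```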
5,682.6
2016-02-09T00:00:00.000
[ "Materials Science", "Engineering" ]
Location Privacy-Preserving Scheme in IoBT Networks Using Deception-Based Techniques The Internet of Battlefield Things (IoBT) refers to interconnected battlefield equipment/sources for synchronized automated decision making. Due to difficulties unique to the battlefield, such as a lack of infrastructure, the heterogeneity of equipment, and attacks, IoBT networks differ significantly from regular IoT networks. In war scenarios, real-time location information gathering is critical for combat effectiveness and is dependent on network connectivity and information sharing in the presence of an enemy. To maintain connectivity and guarantee the safety of soldiers/equipment, location information must be exchanged. The location, identification, and trajectory of soldiers/devices are all contained in these messages. A malicious attacker may utilize this information to build a complete trajectory of a target node and track it. This paper proposes a location privacy-preserving scheme in IoBT networks using deception-based techniques. Dummy identifier (DID), sensitive areas location privacy enhancement, and silence period concepts are used to minimize the attacker’s ability to track a target node. In addition, to consider the security of the location information, another security layer is proposed, which generates a pseudonym location for the source node to use instead of its real location when sending messages in the network. We develop a Matlab simulation to evaluate our scheme in terms of average anonymity and probability of linkability of the source node. The results show that the proposed method improves the anonymity of the source node. It reduces the attacker’s ability to link the old DID of the source node with its new DID. Finally, the results show further privacy enhancement by applying the sensitive area concept, which is important for IoBT networks. Introduction The Internet of Battlefield Things (IoBT) is a newly emerging field that uses Internet of Things (IoT) technologies for defense purposes [1]. IoT technology allows devices to collaborate to monitor physical or environmental conditions [2]. In addition, it allows sensing and transmitting information, enabling real-time decision making in military operations [3]. Due to the highly heterogeneous nature of the IoBT environment in terms of devices, network protocols, platforms, connectivity, and other factors, there are difficulties associated with trust, security, and privacy [3]. Since communication infrastructure might be unavailable, these entities communicate with one another utilizing device-to-device (D2D) communications to send and receive sensitive information, including location information [4]. The requested location information queries may be vulnerable to attacks, and the adversaries can use the location information to follow users in myriad ways or reveal their personal information to third parties. Securing the location of IoBT entities is essential to protect military equipment from enemy ambushes; if this information is leaked, it could result in the failure of an important mission or even the loss of life. Additionally, a well-secured location helps the military to create an efficient attack and defense strategy, leading to successful mission outcomes. Because of these security concerns, cybersecurity is now crucial for protecting IoBT equipment from assaults that could impact military operations. Many methods have been proposed to protect the identity of the node and its location.
The use of security deception in cyber defense is a promising strategy that has attracted the interest of researchers [5]. The deception methods include, but are not limited to, pseudonym IDs, dummy locations, and silence periods. Pseudonym ID methods are frequently used in a variety of security research [6] to address the identity and location privacy issues of the user [7]. In this technique, each node has pseudonym identities that it uses to communicate with other nodes instead of its actual identity [8]. The node regularly changes its pseudonym for every new communication. As a result, the attacker cannot identify who exactly its target is [8]. Therefore, it is anticipated that the method will be able to offer untraceability, privacy, and anonymity to its user [6]. In this manner, the node data and information are protected from access by unauthorized parties [9]. However, due to the adversary's ability to monitor and link the nodes that are sending their locations even after they change their pseudonyms, utilizing pseudonyms will unfortunately not completely solve all privacy issues. As a solution, a silence period is proposed to protect the identity information of the node once the pseudonym ID is about to expire. It is a time period during which the node does not participate in any activity. During the silence period, the node will replace its dummy ID with a new one. Thus, the silence period prevents the attacker from linking the old pseudonym with the new one [8]. A dummy location is another method proposed to protect the real location information. Dummy location is a strategy that intends to deceive the adversary with fictitious locations [10]. When a dummy location is sent to the service provider instead of the real one, the attacker is not able to distinguish the real location from the dummy one. Therefore, this scheme is able to provide anonymity and location privacy to the user. To the best of our knowledge, there is no existing work that addresses the identity and location privacy issues in IoBT networks. In this paper, we propose a scheme that uses dummy ID, sensitive areas location privacy enhancement, and silence period concepts to enhance location information and identity privacy for IoBT entities. The sensitive area concept further enhances location privacy in sensitive battlefield areas. In this paper, we make the following contributions: • We develop a scheme to protect the node's identity by using dummy ID, silence period, and sensitive areas location privacy enhancement concepts. • We generate a pseudonym location for each node in the IoBT environment to protect the node's real location information. • We introduce a new metric, average probability of linkability per DID change of a source node, to measure how successful the attacker is in linking the source node with its new DID after the silence period. • To evaluate our scheme, we use average anonymity and average probability of linkability per DID change of a source node. • We develop a Matlab simulation to validate our proposed scheme. The rest of our work is organized as follows. Section 2 presents an overview of the related work. Section 3 illustrates the proposed method. The analysis of the proposed method, including the performance metrics, simulation environment, parameters, network assumptions, security analysis, and simulation results and analysis, is included in Section 4. Finally, Section 5 concludes the paper.
Related Work In this section, we provide an overview of location privacy-preserving mechanisms. Many methods have been proposed to protect location information as explained below: Instead of using actual vehicle IDs, vehicles can share essential information while maintaining their privacy by using pseudonyms [11]. To maintain location privacy, the work in [12] employed a strong pseudonym change method that guarantees unlinkability. The suggested solution confuses the attacker during the pseudonym updating phase. It combines the principles of "hiding inside the crowd" and "location obfuscation." The authors in [13] offered a novel comprehensive pseudonym changing scheme that takes advantage of the vehicle context and pattern of the current traffic to determine the best scenario for switching pseudonyms. A dynamic pseudonym swap zone (DPSZ)-based location privacy-preserving technique was suggested in [14] in which each vehicle can generate a temporary pseudonym swap zone using DPSZ on demand to exchange pseudonyms with any other vehicle within a certain zone. Ref. [15] proposed using a genetic algorithm (GA) to create a set of pseudonyms using the crossover approach. The set is created for vehicles at different time intervals by crossing an initial pair of pseudonyms. Ref. [16] proposed a novel scheme for centralized pseudonym changing for location privacy in vehicle-to-anything communication. In [17], the authors discussed changing the identity of the vehicle with a mix zones-based authentication protocol for location privacy. In [18], a concerted silence-based location privacy preserving scheme for Internet of Vehicles (CSLPPS) was proposed to protect pseudonyms and prevent the adversary from tracking participating nodes in IoV. The method is based on entering a silent period to change the identifiers of cooperative vehicles at the same time. The researchers in [19] used a cryptographic technique to present a reliable and effective scheme for preserving location privacy. Because the identity of the user is hidden from the location-based service (LBS) provider and fog server, it can protect the user's private information. Both AES with one-time-pad keys and IBE were used in communication to ensure the confidentiality and integrity of both requesting and receiving data. Ref. [20] suggested a privacy-preserving fully homomorphic encryption over advanced encryption standard (P2FHE-AES) method for LBS queries, for the purpose of encouraging drivers to utilize this service without worrying about being tracked. To avoid location privacy leakage from sensory data, the authors in [21] designed an encrypted data recovery scheme based on homomorphic encryption as part of their work. Several works used cryptography/encryption as a solution to protect the location information, such as [22][23][24][25]. Ref. [26] proposed a scheme called dummy location provider (DLP) that consists of three algorithms: spread, shift, and switch. Spread and shift are responsible for creating deceptive dummies and trajectories, while switch replaces users' actual locations with dummy trajectories before submitting them to the LBS. The authors in [27] proposed query-based dual location privacy in vehicle ad hoc networks (VANETs). They used the circle-based dummy generation (CBDG) algorithm to create some dummy locations before sending the query to a trusted third party. The local differential privacy technique is used in [28] to present a novel privacy-aware framework for aggregating indoor location data.
In this technique, user location data are altered locally in the user's device and then transferred to the aggregator. As a result, neither a server nor any potential attackers are aware of the user's geolocation. Ref. [29] suggested a blockchain privacy protection crowdsourcing solution that can secure employees' location privacy and increase job completion rates. In addition to using blockchain technology's anonymous capabilities to hide users' identities, this system creates private blockchains to distribute members' transaction records and selects jobs across several private blockchains to prevent the deletion of members' transaction information. The authors in [30] used blockchain technology for the Internet of Vehicles (IoV) to protect task and worker location privacy. The proposed system not only protects the privacy of worker locations but also improves task completion success rates. The work in [31] offered a new approach to protecting location privacy for mobile crowdsourcing systems that enhanced privacy protection and service quality. Other works used blockchain for location privacy purposes, such as [32][33][34][35][36]. Both works in [12,37] employed crowd-blending and obfuscation strategies to maintain location anonymity in VANETs. Ref. [38] proposed a coordinate transformation-based scheme. Location privacy was implemented using the CBDG algorithm and a trusted third party. The proposed method takes advantage of both obfuscation and anonymity methods by using a two-step authentication procedure to share location data among neighboring vehicles. A mobile device conducts some basic geometric functions (shifting, rotation) before relaying its positions to the LBS provider. Refs. [3,[39][40][41][42] are other efforts that use obfuscation mechanisms for location privacy. Table 1 illustrates the comparison of our work to recent deception-based related works, where "✓" means the method is used and "×" means the method is not used. As we can see, all the existing schemes presented in the table address location privacy issues in IoT, WSN, and IoV, and none address location information and identity privacy of entities in IoBT networks. Additionally, no existing works consider securing the location information if an entity enters a sensitive area, which can be critical, particularly in IoBT environments. In this paper, we propose a scheme that uses dummy ID, sensitive area, and silence period concepts to enhance location information and identity privacy for IoBT entities. Network Assumptions The network assumptions of our work are: 1. The nodes use D2D communication to communicate with the gateways. The communication between the gateways is secure. 4. The gateways know the real identities of each other. 5. The gateways are powerful devices and have controls on the nodes that are located in their communication range. 6. The registration table is distributed and secure, and only authenticated users can access it. 7. The network space is divided into n grid cells. The grid cells are numbered from 2 to n + 1. The gateway nodes know the cells' locations and numbers. The Network Architecture: This paper discusses how to protect identity and location information from unauthorized entities to provide a more secure system in the IoBT environment. Dummy ID, sensitive areas location privacy enhancement, and silence period concepts are used to minimize linkability to protect the node identity.
Additionally, an alternative location for each node will be generated and used instead of the real location information. The proposed network architecture to request changing the Dummy ID (DID) is shown in Figure 1. Table 2 shows the notations of the proposed work. The Proposed Method The proposed method aims to protect both the identity and the location information of each node on the battlefield. First, the network is divided into grid cells, and each grid cell has a unique integer identifier n. The gateways have the same grid cell number as an identifier if they are located in that grid cell when they communicate for the first time with the node for registration purposes. The reason for using this identifier n is that it will be used to generate a pseudonym location, which will be explained later. Figure 2 demonstrates the second step; each node is initially required to authenticate itself and register with the nearest gateway G by sending its real ID. Once the registration is approved, the registration information table for each node will be created. The registration information contains the timestamp, the grid cell number, the ID for the node, the ID for the gateway, and a pool of dummy IDs. Then, the gateway will transmit the registration table to the node. Figure 3 shows the algorithm for the authentication and registration step. The DIDs will be used for any communication in the field in place of the real IDs. By using DIDs, the real identity will be protected. However, if the node changes its temporary DID during its cyber-activity, the information could be linked to track the target node. Thus, we propose using a silence period concept to protect the identity information while changing the temporary DID. The changing of the DID must occur under two circumstances as explained below. Figure 4 illustrates the formal algorithm for changing the DID. • The lifetime of the node DID is about to expire, and node N is not entering a sensitive area. • The node is about to enter a sensitive area. In this case, the node sends a sensitive area status (U = 1) to change the DID whether its lifetime is about to expire or not. This case will be explained in more detail later. Silence Period The silence period is a method used to protect the identity information of the node once the temporary DID is about to expire. It is a time period during which the node does not participate in any activity. During the silence period, the node will change its DID to prevent linkability. Once the DID is about to expire, the source node, N, must notify the gateway by sending a message that has an expiration status (E = 1). A message with E = 1 means the DID is about to expire, and the node needs approval to change it. Once the gateway receives the message with E = 1, it will request a confirmation from node N, which is the identity of the first gateway the node registered with and the timestamp. Thus, if the confirmation information matches the information in the registration table, the gateway will process the node request. Otherwise, the request will be dropped, and the gateway will flag the node as infected. Figure 5 explains the confirmation process, and Figure 6 illustrates the DID change message exchange between nodes and gateway. To process the request, the gateway will request a status message from all the M immediate neighboring nodes of node N (the first node that requested to change its DID). 
All M nodes must participate in this task by responding either with E = 1, which means their DID is about to expire too and they are ready to participate, or with E = 0, which means they are not ready to participate. If the number of the received status messages with E = 1 equals or exceeds the threshold K, then the gateway will approve the changing request by sending an approval message with APP = 1 to the source node and the nodes with the same status E = 1. The threshold is defined as in Equation (1): for a source node request to be approved in the non-sensitive area case, it has to have at least two immediate neighboring nodes, out of which one agrees to participate, regardless of the total number of nodes in the network. Once the approval is received, the cooperating nodes and node N will enter a silence period synchronously to change the DID, and then they will return to cyber-activity again. If the number of nodes in the network T is small and there are not enough cooperating nodes, the gateway will extend the DID lifetime for node N for one period (60 s) and send a message to the node with APP = 0. The silence period is used to enhance the identity privacy of the node and reduce the chance of tracking. Generating Pseudonym Location To consider the privacy of the location information, another security layer is proposed in this paper, as shown in Figure 7. Consider a node with longitude L and latitude D. Assume that n is the grid cell number of the first gateway the node registered with. To generate a pseudonym location for this node, we propose changing L and D to L′ and D′, respectively, as shown in Equations (2) and (3). Once the gateway receives the location information from the node, the gateway will ask the node for confirmation as we explained above. The confirmation information is the identity of the first gateway the node registered with and the timestamp. If the confirmation is approved, G will check the registration table and get the grid cell number to decrypt the location information. Using the grid cell number n, the gateway will find the nth root of L′ and D′ to obtain L and D. Sensitive Areas Location Privacy Enhancement The sensitive area status is also used to secure the location of the node. In this case, once the node enters a sensitive area where the location information of this node must be highly secured, the node has to send an urgent sensitive area status (U = 1). The urgent sensitive area status is a message from node N to change the DID, even if its DID lifetime is not about to expire, to inform the gateway that the sensitive area status has started. Once the gateway receives the message and approves the confirmation as mentioned above, G will force all the neighboring nodes to change their DID by sending an approval message (APP = 1). Next, node N and all of its neighboring nodes will enter a silence period status, change their DIDs, and not respond to any messages until they exit the sensitive area. Once node N exits the sensitive area, a message with U = 0 will be sent to the gateway. All the nodes will return to their cyber-activities with the new DIDs. If there are no cooperating nodes, the gateway will extend the DID lifetime for node N and send a message to the node with APP = 0.
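Returning to the pseudonym-location layer above: Equations (2) and (3) are not reproduced in this excerpt, but the statement that the gateway recovers L and D by taking the nth root suggests that the node raises its coordinates to the power of the grid cell number n. The sketch below illustrates that reading; the function names, and the exact exponentiation form, are our assumptions rather than the paper's stated formulas.

```python
# Sketch of the pseudonym-location step as read from the text: the node raises
# its longitude/latitude to the power n (the grid cell number of its first
# gateway), and the gateway recovers them by taking the nth root. Assumes
# positive coordinate values; the exact form of Equations (2)-(3) is not shown.

def encode_location(longitude, latitude, n):
    """Node side: produce the pseudonym coordinates (L', D')."""
    return longitude ** n, latitude ** n

def decode_location(pseudo_longitude, pseudo_latitude, n):
    """Gateway side: recover (L, D) by taking the nth root, using the registered n."""
    return pseudo_longitude ** (1.0 / n), pseudo_latitude ** (1.0 / n)

L, D, n = 35.6895, 139.6917, 4          # example coordinates and grid cell number (n >= 2)
Lp, Dp = encode_location(L, D, n)
print(decode_location(Lp, Dp, n))       # approximately (35.6895, 139.6917)
```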
Performance Metrics We propose and use linkability of the source node to measure how successful the attacker is in tracking the source node. In addition, we use the average anonymity of the source node metric to analyze the privacy degree of the location and identity information and to assess the effectiveness of our proposed work: 1. Average anonymity of the source node per DID change (Average AS). The Average AS per DID change is defined as the ratio of the total number of participating nodes in the DID changes to the number of changes. 2. The probability of linkability (PLA). We define PLA as the probability that the attacker will successfully link the source node with its new DID after the silence period. 3. Average probability of linkability (Average PLA). We define Average PLA as the ratio of the total values of PLA for all the changes to the number of changes. 1. Average AS: AS is measured by the number of participating nodes in a DID change, AS = NPN. Based on this formula, anonymity increases with an increasing number of cooperating nodes. The more nodes that enter the silence period (cooperating nodes) and change their DID with the source node, the lower the probability that the attacker will successfully link the source node's new DID with its old one. The Average AS metric measures the anonymity of the source node N per DID change. In the IoBT environment, it is important to increase the anonymity of the source to protect and secure sensitive information. Thus, we propose to measure Average AS in different cases based on the sensitivity of the node's area. The first case is when the source node is in a non-sensitive area; the second case is when the source node enters a sensitive area, where it is important to increase the anonymity of the source node. This feature further enhances the anonymity of the source node. The mathematical models for both cases are as follows: • Average AS for non-sensitive area (Average AS_NS): Here, we derive a mathematical expression for Average AS_NS. Let i denote the DID change number, NPN_i the number of participating nodes for change i, j the total number of changes, and AS_i the anonymity of the source node for DID change i. Then AS_i = NPN_i. In addition, the total AS for all j DID changes is AS_total = AS_1 + AS_2 + ... + AS_j. Therefore, the Average AS for a non-sensitive area (Average AS_NS) is given by Equation (4) below: Average AS_NS = AS_total / j, where j is the total number of changes. The Average AS for a sensitive area (Average AS_S) is given by Equation (5) below, which takes the same form with AS_i = M_i, since in a sensitive area all M_i immediate neighboring nodes participate. A larger value of Average AS means a higher privacy level. 2. PLA. As mentioned above, we propose to measure PLA in different cases based on the sensitivity of the node's area. The first case is when the source node is in a non-sensitive area. The second case is when the source node enters a sensitive area, where it is important to further decrease the probability that the attacker successfully links the source node with its new DID. The mathematical models for both cases are as follows: • PLA for non-sensitive area (PLA_NS): Here, we derive an expression for PLA_NS. Since the source node along with the NPN_i participating nodes for change i will synchronously change their DIDs during the silence period, there will be NPN_i + 1 new DIDs. Therefore, the probability that the attacker will succeed in linking the old DID of the source node with its new DID after the silence period for a non-sensitive area for change i is given by Equation (6): PLA_NS,i = 1 / (NPN_i + 1), where the 1 refers to the source node.
• The Average PLA for the non-sensitive area (Average PLA_NS) per change is given by Equation (7) below: Average PLA_NS = (PLA_NS,1 + PLA_NS,2 + ... + PLA_NS,j) / j. • PLA for sensitive area (PLA_S): For the sensitive-area case, all M_i immediate neighboring nodes of the source node are forced to participate and enter a silence period, so NPN_i = M_i. Thus, the PLA for the sensitive area (PLA_S) for change i is given by Equation (8) below: PLA_S,i = 1 / (M_i + 1). • The Average PLA for the sensitive area (Average PLA_S) per change is given by Equation (9) below: Average PLA_S = (PLA_S,1 + PLA_S,2 + ... + PLA_S,j) / j. A smaller value of PLA means a higher privacy level. Security Analysis We evaluate the proposed scheme's security strength in terms of its ability to resist potential security attacks as below: • Linkability Attack The linkability attack uses the transmitted information to link the dummy ID with the target node. To resist this kind of attack, our method relies on the use of a silence period to prevent linkability and tracking during the identifier-changing process. All the participating nodes and the source node enter the silence period synchronously. Since the DIDs of the participating nodes and the source node after the silence period are different from the ones before the silence period, the attacker will be confused and its chance of successfully tracking the source node (target) will be reduced. In addition, our proposed sensitive area feature further restricts linkability attacks that try to link the target node with its dummy ID beyond existing schemes, as it forces all the immediate neighboring nodes to participate. • Eavesdropping Attack In the eavesdropping attack, the attacker listens to the communication between the nodes in the network to obtain the desired information. To resist this kind of attack, our method uses a pool of temporary DIDs for each node for communication purposes. Additionally, a pseudonym location is used instead of the real one to protect the real location. Therefore, the attacker will not be able to obtain any useful information about the node's real ID and its location. In addition, our proposed sensitive area feature further restricts eavesdropping attacks that try to link the target node with its dummy ID beyond existing schemes, as it forces all the immediate neighboring nodes to participate. Simulation Analysis To test our model, a MATLAB simulation was developed for the proposed scheme using the parameters in Table 3. Table 3. The parameters. In the simulation, we divided the network into grid cells, and we assigned a unique identifier number to each grid cell starting from n = 2 to n + 1 to avoid the case when n = 0 or n = 1. Once a simulation starts, the gateways will be created and randomly distributed in the grid within the first two seconds. Then, the nodes will be created and randomly distributed in the grid within five seconds. More details about the simulation entities are included below: • Grid: The grid class has several properties, including grid cell size, grid cell length, gateway objects, node objects, and an interrupt list. The grid cell size simply refers to the grid cell length and width in terms of pixels, where each pixel represents a certain region in the actual physical grid cell. The gateway and node objects are other class instances affiliated with the grid simulation. The interrupt list is a list of time interrupts that refer to all future expected events at which the simulation pauses, monitors occurring events, and updates the status of the grid cell accordingly.
The class has a constructor that initializes all parameters of the grid at the start of the simulation according to user preferences. The class uses an update method that is called when a time interrupt occurs. This method updates the status of the grid, including the states of the nodes and gateways contained within the grid. • Gateways: The gateway class also has distinct properties, including ID, position, velocity, registration tables, and associated node DIDs. The ID is a distinct code used to distinguish between different gateways. The position and velocity are kinematic measures for the motion of the actual gateway and are both measured using the international system of units SI (meters, meters/second, etc.). Each gateway has several nodes it is supposed to serve, where each of these nodes has a pool of DIDs that are also registered in each gateway's memory. The gateway also has a constructor that initializes the values of all of these properties prior to the simulation run. The gateway has a set of methods it uses to achieve its intended goals, including the registration service function, which serves nodes that issue a registration request. This method initializes the registration table of the node, which includes data such as the node ID, associated gateway ID, gateway grid cell number, and registration timestamp. The gateway also has another method that serves DID change requests. The gateway first checks whether the message is infected by comparing the registration table in the node's confirmation message against the registration table recorded in its memory. If the registration tables match, the gateway will check the number of voting nodes. If most associated nodes' DIDs are close to expiring, the gateway approves the request (APP = 1). Otherwise, the gateway does not approve the request, and the node obtains a new expiration deadline. The gateway also has a kinematic state update method that updates the gateway's kinematic position and velocity. • Nodes: The node object has several properties, including ID, DID lifetime, time of creation, position, velocity, turn-off flag, and expiration flag. The DID lifetime is set to sixty seconds from the moment of creation. The time of creation itself is the time at which the node shows up in the simulation, which is randomly set for each run within the first five seconds of run time. The turn-off flag is raised to true when the node enters a sensitive region; otherwise, it is set to false. The expiration flag indicates that the node's DID is close to expiring and needs to be changed. The node methods include an update method, which updates the kinematic states of the node, as well as the node flags. Simulation Results To evaluate the performance of the proposed work, we use two metrics related to measuring the privacy level of the location information and identity as follows: Average anonymity of the source node (Average AS) and Average probability of linkability (Average PLA) per DID change. The simulation was run for a network with two gateways and a number of nodes of 10, 20, and 30. The simulation results are discussed below: • Average anonymity of the source node per DID change (Average AS): In our simulation, to measure the further enhancement introduced by applying the sensitive area concept, we measured Average AS for the source node per DID change in two cases: when the sensitive area concept was applied, AS_S, and when the sensitive area concept was not applied, AS_NS.
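A minimal sketch (ours, not taken from the paper's Matlab code) of how per-DID-change logs could be aggregated into Average AS and Average PLA, following Equations (4) and (7) above, is:

```python
# Sketch: aggregate per-DID-change simulation logs into the metrics defined above.
# npn_per_change holds NPN_i for each change i in the non-sensitive case; in the
# sensitive-area case the same list would hold M_i, since all immediate neighbors
# are forced to participate. Variable and function names are ours, for illustration.

def average_as(npn_per_change):
    """Average anonymity of the source node per DID change: mean of AS_i = NPN_i."""
    return sum(npn_per_change) / len(npn_per_change)

def average_pla(npn_per_change):
    """Average probability of linkability per DID change: mean of 1 / (NPN_i + 1)."""
    return sum(1.0 / (n + 1) for n in npn_per_change) / len(npn_per_change)

changes = [2, 3, 1, 4]   # hypothetical NPN_i values over four DID changes
print(average_as(changes), round(average_pla(changes), 3))   # -> 2.5 0.321
```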
Figures 8-10 show the AS_NS and AS_S results with 10, 20, and 30 nodes in the network, respectively. For the AS_NS results, since AS equals NPN as mentioned above, more participating nodes increase the source node's anonymity. For the AS_S results, all the immediate neighboring nodes are forced to participate and change their DIDs whether their DID lifetimes are about to expire or not. Thus, the number of nodes entering the silence period will be equal to or larger than the number of nodes entering the silence period in the AS_NS case, which means the AS of the source node is further enhanced. Table 4 shows the Average AS_NS and Average AS_S per DID change. It is clear that applying the sensitive area concept further enhances the average anonymity of the source node for all different numbers of nodes in the network. To the best of our knowledge, our work is the only work that focuses on the IoBT environment, which has different characteristics and strong security requirements. Moreover, the security of the node location once it enters a sensitive area is critical. Thus, we introduced the sensitive area concept. To investigate its impact, we evaluated the proposed scheme with and without it. Our results presented above showed that having this feature further improves the Average AS and Average PLA per DID change. Conclusions IoBT networks are very different from conventional IoT networks because of challenges specific to the battlefield, such as lack of infrastructure, the heterogeneity of equipment, and attacks. Real-time location data collection is essential for battle efficiency in war scenarios and is dependent on network connectivity and information exchange when an adversary is present. The mission depends on exchanging and sending location information to achieve connectivity and ensure the security of soldiers and equipment. Transmissions will include information about soldier/equipment locations, identities, and trajectories. If a malicious attacker obtains this information, it can reconstruct the whole trajectory of a target node and monitor its movements. The focus of this paper was to secure location and identity information in IoBT networks. To protect the source node identity and to minimize linkability and tracking, dummy IDs and silence period concepts are used in our scheme. In addition, it is more critical to further enhance the anonymity of the source in sensitive areas of the battlefield. Thus, we proposed the sensitive area location privacy enhancement concept. In this case, when the source node enters a sensitive area, the gateway forces all the immediate neighboring nodes to participate by entering the silence period to change their DIDs. Moreover, to protect the location information, an additional security layer is proposed to create a pseudonym location for a source node. A Matlab simulation was developed to evaluate our scheme in terms of average anonymity and average probability of linkability of the source node. The results obtained demonstrated the utility of the concepts used in our scheme in enhancing the security of location and identity information. In addition, they showed the significance of applying the sensitive area concept in IoBT networks, as it enhances the anonymity and decreases the linkability of the source node. Conflicts of Interest: The authors declare no conflict of interest.
7,879.2
2023-03-01T00:00:00.000
[ "Computer Science" ]
Cathepsins in the Pathophysiology of Mucopolysaccharidoses: New Perspectives for Therapy. Cathepsins (CTSs) are ubiquitously expressed proteases normally found in the endolysosomal compartment where they mediate protein degradation and turnover. However, CTSs are also found in the cytoplasm, nucleus, and extracellular matrix where they actively participate in cell signaling, protein processing, and trafficking through the plasma and nuclear membranes and between intracellular organelles. Dysregulation in CTS expression and/or activity disrupts cellular homeostasis, thus contributing to many human diseases, including inflammatory and cardiovascular diseases, neurodegenerative disorders, diabetes, obesity, cancer, kidney dysfunction, and others. This review aimed to highlight the involvement of CTSs in inherited lysosomal storage disorders, with a primary focus on the emerging evidence for the role of CTSs in the pathophysiology of Mucopolysaccharidoses (MPSs). These latter diseases are characterized by severe neurological, skeletal and cardiovascular phenotypes, and no effective cure exists to date. Advances in the knowledge of the molecular mechanisms underlying the activity of CTSs in MPSs may open new avenues for the development of novel therapeutic approaches for such intractable diseases.
Introduction
Cathepsins (CTSs) are a family of proteases expressed in all living organisms. In humans, CTSs comprise 15 proteolytic enzymes that are classified into three distinct groups based on the key amino acid within their active site, namely serine (CTS A and G), cysteine (CTS B, C, H, F, L, K, O, S, V, X, W), and aspartate (CTS D and E) [1]. These proteases, which mostly require mild acidic conditions for their optimal activity, are all synthesized as proenzymes. Although CTSs are mainly localized in the lysosomes, where the acidic environment facilitates their proteolytic activity, they are also found in the cytoplasm, nucleus, and extracellular space, where they participate in extracellular matrix (ECM) protein degradation, cell signaling, protein processing, and trafficking through the plasma and nuclear membranes and between intracellular organelles (Figure 1) [2][3][4][5][6][7]. While some CTSs are ubiquitously expressed throughout the body, others are expressed in a more restricted pattern, suggesting specific cellular functions for distinct CTSs.
CTSs can also be found in caveolae triggering cell-surface proteolytic events associated with cell migration [3], or in the endolysosomal [4] and autolysosomal compartments where they process the compartment's cargo [5]. CTSs in the nuclei play an important role in cell cycle regulation [6], while in the cytosol mediate mitochondrial permeabilization and apoptosis through cleavage of Bid and release of Bax [7]. Some of the cellular components displayed in the figure have been adapted from Smart Servier Medical Art under Creative Commons Attribution 3.0 Unported License. CTSs have been shown to play essential roles in coagulation, digestion, hormone liberation, adipogenesis, peptide synthesis, immune response, and many other vital processes [1,8]. Abnormal expression and/or activity of CTSs have been associated with a variety of human diseases, including inflammatory and cardiovascular diseases, neurodegenerative disorders, diabetes, obesity, cancer, kidney dysfunction, and many others (Table 1). [14,100] The activity and stability of CTSs are tightly regulated by glycosaminoglycans (GAGs), a class of linear, negatively charged polysaccharides that comprise the non-sulfated hyaluronic acid (HA) and the sulfated chondroitin sulfate (CS), dermatan sulfate (DS), keratan sulfate (KS), heparin and heparan sulfate (HS). The protease-GAG interactions may enable autocatalytic activation of CTSs, promote conformational changes in the CTS structures that may increase their affinity for the substrate, thus enhancing their biological activity, and finally, protect proteases from alkaline pH-induced inactivation [101][102][103]. Most of the GAGs are covalently attached to a core protein, forming proteoglycans that are abundantly found at the cell surface and ECM [104,105]. Accumulation of undigested GAGs occurs in the lysosomes as well as on the cell surface and ECM in patients affected by Mucopolysaccharidoses (MPSs). These are a group of lysosomal storage diseases (LSDs) caused by mutations in genes encoding for lysosomal enzymes involved in GAG degradation [106]. Seven types of MPSs (I, II, III, IV, VI, VII and IX) are known to differ in the type of the accumulated GAG, their prevalence and the severity of the clinical manifestations [106][107][108]. The patients exhibit neurological disorders, skeletal and joint defects, hearing and vision impairment as well as cardiovascular and respiratory disease, and premature death [108]. Current therapeutic options available for MPSs include enzyme replacement therapy (ERT), hematopoietic stem cell transplantation (HSCT), substrate reduction therapy (SRT), chaperone therapy, and gene therapy [109,110]. Despite a definite improvement for some clinical manifestations, most of these therapies are not curative, but they only ameliorate some symptoms of the disease. Indeed, the treatment of neurological disorders, avascular cartilage lesions, and cardiac dysfunctions in MPS patients still represents an unmet clinical need [110]. In this review, we summarized the current knowledge about four LSDs due to CTS gene mutations, namely galactosialidosis, neuronal ceroid lipofuscinoses (NCLs) type 10 and type 13, and pycnodysostosis. More importantly, we highlighted the involvement of CTSs in the physiopathology of MPSs that has been scarcely considered thus far, and finally, we reviewed the various types of CTS inhibitors currently available for therapeutic applications. 
A deeper understanding of the molecular mechanisms underlying the role of CTSs in the onset and progression of MPSs may provide a new basis for the development of novel approaches for the treatment of such diseases. Cathepsin Deficiency Causing Lysosomal Storage Diseases Lysosomal storage diseases (LSDs) are a family of about 70 disorders caused by disruption of lysosomal homeostasis due to inherited gene mutations. A common feature for all LSDs is the abnormal storage of macromolecular substrates or monomeric compounds inside the endosomal/lysosomal compartment. LSDs are caused by both deficiency of lysosomal enzymes and defects in non-enzymatic soluble lysosomal proteins; however, the former accounts for most LSDs [111]. LSD combined incidence is estimated at 1:5000 live births [112]. The degree of protein function, the biochemistry of the stored material, and the affected cell type determine the clinical onset and symptoms exhibited by LSD patients [113]. The multiple biological functions of CTSA translate into a broad spectrum of clinical manifestations in patients affected by GSL, which are currently classified in three different forms: early infantile, late infantile, and juvenile/adult form [43]. Recurrent clinical features in the three GSL types include coarse facies, hepatosplenomegaly, dysostosis multiplex, growth retardation associated with muscular atrophy, heart involvement with cardiomegaly and thickening of the mitral and aortic valves, hearing loss, and neurological disorders [43,120,121]. To date, 28 different CTSA gene mutations have been linked to GSL, including deletions, splicing, and missense mutations [114][115][116][117][118]. Only one mutation (p.Val150Met) has been predicted to affect CTSA catalytic function, while the others are thought to most likely affect protein stability and folding [114]. Further investigation needs to clarify the link between the different mutations and the effect over CTSA, whether functional or structural. There are no effective treatments currently available for GSL patients other than supportive care. Neuronal ceroid lipofuscinoses (NCLs), also known as Batten disease, are a clinically genetically heterogeneous group of neurodegenerative LSDs [43,121]. All NCL phenotypes exhibit early impairment of the vision, progressive decline in cognitive and motor functions, dementia, epilepsy, seizures, and, ultimately, premature death. At the cellular level, NCLs show intracellular accumulation of ceroid lipofuscins in the neurons of the central nervous system (CNS), resulting in different degrees of neurodegeneration [122]. Various types of NCLs are known due to over 430 mutations in 14 different genes (called CLNs), and they are classified into four groups based on the protein that the gene encodes such as soluble and transmembrane proteins localizing to the endoplasmic reticulum or the endosomal/lysosomal compartment [123]. CLN10 and CLN13 are included into the NCL-related group due to mutations in the genes that encode lysosomal soluble proteins/enzymes [43]. In particular, CLN10 (OMIM ID: 610127) is caused by mutations in the CTSD gene due to autosomal recessive inheritance [42]. According to the ClinVar Database, 21 mutations have been identified related to CLN10 and affecting the CTSD gene; they include 19 single nucleotide variants, one insertion, and one duplication. 
Among the 21 mutations, only nine mutations have been confirmed to be pathogenic and linked to the development of CLN10: six missense mutations (p.Phe229Ile, p.Trp383Cys [42], p.Gly149Val, p.Arg399His [124], p.Ser100Phe [125], p.Glu69Lys [126]), a nonsense mutation (c.764dup, p.Tyr255Ter [127]), an insertion (c.268_269insC, p.Gln90fs [128]), and one deletion (p.Phe229del [129]). All different mutations result in neuropathogenesis whose extension is determined by the degree of CTSD gene function loss. Therefore, while complete loss of CTSD activity translates in an early infantile form of CLN10 with patients dying within hours to weeks after birth, patients with residual CTSD activity develop late infantile, juvenile, or adult CLN10 with milder phenotypes [43,121,124]. It is not yet clear how CTSD deficiency causes neuropathies; however, experimental evidence suggests defective autophagy might be in part responsible, as CTSD is essential in degrading cellular components during macroautophagy [130]. Several papers have demonstrated a contribution of glial dysfunction and the involvement of various brain regions in the pathogenesis of NCLs [131]. A recent study has revealed a previously unrecognized role for CTSD in selectively modulating inhibitory synaptic vesicle trafficking and synaptic transmission, showing mechanistic evidence that GABAergic presynaptic endosomal dysfunction might account for the synaptic pathology observed in CTSD deficiency-related NCL diseases [132]. Enzyme replacement therapy (ERT) with recombinant pro-CTSD corrects defective proteolysis and autophagy in cellular and murine models of CNL10 [133]. However, to date, no therapy exists for the disease. Mutations in the CTSF gene result in NCL type 13 (CLN13, OMIM ID: 615362), an adult-onset form of NCL, also known as type B Kufs disease [43,[134][135][136][137]. Patients with CLN13 exhibit mental and motor deterioration in late adulthood [51]. To date, nine mutations with recessive inheritance are known to cause CLN13: six missense mutations (p.Gln321Arg, p.Gly458Ala, p.Ser480Leu, p.Tyr231Cys, p.Ile404Thr, and p.Cys326Phe), a nonsense mutation c.416C > A (p.S139*), a frameshift mutation (p.Ser319Leufs*27), and a mutation preventing the correct splicing of CTSF mRNA (c.213 + 1G>C) [134][135][136][137]. It has been shown that disease-causing CTSF mutants fail to cleave the lysosomal integral membrane protein type-2 (LIMP-2/SCARB2) required for normal biogenesis and maintenance of lysosomes and endosomes [138,139]; however, the exact mechanism by which CTSF deficiency translates in the clinical onset of CLN13 remains elusive. The biochemical and molecular mechanisms underlying NCLs have not been addressed yet. However, several cellular and animal models [140] of the diseases provided useful tools to study the pathogenesis of such devastating neurological disorders and to test novel therapeutic approaches as well [51]. Pycnodysostosis (PKND, OMIM ID: 265800) is an autosomal recessive LSD mainly affecting skeletal structures caused by mutations in the gene encoding CTSK [43,72,141,142]. To date, 48 different CTSK mutations have been reported including missense, nonsense, frameshift, splice-site mutations, and small insertions and deletions [43,72,142]. All the genetic modifications result either in the complete loss of the protein, defective folding and impaired enzyme activity, or faulty intracellular trafficking of the enzyme to the endo/lysosomal compartment. 
PKND is a specific form of osteopetrosis (increased bone density), and patients exhibit decreased bone resorption resulting in osteosclerosis, without affecting bone formation [141]. Thus, in patients with PKND, markers of bone formation such as type I collagen carboxy-terminal propeptide and osteocalcin are normal, whereas markers of bone resorption (cross-linked N-and C-telopeptides of type I collagen) are significantly decreased [143]. In vitro studies showed that mutant CTSK proteins do not degrade type I collagen, which constitutes 95% of the organic bone matrix [144]. Moreover, osteoclasts and fibroblasts from PKND specimens showed accumulation of undigested collagen fibrils in their endosomal/lysosomal compartments, reflecting the defective bone reabsorption [145]. Almost half of PKND patients have growth hormone deficiency with pituitary hypoplasia and low serum insulin-like growth factor-1 (IGF-1) levels [146,147]. However, opposite findings have been reported in in vitro cell studies using CTSK inhibitors in osteoclasts, demonstrating an increase in IGF-1 due to an impairment in the degradation of the bone matrix-secreted IGF-1 [148]. This paradox could be partially explained because PKND patients present defective osteoclastic resorption, which is responsible for the release of bone matrix embedded IGF-1. Pycnodysostosis does not correlate with increased mortality; however, it can cause significant morbidity such as recurrent fractures, osteolysis of the distal phalanges, craniosynostosis, respiratory sleep disorders, short stature, and dental problems [43]. To date, no therapy is effective for the cure of PKND, although growth hormone treatment has been shown to improve growth rates and final heights in patients with PKND [149]. Targeted enzyme or gene replacement therapies are being investigated for the cure of PKND [43]. Cathepsin Involvement in the Pathophysiology of Mucopolysaccharidoses In MPSs, the lysosomal accumulation of undigested GAGs is considered the "primum movens" of the subsequent functional cell impairment; however, evidence demonstrates that the accumulation of storage material does not occur only in the lysosomes, but also on the cell surface and ECM where GAGs form proteoglycans through their covalent binding to a core protein [105,106,109]. The accumulation of storage material in non-lysosomal compartments accounts for impaired cell signaling and trafficking, protein unfolding, abnormal autophagy, alterations of intracellular calcium homeostasis, lysolipid accumulation, and modifications in other cellular processes that ultimately lead to the MPS phenotypes [150][151][152][153][154][155][156][157][158][159]. A variety of evidence demonstrate that abnormal expression and/or activity of both lysosomal and extra-lysosomal CTSs correlate with MPS major clinical manifestations such as neuropathology, bone and joint defects, and cardiovascular disorders (Table 3). [168] In particular, the cysteine CTSB, involved in the degradation of collagen [176] and responsible for heart dilatation [19], displayed a marked increase of its activity in MPS I mouse model, suggesting that the progressive heart failure and valve disease observed in these mice may be dependent on CTSB overexpression [160]. The in vivo treatment of MPS I mice with a CTSB inhibitor reduced aortic dilatation and heart valve thickening, and led to an improvement of cardiac function, suggesting that CTSB inhibition may have a potential benefit in the disease [161]. 
Elevated activity of CTSB was detected in the MPS VII dog model showing abnormalities in the collagen structure of the mitral valve [162]. When affected dogs received an intravenous injection of a retroviral vector expressing canine β-glucuronidase (the deficient enzyme in MPS VII), a reduced CTSB activity was observed, which correlated with an improved signal for structurally intact collagen. Furthermore, in both mouse and dog models of MPS I and MPS VII, aortic dilatation was associated with an up-regulation of the elastase CTSS as well [171][172][173]. Indeed, neonatal intravenous injection of a retroviral vector expressing α-L-iduronidase normalized CTSS mRNA levels and prevented aortic disease in MPS I mice [171]. Moreover, intravenous injection of a retroviral vector expressing β-glucuronidase to MPS VII dogs reduced RNA levels of CTSS and delayed the development of aortic dilatation [173]. Cardiac involvement, although first reported in patients affected by MPS I, II, and VI, has been reported in all MPS patients [177]. Indeed, cardiac disease has been described in MPS III patients as well as in patients affected by the other MPS subtypes [178,179]. Multiple lines of evidence have demonstrated the involvement of CTSs in many cardiovascular diseases, including atherosclerosis, cardiac hypertrophy, cardiomyopathy, myocardial infarction, and hypertension, some of which are common clinical manifestations in MPSs [8,10,[17][18][19]36,56,61,66,76,80,81]. In particular, CTSB is up-regulated in cardiomyocytes in response to hypertrophic stimuli both in vivo and in vitro [180]. Furthermore, CTSB was associated with an increased risk of cardiovascular events in patients with stable coronary heart disease [18]. A specific CTSB inhibitor, namely CA-074Me, reduced cardiac dysfunction, remodeling, and fibrosis in a rat model of myocardial infarction [181]. Studies aiming to explore the potential of CTS inhibitors for the treatment of cardiovascular diseases are ongoing [19], and CTS-targeting therapy might provide new avenues for the treatment of MPSs as well. Besides cardiac disease, some MPS subtypes are characterized by central nervous system (CNS) degeneration for which there are currently no resolutive treatments [106,109]. CNS involvement in the disease manifests as mental retardation, intellectual disabilities, behavioral disorders, sleep disturbances, progressive neurodegeneration, and early death. Neurodegeneration occurs in the severe forms of MPS I, and it is prominent in MPS III affected patients [106,108,182]. Elevated transcripts of CTSD, CTSS, and CTSZ were detected in the cortex of MPS I and MPS IIIB mouse models [166]. Abnormal activity of CTSD in the cerebral cortex correlated with locomotion disorders and neuropathology in MPS I mice [167]. Up-regulation of CTSB was observed in the brain of MPS IIIA mice [163]. Overexpression of CTSB and CTSD was also found in the proteomic profile of MPS I mouse brain tissues [164]. Interestingly, enhanced expression and activity of CTSB were associated with increased deposition of amyloid plaques in the MPS I mouse brain, and the existence of a novel CTSB-associated amyloidogenic pathway leading to neurodegeneration was highlighted [165]. Since CTSB is a crucial regulator of the NLRP3 inflammasome, it likely contributes to the inflammasome-dependent pathway involved in MPS neuroinflammation [183].
On the other hand, the cysteine CTSB and the aspartate CTSD result to be up-regulated in a variety of neurological disorders [184][185][186]. The protease CTSS is preferentially expressed in cells of the macrophage/monocyte lineage, and inflammation stimulates its secretion from the microglia and macrophages [185,187]. The involvement of microglial CTSB, CTSD, and CTSS in neurodegenerative diseases supports the view that microglia-driven neuroinflammation contributes to the progression of neurodegeneration in MPS I and IIIB [183,188,189]. Indeed, molecular evidence of microgliosis has been well established in mouse and dog models of MPS I and MPS III A, B, and C subtypes [166,182,188,190]. Inhibition of CTSB has been shown to prevent neuronal death and behavioral disorders in a patient affected by the Niemann-Pick disease type A and in a mouse model of the disease [191]. Although further studies are needed to fully elucidate the pathophysiological role of CTSs in the CNS, the above findings strongly suggest that the specific inhibition of microglial CTSs might lead to neuroprotective outcomes in MPS phenotypes characterized by activated pro-inflammatory microglia. Neurological phenotypes are also common in other MPS types than MPS I and III. Indeed, patients affected by the severe forms of Hunter syndrome (MPS II), which account for about 75% of the cases, exhibit impairment of cognitive skills, mental retardation, intense neurobehavioral symptoms, and death in the second decade of life [192]. In agreement with the previous findings in MPS I and III subtypes, a transcriptome analysis of the brain from the MPS II mouse model showed CTSD up-regulation in the cerebral cortex of affected mice [170]. The same study also highlighted dysregulation of CTSA, CTSC, CTSH, CTSL, and CTSS gene expression in MPS II mouse brain, although with different trends (up/down-regulation) in the cerebral cortex and the midbrain/diencephalon/hippocampus areas. Variation in CTS gene expression between different brain regions was also observed in the MPS VII mouse model, thus suggesting that different neuropathologic mechanisms may predominate in the different areas of brains [174]. Furthermore, a transcriptome analysis of MPS VII mouse brain showed that CTSA, CTSB, CTSC, CTSD, CTSH, CTSS, and CTSZ were highly up-regulated in all brain regions, while CTSK was only changed in the brain stem and was down-regulated. Integrated analysis of proteome and transcriptome changes confirmed CTSS and CTSZ dysregulation in the MPS VII mouse hippocampus [175]. MPS VII affected patients present a broad clinical spectrum of symptoms from severe to milder phenotypes; however, most of them display intellectual disabilities together with delayed speech development, hearing impairment, and behavioral disturbances [193]. The accumulation of undigested GAGs in the lysosomes of connective tissue cells and chondrocytes is responsible for musculoskeletal abnormalities commonly observed in almost all MPS subtypes [100][101][102][103][104][105][106][194][195][196]. However, in MPSs, the pathogenesis of the skeletal and joint disease, including growth impairment, may involve complex molecular mechanisms underlying alterations of cartilage and bone metabolism, as well as inflammatory pathways [197]. Indeed, metabolic inflammation is a significant cause of osteoarticular symptoms in MPS disorders [183,198]. On the other hand, CTSs have long been involved in skeletal and bone health and disease [199]. 
Recently, up-regulation of CTSA, CTSH, and CTSZ has been detected through transcriptomic and proteomic analyses in a rat model of spinal cord injury [200]. In VCP (valosin containing protein) knockout mice, up-regulation of CTSB and CTSD in skeletal muscle correlated with activation of the transcription factor EB [201]. The cysteine protease CTSK, which has long been known as a molecular marker of differentiated osteoclasts and is directly involved in the degradation of bone matrix proteins [201][202][203], plays a crucial role in skeletal pathologies frequently observed in MPSs and other LSDs as well [168]. In the murine model of MPS I, the accumulation of GAGs in bones had an inhibitory effect on CTSK activity, resulting in impaired osteoclast activity and decreased cartilage resorption, thus contributing to the bone pathology seen in the disease [169]. This finding makes CTSK a candidate therapeutic target for MPS types where current therapies have a limited effect on skeletal conditions. Selective CTSG inhibitors have recently been designed with the potential to improve chronic inflammatory diseases [205]. A variety of CTSC inhibitors have been developed and evaluated in preclinical/clinical trials to regulate serine protease activity in inflammatory and immunologic conditions [27,28]. Pepstatin A is a potent inhibitor of CTSD [227]. Although Pepstatin A can target aspartyl proteases other than CTSD, it is 26,000 times more specific for CTSD (Ki = 0.5 µmol/L) than for its next target, renin (Ki = 13,000 µmol/L). Pepstatin A has proven effective in slowing down chronic kidney disease progression [30,32] and fatty liver disease [209] in experimental animal models. Due to the low bioavailability of this peptidic inhibitor, efforts have been made to design Pepstatin A analogues more suitable for the treatment of human diseases [228]. Highly specific and potent small-molecule inhibitors of CTSD other than Pepstatin A have been developed for the treatment of non-alcoholic fatty liver disease [229], and CTSD targeting by natural products has been shown to be beneficial in cancer chemoprevention [216]. The aspartic protease CTSE plays an essential role in antigen processing within the class II MHC pathway [48]; therefore, broad inhibition of CTSE can lead to undesirable side effects. Selective inhibitors of CTSE and CTSB, both involved in the polarization of microglia/macrophages in neurotoxic phenotypes leading to hypoxia/ischemia, are being tested as pharmacological agents for the treatment of ischemic brain injury [212]. Moreover, CTSS inhibitors have shown neuroprotective and anti-inflammatory effects in preclinical studies for the treatment of neurodegenerative diseases [184], although the essential role of CTSS in CNS homeostasis might limit its therapeutic applications [204]. However, molecular modeling-assisted design of CTSS inhibitors has provided novel scaffolds for improved CTSS inhibition [217]. Experimental evidence suggests that inhibition of CTSS attenuates the progression of atherosclerosis during chronic kidney disease [84], improves blood sugar levels in type 2 diabetes [208], and prevents autoantigen presentation and autoimmunity [229]. CTSK inhibitors have proven successful in improving osteoporosis [72,73,230]; however, concerns have emerged over off-target effects of the inhibitors against other CTSs and CTSK inhibition at non-bone sites (i.e., skin, cardiovascular, and cerebrovascular sites).
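As a quick sanity check, the 26,000-fold selectivity quoted above follows directly from the ratio of the two inhibition constants given in the text:

\[
\text{selectivity} = \frac{K_i(\text{renin})}{K_i(\text{CTSD})} = \frac{13{,}000\ \mu\text{mol/L}}{0.5\ \mu\text{mol/L}} = 26{,}000
\]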
Recently, novel selective inhibitors for CTSK have been developed, showing beneficial effects on bone and cartilage in preclinical osteoarthritis models with a safety profile [71,206]. The use of CTS inhibitors in cellular and animal models have contributed to deepening our understanding of the mechanisms of action and biological functions of these proteolytic enzymes. Conclusions Neuropathology, skeletal and joint defects, and cardiac disorders are among the most prominent clinical manifestations of MPSs, which are refractory to the current therapies [106,108,109]. Although no studies are available in humans, investigations in animal models have shown a beneficial effect of CTS inhibition, especially in ameliorating cardiac disease in MPSs and other LSDs [161,162,171,173]. Since CTSs have also been shown to play a role in the onset and progression of neuropathology and skeletal disorders in MPSs, affected patients might gain benefits from treatments with CTS targeting-based drugs. Therefore, it would be of great interest to test the effectiveness of the new generation of highly selective CTS inhibitors in MPSs. They could be used alone or in combination with the current therapeutic approaches to improve the quality and duration of life of these patients. However, there is a need for further investigations on the effective role of distinct CTSs in the pathophysiology of MPSs to recognize them as key players in the fight against such incurable diseases.
5,818.4
2020-04-01T00:00:00.000
[ "Biology", "Medicine" ]
Endomembranes promote chromosome missegregation by ensheathing misaligned chromosomes
Ferrandiz et al. found that misaligned chromosomes, arising from errors in mitosis, can become "ensheathed" in endomembranes. Ensheathing biases chromosomes toward missegregation, leading to aneuploidy and micronucleus formation. The authors use a novel method to remove the ensheathing membranes, which allows the spindle to rescue the chromosome.
Introduction
Accurate chromosome segregation during mitosis is essential to prevent aneuploidy, a cellular state of abnormal chromosome number (Duijf and Benezra, 2013). Errors in mitosis that lead to aneuploidy can occur via different mechanisms. These mechanisms include mitotic spindle abnormalities (Ghadimi et al., 2000), incorrect kinetochore-microtubule attachments (Cimini et al., 2001), dysfunction of the spindle assembly checkpoint (Kalitsis et al., 2000), defects in cohesion (Daum et al., 2011), and failure of cytokinesis (Fujiwara et al., 2005). Some of these error mechanisms result in the missegregation of whole chromosomes, a process termed chromosomal instability (CIN). The majority of solid tumors are aneuploid, with higher rates of CIN, and so understanding the mechanisms of chromosome missegregation is an important goal of cancer cell biology. In addition, chromosome missegregation is associated with micronucleus formation, which is linked to genomic rearrangements that may drive tumor progression (Crasta et al., 2012; Ly et al., 2017; Liu et al., 2018). While the mitotic spindle has logically been the focus of efforts to understand chromosome missegregation, there has been less attention on other features of mitotic cells, such as intracellular membranes. In eukaryotic cells, entry into mitosis constitutes a large-scale reorganization of intracellular membranes. The nuclear envelope (NE) breaks down, while the ER and Golgi apparatus disperse to varying extents (Hepler and Wolniak, 1984; Warren, 1993). These organelle remnants, collectively termed "endomembranes", are localized toward the cell periphery, while the mitotic spindle itself is situated in an "exclusion zone" that is largely free of membranes and organelles (Bajer, 1957; Porter and Machado, 1960; Nixon et al., 2017). The endomembranes beyond the exclusion zone are densely packed, although the details of their ultrastructure vary between cell lines (Puhka et al., 2007; Lu et al., 2009, 2011; Puhka et al., 2012; Champion et al., 2017). This arrangement means that, although mitosis is open in mammalian cells, the spindle operates within a partially closed system. Several lines of evidence suggest that endomembranes must be cleared from the exclusion zone for the mitotic spindle to function normally (Vedrenne et al., 2005; Schlaitz et al., 2013; Champion et al., 2019; Kumar et al., 2019; Merta et al., 2021). In addition, it is thought that this arrangement is required to concentrate factors needed for spindle formation (Schweizer et al., 2015). This study was prompted by a simple question: What happens to misaligned chromosomes that find themselves beyond the exclusion zone? We show that such chromosomes become "ensheathed" in multiple layers of endomembranes. Chromosome ensheathing delays mitosis and increases the frequency of chromosome missegregation and subsequent formation of micronuclei. Using an induced organelle relocalization strategy, we demonstrate that clearance of endomembranes allows the rescue of chromosomes that were destined for missegregation.
Our findings indicate that endomembranes are a risk factor for CIN if the misaligned chromosomes go beyond the exclusion zone boundary during mitosis.
Results
Misaligned chromosomes outside the exclusion zone are ensheathed in endomembranes
During mitosis, the spindle apparatus is situated in a membrane-free exclusion zone. Outside the exclusion zone, the ER and NE, collectively called endomembranes, surround the mitotic spindle. We investigated the organization of endomembranes in mitotic cells using light microscopy and EM. First, we carried out live-cell imaging of mitotic RPE-1 cells that stably coexpress GFP-Sec61β and Histone H3.2-mCherry, stained with SiR-tubulin, to mark the ER, DNA, and microtubules, respectively. These images revealed a mitotic spindle-sized exclusion zone from which the GFP-Sec61β signal was absent (Fig. 1 A). Second, serial block face scanning electron microscopy (SBF-SEM) of mitotic RPE-1 cells showed that the ellipsoid exclusion zone is largely devoid of endomembranes, including mitochondria and other organelles. Outside the exclusion zone, endomembranes are tightly packed, and the border between these two regions is clearly delineated and could be segmented (Fig. 1 B). Misaligned chromosomes are those that fail to attach or lose their attachment to the mitotic spindle. What happens to misaligned chromosomes that end up among the endomembranes beyond the exclusion zone? HeLa cells have high rates of chromosome misalignment, and live-cell imaging showed that misaligned chromosomes could be situated beyond the exclusion zone (Fig. 1 C). Reconstruction of SBF-SEM data from HeLa cells showed that three to four layers of endomembranes ensheath the chromosomes beyond the exclusion zone (Fig. 1 D and Videos 1 and 2). We use the term ensheathed to describe how these chromosomes are surrounded by endomembranes but not fully enclosed in any one layer, as though in a vesicle. To study chromosome ensheathing in diploid cell lines, we needed to artificially increase the frequency of misaligned chromosomes in mitosis. Our main model was RPE-1 cells pretreated with 150 nM GSK923295, a centromere protein E (CENP-E) inhibitor (Wood et al., 2010), before washing out the drug for 1 h (Fig. 2 A). In parallel, we also used a system of targeted Y-chromosome spindle detachment in DLD-1 cells (Fig. S1). Using live-cell imaging in both cell types, we observed that misaligned chromosomes beyond the exclusion zone are submerged in endomembranes (Figs. 2 B and S1 E). Next, we used an image analysis method to determine the location of kinetochores in 3D space and map these positions relative to the exclusion zone boundary (see Materials and methods; Fig. 2, C and D; and Fig. S1, F and G). Kinetochores of chromosomes that were not aligned at the metaphase plate therefore fell into two categories: those that were surrounded by GFP-Sec61β signal, termed ensheathed, and those that were not, termed free (Fig. 2, B and D). Spatial analysis revealed that the kinetochores of ensheathed chromosomes were beyond the exclusion zone, whereas kinetochores of free chromosomes lay at the boundary in RPE-1 cells (Fig. 2 D). In DLD-1 cells, the distinction was even clearer, with the kinetochores of free chromosomes positioned inside the exclusion zone (Fig. S1 F). The exclusion zone therefore approximately defines chromosome misalignment, with those chromosomes beyond the exclusion zone likely to be ensheathed by endomembranes. However, imaging GFP-Sec61β was required to verify that a chromosome was fully ensheathed.
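To make the exclusion-zone mapping step concrete, here is a minimal, hypothetical Python sketch of the kind of ratio described for Fig. 2 D (CP/CQ on a log2 scale). It is not the authors' analysis code, and it assumes the exclusion zone can be approximated as an axis-aligned ellipsoid centred on the centroid of the aligned kinetochores, which is a simplification.

```python
# Minimal sketch (hypothetical): position of each kinetochore relative to the exclusion
# zone boundary. C is the aligned-kinetochore centroid, P a kinetochore, and Q the point
# where the ray CP crosses the assumed ellipsoidal boundary; log2(CP/CQ) is <0 inside
# the exclusion zone and >0 beyond it.
import numpy as np

def exclusion_zone_ratio(kinetochores: np.ndarray, aligned: np.ndarray, semi_axes: np.ndarray) -> np.ndarray:
    """kinetochores: (N, 3) positions; aligned: (M, 3) aligned-kinetochore positions;
    semi_axes: (3,) ellipsoid semi-axis lengths (same units, e.g. µm)."""
    C = aligned.mean(axis=0)                      # centroid of aligned kinetochores
    V = kinetochores - C                          # vectors C -> P
    CP = np.linalg.norm(V, axis=1)                # distance to each kinetochore
    # A point t*u on the ellipsoid along unit direction u satisfies sum((t*u_i/a_i)^2) = 1,
    # so the boundary distance is CQ = 1 / sqrt(sum((u_i / a_i)^2)).
    u = V / CP[:, None]
    CQ = 1.0 / np.sqrt(((u / semi_axes) ** 2).sum(axis=1))
    return np.log2(CP / CQ)

# Toy example (µm): a 10 x 6 x 6 µm exclusion zone and two test kinetochores.
aligned = np.random.normal(loc=[0, 0, 0], scale=0.5, size=(20, 3))
kts = np.array([[2.0, 0.0, 0.0],    # well inside the exclusion zone -> ratio < 0
                [9.0, 4.0, 0.0]])   # beyond the boundary -> ratio > 0
print(exclusion_zone_ratio(kts, aligned, semi_axes=np.array([10.0, 6.0, 6.0])))
```

With this convention, aligned and free kinetochores give values at or below zero while ensheathed kinetochores give positive values, matching the sign convention used in the figure.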
We again used SBF-SEM to observe how chromosomes beyond the exclusion zone interact with endomembranes in RPE-1 cells. Cells observed by fluorescence microscopy to have at least one ensheathed chromosome were selected for 3D EM analysis (Fig. 2 E). Segmentation of these datasets confirmed that the chromosome was fully beyond the exclusion zone boundary (Fig. 2 F and Video 3) and was ensheathed in several layers of endomembranes (Fig. 2 E). The observation of ensheathed chromosomes raised immediate questions about their fate and whether ensheathing leads to aberrant mitosis.
Figure 2 legend (in part): Confocal micrographs show that these misaligned chromosomes (SiR-DNA, red) are either outside the exclusion zone delineated by GFP-Sec61β (green), termed ensheathed, or at the boundary and inside the exclusion zone, termed free. Scale bars, 10 µm; 1 µm (inset). (C) Spatially averaged 3D view of all CENP-C-positive kinetochores in the dataset (see Materials and methods). Small gray points represent kinetochores at the metaphase plate. Colored points represent misaligned chromosomes that were ensheathed (orange) and those that were not (free, blue). Spindle poles are shown in black. (D) Box plot to show the relative position of each kinetochore relative to the exclusion zone boundary. Chromosome misalignment was induced by pretreatment with GSK923295 (150 nM). Ratios of kinetochores within the exclusion zone are <0 and those within the ER are >0 on a log2 scale. Dots represent kinetochore ratios from 31 RPE-1 cells at metaphase. Boxes show IQR, the bar represents the median, and whiskers show the 9th and 91st percentiles. Inset: Schematic diagram to show how the position of kinetochores relative to the exclusion zone boundary was calculated. C is the centroid of aligned kinetochores, P is a kinetochore, and Q is the point along the 3D path (CP) that intersects the exclusion zone boundary. The ratio of CP to CQ is taken for each kinetochore (aligned kinetochores, gray; free, blue; ensheathed, orange).
Ensheathed chromosomes delay mitotic progression
To determine the impact of ensheathed chromosomes on cell division, we first analyzed mitotic progression in RPE-1 cells stably expressing GFP-Sec61β with induction of ensheathed chromosomes using GSK923295 pretreatment. Cells that had at least one ensheathed chromosome showed prolonged mitosis (median NE breakdown [NEB]-to-anaphase timing of 66 min compared with 27 min in GSK923295-pretreated cells in which all chromosomes were aligned). The time to align the majority of chromosomes (NEB-to-metaphase) was delayed for cells with either a free or an ensheathed chromosome, but cells with an ensheathed chromosome had an additional delay to progress to anaphase (Fig. 3 A). Given these delays, we next confirmed that the spindle assembly checkpoint was active in these cells. The amount of Mad2 and Bub1 detected by immunofluorescence at CENP-C-positive kinetochores of free or ensheathed chromosomes was similar and was four-fold higher than at kinetochores of aligned chromosomes (Fig. 3, B and C; and Fig. S2, A and B, for DLD-1 cells). Using live-cell imaging, we found that GFP-Mad2 was recruited to kinetochores of ensheathed chromosomes (Fig. 3, D and E; and Video 4). Semiautomated 4D tracking of chromosomes allowed us to monitor their GFP-Mad2 status over time, relative to anaphase onset. These data revealed that GFP-Mad2 is lost from ensheathed chromosomes with similar kinetics to the signals at misaligned chromosomes that successfully congress to the metaphase plate (Fig. 3 E).
The failure of ensheathed chromosomes to congress is likely due to a lack of microtubule attachment, suggesting that endomembranes inhibit chromosome-microtubule interactions. We confirmed that ensheathed chromosomes have no stable end-on kinetochore-microtubule attachments by detecting colocalization of kinastrin, a marker for stable end-on attachment (Dunsch et al., 2011), with kinetochores of aligned and misaligned chromosomes (Fig. S3, A-C). Live-cell imaging of RPE-1 cells stably coexpressing Histone H3.2-mCherry and GFP-Sec61β, stained with SiR-Tubulin, showed that ensheathed chromosomes that failed to congress had no detectable microtubule contacts; free chromosomes that had microtubule contacts could be rescued and aligned at the metaphase plate, albeit after a delay (Fig. S3,D and E). These results suggest that ensheathed chromosomes hinder mitotic progression in a spindle assembly checkpoint-dependent manner. Lack of microtubule contact is sensed by the spindle assembly checkpoint, but ultimately, the checkpoint is extinguished in the absence of congression after a long delay. The cells then proceed to anaphase, resulting in missegregation of the ensheathed chromosome. Ensheathed chromosomes promote formation of micronuclei To understand the fate of cells with an ensheathed chromosome, we next examined mitosis in control or GSK923295-pretreated RPE-1 cells stably expressing GFP-Sec61β using live-cell spinning disc microscopy ( Fig. 4 A). In cells with an ensheathed chromosome, we observed the long delay in mitosis relative to control cells, and that mitosis was often resolved by missegregation and formation of a micronucleus (Figs. 4 A and S2 C for DLD-1 cells). These experiments suggested that ensheathed chromosomes are potentially a precursor to micronuclei. We therefore followed the fate of mitotic cells by long-term live-cell imaging to understand the likelihood of mitotic outcomes. Our sample of cells pretreated with GSK923295 included the three metaphase classes: aligned (25.8%), free (5.4%), and ensheathed (65.6%). The most frequent fate of cells with an ensheathed chromosome was micronucleus formation (39%). Of the 47 cells that formed a micronucleus after division in the dataset, 46 were from the ensheathed class ( Fig. 4 B). This promotion of micronucleus formation was significant in cells with an ensheathed chromosome compared to free (P = 1.3 × 10 −3 , Fisher's exact test). A smaller proportion of cells with an ensheathed chromosome exited mitosis normally, albeit with a delay (34%), with the remainder showing other defects or death (20% or 8%). Cells pretreated with GSK923295, that had aligned all their chromosomes, had similar fates to parental and control cells (Fig. 4 B; and Videos 5 and 6). These fate-mapping experiments suggest that ensheathing of chromosomes by endomembranes promotes the formation of micronuclei. Micronuclei formed from ensheathed chromosomes have a disrupted NE Micronuclei can undergo a collapse of their NE, which manifests as ER tubules invading the micronuclear space (Hatch et al., 2013). We therefore asked if micronuclei that formed from ensheathed chromosomes were similarly defective. Using confocal imaging of RPE-1 cells stably coexpressing GFP-Sec61β and either mCherry-BAF or LBR-mCherry that were fixed 8 h after washout of GSK923295 to examine micronucleus integrity, we found that the majority of micronuclei have ER inside the micronucleus (Fig. 5). 
The fluorescence of GFP-Sec61β was higher at the micronucleus compared with the main nucleus (Fig. 5 B). Moreover, the levels of either mCherry-BAF or LBR-mCherry were correlated with GFP-Sec61β. To confirm that these micronuclei had disrupted NEs, we stained for H3K27Ac, a modification to Histone H3 that is removed by exposure to the cytoplasm (Mammel et al., 2021). Intact micronuclei had H3K27Ac signals similar to those of the corresponding main nucleus, whereas in micronuclei that were disrupted, the signal was lost (Fig. 5 A). The ratio of H3K27Ac signal at the micronucleus compared with the main nucleus was anticorrelated with the ratios of GFP-Sec61β, mCherry-BAF, and LBR-mCherry (Fig. 5 B). Since the majority of micronuclei formed after pretreatment of RPE-1 cells with GSK923295 are derived from ensheathed chromosomes (Fig. 4 B), these data suggest that the ensheathing process may contribute to the formation of a defective micronuclear envelope. However, due to the low rates of missegregation of free chromosomes, it was not possible to conclude whether disruption was specific to chromosome ensheathing.
Induced relocalization of ER enables the rescue of ensheathed chromosomes
Does ensheathing of misaligned chromosomes cause chromosome missegregation? To answer this question, we sought a way to clear the mitotic ER and test whether this enabled subsequent rescue of misaligned chromosomes to the metaphase plate. To clear the mitotic ER, we used an induced relocalization strategy (Fig. 6 A). Induced relocalization of small organelles has been demonstrated for Golgi, intracellular nanovesicles, and endosomes, typically using heterodimerization of FKBP-rapamycin-FRB, with the FKBP domain fused to the organelle and the FRB domain at the mitochondria (Dunlop et al., 2017; Hirst et al., 2015; Larocque et al., 2020; van Bergeijk et al., 2015). We reasoned that a large organellar network, such as the ER, may be cleared by inducing its relocalization to the cell boundary. Our strategy therefore comprised an ER-resident hook (FKBP-GFP-Sec61β) and a plasma membrane anchor (stargazin-mCherry-FRB), with application of rapamycin predicted to induce the relocalization of ER to the plasma membrane (Fig. 6 A).
Figure 3 legend (in part): Cumulative frequencies for NEB to metaphase (NEB-Meta) and metaphase to anaphase (Meta-Ana) are shown. RPE-1 cells stably expressing GFP-Sec61β were treated with 150 nM GSK923295 for 3 h before washout. Three classes of metaphase were seen: all chromosomes aligned (Aligned, n = 29), cells with one or more free chromosomes (Free, n = 11), and cells with one or more ensheathed chromosomes (Ensheathed, n = 107). Timing of untreated parental (Parental, n = 69) and stable RPE-1 (Control, n = 52) cells is also shown. Inset in Meta-Ana shows the same data on an expanded time scale. Comparison of NEB-Meta and Meta-Ana timing distributions for ensheathed vs. control, P = 1.9 × 10^−57 and 7.8 × 10^−23, Kolmogorov-Smirnov test. (B) Micrographs of immunofluorescence experiments to detect Bub1 or Mad2 (SAC, red) at kinetochores (CENP-C, blue) in cells stably expressing GFP-Sec61β (green); DAPI-stained DNA is shown in gray. Scale bars, 10 µm; 2 µm (insets). (C) Quantification of Bub1 and Mad2 immunofluorescence at kinetochores marked by CENP-C. Ensheathed chromosomes were classified using the GFP-Sec61β signal. Dots represent kinetochores, boxes show IQR, the bar represents the median, and whiskers show the 9th and 91st percentiles (Bub1: n_A = 132, n_F = 30, n_E = 37; Mad2: n_A = 103, n_F = 20, n_E = 31). (D) Stills from live-cell imaging experiments to track Mad2 levels at kinetochores of ensheathed chromosomes. A GSK923295-pretreated RPE-1 cell is shown, stably coexpressing GFP-Mad2 (green) and mCherry-Sec61β (red); DNA is stained using SiR-DNA (blue). Time relative to anaphase is shown in minutes. Insets show 2× zoom of the indicated ROI. Scale bars, 10 µm; 2 µm (insets). (E) Quantification of live Mad2 imaging experiments. Kaplan-Meier plot to show congression times of the last misaligned chromosome to align. Measurement of mCherry-Sec61β (mean ± SD) and GFP-Mad2 is shown for the misaligned chromosomes that congressed and those that were missegregated (misseg). A linear regression fit with 95% confidence intervals is shown for GFP-Mad2. All plots are shown in time (minutes) relative to anaphase onset. Total cells with misaligned chromosomes, n = 72; cells where all chromosomes congressed, n = 56; and where there was missegregation, n = 16.
HCT116 cells were used for these experiments, as they are near diploid and easy to transfect, and they showed a fate and mitotic response to GSK923295 pretreatment similar to those of RPE-1 cells (Fig. S4). We found that the clearance of ER in mitotic cells with this strategy was efficient, occurring in 89.2% of HCT116 cells expressing the system after treatment with 200 nM rapamycin. Onset was variable, with a median time to maximum clearance of 15 min (interquartile range [IQR], 12-24 min; Fig. 6 B). Importantly, induced relocalization of FKBP-GFP-Sec61β to the plasma membrane represented the clearance of ER and not the extraction of the protein. First, immunostaining of two other endogenous ER-resident proteins, KDEL and calnexin, also showed relocalization to the plasma membrane (Fig. 6 C). Second, SBF-SEM imaging allowed us to observe the relocalization of ER to the plasma membrane (Fig. 6 D). Here, the expansion of the exclusion zone and the direct attachment of hundreds of ER tubules to the plasma membrane could be unambiguously visualized. We next tested whether ER clearance could be used as an intervention in cells with ensheathed chromosomes. To do this, HCT116 cells expressing FKBP-GFP-Sec61β and stargazin-mCherry-FRB, pretreated with 150 nM GSK923295 to induce ensheathed chromosomes, were imaged as 200 nM rapamycin was applied to induce clearance of the ER. In control cells where no rapamycin was applied, the cells were arrested in mitosis for prolonged periods. In cells where the ER had been cleared, congression of the ensheathed chromosome was clearly seen after clearance had occurred (Fig. 7 A and Video 7). We used automated image analysis to track the 3D position of the misaligned chromosome over time, in an unbiased manner (Fig. 7, B-C). Congression of the ensheathed chromosome within 80 min was seen in 86.7% of cells with induced ER clearance. In control cells, the majority (66.7%) were unable to resolve the ensheathed chromosome in the same time (Fig. 7, A-C). These data suggest that ER clearance is an effective intervention in cells with ensheathed chromosomes and point to a causal role for endomembranes in chromosome missegregation.
Discussion
This study demonstrates that misaligned chromosomes located beyond the exclusion zone are liable to become ensheathed by endomembranes. The fate of cells with ensheathed chromosomes is biased toward missegregation, aneuploidy, and micronucleus formation.
We showed that if the ER was cleared by induced relocalization in live mitotic cells, these chromosomes could be rescued by the mitotic spindle, an intervention which suggests that chromosome ensheathing by endomembranes is a risk factor for chromosome missegregation and subsequent aneuploidy. Chromosomes can become misaligned during mitosis for a number of reasons, but we show here that those that transit out of the exclusion zone become ensheathed in endomembranes. We demonstrated this with four different cell models: RPE-1 or HCT116 cells pretreated with a CENP-E inhibitor, DLD-1 cells with targeted disconnection of the Y-chromosome, and HeLa cells with spontaneously arising misaligned chromosomes. In each case, misaligned chromosomes beyond the exclusion zone typically became ensheathed in endomembranes. Although the morphology of mitotic endomembranes varies between cell lines (Puhka et al., 2007;Lu et al., 2009Lu et al., , 2011Puhka et al., 2012;Champion et al., 2017), all ensheathed chromosomes were draped in several layers of endomembranes. We use the term ensheathed to describe how these chromosomes are surrounded by endomembranes but not fully enclosed in any one layer as though in a vesicle. The ensheathing membrane follows the contours of the chromosome closely. Our SBF-SEM analysis did not uncover any obvious electron-dense connections between the ensheathed chromosome and its surrounding membranes, although a previous report indicated that exogenous DNA clusters may physically interact with mitotic ER (Wang et al., 2016). A major finding of our work is that ensheathing promotes missegregation and micronucleus formation. Our 3D EM images of ensheathed chromosomes show that microtubules face a difficult task to negotiate several layers of endomembranes to make the contact between kinetochore and spindle that is necessary for rescue and alignment. In cases where contact is made, endomembranes are also likely to impair the congression of the chromosome, as suggested by a recent study in which excess ER was shown to slow chromosome motions (Merta et al., 2021). Since endomembranes are a risk factor for missegregation, their precise organization-for example the sheet-to-tubule ratio of the ER-may influence the likelihood for missegregation (Champion et al., 2017). The lack of attachment is sufficient to prolong spindle assembly checkpoint signaling and delay mitosis. Ultimately, the cells progress to anaphase and missegregate, likely due to checkpoint exhaustion after prolonged metaphase (Uetake and Sluder, 2010;Yang et al., 2008). Whatever the mechanism, the role of endomembranes in promoting missegregation may be important for tumor progression. It is possible that in tumor cells that are aneuploid, endomembranes may contribute to the higher rates of CIN observed (Funk et al., 2016;Nicholson and Cimini, 2015). In non-transformed cells, misaligned chromosomes that arise spontaneously are more often of the free class, suggesting that the ensheathing mechanism described here is most relevant in a cancer context. The fate of cells with ensheathed chromosomes was biased toward missegregation and formation of micronuclei. Interestingly, a previous study found that artificially tethering endomembranes to aligned chromosomes within the exclusion zone caused mitotic errors, although the outcome was dependent on at what stage tethering was induced (Champion et al., 2019). 
Tethering before mitotic entry resulted in segregation errors and multilobed nuclei, whereas tethering during metaphase had little consequence. Although conceptually similar, the ensheathing process reported here is a natural consequence of a misaligned chromosome becoming entangled in endomembranes. Key differences include the position of the ensheathed chromosome, the lack of microtubule attachments, no direct membrane-chromosome tethering, and multiple vs. single endomembrane layers; these likely explain the different observed mitotic phenomena. We found that the micronuclei that result from ensheathed chromosomes had disrupted envelopes 8 h after release from CENP-E inhibition. Rupture of micronuclei has been shown to lead to DNA damage and activation of innate immune and cell invasion pathways (Hatch et al., 2013; Mammel et al., 2021; Bakhoum et al., 2018). The presence of ER in the micronuclear space of disrupted micronuclei indicates that ensheathing may increase the likelihood of rupture. We speculate that this may occur by endomembranes physically interfering with envelope reformation at the micronucleus, although it is possible that ER is present in the micronuclear space as a consequence, rather than a cause, of disruption. Mitosis in human cells is open, yet we have known for >60 yr that the spindle exists in a membrane-free ellipsoid exclusion zone (Bajer, 1957; Porter and Machado, 1960; Nixon et al., 2017). It seems intuitive that the spindle must operate in a membrane-free area to avoid errors, but recent work suggests that the exclusion zone is actively maintained and that this arrangement is important for concentrating factors for spindle assembly (Schweizer et al., 2015) or for maintenance of spindle structure (Kumar et al., 2019; Schlaitz et al., 2013). We found that ER clearance, via an induced relocalization strategy, could be used as an intervention to improve the outcome for mitotic cells with ensheathed chromosomes.
Figure 6 legend (in part): An automated segmentation procedure was used to monitor ER localization in mitotic cells. The time at which the largest decrease in ER localization occurred was taken (n = 35-37, see Materials and methods). Random occurrence is shown for comparison. The median (IQR) ER clearance time in rapamycin-treated cells was 15 (12-24) min; rapamycin is applied after the first frame (T = 0). (C) Induced relocalization of FKBP-GFP-Sec61β to the plasma membrane causes ER clearance. Typical immunofluorescence micrographs of mitotic HCT116 cells pretreated with GSK923295, expressing FKBP-GFP-Sec61β (green) and stargazin-mCherry-FRB (blue), treated or not with rapamycin (200 nM). Cells were stained for the ER markers KDEL or calnexin as indicated (red); DNA was stained with DAPI (gray). Insets are 2× expansions of the ROI shown. Scale bars, 10 µm; 1 µm (insets). (D) SBF-SEM imaging of control or ER-cleared (rapamycin) mitotic HCT116 cells. A single slice is shown with segmentation of ER (green), plasma membrane (yellow), mitochondria (blue), and chromosomes (red). Scale bars, 5 µm; 1 µm (insets). Insets are 2× expansions of the indicated ROI shown without segmentation; green arrowheads indicate ER attachment to the plasma membrane.
Induced relocalization of small organelles has previously been demonstrated (Dunlop et al., 2017; Hirst et al., 2015; Larocque et al., 2020; van Bergeijk et al., 2015), but the movement of a large organellar network by similar means had not been attempted previously.
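The Fig. 6 legend above describes taking the ER clearance time as the frame with the largest drop in ER signal. Here is a minimal, hypothetical sketch of that kind of readout (not the authors' segmentation pipeline; the frame interval and the toy trace are assumptions):

```python
# Minimal sketch (hypothetical): estimating the ER "clearance time" from a per-frame
# measure of ER signal remaining around the spindle, as the frame showing the largest
# frame-to-frame decrease in that measure. The frame interval is an assumed value.
import numpy as np

def clearance_time(er_fraction: np.ndarray, frame_interval_min: float = 3.0) -> float:
    """er_fraction: 1D array, fraction of ER signal not yet at the plasma membrane, per frame.
    Returns the time (minutes after rapamycin at T = 0) of the largest decrease."""
    drops = np.diff(er_fraction)              # negative values are decreases
    t_index = int(np.argmin(drops)) + 1       # frame just after the biggest drop
    return t_index * frame_interval_min

# Toy trace: ER signal stays high, then relocalizes to the plasma membrane around frame 5.
trace = np.array([1.00, 0.98, 0.97, 0.95, 0.90, 0.45, 0.30, 0.25, 0.24])
print(clearance_time(trace))  # -> 15.0 min with a 3-min frame interval
```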
Surprisingly, ER clearance in mitotic cells was efficient, although it was much slower than the relocalization of intracellular nanovesicles, taking tens of minutes rather than tens of seconds (Larocque et al., 2020). We speculate that the efficiency of clearance is due to cooperativity of relocalization, since the FKBP-GFP-Sec61β molecules are dispersed in the ER, which is interconnected. These experiments were important to show that ensheathing was causal for chromosome missegregation. We note that this method has many future applications: to selectively perturb mitotic structures, at defined times, during cell division. For example, ER clearance and concomitant expansion of the exclusion zone is an ideal manipulation to probe the function of this enigmatic cellular region.

Materials and methods
The RPE-1 GFP-Sec61β stable cell line was generated by Fugene-HD (Promega) transfection of pAc-GFPC1-Sec61β. DLD-1-WT mCherry-Sec61β and DLD-1-C-H3 mCherry-Sec61β stable cell lines were generated by GeneJuice (Merck Millipore) transfection of mCherry-Sec61β into the respective parental lines. Individual clones were isolated by G418 treatment (500 μg ml−1) and validated using a combination of Western blot, FACS, and fluorescence microscopy. Stable coexpression of Histone H3.2-mCherry, mCherry-BAF, or LBR-mCherry with GFP-Sec61β in RPE-1 cells was achieved by lentiviral transduction of cells stably expressing GFP-Sec61β. For stable expression of GFP-Mad2 with mCherry-Sec61β, dual lentivirus transduction was used. Individual cells positive for GFP and mCherry signal were sorted by FACS, and single-cell clones were validated by fluorescence microscopy. Note that the transgenic expression of GFP-Sec61β is associated with downregulation of endogenous Sec61β (Fig. S5 A). Transient transfections of HCT116, RPE-1, and HeLa were done using Fugene-HD or GeneJuice according to the manufacturer's instructions. For lentiviral transduction, HEK293T packaging cells were incubated in DMEM supplemented with 10% FBS, 2 mM L-glutamine, and 25 µM chloroquine diphosphate (C6628; Sigma-Aldrich) for 3 h. Transfection constructs were prepared at 1.3 pM psPAX2, 0.72 pM pMD2.G, and 1.64 pM transfer plasmid (encoding the tagged protein to be expressed) in Opti-Pro SFM. Polyethylenimine diluted in Opti-Pro SFM was prepared separately at a 1:3 ratio with DNA (wt/wt, DNA:polyethylenimine) in the transfection mixture. Transfection mixes were combined, incubated at room temperature for 15-20 min, and then added to the packaging cells. Cells were incubated for 18 h, after which the medium was replaced with DMEM supplemented with 10% FBS and 100 U ml−1 penicillin/streptomycin. Viral particles were harvested 48 h after transfection. Viral supernatant was centrifuged and filtered before applying to target cells. Target cells were infected through incubation in medium containing 8 μg ml−1 polybrene (408727; Sigma-Aldrich) for 16-20 h. Medium was replaced with complete medium, and cells were screened after 24 h. All incubations were in a humidified incubator at 37°C and 5% CO2. To induce misaligned chromosomes in RPE-1 or HCT116 cell lines, cells were incubated in complete medium containing 150 nM GSK923295 (Selleckchem) for 3 h before release of cells from treatment. For fixed-cell experiments, release was for 1 h. To induce the auxin-degron system in DLD-1 cells, 500 µM indole-3-acetic acid (A10556; Thermo Fisher Scientific) and 500 μg ml−1 doxycycline (D9891; Sigma-Aldrich) were added to the medium, and cells were incubated for 24 h.
ER clearance was induced through application of rapamycin (Alfa Aesar) at a final concentration of 200 nM to HCT116 cells expressing FKBP-GFP-Sec61β and Stargazin-mCherry-FRB. For fixed-cell experiments, rapamycin treatment was for 30 min. For FISH of DLD-1 WT and DLD-1-C-H3 cells, the degron system was induced, and cells were synchronized by double thymidine (2.5 mM) treatment. Samples were fixed in Carnoy's fixative (3:1 vol/vol methanol:glacial acetic acid) for 5 min at room temperature, rinsed in fixative before addition of fresh fixative, and incubated for a further 10 min. Samples were rinsed in distilled water before FISH probe denaturation and hybridization following the manufacturer's protocol (Xcyting Centromere Enumeration Probe, XCE Y green, D-0824-050-FI; MetaSystems Probes). To label chromosomes or microtubules for fixed- or live-cell imaging, cells were incubated for 30 min with 0.5 µM SiR-DNA or SiR-Tubulin (Spirochrome), respectively.

Microscopy
For mitotic progression and fate experiments, the DeltaVision system described above was used. For live-cell imaging of HeLa cells, a spinning disc confocal system (UltraView VoX; PerkinElmer) with a 60×, 1.40 NA, oil, Plan Apo VC objective (Nikon) was used. Images were captured using an ORCA-R2 digital charge-coupled device camera (Hamamatsu) after excitation with 488- and 561-nm lasers and 405/488/561/640-nm dichroic and 525/50, 615/70 filter sets. Images were captured using Volocity 6.3.1. All microscopy data were stored in an OMERO database in native file formats.

SBF-SEM
To prepare samples for SBF-SEM, RPE-1 GFP-Sec61β cells on gridded dishes were first incubated with 150 nM GSK923295 (Selleckchem) for 3 h to induce misaligned chromosomes, before release of cells from treatment and incubation for ∼30 min with 0.5 µM SiR-DNA (Spirochrome) to visualize DNA. HeLa cells on gridded dishes were not treated and were not stained. Using live-cell light microscopy, cells with an ensheathed chromosome were selected for SBF-SEM. Fluorescent and bright-field images of the selected cell were captured, and the coordinate position was recorded. Cells were washed twice with phosphate buffer (PB) before fixing (2.5% glutaraldehyde, 2% paraformaldehyde, 0.1% tannic acid [low molecular weight] in 0.1 M phosphate buffer, pH 7.4) for 1 h at room temperature. Samples were washed three times with PB and then postfixed in 2% reduced osmium (equal volumes of 4% OsO4 prepared in water and 3% potassium ferrocyanide in 0.1 M PB solution) for 1 h at room temperature, followed by a further three washes with PB. Cells were then incubated for 5 min at room temperature in 1% (wt/vol) thiocarbohydrazide solution, followed by three PB washes. A second osmium staining step was included, incubating cells in a 2% OsO4 solution prepared in water for 30 min at room temperature, followed by three washes with PB. Cells were then incubated in 1% uranyl acetate solution at 4°C overnight. This was followed by a further three washes with PB. Walton's lead aspartate was prepared by adding 66 mg lead nitrate (TAAB) to 9 ml of 0.03 M aspartic acid solution at pH 4.5, and then adjusting to a final volume of 10 ml with 0.03 M aspartic acid solution and to pH 5.5 (pH adjustments with KOH). Cells were incubated in Walton's lead aspartate for 30 min at room temperature and then washed three times in PB.
Samples were dehydrated in an ethanol dilution series (30, 50, 70, 90, and 100% ethanol, 5-min incubation in each solution) on ice, and then incubated for a further 10 min in 100% ethanol at room temperature. Finally, samples were embedded in an agar resin (AGAR 100 R1140; Agar Scientific).

Data analysis
Kinetochore position analysis was in two parts. First, the positions of kinetochores and spindle poles in hyperstacks were manually mapped using Cell Counter in Fiji. The kinetochore point sets were classified into three categories: those aligned at the metaphase plate and those that were misaligned, with the latter group subdivided into kinetochores of chromosomes that were ensheathed and those that were not (free). Second, the ER channel of the hyperstack was segmented in Fiji to delineate the exclusion zone. Next, the Cell Counter XML files and their respective binarized ER stacks were read by a program written in Igor Pro (WaveMetrics). To analyze the position of points relative to the exclusion zone in each cell, the ratio of two Euclidean distances was calculated (see Eq. 1), where C is the centroid of all aligned kinetochores, P_i is the position of a kinetochore, and Q_i is the point on the path from C through P_i at which the exclusion zone/ER boundary intersects the path. The ratio of these two distances gave a measure of how deep the point lay inside or outside the exclusion zone (on a log2 scale, 0 being on the boundary and 1 being as far outside the exclusion zone as the boundary is from the centroid). For analysis of live-cell GFP-Mad2 and mCherry-Sec61β imaging, a semiautomated 4D tracking procedure was used. Briefly, the DNA channel from these videos was used for segmentation of chromosomes and the metaphase plate as discrete 3D objects over time. The centroid-to-centroid distance was found for each chromosome relative to the plate (congression was taken as the merging of chromosome and plate objects), and the time of anaphase onset was determined. Fluorescence signals were taken from each chromosome object using a 3-pixel expansion of the region of interest (ROI). For mCherry-Sec61β, the mean voxel density was used. For GFP-Mad2, the maximum pixel intensity at each z position was taken from the expanded ROI and averaged per time point; this method gave a more accurate measure of Mad2 recruitment than the mean voxel density. Signals from each channel are expressed as a ratio of chromosome to plate. Mad2 signals were grouped by whether the chromosome congressed, and then measurements from all chromosomes relative to anaphase were used to fit a line by linear regression. Only the last chromosome to congress (or not) was analyzed per cell. Data processing was via Fiji/ImageJ followed by analysis in Igor Pro. Automated kinetochore-kinastrin colocalization used a script that located the 3D positions of kinetochores (CENP-C) and kinastrin puncta from thresholded images using 3D Object Counter in Fiji. These positions were loaded into Igor, and the Euclidean distance to the nearest kinastrin punctum from each kinetochore was found. ER clearance experiments were quantified using two automated procedures. First, ER, DNA, and plasma membrane were segmented separately, the plasma membrane segments were used to define the cell, and the total area of segmented ER within this region was measured for all z-positions over time using a Fiji macro. Data were read by Igor, and the ER volume over time was calculated.
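Relating back to the kinetochore position analysis above: Eq. 1 itself is not reproduced in this extracted text, but the log2-ratio form is inferred from the textual description. The sketch below is illustrative only, not the authors' Igor Pro code.

```python
import numpy as np

def exclusion_zone_depth(C, P, Q):
    """Depth of kinetochore P relative to the exclusion-zone boundary point Q,
    measured along the path from the aligned-kinetochore centroid C through P.
    Returns log2(|CP| / |CQ|): 0 on the boundary, positive outside, negative inside."""
    C, P, Q = (np.asarray(v, dtype=float) for v in (C, P, Q))
    return float(np.log2(np.linalg.norm(P - C) / np.linalg.norm(Q - C)))

# Illustrative coordinates only: a kinetochore twice as far from the centroid
# as the boundary point along the same path gives a depth of 1.0.
print(exclusion_zone_depth(C=(0, 0, 0), P=(6, 0, 0), Q=(3, 0, 0)))
```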
ER clearance manifested as a rapid decrease in ER volume, but the onset was variable. The derivative of ER volume over time was used to find the point of rapid decrease, and this point was used to define the time to ER clearance. Random fluctuations in otherwise constant ER volume over time also resulted in minima that occurred randomly; this process was modeled and plotted for comparison with the control group, where no clearance was seen. Second, the segmented DNA was classified into misaligned chromosome and main chromosome mass by a user blind to the conditions of the experiment. The 3D coordinates of these two groups were fed into Igor, where the centroids and boundaries of the chromosome and main chromosome mass were defined. The closest Euclidean distance between the centroid of the chromosome and the edge of the main chromosome mass was used as the distance. Misalignment, shown as a colorscale, is this distance normalized to the starting distance. Figures were made with Fiji, R, or Igor Pro and assembled using Adobe Illustrator.

Statistical testing
Comparison of mitotic timing distributions was done using a Kolmogorov-Smirnov test (P values are P_n[ε]). The effect of the presence of an ensheathed chromosome on mitotic fate (frequency of micronucleus formation) was examined using Fisher's exact test with no correction. Chromosome congression times were not normally distributed, and so the effect of ER clearance was determined using a Wilcoxon rank test. Exact P values are quoted for all tests, rather than using arbitrary levels of significance.

Online supplemental material
Fig. S1 shows ensheathed chromosomes in DLD-1 cells. Fig. S2 shows spindle assembly checkpoint activation and micronucleus formation in DLD-1 cells. Fig. S3 shows the lack of microtubule attachments of ensheathed chromosomes. Fig. S4 shows mitotic timing and fate of HCT116 cells pretreated with CENP-E inhibitor. Fig. S5 shows stable transgene expression in RPE-1 cells. Video 1 shows a 3D reconstruction of an ensheathed chromosome in a HeLa cell. Video 2 shows a 3D reconstruction of an ensheathed chromosome in a HeLa cell. Video 3 shows a 3D reconstruction of an ensheathed chromosome in an RPE-1 cell. Video 4 shows an example of GFP-Mad2 at an ensheathed chromosome. Video 5 shows an example of the mitotic outcome of a cell with aligned chromosomes. Video 6 shows an example of the mitotic outcome of a cell with an ensheathed chromosome. Video 7 shows an example of ER clearance and subsequent rescue of an ensheathed chromosome.

Data availability
All code used in the manuscript is available at https://github.com/quantixed/Misseg.
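As a concrete illustration of the clearance-time definition given in the Data analysis section above, a minimal sketch is shown below; the smoothing window, the assumed frame interval, and the use of the most negative derivative are assumptions about the described procedure, not the authors' Igor Pro routine.

```python
import numpy as np

def er_clearance_time(er_volume, frame_interval_min=3.0, smooth=3):
    """Estimate ER clearance onset as the time of the steepest decrease in ER volume.
    er_volume: 1D sequence of segmented ER volume per time point."""
    v = np.asarray(er_volume, dtype=float)
    if smooth > 1:
        v = np.convolve(v, np.ones(smooth) / smooth, mode="same")  # moving-average smoothing
    dv = np.diff(v)  # frame-to-frame change in ER volume
    return float(np.argmin(dv)) * frame_interval_min

# Example: constant volume, then a rapid drop beginning around frame 10.
vol = np.r_[np.full(10, 100.0), np.linspace(100.0, 40.0, 6), np.full(10, 40.0)]
print(er_clearance_time(vol))
```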
Improving the Quality of Measurements Made by Alphasense NO2 Non-Reference Sensors Using the Mathematical Methods
Conventional NO2 monitoring devices are relatively cumbersome, expensive, and have a relatively high power consumption that limits their use to fixed sites. On the other hand, they offer high-quality measurements. In contrast, low-cost NO2 sensors offer greater flexibility, are smaller, and allow greater coverage of the area with measuring devices. However, their disadvantage is much lower accuracy. The main goal of this study was to investigate the measurement data quality of NO2-B43F Alphasense sensors. The measurement performance analysis of Alphasense NO2-B43F sensors was conducted in two research areas in Poland. Sensors were placed near fixed, professional air quality monitoring stations, carrying out measurements based on reference methods, in the following periods: July–November and December–May. Results of the study show that, without using sophisticated correction methods, the range of measured air pollution concentrations may be greater than the actual values in ambient air measured in the field by fixed stations. In the case of the summer months (with air temperature over 30 °C), the long-term mean absolute percentage error was over 150%, and at high temperatures the sensors, using the methods recommended by the manufacturer, could even show negative values. After applying the mathematical correction functions proposed in this article, it was possible to significantly reduce long-term errors (to 40–70% per month, regardless of the location of the measurements) and eliminate negative measurement values. The proposed method is based on the recalculation of the raw measurement, air temperature, and air RH and does not require the use of extensive analytical tools.

Introduction
Air pollution is a complex problem posing multiple challenges in terms of management and mitigation of harmful pollutants [1]. Life quality and human health are affected by air pollution, especially in urban areas, where most of the population lives [2,3]. Europe's most problematic pollutants in terms of health are PM, NO2, and ground-level O3 [4]. Nitrogen dioxide (NO2) is one of the major pollutant gases, and its emission is mainly caused by traffic [5]. Enhancing the spatial and temporal resolution of air pollution monitoring is nowadays one of the emerging challenges [6]. With the development of low-cost air quality sensors, a lot of research has been focused on achieving accurate, robust, and reliable air quality data [7,8]. The equipment has been evaluated according to its performance in different environments, seasons, and meteorological conditions [9,10]. Critical issues faced by researchers are mostly correction methods [11,12] and the long-term stability of the sensors [13]. To characterize interferences and improve correction functions, both laboratory and ambient tests have been conducted [14][15][16][17][18].

Materials and Methods
Alphasense NO2 (NO2-B43F) sensors are popular, low-cost sensors for measuring the concentration of nitrogen dioxide in the ambient air using the electrochemical method (4 electrodes). The working principle of the NO2 electrochemical sensor is based on electrochemical reactions [5]. When the air passes through, it creates a reaction in the electrochemical cell.
The surface of the working electrode is the site for the first half-reaction (oxidation), generating an electronic charge, balanced by the second half-reaction (reduction) that occurs at the counter electrode [25]. This type of sensor provides high selectivity, a low limit of detection, low power consumption, and a linear response to the target gas [26,27]. According to the manufacturer's description, together with a dedicated electronic board, they enable the measurement of even small concentrations of nitrogen dioxide (ppb level). To test the sensors, measuring devices were built. The main element was a small measuring chamber in which two Alphasense sensors were placed. To take the air directly from the surroundings, a small fan was installed at the inlet to the measuring chamber. The measuring chamber (with direct air intake from the environment) was placed in a larger housing, which contained the electronic components necessary to power the sensors and a microcontroller to acquire the results (voltages from the electrodes acquired from the sensor's transmitter board) and transfer them to the server. The inlet to the measuring chamber was directed downwards (analogous to the air outlet). As a result, the measuring chamber was not susceptible to wind force and rain/snow. The measurement performance analysis of Alphasense (NO2-B43F) sensors was conducted in two research fields. Sensors were placed at air quality monitoring stations carrying out measurements based on reference methods.
From the research perspective, the city of Nowy Sącz is interesting due to its frequently occurring high air pollutant concentrations and varied weather conditions [28]. The town is situated in a topographically diverse area: the lowest point of the city is located at an altitude of 272 m above sea level, and the highest point at 475 m above sea level. There are typically urban areas with tenement houses, parks, and green areas as well as single-family houses. Some areas are supplied by the district heating network, while others, especially single-family houses, are equipped with individual boilers or fireplaces (mainly for solid fuels), being a key source of particulate matter emissions. On the other hand, Warsaw, which is the capital of Poland, has a slightly different climate but is also quite diverse in terms of the variability of pollutants and main sources of emission [29].

Alphasense sensors record the voltage related to the measured quantity at the two outputs of each sensor:
• WEu - voltage of the working electrode (mV);
• AEu - voltage of the auxiliary electrode (mV).
At the same time, the following values are provided individually for each sensor (as a result of calibration performed by the manufacturer):
• WEe - the value of the electronic offset of the used ISB (Alphasense Individual Sensor Board) for the working electrode (mV);
• AEe - the value of the electronic offset of the used ISB board for the auxiliary electrode (mV);
• WE0 - the indication of the working electrode (mV) for air free of pollutants;
• AE0 - the indication of the auxiliary electrode (mV) in the case of unpolluted air;
and two parameters independent of a specific sensor:
• nT - a factor provided by the manufacturer (temperature-dependent correction);
• kT - a factor provided by the manufacturer (temperature-dependent correction).
The values of the above-mentioned factors from the manufacturer's documentation for the tested type of sensors are shown in Table 1. To convert the measured value to the pollutant concentration, depending on the sensor used, first the corrected voltage value (WEc) is calculated, and then the appropriate conversion factor (mV/ppb), also given individually for each sensor, is applied. The last step is to convert the concentration to the appropriate unit (e.g., from (ppb) to (µg/m3)). For the NO2-B43F sensors, the manufacturer proposes the use of one of the following two equations to determine the corrected voltage (we will refer to them also as method (1) and method (2), respectively):

In the comparative measurements carried out at the monitoring station in Nowy Sącz, two NO2-B43F sensors were used. For each of these electrochemical sensors, referred to later in the text as NO2_1 and NO2_2, the following set of parameters was available: WEe, AEe, WE0, and AE0. Measurement instruments incorporating these sensors recorded the values of the appropriate voltages every few seconds and then, every minute, relayed these data to the server. The data were aggregated, as a result of which 1 h mean values were determined.
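The manufacturer's two equations referred to above did not survive extraction here. Purely as a point of reference, the sketch below implements the two temperature-compensation forms commonly quoted in Alphasense application notes for 4-electrode sensors (an nT-based and a kT-based variant); which two equations the paper actually uses, and the exact conversion constants, are assumptions and not a reproduction of the paper's Equations (1) and (2).

```python
# Illustrative sketch only, using the parameter names defined in the text
# (WEu, AEu, WEe, AEe, WE0, AE0, nT, kT); not the paper's Equations (1)/(2).

def corrected_voltage_method1(WEu, AEu, WEe, AEe, nT):
    """Method (1)-style correction: subtract the temperature-scaled auxiliary signal."""
    return (WEu - WEe) - nT * (AEu - AEe)

def corrected_voltage_method2(WEu, AEu, WEe, AEe, WE0, AE0, kT):
    """Method (2)-style correction: scale the auxiliary signal by the zero-air ratio WE0/AE0."""
    return (WEu - WEe) - kT * (WE0 / AE0) * (AEu - AEe)

def concentration_ugm3(WEc, sensitivity_mv_per_ppb, ppb_to_ugm3=1.88):
    """Convert corrected voltage (mV) to concentration; 1.88 is the approximate
    NO2 ppb-to-ug/m3 factor at 25 C and 1013 hPa (an assumption here)."""
    return WEc / sensitivity_mv_per_ppb * ppb_to_ugm3
```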
The study includes the comparative analysis of the data obtained in this way and the 1 h measurement data from SEM's air quality monitoring station. The following statistical measures were used to compare the measurements from the Alphasense sensors to the measurements from the SEM station: Pearson's correlation coefficient, mean error, mean percentage error, mean absolute error, mean absolute percentage error, and mean square error. All these statistical measures were determined for particular months of the measurement periods.

Results
In order to determine the quality of the obtained measurements, the measured voltage at both outputs of the individual sensors was converted into concentrations with the use of both methods presented above. For the obtained results, the basic statistics for each month were calculated. The results are shown in Table 2. These statistical parameters were calculated by comparing the data from a given Alphasense sensor with the values obtained from the SEM measurement station. Table 3 shows the average monthly temperature measured in the chamber of the device. It should be emphasized that the instrument was intentionally left unsheltered, and therefore the maximum temperatures inside the device during the summer period often exceeded +40 °C.
Table 3. Mean, maximum, and minimum temperature and air humidity in the chamber of the measuring device and mean NO2 concentration from the SEM station in particular months of the analyzed period.

The data presented in Table 2 show clearly different errors in particular months of the measurement period. For both tested sensors and both manufacturer-suggested methods of converting voltages into concentrations, the poorest values were recorded in July and the best in October. To some extent, this may have been related to the variability of the actual concentrations. The highest absolute percentage errors occurred for the measurements conducted in the warm months, and a decrease in the average temperature lowered the error: while in July the absolute percentage error exceeded 130%, in November it oscillated around 50%. A similar tendency is also visible in the case of the absolute errors, which decreased from 11-15 µg/m3 in July to 8-10 µg/m3 in November. It is also worth mentioning the high correlation between the two low-cost sensors' indications: for the entire period, it was r = 0.952 after applying method (1) and r = 0.949 after applying method (2). This means that the readings of the sensors were repeatable and that they both reacted similarly to the parameters of and changes in the atmospheric air, the actual concentration values, and possibly other pollutants. This repeatability also supports the assumption that a single, universal method of mathematical correction of the sensor indications can be created. During warm and hot days, when the temperature approached or exceeded 30 °C, both sensors regularly generated voltages that corresponded to negative concentrations of nitrogen dioxide after applying method (1) or (2). The variability of the 1 h mean concentrations of NO2 and the temperature during one of the study days are shown in Figures 4 and 5. Deviations due to high temperature were corrected individually for both methods (1) and (2). After the calculations, negative WEc values and the corresponding minimum temperature TWEC were determined for this set. For method (1) it was TWEC = 23.1 °C, while for method (2) TWEC = 23.8 °C.
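The comparison metrics listed above are standard; a minimal, illustrative sketch of how they can be computed from paired hourly series is given below. The array names are assumptions, and hours with missing data would need to be removed beforehand.

```python
import numpy as np

def comparison_stats(sensor, reference):
    """Agreement metrics between low-cost sensor and reference (SEM) 1 h series."""
    sensor = np.asarray(sensor, dtype=float)
    reference = np.asarray(reference, dtype=float)
    err = sensor - reference
    return {
        "pearson_r": np.corrcoef(sensor, reference)[0, 1],
        "mean_error": err.mean(),
        "mean_percentage_error": 100 * (err / reference).mean(),
        "mean_absolute_error": np.abs(err).mean(),
        "mean_absolute_percentage_error": 100 * np.abs(err / reference).mean(),
        "mean_square_error": (err ** 2).mean(),
    }
```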
Then, for all hours for which the average temperature was higher than or equal to the determined minimum, the average temperature and the determined voltages WEu and AEu were collected. For each hour, the desired voltage WEc° was determined, which would correspond to the NO2 concentration measured at the SEM station. This means that the WEc° value denotes the result of relationship (1) or (2) for which the result from the Alphasense sensors would correspond exactly to the value determined by the SEM station. In the next step, the difference WEc′ = WEc° − WEc (the offset) was calculated, which determines by how much the current indication from the low-cost sensor should be corrected to obtain the value corresponding to the measurement from the SEM. Multiple regression was applied to the set of 1 h mean values (T, WEu, AEu, and WEc′), where the independent variables were the 1 h mean temperature (T), the 1 h mean voltage WEu, and the 1 h mean voltage AEu. The dependent variable was the WEc′ value. As a result, for (1) and (2) the following relationships were obtained, respectively:

These additional WEc′ values were added to WEc for measurements where the temperature was higher than the predetermined TWEC cutoff, for both methods. In the case of the previously presented data from 20 July 2019, the new variability patterns are presented in Figure 6. Equations (3) and (4) were added to (1) and (2), respectively, and the NO2 concentrations were again recalculated for both Alphasense sensors. The results for July and August 2019 (being the months with the highest temperatures) are presented in Table 4. The use of the offset (methods (3) or (4)) in most cases improved the quality of the results. First of all, the "negative" concentrations were removed, and the correlation with the measurements from the SEM stations was significantly improved. The highest improvement appeared in August, where the correlation coefficients in relation to the SEM stations were approximately r = 0.8.
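A minimal sketch of this offset-fitting step is given below. It mirrors the described procedure (a target corrected voltage derived from the SEM concentration, and the offset regressed on temperature and the raw electrode voltages for hours above the temperature threshold), but the ordinary-least-squares implementation, the variable names, and the ppb-to-µg/m3 factor are assumptions rather than the paper's Equations (3) and (4).

```python
import numpy as np

def fit_offset_model(T, WEu, AEu, WEc, no2_ref_ugm3, sensitivity_mv_per_ppb,
                     T_cutoff, ppb_to_ugm3=1.88):
    """Fit WEc' = a0 + a1*T + a2*WEu + a3*AEu on hours with T >= T_cutoff.
    WEc_target (WEc deg) is the corrected voltage reproducing the SEM concentration."""
    T, WEu, AEu, WEc = (np.asarray(v, dtype=float) for v in (T, WEu, AEu, WEc))
    no2_ref = np.asarray(no2_ref_ugm3, dtype=float)
    hot = T >= T_cutoff
    WEc_target = no2_ref[hot] / ppb_to_ugm3 * sensitivity_mv_per_ppb  # desired corrected voltage
    offset = WEc_target - WEc[hot]                                    # WEc' for each hot hour
    X = np.column_stack([np.ones(hot.sum()), T[hot], WEu[hot], AEu[hot]])
    coeffs, *_ = np.linalg.lstsq(X, offset, rcond=None)
    return coeffs  # [a0, a1, a2, a3]

def apply_offset(coeffs, T, WEu, AEu, WEc, T_cutoff):
    """Add the fitted offset to WEc only when the temperature exceeds the cutoff."""
    T, WEu, AEu, WEc = (np.asarray(v, dtype=float) for v in (T, WEu, AEu, WEc))
    offset = np.column_stack([np.ones(len(T)), T, WEu, AEu]) @ coeffs
    return np.where(T >= T_cutoff, WEc + offset, WEc)
```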
Correlation between the NO2_1 and NO2_2 sensors in July was r = 0.934 (for (3)) and r = 0.923 (for (4)), while in August it was r = 0.959 (for (3)) and r = 0.946 (for (4)). The mean values of the absolute percentage errors, and thus the absolute errors, also improved significantly; in most cases, they were reduced by half.

The next step was to analyze the set of measurement data and determine the new correction functions. First, the relationship between the Alphasense sensor indications and the measurements from the SEM stations was examined. Figure 7 shows that the best-fit function was a second-degree polynomial regression. The obtained relationship is as follows:
where
• NA - the NO2 concentration measured by the Alphasense sensor, determined from the equation:
where
• sA - the conversion factor from (mV) to (ppb), given by Alphasense individually for each sensor;
• nA - the factor for converting the NO2 concentration from (ppb) to (µg/m3);
• WEc - described by Equation (1);
• WEc′ - described by Equation (3).
In the last step, multiple regression was used to bind the sensor indications with the meteorological parameters. The independent variables were: the 1 h mean temperature (T), the 1 h mean relative humidity (H), the 1 h mean NO2 concentration expressed by Equation (5), the 1 h mean measured voltage (WEu − WEe), and the 1 h mean measured voltage (AEu − AEe). The dependent variable was the concentration of NO2, NA″. Although Equation (5) is based on the measured WEu and AEu voltages, it was decided to include them as additional independent variables (or, in fact, as differences in relation to the WEe and AEe values) to retain a relationship with the interval to which these measured voltages belong (as opposed to Equation (5), where information about the measured voltage values is lost). Secondly, the lack of additional consideration of the WEu and AEu voltages led to worse results than those presented below. The obtained, final relationship for the original method (1) is as follows:
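Equation (7) itself is not reproduced in this extracted text. Purely as an illustration of the kind of fit described above (a second-degree polynomial of the sensor-derived concentration, standing in for Equation (5), followed by multiple regression on T, H, the polynomial term, and the voltage differences), a hedged sketch is given below; the two-stage structure and the least-squares details are assumptions, not the published coefficients.

```python
import numpy as np

def fit_final_correction(T, H, NA, dWE, dAE, no2_ref):
    """Fit NA'' = b0 + b1*T + b2*H + b3*NA_poly + b4*(WEu-WEe) + b5*(AEu-AEe).
    NA_poly is a second-degree polynomial mapping of NA to the reference,
    standing in for Equation (5)."""
    NA = np.asarray(NA, dtype=float)
    ref = np.asarray(no2_ref, dtype=float)
    p2, p1, p0 = np.polyfit(NA, ref, deg=2)            # polynomial term (Eq. (5) stand-in)
    NA_poly = p2 * NA**2 + p1 * NA + p0
    X = np.column_stack([np.ones_like(NA), T, H, NA_poly, dWE, dAE])
    b, *_ = np.linalg.lstsq(X, ref, rcond=None)        # multiple regression coefficients
    return (p2, p1, p0), b
```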
The analysis for the second method (described by the original Equation (2)) was carried out similarly. The best fit (among linear, exponential, and polynomial regression; see Table 5) between the indications of the Alphasense sensors and the measurements from the SEM stations was obtained by a polynomial regression of the second degree (Figure 8). The obtained relationship is as follows:

After applying multiple regression, the final form of the relationship for the original method (2) is as follows:

Statistical parameters after the conversion of the measurement results for the NO2_1 and NO2_2 sensors using Equations (7) and (9) are presented in Table 6.
Table 6. Statistical parameters of the measurement results from the NO2_1 and NO2_2 sensors in particular months of the analyzed period for methods (7) and (9).

The effectiveness of the determined relationship was checked with the use of a new NO2-B43F sensor (indicated later in the text as NO2_3) during the measurement campaign conducted at the Warsaw-Chrościckiego SEM station from December 2019 to May 2020. The results were calculated using Equation (7) and the basic equation recommended by the manufacturer. The results are presented in Table 7. During the analyzed period, the statistical parameters describing the measurements calculated using method (1) were closer to the results received from the SEM air quality monitoring station than those presented in Tables 2-5. Therefore, the application of the correction method did not bring such a spectacular improvement as in the measurements in Nowy Sącz. The reason for this could be the fact that, in the analyzed period, there were no very high temperatures, as was the case during the measurements carried out between July and September 2019. Nevertheless, the application of the correction function resulted in an improvement in the measurement data quality. The value of the correlation coefficient improved in each month; the highest increase was recorded in May and the lowest in March. In all months, the mean absolute percentage error improved, with the best values obtained for the colder months and slightly worse values for the warmer months.
Similarly to the measurements carried out in Nowy Sącz, also in Warsaw-Chrościckiego it can be observed that the sensors tend to underestimate the measurements during the colder months and to overestimate them during the warmer months (even after applying the correction method).

Discussion
As the analyzed low-cost NO2 sensors do not output the measured pollutant concentration directly, Alphasense recommends the use of one of two different methods to determine the corrected voltage. The analysis showed that slightly better results were obtained using method (2), especially in terms of the mean absolute percentage error. Taking into account the correlation coefficient, in the case of the NO2_1 sensor the method using relationship (1) turned out to be better, while in the case of the second sensor it is difficult to indicate the better variant. Additionally, none of the methods proposed by the manufacturer offers a satisfactory quality of measurements, as the long-term percentage error may even exceed 150%, and in certain situations implementation of the recommended methods may lead to negative concentrations of the pollutant. The research presented in this paper, as well as other studies, has shown that high temperature in particular is one of the factors interfering with the accuracy of the indications of Alphasense NO2 sensors [24]. According to the relationships provided by the manufacturer, the value of the measured pollutant concentration should be determined from the measured voltages and the individual values provided for a given sensor by the manufacturer. However, in both Equations (1) and (2) there is also a temperature-related parameter (nT or kT, depending on the method) given by the manufacturer (these values are common to all instances of a given series of sensors). In both relationships, it scales the indications related to the auxiliary electrode, which reduces the temperature influence. However, the values of the nT and kT coefficients provided by the manufacturer do not completely fulfill their role, because at high air temperatures the determined NO2 concentration turned out to be negative. Therefore, we proposed an additional factor (offset) modifying the corrected voltage WEc. Its role is to react when the temperature exceeds a certain threshold. In the presented analysis, the threshold was determined empirically, by observing the temperature above which voltage disturbances appeared on the electrodes, causing negative concentrations. The result of the application of the proposed coefficient is shown in Figure 6. It shows a significant improvement in the results from the Alphasense sensors during periods with high temperatures. First, there were no longer any negative concentrations. Secondly, the corrected values brought the measurements of the low-cost sensors significantly closer to the values determined by the SEM station. This was also reflected in the improved coefficient of determination between the corrected measurement results of the Alphasense sensors and the indications of the SEM stations. The values for the relationships proposed by the manufacturer and after adding the designated offset are presented in Table 8. The introduced offset did not improve the situation in the remaining period, when the temperatures were closer to those recommended by the manufacturer.
Taking into account the high correlation of the NO2 Alphasense sensor indications with the concentrations measured at SEM stations, correction functions were determined for both methods recommended by the manufacturer. Their application in the form of Equations (7) and (9) resulted primarily in the improvement in the correlation coefficients and the reduction in the mean absolute errors and the mean absolute percentage errors. As presented in Figures 9 and 10, the implementation of the correction function improved the R² values from R² = 0.39 without correction to R² = 0.63 after correction. Comparable improvements in R² values can be found in other research results [20,30], showing that a correction formula improved the coefficient of determination between the analyzed measurement stations from R² = 0.207 to R² = 0.709. The results also show that the sensors' performance is better in the months with lower temperatures, although, regardless of the method used, in such months the correction formula tends to underestimate the measurement values. The opposite trend was visible in other studies. An Alphasense NO2 sensor was used in calibration research in Beijing, China, where the impact of seasonal changes on its performance was observed [31]. The relative bias grew with increasing temperature: the R² = 0.76 obtained for measurements conducted in the fall decreased to R² = 0.12 in the summer.

Conclusions
Several months of comparative measurements of Alphasense low-cost NO2 sensors against a professional device in various climatic conditions have shown that, using the manufacturer's algorithms, the obtained measurement results are characterized by large measurement errors. This indicated the need to identify factors influencing the measurement values and to mathematically correct the obtained results. In the case of the presented research, the focus was on meteorological factors: air temperature and relative humidity. Using the collected comparative data, a correction function was proposed to improve the quality of the measurements; it uses the raw measurements as well as data on temperature and relative humidity.
To verify its effectiveness, several months of comparative measurements were carried out with the use of other sensors and in a different location. This correction function, in many cases, significantly improved the quality of the data. In most cases, after applying the correction, the mean monthly absolute percentage error was reduced to 40-50%. In addition to the reduction in deviations from the actual values, a stabilization of the long-term measurement errors was also achieved. Using the original equations recommended by the sensors' manufacturer, the range of long-term percentage measurement errors was over 100%; the application of the proposed methods reduced the range of these errors to approximately 20%. The achieved improvement in the quality of measurements may significantly expand the potential applications of this type of sensor. A potential area of application is portable devices that can be easily placed in locations where, for example, there is a suspicion of high concentrations of pollutants. However, research on Alphasense sensors cannot be considered complete; the analysis requires, for example, a thorough examination of how measurement errors grow with time, or of the influence of other pollutants. It would also be desirable to place more sensors in more locations and to compare the effectiveness of the proposed algorithm with a professional device. This type of analysis can also be found in other research works.
HiTop 2.0: combining topology optimisation with multiple feature size controls and human preferences
ABSTRACT
Topology optimisation is a computational design approach that generates high-performing, efficient structures uniquely suited to a design engineer's goal. However, there exist two major obstacles to the accessibility, or ease of use, of topology optimisation: high computational costs and users' binary decision between personal intuition and the algorithm's result. Human-informed topology optimisation, or HiTop, presents an alternative approach to topology optimisation when a user lacks access to a high-performance computer or knowledge of code parameters. HiTop 2.0 prompts users to interactively identify a region of interest in the preliminary design and modify the size of the solid and/or void features. The novel contribution of this paper implements multi-phase minimum and maximum solid feature size controls in HiTop 2.0, and demonstrates 2D and 3D benchmark examples, including test cases that show how the user can interactively improve performance related to eigenvalues, stress, and energy absorption, while solving the minimum compliance problem.

Using a numeric computing program, topology optimisation users can specify a design problem by inputting the bounds of their domain, loads, boundary conditions, objective functions, and constraints, shown in Figure 1(a). The design domain is discretized into a user-specified number of finite elements. The classic formulation of topology optimisation aims to minimise compliance, subject to static equilibrium and a user-specified volume fraction [23]. Using a gradient-based optimiser, the algorithm iteratively converges on an optimal design through the redistribution of material within the design domain. At each iteration, finite element analysis is conducted to evaluate the performance associated with the given distribution of material. Each iteration of a topology optimisation code thus requires a finite element analysis and sensitivity calculation, and each code usually runs hundreds to thousands of iterations, depending on the objective functions, constraints, and mesh density. More complex objective functions, constraints, and finer mesh densities typically increase the time per iteration and the total number of iterations needed to achieve convergence. Resulting optimized designs, such as Figure 1(b), are often unique and typically highly materially efficient, satisfying the common demand for engineers to use less material while sustaining high performance and aesthetic appeal.
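For reference, the classic minimum compliance formulation described above can be written in standard density-based (SIMP) notation; the symbols below are the conventional ones from the topology optimisation literature and are not taken from the paper's own notation.

```latex
\begin{aligned}
\min_{\boldsymbol{\rho}}\quad & c(\boldsymbol{\rho}) = \mathbf{F}^{\mathsf{T}}\mathbf{U}(\boldsymbol{\rho}) \\
\text{s.t.}\quad & \mathbf{K}(\boldsymbol{\rho})\,\mathbf{U}(\boldsymbol{\rho}) = \mathbf{F} \quad \text{(static equilibrium)} \\
& \sum_{e=1}^{N} \rho_e\, v_e \le V^{*} \quad \text{(volume fraction constraint)} \\
& 0 \le \rho_e \le 1, \qquad e = 1,\dots,N
\end{aligned}
```

Here ρe are the element densities, ve the element volumes, V* the allowed material volume, and K, U, and F the global stiffness matrix, displacement vector, and load vector; in the SIMP interpolation the element stiffness is typically taken as Ee(ρe) = Emin + ρe^p (E0 − Emin) with a penalisation exponent of about p = 3.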
There exist two major barriers to the accessibility, or ease of use, of topology optimisation in academia and industry: high computational costs and the binary decision users must make as to whether to accept the design or not. First, topology optimisation can be computationally expensive since the finite element mesh density scales directly with the computational cost of the algorithm. For advanced algorithms that target specific dynamic or nonlinear objective functions such as buckling load, the computational costs further increase. Therefore, access to and familiarity with high-performance computers are critical to running these algorithms, an asset that novice users or some users in industry lack. Second, topology optimisation places users in a retroactive judgement state where they are presented with two options: use or do not use the outputted design. For new users, or those with real-world engineering experience, concern for load variability and constructability, or a general proclivity for traditional design practices, leads them to favour suboptimal, conventional designs [24] similar to Figure 2(a) rather than computationally optimized designs such as Figure 2(b).

Human-Informed Topology Optimization, or HiTop, presents the alternative approach shown in Figure 2(c) that allows users to interactively modify regions of the optimized design [25]. HiTop embeds the user's design intentions and concerns through interactive control of the feature sizes. The interactive control of the length scale grants design flexibility, allowing the user to improve other engineering performance metrics while solving the minimum compliance problem and maintaining low computational costs. HiTop does not replace an in-depth dynamic, nonlinear optimisation for performance metrics such as buckling or energy absorption, but rather provides a quick, human-guided alternative to improve such metrics when computational resources are restricted.

Incorporating human intuition into topology optimisation or artificial intelligence (AI) driven generative design has gained increased attention in the past year. To articulate how humans interact with computational design optimisation, Saadi et al. [26] interview engineers working in industrial design, mechanical engineering, and architecture on the role of designers during early-stage design in the age of AI. From their results, they suggest restructuring how humans and AI collaborate to design structures. Beyond the initial specification of a design problem and objective function, designers will be required to iteratively evaluate results and guide AI-generated outputs towards a converged solution that satisfies their interests [26]. Smith [24] conducts a similar survey of over thirty engineers working in industry in the North-eastern United States, asking respondents to indicate their likelihood of using topology optimisation given they had prerequisite knowledge and unlimited time and budget. They find that less than half of the engineers responded that they would use topology optimisation, due to concerns for constructability and load variability [24]. These user surveys illustrate a general hesitation to use topology optimisation that HiTop can help mitigate. Through HiTop, design engineers can embed their design intentions and concerns for load variability or constructability, while leveraging the algorithmic, generative power of topology optimisation.
Background
HiTop is a density-based topology optimisation approach [27,28] that builds upon almost a decade of human-in-the-loop schemes that integrate engineers' preferences into generative design. This section will first discuss the original formulation and limitations of HiTop [25], and Section 3 will introduce this paper's novel extension in HiTop 2.0 that improves its flexibility and utility as an interactive design tool. Other examples of interactive design codes include Yan et al.'s [29] Bi-directional Evolutionary Structural Optimization (BESO) based algorithm that takes the user's design intentions as an input and generates parent designs that range in resemblance to the user's graphical drawing. Li et al. [30] expand this BESO-based approach by prompting the user, after the optimisation has converged, to input subjective scores or drawings that will re-run the BESO with weights that guide the optimiser towards the inputted subjective preferences. Both BESO approaches inject designer preferences after the optimisation algorithm has converged, in between generations of the evolutionary process. By contrast, HiTop prompts users to interactively modify the design mid-run, stopping pre-convergence to incorporate the user's updated feature size requirements of the minimum solid or void [25]. HiTop effectively reduced stress concentrations and increased buckling load on 2D benchmark problems in roughly 5% of the time the stress and buckling topology optimisation codes require [25]. However, a shortcoming of HiTop prior to HiTop 2.0 was the limitation in modifying only either minimum solid or minimum void feature sizes [25]. Section 3 of this article will introduce HiTop 2.0's new capability that allows multiple modifications to the minimum solid and/or void, and prescription of maximum solid feature sizes, and/or passive regions.

Minimum feature size controls, often referred to as minimum length scale controls, and their relation to manufacturability is a well-developed, densely researched subdiscipline of topology optimisation, as the emergence of additive manufacturing necessitates control of the minimum feature size. A full review of feature size controls in density-based topology optimisation is thus beyond the scope of this paper and readers are instead referred to [31]. Initially focusing on controlling the minimum feature sizes of the solid material features [32][33][34], research has in recent years expanded into enabling minimum feature size control of both solid and void features [35][36][37][38][39]. It has additionally naturally extended to incorporating maximum feature size requirements [40][41][42][43][44]. With control over the maximum feature size, users can generate porous, structurally redundant designs that introduce load path diversification, increase buckling load [45], and show improved fail-safe and energy absorption performance [46]. Two intellectual neighbours of maximum feature size control are the optimisation of porous infill [46,47] and multiscale structures [48][49][50][51], as all methods result in structurally redundant designs with extensive applications in additive manufacturing. In this context, it is worth mentioning that similar approaches to achieve minimum and maximum length scale control also exist for topology optimisation based on the level set approach [52][53][54][55].
As evidenced by decades of research, the universal application of minimum and maximum length scale controls can improve the manufacturability and performance of topology-optimized designs [31]. However, any addition of complexity, constraints, or nonlinearity increases compliance and further diverges the result from 'true' optimality; therefore, there is a marked benefit to the selective application and spatial variation of the feature size controls. By applying feature size requirements in selective regions of interest, researchers are finding improvements in alternative engineering metrics, along with reduced complexity and computational costs [25]. Amir and Lazarov [56] implement spatial variation of the minimum void feature size requirements to minimise the maximum stress at the sharp corner of an L-bracket. Schmidt et al. [57] vary the maximum feature size of designs through graded porosity control, optimising the allocation of infill for additive manufacturing using Wu et al.'s [46] local volume constraint. Similarly, Yan et al. [58] expand their BESO-based approach by allowing users to vary a local volume constraint over the design domain.

In addition to code-based topology optimisation algorithms, many commercially available optimisation platforms, such as Fusion 360 and Abaqus, enable modification of the minimum or maximum length scale requirements to improve manufacturability. Like the workflow for established topology optimisation codes, the user initialises the optimisation with predefined minimum or maximum feature sizes. HiTop 2.0 improves upon these academic codes and commercial software in three major ways. First, HiTop 2.0 allows for easy and flexible application of feature size controls, whereas current commercial software requires extensive preparation of the design domain to be partitioned and individually selected, and codes require laborious, unintuitive lines of code to accurately reference certain finite elements in the design domain. Second, HiTop 2.0 can modify the minimum solid and/or void, maximum solid, and/or passive regions, while the original version of HiTop, length scale topology optimisation algorithms, and commercial software typically allow one modification per design problem. Finally, by inserting the user's input mid-run, HiTop 2.0 not only allows the user to make an informed decision on the selective application of feature size controls, but also prevents performance loss due to post-processing. Working in established codes or commercial software, users must pre-emptively specify regions of interest to modify feature sizes, without knowledge of where material may be allocated in the optimized design. Or users must modify designs post-optimisation to facilitate their manufacturing process, without knowledge of how the modifications will impact performance. HiTop 2.0, on the other hand, prompts the user to specify length scale controls based on a 50-iteration output of the optimisation, saving the time users lose when improperly, blindly selecting regions of interest in other codes and software and having to rerun the problem. Additionally, HiTop 2.0 prevents the user from pushing the design further from optimality and sacrificing performance due to post-processing [31,59,60]. To ease implementation, a HiTop 2.0 MATLAB code is available via GitHub.
This research introduces a novel extension from current interactive design algorithms' one-phase modification of minimum solid or void feature sizes [25,29] to increased flexibility that allows multiple modifications to any or all of three feature requirements: minimum solid, minimum void, and maximum solid. The previous version of HiTop [25] used the classic proximity-based weighting function with a spatially varied minimum void or solid feature radius to enforce one modification to the design. In this paper, HiTop 2.0 abandons this approach and adopts Carstensen and Guest's [41] three-phase projection scheme, which is the most compatible for the desired control of the minimum solid and void and the maximum solid. Multiple maximum and minimum feature control approaches were explored in this research, and the three-phase projection scheme grants the most versatility with control over solid and void phases, while maintaining low computational costs. The increased complexity of the design space that is typically reported as a limitation of the method is counteracted by the regional application of additional filters and by the lack of additional constraints or finite element solutions that other methods require. This article will first provide an overview of HiTop 2.0 in Section 3, detailing the optimisation formulation, filtering operations, sensitivity calculations, and continuation schemes. Section 4 will present five 2D numerical examples that illustrate the results for the minimum solid and void and maximum solid modifications, as well as interactive incorporation of passive regions and a 3D example. Finally, Section 5 will summarise the ideas presented in this article and provide ideas for future research.

HiTop 2.0: multiple feature size controls

HiTop 2.0 is an interactive design algorithm that applies a user's design concerns and intentions through control of the minimum solid and/or void, and/or maximum solid feature sizes. It is based on the 88-line code for compliance-based topology optimisation in MATLAB [61], with the standard extension that uses the Method of Moving Asymptotes [62] as the gradient-based optimiser. In addition, a modified version of the multiphase filtering approach from Carstensen and Guest [41] is used. The feature size controls are interactively updated by changing the settings of the filtering operations. The code begins with an interactive graphical user interface (GUI), Figure 3(a), where HiTop 2.0 presents the user with a finite element mesh of the design domain and prompts them to draw the loads, boundary conditions, and the passive regions that can be predetermined as solid or void, as well as to input initial minimum feature sizes for solid and void.
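To make the set-up stage concrete, the following sketch mirrors, in Python, the kind of problem definition the GUI gathers before the optimisation starts. It is a minimal, illustrative sketch only: the variable names, placeholder load and support, and per-element radius maps are assumptions, not the data structures of the released HiTop 2.0 MATLAB code.

```python
import numpy as np

# Illustrative problem definition mirroring the quantities gathered by the GUI.
nelx, nely = 240, 150                      # elements in x and y
volfrac = 0.25                             # volume fraction f
rmin_solid = np.full((nely, nelx), 1.5)    # initial minimum solid feature radius map
rmin_void  = np.full((nely, nelx), 1.5)    # initial minimum void feature radius map
rmax_solid = np.full((nely, nelx), np.inf) # maximum solid control inactive by default
passive = np.zeros((nely, nelx), dtype=np.int8)  # 0 = free, 1 = forced void, 2 = forced solid

# Loads and supports drawn in the GUI populate the force vector and the fixed
# degrees of freedom used by the finite element solver (placeholder values).
ndof = 2 * (nelx + 1) * (nely + 1)
force = np.zeros(ndof)
force[2 * (nely + 1) * nelx + 1] = -1.0    # unit downward load near the right edge (placeholder)
fixed_dofs = np.arange(2 * (nely + 1))     # clamp the left edge (placeholder)
```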
The algorithm runs 50 iterations, solving the minimum compliance problem using the two-phase projection of the minimum solid and void phase variables, Figure 3(b). Although the user is not interactively modifying the length scale at this stage, the two-phase projection of solid and void from the start of the algorithm ensures compatibility with the user's future changes. For all examples in this work, the initial minimum feature size of both phases is set to a small value greater than 1, such as 1.5 elements, to prevent checkerboarding without initiating additional length scale constraints. As in the earlier version of HiTop [25], the user is prompted to modify the design at 50 iterations because this is the earliest point at which a clear image of the design is generated and most intermediate densities are resolved. At 50 iterations, the user is presented with a plot of the stress distribution across the design domain and uses MATLAB's Image Processing Toolbox to define an elliptical Region of Interest (ROI), Figure 3(c). The position and dimensional information of the elliptical ROI are stored, and concentric contours are generated inward, towards the centre of the ellipse, as this reduces the risk of sharp, discontinuous features appearing in the design [25]. The user specifies the number of elliptical contours and the degree of gradation that gradually increases or enforces the new minimum or maximum length scale in the ROI relative to the unaltered universal length scale imposed on the rest of the design domain. The formulaic and graphical descriptions of the ROI contours and gradation are discussed in detail in [25]. If the designer chooses to update the minimum solid, minimum void, or maximum solid feature size, they do so by inputting a new minimum or maximum radius applied only within the ROI. The algorithm resumes with the updated filtering operations for 50 more iterations, as shown in Figure 3(d). At this point, the user can make an additional modification to the design, such as implementing a local maximum feature size control as in Figure 3(e), and the optimisation then resumes until it converges, Figure 3(f). For the shown example, the final minimum and maximum feature size maps of the design domain can be seen in Figure 3(g-i). Figure 3(g) demonstrates that the minimum void length scale control is active across the domain but is set to a small (blue) value outside the user-selected ROI. The colour bar and scale below the feature size maps in Figure 3(g-i) graphically show the number of elements required for the minimum or maximum feature size radii across the domain. For the finite elements within the ROI, a larger minimum void feature size is required (yellow). Figure 3(h) shows that, since the user did not modify the minimum solid feature size, the corresponding map is uniformly set to a small (blue) value that prevents checkerboarding. Finally, Figure 3(i) demonstrates that when a user changes the maximum length scale, the maximum feature size control is only 'turned on' within the selected ROI of the design domain.
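As a concrete illustration of the graded ROI mechanism described above, the Python sketch below builds a per-element radius map that transitions from the universal feature size to a user-specified value over concentric elliptical contours growing inward. It is an illustrative re-implementation under stated assumptions, not the formulation of [25]; the helper name, parameters, and linear gradation are assumptions.

```python
import numpy as np

def graded_roi_radius(nelx, nely, center, semi_axes, r_outside, r_inside, n_contours):
    """Per-element feature-size radius map: the universal value r_outside holds
    outside the ROI, and concentric elliptical contours grade the radius inward
    toward r_inside at the ellipse centre."""
    y, x = np.mgrid[0.5:nely, 0.5:nelx]          # element-centre coordinates
    cx, cy = center
    ax, ay = semi_axes
    rho = np.sqrt(((x - cx) / ax) ** 2 + ((y - cy) / ay) ** 2)   # 1.0 on the ellipse boundary
    rmap = np.full((nely, nelx), float(r_outside))
    for k in range(1, n_contours + 1):
        frac = k / n_contours                     # gradation fraction for contour k
        rmap[rho <= 1.0 - (k - 1) / n_contours] = r_outside + frac * (r_inside - r_outside)
    return rmap

# Example: enforce a larger minimum void radius near a sharp corner of the domain.
rmin_void_map = graded_roi_radius(300, 150, center=(150, 75), semi_axes=(30, 20),
                                  r_outside=1.5, r_inside=6.0, n_contours=4)
```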
Optimisation formulation

HiTop 2.0 follows the classic topology optimisation formulation shown in Equation 1, where the objective function minimises compliance subject to static equilibrium, a volume constraint, and a solid-void solution [63]. In Equation 1, x is the design variable vector, U is the displacement vector, K is the stiffness matrix, F is the force vector, V(x) is the total volume, V_0 is the fully solid volume, f is the volume fraction, x_min and x_max are the bounds of the design variables, equal to 0 and 1, respectively, V is the design domain, and c(x) is the compliance of the structure.

The Solid Isotropic Material Penalisation (SIMP) method is implemented to penalise intermediate densities and drive the design variables towards a 0-1 solution [28]. In the SIMP scheme (Equation 2), E is the Young's modulus of the material as a function of the element density x_e, E_0 is the Young's modulus of an element with x_e = 1, E_min is a small value that ensures the stiffness matrix is positive definite, and h is the penalisation factor. For the purposes of this paper, E_min = 1 × 10^-9, E_0 = 1, and h = 3.

Filtering operations

This work uses a slightly modified version of the multiple feature size controls suggested by Carstensen and Guest [41]. Three filtering operations are conducted consecutively to implement length scale control of the minimum solid, minimum void, and, if selected, maximum solid, shown schematically in Figure 4. The first filtering operation takes the vector of design variables, x, as an input and calculates three phase variables, w_s, w_v, and w_m. Unlike the original formulation in Carstensen and Guest [41], this work uses the boxcar weighting functions [64], shown in Equation 3, as these have been found to provide better results in the HiTop 2.0 context [64]. In Equation 3, x is the design variable vector, w_s is the nonlinear function used for the minimum solid phase, w_v is used for the minimum void phase, w_m for the maximum solid phase, a is the slope of the minimum solid and void boxcar weighting functions, a_m is the slope of the maximum solid boxcar weighting function, and s, v, and m determine the locations of the respective functions' peaks.

A plot of the boxcar weighting functions is shown in Figure 5 and illustrates how a design variable, x, is projected to the minimum solid, minimum void, and maximum solid phases. The maximum solid and minimum void boxcar functions are collocated due to the radius over which the maximum solid is projected. The projection radius for the maximum solid is equal to the sum of the user-inputted maximum feature radius and the minimum void feature radius. Therefore, for the maximum solid length scale requirement to be satisfied, the void must also be actively projected. This is discussed in detail in [41].

The outputs of the boxcar weighting functions in Figure 5 are fed as inputs into the classic proximity-based weighting function [63], with weight factors H_ei = max(0, r_min,p − D(e,i)), where x̃_e^p is the filtered element density for the minimum solid or void, or maximum solid, phase as a function of w_p, the filtered phase variable, N_e^p is the collection of elements in the neighbourhood of a given element e, H_ei is the matrix of weight factors, and r_min,p is the minimum feature size for the given phase. This filter prevents checkerboarding of the design: each phase variable, w_p, is weighted by the values of its neighbours.
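For reference, the expressions referred to above as Equations 1 and 2 take the standard form below, reconstructed in LaTeX from the symbol definitions given in the text. This should be read as the usual minimum-compliance/SIMP statement of [61,63] under the stated definitions, not as a verbatim transcription of the paper's own equations:

```latex
\begin{aligned}
\min_{\mathbf{x}}\;& c(\mathbf{x}) = \mathbf{U}^{\mathsf{T}}\mathbf{K}\mathbf{U} \\
\text{s.t.}\;& \mathbf{K}\mathbf{U} = \mathbf{F},\qquad
\frac{V(\mathbf{x})}{V_0} \le f,\qquad
x_{\min} \le x_e \le x_{\max}\;\;\forall\, e \in \mathcal{V},
\end{aligned}
\qquad\qquad
E(x_e) = E_{\min} + x_e^{\,h}\,\bigl(E_0 - E_{\min}\bigr).
```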
The final filter is the Heaviside projection function of Guest [32], which smooths the design and drives the variables decisively towards 0 or 1 and is listed in Equation 6. In Equation 6, x̄_e^p is the Heaviside-filtered density as a function of x̃_e^p, the proximity-filtered density for the given phase p, x_max is the maximum value of the weighted element density, set to 1, and b is the Heaviside parameter controlling the steepness of the Heaviside function's slope. For this study it is kept constant at b = 25.

The filtered variables are then combined into a vector of physical elemental densities, x_e. This value corresponds to the physical density of the element calculated in Equation 7 or 8, which is mapped to the structure and used for the finite element analysis and compliance calculation [41]. Equation 7 is used when the user changes the minimum solid and/or void, while Equation 8 is used in ROIs where changes to the maximum solid are made:

x_e = [x̄_e^s + (1 − x̄_e^v)] / 2,   (7)

where x̄_e^s, x̄_e^m, and x̄_e^v are the minimum solid, maximum solid, and minimum void filtered phase variables, calculated from Equation 6.

Calculation of sensitivities

Gradient-based optimisation requires the calculation of the sensitivity of the objective function with respect to the design variables. The sensitivity of the compliance with respect to the design variables is derived using the chain rule and listed as Equation 9. The first term in Equation 9 is found by taking the derivative of the compliance with respect to the physical element density, modified for the SIMP method and shown in Equation 10 [63]. The second term in Equation 9 is found by taking the derivative of the physical element density with respect to the design variable, accounting for the three filtering operations; the summed sensitivity is shown in Equation 11 [41], where each phase variable's sensitivity is a product of the sensitivities of the three filtering operations. From left to right, the sensitivity of the physical element density with respect to the Heaviside projection is given in Equation 12 [63]. The second sensitivity in each phase's term is the derivative of the proximity-based weighting function with respect to the boxcar weighting function output, calculated in Equation 13 [65,66]. Finally, the last derivative in the chain rule calculation of each phase is the derivative of the phase generated with the boxcar weighting function with respect to the input design variable [64]. With the length scale implicitly defined through the three filtering operations, the gradient-based optimisation relies upon these sensitivity calculations to enforce the length scale requirements while converging on a solution.
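The forward pass of the filtering chain can be sketched in Python as below. This is a minimal, illustrative sketch under stated assumptions: the proximity filter and the Heaviside projection follow the standard forms of [63] and [32], the phase combination follows the Equation 7 reconstruction above, and the boxcar projection of [64] (which produces the phase variables) is deliberately not reproduced because its exact form is not given in the text. The dense per-element loop is for clarity only.

```python
import numpy as np

def density_filter(w, rmin_map):
    """Proximity-based weighting: each phase variable is replaced by a weighted
    average of its neighbours with linearly decaying weights
    H_ei = max(0, r_min - D(e, i)). Dense, slow implementation for illustration."""
    nely, nelx = w.shape
    out = np.zeros_like(w)
    for ey in range(nely):
        for ex in range(nelx):
            r = rmin_map[ey, ex]
            R = int(np.ceil(r))
            ys = slice(max(ey - R, 0), min(ey + R + 1, nely))
            xs = slice(max(ex - R, 0), min(ex + R + 1, nelx))
            yy, xx = np.mgrid[ys, xs]
            H = np.maximum(0.0, r - np.hypot(yy - ey, xx - ex))   # weight factors
            out[ey, ex] = np.sum(H * w[ys, xs]) / np.sum(H)
    return out

def heaviside(w_tilde, beta=25.0):
    """Smoothed Heaviside projection (standard form of Guest [32], with x_max = 1)."""
    return 1.0 - np.exp(-beta * w_tilde) + w_tilde * np.exp(-beta)

def physical_density(xbar_s, xbar_v):
    """Combination of the projected solid and void phases into the physical
    element density, following the Equation (7) reconstruction above."""
    return 0.5 * (xbar_s + (1.0 - xbar_v))
```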
Continuation of boxcar weighting functions and seeding designs

Upon experimentation, it was found that HiTop 2.0 requires continuation of the boxcar weighting functions to remove intermediate densities without increasing the nonlinearity of the design space. HiTop 2.0 begins with the boxcar weighting functions in a relatively relaxed state, Figure 6(a), where a_m = 15, a = 15, s = 0.25, v = 0.25, m = 0.26. After the algorithm reaches 50 iterations, the slopes steepen to a_m = 45, a = 45, enforcing a stricter requirement on the length scale. However, with this stricter application of the length scale constraint, the algorithm settles into a local minimum where it is difficult to resolve intermediate densities without substantially increasing the penalisation factor. Therefore, the constraints are relaxed for 50 iterations to a_m = 15, a = 15, and the boxcar weighting functions are stepped away from each other, s = 0.167, v = 0.167, m = 0.26, driving the design towards solid or void and 'reshuffling' the elements, Figure 6(b). For the remainder of the continuation scheme, the slopes of the weighting functions are increased to a_m = 45, a = 45 and eventually a_m = 55, a = 55, and the functions are stepped further from the midpoint, s = 0.133, v = 0.133, m = 0.22, as shown in Figure 6(c), encouraging each element to be decisively solid or void to comply with the length scale requirements.

Although the continuation scheme shown in Figure 6 allows some intermediate densities to remain, the resulting designs converge more quickly and yield lower compliance than designs optimized with only continuation of the SIMP penalisation factor or the Heaviside beta value. As the numerical examples will illustrate, post-processing removes the remaining intermediate densities while retaining the integrity of the design, allowing HiTop 2.0 to remain a cheaper alternative when computational resources and time are constrained. Running HiTop 2.0 on a finer mesh yields clear solid-void solutions.

Seeding the maximum feature size ROI is an additional modification in HiTop 2.0 that facilitates faster convergence towards a 0-1 design. After the user specifies the maximum feature size and the ROI to which it should apply (Figure 7(a)), the design variables within the region are changed to x_e = 0.8, while all other elements retain their value from the 50-iteration output, Figure 7(b). Seeding the maximum length scale ROI encourages the algorithm to converge on a design with material in the region, as demonstrated by Figure 7(c,d), rather than avoiding the ROI entirely.

2D minimum void

In this example, a user is interested in reducing the maximum stress of an L-bracket while minimising compliance. The design domain is shown in Figure 8, where the red arrows correspond to a downward force, blue lines indicate that the upper edge is fixed, and the elements in the yellow upper quadrant are prescribed as void. Subject to a f = 25% volume fraction constraint, the minimum compliance topology optimisation result with no modifications to the design is shown in Figure 9(a). A user who is constrained by time or computational resources is concerned that the stress concentration at the sharp corner of the design introduces a premature potential failure mode. Therefore, they specify a region of interest at the sharp corner of the L-bracket and increase the minimum radius of the void phase, Figure 9(b,c). The resulting HiTop 2.0 modified design creates a fillet at the corner, shown in Figure 9(d).
The traditional and HiTop 2.0 designs are post-processed in Abaqus with a 2D elastic, static analysis using the same boundary conditions, loads, and material properties. Figure 10 shows that the HiTop 2.0 design reduces the maximum von Mises stress by 23%, while incurring a 2.5% increase in compliance. Although HiTop 2.0 does not explicitly minimise the maximum stress [50], human intuition guides the algorithm to improve this performance metric and mitigate a potential failure mode. Additionally, HiTop 2.0 provides the improved stress performance at a run time comparable (within 1 min) to that of the original topology optimisation problem.

2D minimum solid

In this example, HiTop 2.0 with modifications to the minimum solid feature size provides a method to improve the minimum eigenvalue. The user is designing an MBB beam, shown in Figure 11, with traditional topology optimisation. For this MBB problem, nelx = 345 and nely = 115. The output of this code generates a long, thin strut that the user identifies as problematic, shown in Figure 12(a). Therefore, using HiTop 2.0, the user defines a region of interest around this long, thin member and increases the minimum solid length scale, Figure 12(b,c). The output from HiTop 2.0 merges the thin member with the bottom strut of the MBB beam, resulting in Figure 12(d).

Applying the same loads, boundary conditions, and material properties, an eigenvalue analysis of the traditional and HiTop 2.0 designs shows that the HiTop 2.0 MBB beam has a 44% increase in the lowest eigenvalue, as shown in Figure 13, while decreasing compliance by 2.5%. This example illustrates how humans can effectively guide design algorithms towards better solutions, especially when complex, nonlinear codes may yield artefacts or undesirable features.

2D minimum solid and void

There exist many applications in which it is desirable for the user to modify both the solid and void design variables. An example is illustrated on a U-bracket problem, shown in Figure 14, where elements in the yellow region are predetermined as void. For the U-bracket problem, nelx = 300 and nely = 150. The optimized U-bracket without user modifications is shown in Figure 15(a), with elements in the yellow region prescribed as void. The sharp corner near the void boundary introduces a stress concentration; therefore, the user increases the minimum void length scale within a region of interest defined at the corner, Figure 15(b,c). As the design converges, the user identifies a small, thin bar that would be difficult to manufacture, prompting an additional region of interest with increased minimum solid length scale requirements, Figure 15(d,e). The appearance of high stress at the minimum void ROI in Figure 15(d) is due to intermediate densities: the low density in the region generates an artificially high stress value that is resolved in post-processing, shown in Figure 16. Ultimately, the minimum solid and void modified HiTop 2.0 design is shown in Figure 15(f), where the user has manually introduced a fillet and thickened a structural member to ease manufacturability.
Post-processing of the original topology-optimized U-bracket and the HiTop 2.0 minimum solid and void modified design shows that, for a 1.8% increase in compliance, the maximum stress of the HiTop 2.0 design is reduced by 20%, as shown in Figure 16. Additionally, it is apparent that the minimum solid ROI thickened the selected connective bar, allocating more material to the region and resulting in a less complex design with fewer thin struts than the traditional topology-optimized result. The doubly interactively modified HiTop 2.0 result both improved performance for an alternative engineering metric and conformed to the user's design intentions and concern for additive manufacturability.

2D maximum solid

Structural redundancy is an established method to diversify load paths and increase energy absorption, both of which are critical for structures subjected to blast or impact loading. As discussed in Section 2, redundant structures are also high-performing lightweighting solutions and are popular in design for additive manufacturing. HiTop 2.0 with maximum length scale control allows users to create selectively redundant designs with localised redundancy in a region of anticipated impact or blast exposure. An example is demonstrated on a cantilever beam, shown in Figure 17, where nelx = 240 and nely = 150. The traditional design is displayed in Figure 18(a). The user decides to design a cantilever with structural redundancy on an interior strut of the cantilever, shown in the ROI defined in Figure 18(b,c), resulting in the cantilever beam shown in Figure 18(d).

The traditional and HiTop 2.0 modified structures are post-processed in Abaqus and undergo a yielding analysis with elastic and plastic material properties, loads, and boundary conditions. The structures are defined with the steel material properties found in [68] and subjected to 110 kN of force applied in the direction shown in Figure 17, distributed along 14 nodes to prevent the mesh distortions that result from single-node load applications. A left strip of elements is fixed in all degrees of freedom and the structures are analysed as 2D shell models. The load plastically yields both cantilever beams; the applied force and the displacement in the direction of loading are plotted in Figure 19, together with the stress distribution of the deformed structures at peak load.

The HiTop 2.0 design incurs a 4.5% increase in compliance while increasing the energy absorption by 181%. The addition of structural redundancy anywhere in the design will minimally increase the compliance relative to traditional topology optimisation, while substantially increasing the structure's energy absorption. The user-guided addition of structural redundancy can be tailored to anticipated blast or impact directions, which is applicable to structures with inward- and outward-facing components, such as barrier walls or vehicle protection.
2D passive region

An extension of the HiTop 2.0 framework includes an option to interactively introduce passive regions into the design as it converges. The introduction of solid or void elements, in conjunction with modifications to the feature size controls, grants increased flexibility for users to design for manufacturability. For example, a user designs a cantilever beam with the boundary and loading conditions shown in Figure 17. The optimized beam without modifications is shown in Figure 20(a); however, the user would like to add a central hole to facilitate the design's assembly with other mechanical components. Therefore, the user defines a region of interest in Figure 20(b), resulting in the passive map shown in Figure 20(c). After incorporating the hole, the user is still able to modify the length scale, choosing to increase the minimum solid requirements of the central connections near the hole, shown in Figure 20(d). This modification yields the minimum solid map in Figure 20(e) and the HiTop 2.0 optimized result in Figure 20(f).

3D maximum solid

As the 88-line two-dimensional topology optimisation code [61] is extended to three dimensions in top3d [69], HiTop 2.0 can similarly be used to design 3D structures. HiTop 2.0 presents the user with a 3D GUI, allowing users to specify vertices and faces to use as loads and boundary conditions, then prompting the user to interactively specify the ROI with the drawcuboid function. An example extending HiTop 2.0 with maximum length scale control to three dimensions is illustrated on a 3D pinned wheel, shown in Figure 21(a,b). For the 3D pinned wheel, nelx = 30, nely = 30, and nelz = 30. The 3D pinned wheel with no modifications to the length scale is shown in Figure 22(a). The user is interested in designing the wheel for an anticipated blast or impact loading directed from above. Therefore, they specify the ROI in the upper half of the design domain and apply a maximum feature size, shown in Figure 21(b). The optimized, structurally redundant output is shown in Figure 22(b). The cross-sections in Figure 22(a,b) are taken at the same heights, showing how the material allocation under the maximum length scale in the upper half of the domain in Figure 22(b) compares to the original output in Figure 22(a) for this coarsely meshed 3D pinned wheel example. The redundant wheel with the maximum length scale has concentric rings of material to satisfy the maximum feature size, while the unmodified wheel is completely solid across the cross-section.
Conclusions

HiTop presents an alternative approach to designing topology-optimized structures for minimum compliance while improving the performance of other engineering metrics. Established codes that maximise buckling load or energy absorption outperform HiTop for those specific metrics; however, they incur high computational costs and introduce complex parameters. HiTop therefore presents a quick, simple method for the user to address manufacturability concerns and alternative failure modes by interactively modifying the length scale requirements of the design. This paper has presented an extension of the preliminary HiTop algorithm [25], in which the human engineer is enabled to interactively impose minimum feature size controls on both the solid and void phases of the design as well as to interactively control the maximum solid features. HiTop 2.0 with multiple feature size controls has been demonstrated on benchmark 2D and 3D examples. In addition, HiTop 2.0 has an interactive GUI to ease set-up of the design domain and allows the user to interactively define passive and solid regions. Human-guided alteration of the feature size requirements has been shown to decrease stress concentrations, increase the lowest eigenvalue, and improve the energy absorption of compliance-based topology-optimized structures.

A general limitation of HiTop, and of other optimisation-driven design frameworks that seek to embed human preferences, is that they rely heavily on the knowledge and skills of the user. Should the user make poor choices, such as enforcing large fillet radii in regions where they are not needed to reduce stress concentrations or improve manufacturability, the compliance will increase even if other metrics do not improve. As such, educated design intuition is a prerequisite for successful use of HiTop. In addition, it is possible that the added nonlinearity imposed on the design problem through the multiple filtering operations may lead to design results that are poor-performing local minima. However, this was not observed by the authors when experimenting with the multiple feature size controls presented herein. Should the added nonlinearity lead to poorly performing solutions, the general HiTop feature of obtaining results quickly (all 2D results in this work are obtained within 10 min on a regular laptop) makes it feasible for the design engineer to restart the design process with new parameter settings.

Future work will examine how to increase the flexibility of ROI selection in 3D and will further develop the convergence criteria for when the user is prompted to modify the design. The 3D-compatible ROI functions in MATLAB are confined to sizing and rotation operations on cuboids, precluding users from defining ellipsoidal ROIs that may be more aligned with their design intentions. Also, users are currently prompted after 50 iterations to modify the feature sizes; however, there may exist a more methodical or critical point at which the user's design intentions should be incorporated into HiTop's human-in-the-loop design scheme.

Figure 2. (a) L-bracket designed with traditional design practices, (b) topology-optimized L-bracket, and (c) human-informed topology-optimized L-bracket modified for manufacturability and mechanical performance.
Figure 3. (a) GUI where the user defines the design problem, (b) preliminary design at 50 iterations, (c) Region of Interest (ROI) interactively selected by the user to modify the minimum void feature size requirements locally in response to seeing a plot of the stress distribution for the initial design, (d) intermediate design after an additional 50 iterations, (e) ROI selected by the user to add maximum solid feature controls locally, (f) converged final design, (g) map of minimum void feature sizes, (h) map of minimum solid feature sizes, and (i) map of maximum solid feature sizes that are only applied within the selected ROI from (e).

Figure 4. Overview of filtering operations. A single set of design variables, x, is filtered multiple times to achieve control of the minimum solid and void features and, if selected, maximum solid features. The multiple filters are combined into a single physical density x_e.

Figure 5. Plot of the boxcar weighting functions used herein to let the magnitudes of a single set of design variables, x, allow for control of multiple feature sizes, with s = 0.25, v = 0.25, m = 0.26, a = 30, and a_m = 30. When w_p(x) = 1, the given phase, p, is active, while w_p(x) = 0 indicates the phase is inactive.

HiTop 2.0 is extremely flexible and tailored to the user's design intentions and concerns. This paper provides five illustrative examples in 2D and one 3D example where the user can improve alternative engineering performance metrics or manufacturability while solving the minimum compliance problem. The 2D examples are as follows: minimum void, minimum solid, minimum solid and void, maximum solid, and an interactive implementation of passive regions. A MATLAB script is used to convert the element density matrix into an .stl file that is post-processed in Abaqus, a finite element modelling software [67].

Figure 6. Continuation of boxcar weighting functions: (a) the weighting functions are initially relatively relaxed, and (b) are tightened after 50 iterations to enforce a stricter requirement of the feature size control. To avoid convergence to a poor-performing local minimum, (c) the weighting functions are again relaxed for 50 iterations before again being tightened.

Figure 7. Seeding for maximum feature size control: (a) ROI selected for implementation of maximum feature size control, (b) design variables in the ROI are seeded with x_e = 0.8 values, (c) mid-optimisation design, and (d) converged result.

Figure 9. (a) Optimized L-bracket without modifications, (b) increased minimum void feature size ROI, (c) map of minimum void feature sizes, and (d) HiTop 2.0 optimized L-bracket.

Figure 12. (a) Optimized MBB beam without modifications, (b) increased minimum solid feature size ROI, (c) map of minimum solid feature sizes, and (d) HiTop 2.0 optimized MBB beam with interactively modified solid features.

Figure 13. Lowest eigenmode and eigenvalue of (a) the optimized MBB beam, and (b) the HiTop 2.0 optimized MBB beam. The plotted displacement is a normalised, dimensionless value that shows the most likely failure location of the given eigenmode [67].

Figure 15. (a) Optimized U-bracket without modifications, (b) increased minimum void feature size ROI, (c) map of minimum void feature sizes, (d) increased minimum solid feature size ROI, (e) map of minimum solid feature sizes, and (f) HiTop 2.0 optimized U-bracket with interactively modified solid and void feature size controls.
Figure 18. (a) Optimized cantilever without modifications, (b) defined maximum solid feature size within the ROI, (c) map of maximum solid feature sizes where applied, and (d) HiTop 2.0 optimized design with interactively modified maximum solid feature size control.

Figure 19. Force vs. displacement of the traditional and HiTop 2.0 modified designs.

Figure 20. (a) Optimized cantilever beam without modifications, (b) user-specified passive void region, (c) map of the void passive region (hole), (d) increased minimum solid feature size ROI, (e) map of minimum solid feature sizes, and (f) HiTop 2.0 optimized cantilever with interactively imposed passive void region and modified minimum solid feature sizes.

Figure 21. (a) 3D pinned wheel design problem, and (b) 3D pinned wheel design problem with maximum length scale imposed in the upper half of the domain.
Energy Requirements for Unfolding and Membrane Translocation of Precursor Proteins during Import into Mitochondria

ATP is involved in conferring transport competence to numerous mitochondrial precursor proteins in the cytosol. Unfolded precursor proteins were found not to require ATP for import into mitochondria, suggesting a role of ATP in the unfolding of precursors. Here we report the unexpected finding that a hybrid protein containing the tightly folded passenger protein dihydrofolate reductase becomes unfolded and specifically translocated across the mitochondrial membranes independently of added ATP. Moreover, interaction of the precursor with the mitochondrial receptor components does not require ATP. The results suggest that ATP is not involved in the actual process of unfolding during membrane translocation of precursors. ATP rather appears to be necessary for preventing the formation of improper structures of precursors in the cytosol and for folding of imported polypeptides on (and release from) chaperone-like molecules in the mitochondrial matrix.

Folding and unfolding of precursor proteins during membrane translocation are essential reactions of the complex pathway proteins take to traverse biological membranes. So far very little is known about the energetic aspects of these reactions. Transport of precursor proteins into various cell organelles was found to depend on the addition of ATP (1-3). Studies on protein transport into mitochondria and the endoplasmic reticulum suggested that ATP is involved in conferring a transport-competent conformation to the precursor proteins in the cytosol (2, 4-6). Incompletely synthesized and thus loosely folded mitochondrial polypeptide chains required less ATP for import than the corresponding completed precursor proteins (7). Artificially unfolded precursor forms did not depend on ATP for import, whereas import of the (partially folded) authentic precursors required ATP (8, 9). As membrane translocation of mitochondrial precursor proteins requires a loosely folded conformation of the polypeptide chain (10-13), it was assumed that ATP participates in modulating the folding state of precursor proteins. Two major possible roles of ATP can be envisaged. ATP could be directly involved in the process of unfolding of precursor proteins, e.g. via ATP-dependent cytosolic cofactors (14). Alternatively, ATP (and cytosolic cofactors) could be required to prevent misfolding or aggregation of precursor proteins that cannot be reversed by the membrane-associated import machinery of mitochondria. In the first case, the levels of ATP required for import should correlate with the degree of folding of a polypeptide chain, i.e. a precursor protein with a stably folded structure should strongly depend on ATP for import. In the second case, a precursor protein with a stably and correctly folded domain may be independent of ATP for import. To distinguish between these possibilities, we investigated mitochondrial import of several hybrid proteins containing the correctly and tightly folded dihydrofolate reductase (DHFR) polypeptide at the carboxyl terminus. We found that import of one hybrid protein with a relatively short amino-terminal portion (that was derived from a mitochondrial precursor protein) was not inhibited by removal of ATP. Specifically, the interaction with the recently discovered import receptor MOM19 (15), unfolding, and membrane translocation of the precursor protein did not require ATP.
We conclude that ATP is not directly involved in the transfer of precursor proteins across the mitochondrial membranes. ATP may rather be involved in the maintenance of transport competence of those precursor proteins that might form improper structures in the cytosol.

RESULTS

ATP Dependence of Mitochondrial Import of Cytochrome b2-DHFR Hybrid Proteins—We investigated the ATP dependence of import into mitochondria of three hybrid proteins composed of portions of the precursor of cytochrome b2 (amino terminus) and the entire DHFR (carboxyl terminus). The hybrid proteins contained the 167, 331, or 561 amino-terminal amino acid residues of the cytochrome b2 precursor polypeptide (Fig. 1A). The cell-free import system, isolated N. crassa mitochondria and rabbit reticulocyte lysate containing the [35S]methionine-labeled precursor proteins, was depleted of ATP by preincubation with apyrase. Oligomycin was included to prevent synthesis of ATP by the F0F1-ATPase. Pretreatment with apyrase, an ATPase and ADPase from potato, had been found to inhibit mitochondrial import of many precursor proteins, e.g. F1-ATPase subunit β (F1β), and this inhibition could be reversed by readdition of ATP but not by the addition of nonhydrolyzable ATP analogues (19, 23, 27, 28). Import of the hybrid protein b2(1-561)-DHFR was inhibited to the same extent as import of F1β by depletion of ATP, while inhibition of import of b2(1-331)-DHFR was less pronounced (Fig. 1B). Import of b2(1-167)-DHFR was practically not affected by the pretreatment with apyrase (Fig. 1B). It appeared that the shorter the cytochrome b2 portion of the hybrid protein was, the less ATP was required for import. We recently reported that unfolding of the DHFR domain of b2(1-167)-DHFR on the mitochondrial surface was a prerequisite for its translocation into mitochondria (13). This raised the interesting possibility that unfolding and membrane translocation of the DHFR part did not require ATP.

Unfolding and Membrane Translocation of DHFR Are Independent of Added ATP—We first investigated whether, under the conditions used, the DHFR domain of the hybrid protein b2(1-167)-DHFR was tightly folded. This can be assessed by probing its resistance to relatively high concentrations of proteinase K (13). Treatment of the precursor of b2(1-167)-DHFR with proteinase K produced a fragment that was slightly larger than authentic DHFR (13) and was resistant to all concentrations of proteinase K tested (Fig. 2A). Similarly, when b2(1-167)-DHFR was accumulated in contact sites of outer and inner membranes (see below) by performing import at a low temperature (8 °C), treatment with proteinase K generated a DHFR-containing fragment with high protease resistance (Fig. 2B).

Translocation of b2(1-167)-DHFR across the mitochondrial membranes can be experimentally divided into two steps (13). First, the cytochrome b2 portion is inserted into contact sites (10); the presequence is proteolytically processed by the processing peptidase in the mitochondrial matrix (29) while the DHFR domain remains on the cytosolic side. In a second step, the DHFR is unfolded on the mitochondrial surface and translocated across the membranes. In the experiment described in Fig. 3, the ATP dependence of these two import steps of b2(1-167)-DHFR was analyzed.
Translocation of b2(1-167)-DHFR into contact sites performed at 8 °C was only slightly inhibited by the pretreatment with apyrase (Fig. 3).

Receptor MOM19—Import of b2(1-167)-DHFR could be independent of added ATP, in contrast to many other precursor proteins, because it might use a different mitochondrial import site which allows ATP-independent import. To exclude this possibility we asked whether b2(1-167)-DHFR uses the import receptor MOM19, which was recently shown to function as a receptor for most mitochondrial precursor proteins studied, including the precursor of F1β (15). IgGs directed against MOM19 were bound to mitochondria and import of b2(1-167)-DHFR was tested. Fig. 4 shows that IgGs against MOM19 strongly inhibited import, whereas control IgGs, directed against the major outer membrane protein porin or derived from preimmune sera, did not inhibit. We conclude that b2(1-167)-DHFR employs the receptor MOM19, leading to the interesting notion that binding of a precursor to and release from MOM19 appear to be ATP-independent. The conclusion that b2(1-167)-DHFR uses the same mitochondrial import site as ATP-dependent precursor proteins is further supported by the finding that b2(1-167)-DHFR accumulated in contact sites inhibits import of other precursor proteins such as F1β, indicating that the precursors use the same translocation contact sites (13).

In summary, a precursor protein with a tightly folded carboxyl-terminal domain can be imported into mitochondria although the ATP levels were drastically reduced. Moreover, unfolding and membrane translocation of the DHFR portion itself was found to be independent of added ATP. Although involvement of bound ATP (that may not be hydrolyzed by apyrase) cannot be excluded, the behavior of b2(1-167)-DHFR in the in vitro import reaction is in clear contrast to that of several other mitochondrial precursor proteins, where the degree of unfolding required for import appeared to correlate with the amounts of ATP necessary (7, 8, 27). In all likelihood, ATP is thus not generally involved in receptor binding, unfolding, and membrane translocation of proteins during import into mitochondria. This conclusion is supported by results that were previously obtained with import of the precursor of F0-ATPase subunit 9 (F09). The precursor of F09 contains a stably folded domain (32) and was found to be efficiently imported into mitochondria at very low levels of ATP (27), indicating that targeting, unfolding, and membrane translocation of this authentic mitochondrial precursor protein may also be independent of ATP.

(Figure legend fragment: A, transport into contact sites. The experiment was performed as described in the legend of Fig. 1B.)

(Figure legend fragment: The pretreatment with apyrase and the addition of antimycin A and oligomycin were omitted; the mitochondria were preincubated with the indicated IgGs as described (15). The amount of i-b2(1-167)-DHFR imported into mitochondria that had been pretreated with trypsin (15 μg/ml) (bypass import (50)), representing about 15% of the total import, was subtracted (15). Monospecific IgGs against more than 15 other outer membrane proteins of N. crassa mitochondria (15) did not inhibit import (data not shown). A similar result was obtained when the import system was depleted of ATP (as described in the legend of Fig. 1B) prior to the preincubation with IgGs. Furthermore, the amount of bypass import did not depend on the levels of ATP in the import reaction.)
The observation that a fusion protein between F09 and DHFR required ATP for import (27) indicates that ATP-independent import is a property specific to the complete precursor and is not simply caused by the presence of DHFR in a precursor, similar to the results obtained with the longer b2-DHFR fusion proteins (Fig. 1B). The F09-DHFR fusion protein might, for example, possess an unfavorable conformation and therefore depend on the assistance of ATP and cytosolic cofactors for membrane translocation.

DISCUSSION

We describe the seemingly paradoxical situation that both tightly folded precursor proteins ((27), this study) and highly unfolded precursor proteins (8, 9) do not require ATP for import into mitochondria. Moreover, both a stably folded precursor protein (11) and an unfolded precursor protein (8, 33) were found not to depend on cytosolic cofactors for import. On the other hand, most mitochondrial precursor proteins do depend on both ATP and cytosolic cofactors, including 70-kDa stress proteins (hsp70s) (1, 2, 7, 27, 34). How can this apparent discrepancy be explained? Most precursor proteins obviously do not fold to their stable mature forms in the cytosol (summarized in Ref. 6). At the same time, they are not fully unfolded (8, 27). We argue that, in the absence of ATP and cofactors, intra- or intermolecular interactions lock the precursors in conformational states that cannot be resolved so as to allow passage through the membranes. ATP and cytosolic cofactors seem to be required to prevent or reverse such improper folding and interactions of proteins. Artificial unfolding of precursors allows a rapid import because the usually rate-limiting unfolding reaction is thereby circumvented (8, 9). The transport-competent state of such precursors, and consequently their independence of ATP and cofactors, has only a short lifetime. This is in agreement with the expected formation of improper structures in the absence of import into mitochondria. Tightly and correctly folded domains such as the DHFR part most likely do not undergo unspecific interactions that might interfere with import. In support of this, we found that the ATP requirement for import of b2-DHFR fusion proteins became higher the longer the amino-terminal portion was, i.e. the higher the chance of improper folding or interaction of the cytochrome b2 part was.

Evidence for a role of ATP in modulating the tertiary and quaternary structures of cytosolic precursors in the absence of mitochondria is also provided by the following observations. The conformational state (as assessed by the sensitivity to added proteases) of precursor proteins synthesized in rabbit reticulocyte lysate was dependent on the levels of ATP (27). A mutant form of F1β lacking an internal oligomer-forming sequence required less ATP for import than the authentic precursor that was present in an oligomeric complex in the cytosol (35).

What is ATP then doing? It is suggested that cofactors bind to folding intermediates that expose certain critical features such as hydrophobic (36) or certain hydrophilic segments (37). These complexes may not be competent for translocation because the tight interactions cannot be relieved by the unfolding process during translocation. ATP would be required to allow the dissociation of precursors and cofactors in the course of translocation. Folded proteins would not bind cofactors and thus would not need ATP for releasing them.
This view is consistent with the general role thought to be played by 70-kDa stress proteins, which act in an ATP-dependent manner, namely binding to not fully folded proteins in order to prevent the formation of improper conformations or interactions (36, 38, 39). As the energy requirement for complete unfolding of many proteins is as low as 5-10 kcal/mol (40), it is well conceivable that the unfolding of correctly folded polypeptide chains can be performed by the mitochondrial import machinery without the need for ATP as an external energy source. Moreover, we conclude that interaction of precursor proteins with the membrane-bound components of the mitochondrial import machinery, such as binding to and release from the receptor MOM19 (15) as well as translocation into and through contact sites (13), does not require the addition of ATP.

The observations made here cast new light on a number of results reported previously. A hybrid protein between the presequence of cytochrome oxidase subunit IV and DHFR was unfolded on the mitochondrial surface in the absence of added ATP (41). However, one of the further import steps of this hybrid protein, i.e. membrane translocation, proteolytic processing, or (re)folding in the matrix, required ATP in the mitochondrial matrix (41, 42). This led to the conclusion that the ATP-requiring step assumed to occur in the cytosol would in fact take place in the matrix and may be necessary for membrane translocation. In view of the results reported here, it seems possible that the subunit IV-DHFR hybrid protein bypassed the ATP-dependent mechanism in the cytosol due to a correctly folded structure. With regard to the ATP requirement in the matrix, we found for a number of precursor proteins imported into the matrix that interaction with the heat shock protein hsp60 in an ATP-dependent manner represents an essential step for (re)folding and assembly of the proteins and can affect the rates of proteolytic processing of precursors (9, 43). The ATP-dependent step of import of the subunit IV-DHFR hybrid protein thus may well be related to interaction with the "chaperonin" hsp60 (38). The hybrid protein b2(1-167)-DHFR indeed interacts with hsp60 (44). Ssc1p binds to precursor proteins in transit through contact sites and thereby supports the translocation of the precursors; Ssc1p then apparently mediates the transfer of the precursors to hsp60. The binding of precursors to the chaperone Ssc1p in the matrix could provide the energy for membrane translocation. ATP hydrolysis might be required for release of the precursors from Ssc1p (36, 38, 39), setting the chaperone free for new rounds of import. It is unknown whether ATP that could be tightly bound to Ssc1p (and would thus not be removed by the treatment with apyrase) is already needed for binding of precursors to Ssc1p.

We propose a model (Table I) in which at least two ATP-dependent steps exist in the import and assembly of mitochondrial precursor proteins: (i) maintenance or conferring of a transport-competent conformation on the cytosolic side; and (ii) intramitochondrial (re)folding and sorting of precursor proteins, including "recycling" of chaperone-like components in the matrix. Both of these ATP-dependent reactions can be bypassed: the first step by artificially unfolded precursor proteins (8, 9) or by tightly folded precursor proteins (this study); the second step by the ADP/ATP carrier (27), which does not require functional hsp60 for intramitochondrial sorting and assembly (45).
The requirement for a certain factor (ATP) at multiple steps illustrates the complexity of mitochondrial protein import and cautions against the use of minimal models.
Design and implementation of neural network based conditions for the CMS Level-1 Global Trigger upgrade for the HL-LHC

The CMS detector will be upgraded to maintain, or even improve, the physics acceptance under the harsh data taking conditions foreseen during the High-Luminosity LHC operations. In particular, the trigger system (Level-1 and High Level Triggers) will be completely redesigned to utilize detailed information from sub-detectors at the bunch crossing rate: the upgraded Global Trigger will use high-precision trigger objects to provide the Level-1 decision. Besides cut-based algorithms, novel machine-learning-based algorithms will also be included in the Global Trigger to achieve a higher selection efficiency and detect unexpected signals. Implementation of these novel algorithms is presented, focusing on how the neural network models can be optimized to ensure a feasible hardware implementation. The performance and resource usage of the optimized neural network models are discussed in detail.

Introduction

The new CMS trigger system for the High-Luminosity LHC upgrade [1] will exploit detailed information from the calorimeter, muon and tracker subsystems at the bunch crossing rate. The final stage of the Level-1 Trigger apparatus, the Global Trigger (GT), will receive high-precision trigger objects from the upstream systems. Implemented in modern Field Programmable Gate Arrays (FPGA), it will determine the Level-1 decision based on a trigger menu consisting of more than 1000 trigger algorithms. The current system [2] relies on cut-based algorithms that act on specific combinations of reconstructed particle properties. To reach higher selection efficiency and to select unexpected signals, the upgraded GT will also include neural-network-based conditions. Implementing these neural-network-based conditions in the GT algorithm chain requires meeting stringent requirements in terms of latency and resources. The upgrade targets a total latency of 1 μs (40 Bunch Crossings, BX) for the entire GT; three quarters of it is used by high speed serial links, demultiplexers, distribution and the Final-OR stage. Given that neural networks (NN) are typically resource intensive, extensive optimization is required during and after training to ensure they can be integrated alongside the cut-based algorithms while meeting the target latency of ∼10 BXs. Two different flavours of NNs are considered: deep binary classifiers and deep auto-encoders. To reduce the models' resource usage and latency, multiple optimizations have been applied. Some of these optimizations, such as synapse pruning, hyper-parameter quantization and precision tuning, can be performed without completely redesigning the model. However, others require a new model to be designed and trained from scratch. In this work a technique known as knowledge distillation was used to further reduce the resource usage of the final NN model.
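As a rough illustration of the knowledge-distillation idea mentioned above, the Python sketch below (plain Keras) trains a compact "student" to regress the MSE-based anomaly score of a "teacher" auto-encoder. The layer widths, optimiser settings and the way random events are mixed in are assumptions for illustration only, not the configuration deployed in the Global Trigger.

```python
import numpy as np
import tensorflow as tf

def anomaly_score(autoencoder, x):
    """Mean squared reconstruction error of the auto-encoder, used as the anomaly score."""
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    x_hat = autoencoder(x, training=False)
    return tf.reduce_mean(tf.square(x - x_hat), axis=-1)

def distill_student(teacher, n_inputs, x_background, x_random, epochs=20):
    """Train a compact 'student' regressor to reproduce the teacher's anomaly score.
    Architecture and training settings are illustrative assumptions."""
    x_train = np.concatenate([x_background, x_random], axis=0).astype("float32")
    y_train = anomaly_score(teacher, x_train).numpy()

    student = tf.keras.Sequential([
        tf.keras.Input(shape=(n_inputs,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1, activation="linear"),   # predicted anomaly score
    ])
    student.compile(optimizer="adam", loss="mse")
    student.fit(x_train, y_train, epochs=epochs, batch_size=1024, verbose=0)
    return student
```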
Neural network model development

Deep binary classifiers and a deep auto-encoder are studied. The primary purpose of the deep binary classifiers is to discern specific signal signatures, while the deep auto-encoders are designed to learn the unlabeled data and flag anything that deviates from it as anomalous. The latter rely on an unsupervised learning technique and, in this particular case, aim to learn an efficient encoding of the features of the well-understood physics scenarios ("background"). They encode the input data into a lower-dimensional representation (latent space) and then decode it back to its original form, attempting to minimize the reconstruction error. As a result, any signature which differs substantially from the background will be reconstructed poorly. The distance between the input and the reconstructed event is then used as an anomaly score. To compare the performances of deep binary classifiers and auto-encoders, we consider three different signal signatures denoted as A, B and C. Each unique signal signature is associated with its own trained binary classifier, whereas a single auto-encoder is trained using background events. Binary classifiers are trained with a mixture of signal and background events using supervised learning.

Hardware used for real-time inference in the Level-1 Trigger has limited computational capacity due to size and latency constraints. Incorporating resource-intensive models without a loss in performance poses a great challenge, so significant model compression is necessary. Hyper-parameter and input/output precision quantization using qkeras [3] is employed to reduce the complexity of the multiplications. Additionally, synapse pruning is implemented through the TensorFlow model optimization toolkit [4]. These optimization processes occur during training and result in a reduction in model size by more than threefold compared to the uncompressed model implementation [5]. Despite the aforementioned compression methods, the auto-encoders typically remain too large to be implemented in FPGAs. To tackle this challenge, one more compression technique is harnessed: a basic implementation of knowledge distillation [6]. First, a bigger auto-encoder (referred to as the "teacher") is trained with only background events. A secondary, more compact model (referred to as the "student") is then trained to reproduce the teacher's anomaly score using the background events and random samples. The anomaly score is computed as the Mean Squared Error (MSE). Figure 1 illustrates the two training approaches.

Data pre-processing is performed with a normalization layer: the training dataset's variables are re-scaled to have a mean equal to zero and a standard deviation equal to one. The same re-scaling parameters are applied during training and in the hardware inference. In table 1, the input variables are listed for the two model topologies.
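A minimal sketch of the quantization-aware, pruned classifier construction described above is shown below (Python, using qkeras and the TensorFlow model optimization toolkit). The bit widths, layer width and pruning schedule are illustrative assumptions rather than the settings used for the Global Trigger menu.

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot
from qkeras import QDense, QActivation, quantized_bits

def make_pruned_quantized_classifier(n_inputs, bits=8, int_bits=0, sparsity=0.5):
    """Quantization-aware binary classifier with magnitude-based synapse pruning.
    All hyper-parameters here are illustrative, not the deployed GT settings."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(n_inputs,)),
        QDense(64,
               kernel_quantizer=quantized_bits(bits, int_bits, alpha=1),
               bias_quantizer=quantized_bits(bits, int_bits, alpha=1)),
        QActivation(f"quantized_relu({bits})"),
        # higher precision on the output layer, as favoured in the text
        QDense(1,
               kernel_quantizer=quantized_bits(bits + 4, int_bits, alpha=1),
               bias_quantizer=quantized_bits(bits + 4, int_bits, alpha=1)),
        tf.keras.layers.Activation("sigmoid"),
    ])
    pruned = tfmot.sparsity.keras.prune_low_magnitude(
        model,
        pruning_schedule=tfmot.sparsity.keras.ConstantSparsity(sparsity, begin_step=0),
    )
    pruned.compile(optimizer="adam", loss="binary_crossentropy")
    # Note: training requires the tfmot.sparsity.keras.UpdatePruningStep() callback.
    return pruned
```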
The variables pT, η and φ are the representation of the candidate particle's momentum [7]. Since the usage of the variables does not result in a notable increase in the signal efficiency in the case of the binary classifiers, they are ignored during training. Binary classifiers feature a single hidden layer with 64 nodes with ReLU activations; the output is a single node with a sigmoid activation function. The auto-encoder (teacher) features multiple hidden layers for the encoder part, a latent space with 7 nodes and a decoder that inverts the encoder's architecture. The student is a deep neural network with one output (the anomaly score); ReLU activations are used in the hidden layers and a linear activation at the output. To translate NN models into firmware, hls4ml [5], developed by the CMS community, is used. It translates high-level description models (Keras/QKeras) into synthesizable C code, which is then translated by the AMD VITIS High Level Synthesis compiler [8] into a VHDL module. This results in a block for integration into an FPGA design, and eventually firmware can be built using AMD VIVADO [9]. In table 2, the signal selection efficiency at a given fixed rate for the three reference samples is compared between the binary classifiers and the auto-encoder. Keras models are trained with single-precision floating-point (FP32) without synapse pruning. In the QKeras and hls4ml models, different quantizations are applied for the hidden and output layers, favoring higher bit precision in the output layer for enhanced performance. Hardware-deployable models usually lose performance when quantization (fixed-point with 8/6 bits) and pruning (50%) are applied. In this work, the Keras-to-hls4ml porting for binary classifiers incurs less than a 6% signal efficiency loss, which is small considering the reduction in the model size (see section 4). The results presented above demonstrate that the auto-encoder is sensitive to the signal samples, albeit with reduced efficiency compared to the binary classifiers, even though it was not trained with signal events.

Interface between the Global Trigger and neural networks

As was mentioned in section 2, a pre-processing step is applied at the inputs of the NN models, where the re-scaling parameters have to be passed to the hardware. The mathematical operation is described in eq. (3.1), x' = (x − μ)/σ, where the two passed parameters are the mean (μ) and the standard deviation (σ).

Implementation of neural networks in the Global Trigger hardware

Each development step, if not managed properly, could lead to timing violations in the final hardware implementation. The hls4ml step inherits all the optimizations described in section 2. During VITIS HLS compilation, the target clock frequency for the auto-encoder (student) was increased to 300 MHz to avoid possible timing violations, while 240 MHz was kept for the binary classifiers. The clock uncertainty was increased to 33% for both. To relax any possible routing congestion within the NN block, the input vector is registered twice to allow the place and route process to focus on the NN itself rather than its external connections [12]. Crossing from 240 MHz back to 480 MHz at the output stage requires multi-cycle path constraints. Finally, timing constraints were met while enabling most of the aggressive implementation strategies in VIVADO [9]. A breakdown of the resource usage and latency is given in table 3.
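For context, the hls4ml conversion flow referenced above can be sketched as follows. This is a minimal, illustrative flow: the stand-in model, device string, clock period, precision granularity and output directory are placeholders, not the actual Global Trigger build settings.

```python
import tensorflow as tf
import hls4ml

# A stand-in Keras model; in practice the trained (Q)Keras classifier or the
# distilled student auto-encoder would be converted instead.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(57,)),                  # illustrative input width
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

config = hls4ml.utils.config_from_keras_model(model, granularity="name")

hls_model = hls4ml.converters.convert_from_keras_model(
    model,
    hls_config=config,
    backend="Vitis",                   # targets the AMD Vitis HLS compiler
    part="xcvu9p-flga2104-2-e",        # placeholder VU9P part string
    clock_period=4.17,                 # ~240 MHz, as quoted for the classifiers
    output_dir="gt_nn_prj",
)
hls_model.compile()    # builds the bit-accurate C-simulation model
# hls_model.build()    # would launch Vitis HLS synthesis (time-consuming)
```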
Post-Training Quantization (PTQ) was used for the uncompressed models, while Quantization-Aware Training (QAT) was used for the compressed ones. The latency shown for the re-scaler modules comprises the normalization and the clock domain crossing (CDC) logic. The prototype firmware is implemented on a Serenity ATCA board [13] equipped with a Xilinx VU9P FPGA with 3 Super Logic Regions (SLRs). The prototype design features one auto-encoder (student) and the three binary classifiers, each replicated in all SLRs, for a total of 3 auto-encoders (student) and 9 binary classifiers (figure 4). First, data are read from the data link buffers, then they are demultiplexed and distributed across the whole chip, and finally they are injected into the NN interfaces. Algorithm bits are written to the output channels and sent to the Final-OR board, where monitoring, pre-scaling and masking take place [10].

Summary

The CMS Global Level-1 Trigger for Phase-2 features novel algorithms based on machine learning. In this study, we utilized quantization-aware training, pruning and knowledge distillation to compress the neural network models for implementation in the FPGA fabric. Binary classifiers offer better performance in discerning known signal signatures, with low latency and low resource usage with respect to auto-encoders, but a distinct model is needed for each signal type, requiring prior knowledge to generate the requisite dataset. Conversely, a single trained auto-encoder can be employed to detect known and unknown signatures, utilizing solely background events for its training. Furthermore, there is a notable difference in the final model sizes between the two approaches: despite the inclusion of knowledge distillation, the auto-encoder (student) is approximately ten times larger than the binary classifier. Meeting timing constraints in this intricate architecture necessitated particular coding techniques [12] to guide the VIVADO implementation algorithm. Deep neural network models have been developed, evaluated and successfully tested on a Serenity prototype board.

Figure 1. Left: in supervised learning a model is trained knowing the output labels. Right: auto-encoders rely on unsupervised learning, where the model is trained with only background events. With the knowledge distillation technique the student model is trained to behave like the teacher model.

Table 1. Input variables of the two model topologies.

Table 2. Relative performance of the auto-encoder with respect to the binary classifiers.

Table 3. Resource usage breakdown of the relevant modules; uncompressed vs. compressed synthesizable models' size comparison. On the bottom, the resource usage of the re-scaler modules is shown.
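The fixed-rate working point behind the efficiency comparisons of table 2 can be illustrated with a few lines of NumPy. This toy ignores the actual trigger menu and bunch-crossing rate and simply converts an allowed background pass-fraction into a score threshold; it is a sketch, not the rate computation used in the study.

```python
import numpy as np

def efficiency_at_fixed_rate(bkg_scores, sig_scores, bkg_pass_fraction=1e-4):
    """Choose the score threshold that lets a fixed fraction of background
    events pass (a proxy for a fixed trigger rate) and return the signal
    efficiency at that threshold."""
    threshold = np.quantile(bkg_scores, 1.0 - bkg_pass_fraction)
    return float(np.mean(sig_scores > threshold))

# Toy example with stand-in score distributions.
rng = np.random.default_rng(0)
bkg = rng.normal(0.0, 1.0, size=1_000_000)   # background anomaly scores / classifier outputs
sig = rng.normal(2.5, 1.0, size=10_000)      # signal scores, shifted upwards
print(efficiency_at_fixed_rate(bkg, sig))
```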
From multiple unitarity cuts to the coproduct of Feynman integrals

We develop techniques for computing and analyzing multiple unitarity cuts of Feynman integrals, and reconstructing the integral from these cuts. We study the relations among unitarity cuts of a Feynman integral computed via diagrammatic cutting rules, the discontinuity across the corresponding branch cut, and the coproduct of the integral. For single unitarity cuts, these relations are familiar. Here we show that they can be generalized to sequences of unitarity cuts in different channels. Using concrete one- and two-loop scalar integral examples we demonstrate that it is possible to reconstruct a Feynman integral from either single or double unitarity cuts. Our results offer insight into the analytic structure of Feynman integrals as well as a new approach to computing them.

1 Introduction

The precise determination of physical observables in quantum field theory involves computing multiloop Feynman integrals. The difficulty of these integrals has led to their extensive study and the development of various specialized integration techniques. One approach to computing Feynman integrals has been to analyze the discontinuities across their branch cuts. Like the integrals themselves, their discontinuities can be computed by diagrammatic rules [1-3] in which diagrams are separated into two parts, with the intermediate particles at the interface of the two components restricted to their mass shells, resulting in the so-called cut integrals. This on-shell restriction can simplify the integration, and its result, considerably. Traditionally, the integral might then be reconstructed directly from one of its discontinuities by a dispersion relation [1-5]. Alternatively, modern unitarity methods [6-13] make use of discontinuities to constrain an integral through its expansion in a basis of Feynman integrals.

A large class of Feynman integrals can be expressed in terms of transcendental functions called multiple polylogarithms, which are defined by certain iterated integrals and include classical polylogarithms as a special case. Multiple polylogarithms, and iterated integrals in general, carry a lot of unexpected algebraic structure. In particular, they form a Hopf algebra [14,15], which is a natural tool to capture discontinuities. By now, there is considerable evidence that the coproduct of a Feynman integral of transcendental weight n, with massless propagators, satisfies a condition known as the first entry condition [16]: the terms in the coproduct of transcendental weight (1, n − 1) can be written in the form

\Delta_{1,n-1} F = \sum_i \log(-s_i) \otimes f_{s_i} , \qquad (1.1)

where s_i ranges over all Mandelstam invariants, and f_{s_i} is the discontinuity of the integral as a function of the variable s_i. One might wonder whether the deeper structure of the Hopf algebra contains useful information about a fuller range of discontinuities, and perhaps even points to techniques for reconstructing a full integral from its discontinuities. In this paper we present evidence that it does. We develop techniques to evaluate the cut integral explicitly, and we verify in several examples that the functions f_{s_i} in eq. (1.1) are indeed given by sums of cut integrals. We emphasize that we work in real kinematics, which allows us to use explicit real phase-space parametrizations. Furthermore, we see that even if the original Feynman integral is finite in D = 4 dimensions, it is sometimes necessary to regularize the corresponding cut integrals.
Indeed, although individual cut diagrams can be infrared divergent, their sum is finite, through a mechanism similar to the cancellation of infrared divergences in a total cross section. We use dimensional regularization. While it might not seem very surprising that the functions f s i are related to cut integrals, the question of whether the coproduct of the cuts themselves allows for a similar interpretation in terms of generalized cuts is more intriguing. We analyze this question in several examples at one loop, and the three-point ladder at two loops. In order to do so, we first extend the diagrammatic cutting rules of ref. [2,3], which have only been formulated for single unitarity cuts so far, to allow for sequential unitarity cuts in multiple channels. We observe that several new features arise that were not present in the case of single unitarity cuts, and that we can obtain consistent results even in this case by restricting the computation to real kinematics, which implies in particular that diagrams with onshell massless three-point vertices must vanish in dimensional regularization. Furthermore, we see that beyond single unitarity cuts, the results depend crucially on the phase-space boundaries imposed by the kinematic region where each cut diagram is computed, and not only on the set of cut propagators. Equipped with this new set of rules, we show that we can correctly reproduce the relevant components of the iterated coproduct of these specific integrals, thus strengthening our hope of a deeper connection between a Feynman integral and its cuts and coproduct. The paper is organized as follows. In section 2, we give a brief review of multiple polylogarithms and their Hopf algebra, and we discuss the class of pure transcendental functions that we expect to be able to analyze. In section 3, we present definitions of the three types of discontinuities that we consider: Disc is the difference in value as a function crosses its branch cut; Cut is the value obtained by cutting diagrams into parts; and δ is a function identified algebraically inside the coproduct. Each of these discontinuities is defined not just for a single cut, but for sequences of unitarity cuts in different Mandelstam invariants or related variables. We close this section with statements of our two conjectured relations, one between Cut and Disc, and one between Disc and δ. By combining the two relations, we claim that diagrammatic cuts correspond to functions within the coproduct. In section 4, we give examples of our relations at one-loop, including a presentation of our technique for evaluating cut integrals. Our main example is the three-mass triangle, but we include the four-mass box and the two-mass-hard box as well and discuss their different properties. Sections 5 and 6 contain the main example of this paper, namely the two-loop three-point ladder integral with massless propagators. In section 5, we compute unitarity cuts and verify our relations. In section 6, we compute sequences of two unitarity cuts, explain how to make our relations concrete, and verify them; we then consider sequences of three unitarity cuts and explain why they vanish. In section 7, we review dispersion relations and we argue that the information they contain is the same as the information contained in specific entries of the coproduct: we show that the symbols of the ladder (and the one-loop triangle) can be reconstructed from even limited knowledge of its cuts, using the integrability condition. 
In section 8, we close with a discussion of outstanding issues and suggestions for future study. Appendix A summarizes our key conventions for evaluating Feynman diagrams and cut diagrams. Appendix B collects results for one-loop diagrams, cut and uncut, that are used throughout the paper. In appendix C we give explicit results for single unitarity cuts of the two-loop ladder. Finally, in appendix D we summarize the calculation for two sets of double cuts of the two-loop ladder, and give explicit expressions for their result.

The Hopf algebra of multiple polylogarithms

Feynman integrals in dimensional regularization usually evaluate to transcendental functions whose branch cut structure reflects the branch cuts of the loop integral. Although it is known that generic Feynman integrals can involve elliptic functions [17-22], large classes of Feynman integrals can be expressed through the classical logarithm and polylogarithm functions,

\log z = \int_1^z \frac{dt}{t} , \qquad \mathrm{Li}_n(z) = \int_0^z \frac{dt}{t}\, \mathrm{Li}_{n-1}(t) , \qquad \mathrm{Li}_1(z) = -\log(1-z) , \qquad (2.1)

and generalizations thereof (see, e.g., refs. [23-29], and references therein). In the following we will concentrate exclusively on integrals that can be expressed entirely through polylogarithmic functions. Of special interest in this context are the so-called multiple polylogarithms, and in the rest of this section we review some of their mathematical properties.

Multiple polylogarithms

Multiple polylogarithms are defined by the iterated integral [15,30]

G(a_1, \ldots, a_n; z) = \int_0^z \frac{dt}{t - a_1}\, G(a_2, \ldots, a_n; t) , \qquad (2.2)

with a_i, z \in \mathbb{C}. In the special case where all the a_i's are zero, we define, using the obvious vector notation \vec{a}_n = (a, \ldots, a) with n entries,

G(\vec{0}_n; z) = \frac{1}{n!} \log^n z . \qquad (2.3)

The number n of integrations in eq. (2.2), or equivalently the number of a_i's, is called the weight of the multiple polylogarithm. In the following we denote by A the \mathbb{Q}-vector space spanned by all multiple polylogarithms. In addition, A can be turned into an algebra. Indeed, iterated integrals form a shuffle algebra,

G(\vec{a}_1; z)\, G(\vec{a}_2; z) = \sum_{\vec{a} \,\in\, \vec{a}_1 ⧢ \vec{a}_2} G(\vec{a}; z) , \qquad (2.4)

where \vec{a}_1 ⧢ \vec{a}_2 denotes the set of all shuffles of \vec{a}_1 and \vec{a}_2, i.e., the set of all permutations of their union that preserve the relative orderings inside \vec{a}_1 and \vec{a}_2. It is obvious that the shuffle product preserves the weight, and hence the product of two multiple polylogarithms of weight n_1 and n_2 is a linear combination of multiple polylogarithms of weight n_1 + n_2. We can formalize this statement by saying that the algebra of multiple polylogarithms is graded by the weight,

A = \bigoplus_{n \ge 0} A_n \quad \text{with} \quad A_{n_1} \cdot A_{n_2} \subset A_{n_1 + n_2} , \qquad (2.5)

where A_n is the \mathbb{Q}-vector space spanned by all multiple polylogarithms of weight n, and we define A_0 = \mathbb{Q}.

Multiple polylogarithms can be endowed with more algebraic structure. If we look at the quotient space H = A/(\pi A) (the algebra A modulo \pi), then H is a Hopf algebra [14,15]. In particular, H can be equipped with a coproduct \Delta : H \to H \otimes H, which is coassociative, (\Delta \otimes \mathrm{id})\,\Delta = (\mathrm{id} \otimes \Delta)\,\Delta. For the classical polylogarithms the coproduct reads

\Delta(\mathrm{Li}_n(z)) = 1 \otimes \mathrm{Li}_n(z) + \sum_{k=0}^{n-1} \mathrm{Li}_{n-k}(z) \otimes \frac{\log^k z}{k!} . \qquad (2.9)

For the definition of the coproduct of general multiple polylogarithms we refer to refs. [14,15]. The coassociativity of the coproduct implies that it can be iterated in a unique way. If (n_1, \ldots, n_k) is a partition of n, we define

\Delta_{n_1, \ldots, n_k} : H_n \to H_{n_1} \otimes \ldots \otimes H_{n_k} . \qquad (2.10)

Note that the maximal iteration of the coproduct, corresponding to the partition (1, \ldots, 1), is closely related to the symbol of the function, whose entries are constrained by an integrability condition built from d\log a_i \wedge d\log a_{i+1}, where \wedge denotes the usual wedge product on differential forms. While H is a Hopf algebra, we are practically interested in the full algebra A where we have kept all factors of \pi.
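Before turning to the role of π, a minimal weight-one illustration of the shuffle product (2.4), using only the definitions (2.2)-(2.3), may be useful; it is included here purely as an example.

```latex
% Weight-one building blocks, from the definition (2.2):
%   G(a;z) = \int_0^z \frac{dt}{t-a} = \log\Big(1-\frac{z}{a}\Big) \quad (a \neq 0),
%   G(0;z) = \log z \quad \text{by the convention (2.3).}
% Shuffling two weight-one factors produces the two possible weight-two orderings:
G(a;z)\,G(b;z) = G(a,b;z) + G(b,a;z) ,
% and shuffling (a) into (b,c) preserves the relative order of b and c:
G(a;z)\,G(b,c;z) = G(a,b,c;z) + G(b,a,c;z) + G(b,c,a;z) .
```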
Based on similar ideas in the context of motivic multiple zeta values [36], it was argued in ref. [37] that we can reintroduce π into the construction by considering the trivial comodule A = \mathbb{Q}[i\pi] \otimes H. The coproduct is then lifted to a comodule map \Delta : A \to A \otimes H, which acts on i\pi according to \Delta(i\pi) = i\pi \otimes 1. In the following we will, by abuse of language, refer to the comodule as the Hopf algebra A of multiple polylogarithms.

Let us conclude this review of multiple polylogarithms and their Hopf algebra structure by discussing how differentiation and taking discontinuities (see section 3 for the precise definition of discontinuity used in this work) interact with the coproduct. In ref. [37] it was argued that the following identities hold:

\Delta\Big(\frac{\partial F}{\partial z}\Big) = \Big(\mathrm{id} \otimes \frac{\partial}{\partial z}\Big)\, \Delta F , \qquad (2.13)

\Delta\big(\mathrm{Disc}\, F\big) = \big(\mathrm{Disc} \otimes \mathrm{id}\big)\, \Delta F . \qquad (2.14)

In other words, differentiation only acts in the last entry of the coproduct, while taking discontinuities only acts in the first entry.

Pure Feynman integrals

In the rest of this paper we will be concerned with connected Feynman integrals in dimensional regularization. Close to D = 4 − 2ε dimensions, an L-loop Feynman integral F^{(L)} then defines a Laurent series,

F^{(L)} = \sum_k F_k^{(L)}\, \epsilon^{-k} . \qquad (2.15)

In the following we will concentrate on situations where the coefficients of the Laurent series can be written exclusively in terms of multiple polylogarithms and rational functions, and a well-known conjecture states that the weight of the transcendental functions (and numbers) that enter the coefficient F_k^{(L)} of an L-loop integral is less than or equal to 2L − k. If all the polylogarithms in F_k^{(L)} have the same weight, the integral is said to have uniform (transcendental) weight. In addition, we say that an integral is pure if the coefficients F_k^{(L)} do not contain rational or algebraic functions of the external kinematic variables.

It is clear that pure integrals are the natural objects to study when trying to link Hopf-algebraic ideas for multiple polylogarithms to Feynman integrals. For this reason we will only be concerned with pure integrals in the rest of this paper. However, the question naturally arises of how restrictive this assumption is. In ref. [38] it was noted that if a Feynman integral has unit leading singularity [39], i.e., if all the residues of the integrand, obtained by integrating over compact complex contours around the poles of the integrand, are equal to one, then the corresponding integral is pure. Furthermore, it is well known that Feynman integrals satisfy integration-by-parts identities [40], which, loosely speaking, allow one to express a loop integral with a given propagator structure in terms of a minimal set of so-called master integrals. In ref. [41] it was conjectured that it is always possible to choose the master integrals to be pure integrals, and the conjecture was shown to hold in several nontrivial cases [42-44]. Hence, if this conjecture is true, it should always be possible to restrict the computation of the master integrals to pure integrals, which justifies the restriction to this particular class of integrals.

Another restriction on the class of Feynman integrals considered in this paper is that we take all propagators to be massless. In this case, it is known that the branch points of the integral, seen as a function of the invariants s_{ij} = 2 p_i \cdot p_j, where the p_i are the external momenta (which can be massive or massless), are the points where one of the invariants is zero or infinite [4]. It follows then from eq. (2.14) that the first entry of the coproduct of a Feynman integral can only have discontinuities in these precise locations.
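As a simple illustration of how eqs. (2.9), (2.13) and (2.14) act in practice, consider the dilogarithm. The worked example below is added for orientation only; the signs assume the prescription z → z + i0 for the argument, so they should be read as one consistent convention rather than as a statement taken from the text.

```latex
% Coproduct of the dilogarithm from eq. (2.9):
\Delta(\mathrm{Li}_2(z)) = 1 \otimes \mathrm{Li}_2(z) + \mathrm{Li}_2(z) \otimes 1
                         + \mathrm{Li}_1(z) \otimes \log z ,
\qquad \mathrm{Li}_1(z) = -\log(1-z) .
% Differentiation acts on the last entry, eq. (2.13); reading off the last entries gives
\partial_z \mathrm{Li}_2(z) = \mathrm{Li}_1(z)\,\partial_z \log z = -\frac{\log(1-z)}{z} .
% The discontinuity acts on the first entry, eq. (2.14); across the cut z>1, with z -> z+i0,
% \mathrm{Disc}\,\mathrm{Li}_1(z) = \mathrm{Disc}\,[-\log(1-z)] = 2\pi i, so
\mathrm{Disc}\,\mathrm{Li}_2(z) = 2\pi i\, \log z \;\theta(z-1) .
```

The same mechanism, applied to the first entries of a Feynman integral's coproduct, underlies the first entry condition discussed next.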
In particular, this implies the so-called first entry condition, i.e., the statement that the first entries of the symbol of a Feynman integral with massless propagators can only be (logarithms of) Mandelstam invariants [16]. This observation, combined with the fact that Feynman integrals can be given a dispersive representation, provides the motivation for the rest of this paper, namely the study of the discontinuities of a pure Feynman integral with massless propagators through the lens of the Hopf-algebraic language reviewed at the beginning of this section.

Three definitions of discontinuities

In this section we present our definitions and conventions for the discontinuities of Feynman integrals with respect to external momentum invariants, also called cut channels. There are three operations giving systematically related results: a discontinuity across a branch cut of the Feynman integral, which we denote by Disc and define in section 3.1 below; unitarity cuts computed via Cutkosky rules and the diagrammatic rules of refs. [2,3], which we extend here to multiple cuts and denote by Cut (section 3.2); and a corresponding δ operation on the coproduct of the integral (section 3.3). Discontinuities taken with respect to one invariant are familiar, but sequential discontinuities must be constructed with care in order to derive equivalent results from the three operations. In this section, we present these concepts in general terms. Concrete illustrations appear in the following sections.

Let F be a pure Feynman integral, and let s and s_i denote Mandelstam invariants (squared sums of external momenta), labeled by i in the case where we consider several of them. These invariants come with an iε prescription inherited from the Feynman rules for propagators. In the case of planar integrals, such as the examples we will consider in the following sections, the integral is originally calculated in the Euclidean region, where all Mandelstam invariants of consecutive legs are negative, so that branch cuts are avoided. It may then be analytically continued to any other kinematic region by the prescription s_i → s_i + iε. The most natural kinematic variables for a given integral might be more complicated functions of the momentum invariants. We denote these general kinematic variables by x or x_i. Indeed, it is known that the Laurent expansion coefficients in eq. (2.15) are periods (defined, loosely speaking, as integrals of rational functions), which implies that the arguments of the polylogarithmic functions are expected to be algebraic functions of the external scales [45].

Disc: Discontinuity across branch cuts

The operator Disc_x F gives the direct value of the discontinuity of F as the variable x crosses the real axis. If there is no branch cut in the kinematic region being considered, then the value is zero. Concretely,

\mathrm{Disc}_x F(x \pm i0) = \lim_{\varepsilon \to 0}\big[ F(x \pm i\varepsilon) - F(x \mp i\varepsilon) \big] , \qquad (3.1)

where the iε prescription must be inserted correctly in order to obtain the appropriate sign of the discontinuity. For example, Disc_x log(x + i0) = 2πi θ(−x). We will discuss the sign in more detail at the end of this section, when we relate Disc to the other definitions of discontinuities. The sequential discontinuity operator Disc_{x_1,...,x_k} is defined recursively:

\mathrm{Disc}_{x_1, \ldots, x_k} F = \mathrm{Disc}_{x_k} \big[ \mathrm{Disc}_{x_1, \ldots, x_{k-1}} F \big] . \qquad (3.2)

Note that Disc may be computed in any kinematic region after careful analytic continuation, but if it is to be related to the value of Cut, it should be computed in the region where only the cut invariants are positive and the rest are negative. In particular, sequential Disc can be computed in different regions at each step.
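The conventions in eqs. (3.1)-(3.2) are easy to probe numerically in a single variable. The following sketch, added purely for illustration, uses mpmath with a small imaginary offset playing the role of the iε prescription.

```python
import mpmath as mp

mp.mp.dps = 30            # working precision
eps = mp.mpf("1e-20")     # stands in for the infinitesimal i*epsilon

def disc(f, x):
    """Disc_x f = f(x + i*eps) - f(x - i*eps), cf. eq. (3.1)."""
    return f(x + 1j * eps) - f(x - 1j * eps)

# Disc_x log(x + i0) = 2*pi*i for x < 0, and 0 for x > 0:
print(disc(mp.log, mp.mpf(-3)))        # ~ 2*pi*i
print(disc(mp.log, mp.mpf(+3)))        # ~ 0

# Dilogarithm: Disc_x Li_2(x) = 2*pi*i*log(x) across the cut x > 1:
x = mp.mpf("2.5")
print(disc(lambda z: mp.polylog(2, z), x))
print(2j * mp.pi * mp.log(x))          # should agree to working precision
```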
Cut: Cut integral The operator Cut s gives the sum of cut Feynman integrals, in which some propagators in the integrand of F are replaced by Dirac delta functions. These propagators themselves are called cut propagators. The sum is taken over all combinations of cut propagators that separate the diagram into two parts, in which the momentum flowing through the cut propagators from one part to the other corresponds to the Mandelstam invariant s. Furthermore, each cut is associated with a consistent direction of energy flow between the two parts of the diagram, in each of the cut propagators. In this work, we follow the conventions for cutting rules established in ref. [2,3], and extend them for sequential cuts. First cut. Let us first review the cutting rules of ref. [2,3]. We start by enumerating all possible partitions of the vertices of a Feynman diagram into two sets, colored black (b) and white (w). Each such colored diagram is then evaluated according to the following rules: • Black vertices, and propagators joining two black vertices, are computed according to the usual Feynman rules. • White vertices, and propagators joining two white vertices, are complex-conjugated with respect to the usual Feynman rules. • Propagators joining a black and a white vertex are cut with an on-shell delta function, a factor of 2π to capture the complex residue correctly, and a theta function restricting energy to flow in the direction b → w. For a massless scalar theory, the rules for the first cut may be depicted as: The dashed line indicating a cut propagator is given for reference and does not add any further information. We write Cut s to denote the sum of all diagrams belonging to the same momentum channel, i.e., in each of these diagrams, if p is the sum of all momenta through cut propagators flowing in the direction from black to white, then p 2 = s. Note that cut diagrams in a given momentum channel will appear in pairs that are black/white color reversals -but of each pair, only one of the two can be consistent with the energies of the fixed external momenta, giving a potentially nonzero result. Sequential cuts. The diagrammatic cutting rules of ref. [2,3] reviewed so far allow us to consistently define cut integrals corresponding to a single unitarity cut. The aim of this paper is however the study of sequences of unitarity cuts. The cutting rules of ref. [2,3] are insufficient in that case, as they only allow us to partition a diagram in two parts, corresponding to connected areas of black and white vertices. We now present an extension of the cutting rules to sequences of unitarity cuts on different channels. At this stage, we only state the rules, whose consistency is then backed up by the results we find in the remainder of this paper. In a sequence of diagrammatic cuts, energy-flow conditions are overlaid, and complex conjugation of vertices and propagators is applied sequentially. We continue to use black and white vertex coloring to show complex conjugation. Colors are reversed as cuts are crossed. We illustrate an example in fig. 1, which will be discussed below. Consider a multiple-channel cut, Cut s 1 ,...,s k I. It is represented by the sum of all diagrams with a color-partition of vertices for each of the cut invariants s i = p 2 i . Assign a sequence of colors {c 1 (v), . . . , c k (v)} to each vertex v of the diagram, where each c i takes the value 0 or 1. 
For a given i, the colors c_i partition the vertices into two sets, such that the total momentum flowing from vertices labeled 0 to vertices labeled 1 is equal to p_i. A vertex v is finally colored according to

c(v) \equiv \sum_{i=1}^{k} c_i(v) \;\mathrm{mod}\; 2 ,

with black for c(v) = 0 and white for c(v) = 1. The rules for evaluating a diagram are as follows.

• A propagator joining vertices u and v is cut if c_i(u) ≠ c_i(v) for any i. There is a theta function restricting the direction of energy flow from 0 to 1 for each i for which c_i(u) ≠ c_i(v). If different cuts impose conflicting energy flows, then the product of the theta functions is zero and the diagram gives no contribution.

• We exclude crossed cuts, as they do not correspond to the types of discontinuities captured by Disc and δ. In other words, each new cut must be located within a region of identically-colored vertices with respect to the previous cuts. In terms of the color labels, we require that for any two values of i, j, exactly three of the four possible distinct color sequences {c_i(v), c_j(v)} are present in the diagram.

• Likewise, we exclude sequential cuts in which the channels are not all distinct. This restriction is made only because we have not found a general relation between such cuts and Disc or δ. In principle, there is no obstacle to computing these cut diagrams.

For massless scalar theory, the rules for sequential cut diagrams may then be depicted in the same fashion as the single-cut rules above. Let us make some comments about the diagrammatic cutting rules for multiple cuts we have just introduced. First, we note that these rules are consistent with the corresponding rules for single unitarity cuts presented at the beginning of this section. Second, using these rules, it is clear that sequential cuts are independent of the order of cuts. Indeed, none of our rules depends on the order in which the cuts are listed. Finally, the dashed line is an incomplete shorthand merely indicating the location of the delta functions, but not specifying the direction of energy flow, for which one needs to refer to the color indices. Our diagrams might also include multiple cut lines on individual propagators, as in the example of eq. (3.9).

We also introduce notation allowing us to consider individual diagrams contributing to a particular cut, possibly restricted to a particular kinematic region. When no region is specified, for the planar examples given in this paper, it is assumed that the cut invariants are taken to be positive while all other consecutive Mandelstam invariants are negative. We write

\mathrm{Cut}_{s, [e_1 \cdots e_w], R}\, D \qquad (3.10)

to denote a diagram D cut in the channel s, in which exactly the propagators e_1 · · · e_w are cut, and computed in the kinematic region R. The rules of complex conjugation and energy flow will be apparent in the context of such a diagram.

Examples of sequential cuts. We briefly illustrate the diagrammatics of sequential cuts. Consider taking two cuts of a triangle integral. At one-loop order, a cut in a given channel is associated with a unique pair of propagators. We list the four possible color partitions {c_1(v), . . . , c_k(v)} in fig. 1. The first graph is evaluated according to the rules above and gives a nonvanishing result. The second and third graphs evaluate to zero, since the color partitions give conflicting restrictions for the energy flow on the propagator labeled p. The fourth graph is similar to the first, but with energy flow located on the support of θ(−p_0)θ(−q_0)θ(−r_0).

Figure 2: An example of crossed cuts, which we do not allow.
Just as for a single unitarity cut, in which only one of the two colorings is compatible with a given assignment of external momenta, there can be at most one nonzero diagram for a given topology of sequential cuts. In the examples calculated in the following sections of this paper, we will thus omit writing the sequences of colors {c_1(v), . . . , c_k(v)}. We may also omit writing the theta functions for energy flow in the cut integrals. We include an example of crossed cuts, which we do not allow, in fig. 2. Notice that there are four distinct color sequences in that diagram, while we only allow three for any given pair of cuts.

δ: Entries of the coproduct

We denote by δ_{x_1,...,x_k} F the cofactor of the first entries log x_1 ⊗ · · · ⊗ log x_k in the coproduct ∆_{1,...,1,n−k} F, where we must be careful to account for relations between log x and log(−x), for example, or more generally log(f(x)) for any function f(x). Stated more precisely, if F is of transcendental weight n, and

\Delta_{\underbrace{1,\ldots,1}_{k},\, n-k} F = \sum_{\{a_1,\ldots,a_k\}} \log a_1 \otimes \cdots \otimes \log a_k \otimes g_{a_1,\ldots,a_k} , \qquad (3.11)

where the a_i are functions of some (combination of) variables x_i, then

\delta_{x_1,\ldots,x_k} F \cong g_{x_1,\ldots,x_k} , \qquad (3.12)

where the first entries log a_i are identified with log x_i modulo terms proportional to iπ, e.g.

\log(-x) = \log x \pm i\pi , \qquad (3.13)

and the congruence symbol indicates that δ_{x_1,...,x_k} F can be defined only modulo 2πi. If the integral contains overall numerical factors of π, they should be factored out before performing this operation. The definition of δ_{x_1,...,x_k} F is motivated by the relation eq. (2.14) between discontinuities and coproducts. In particular, if δ_x F ≅ g_x, then Disc_x F ≅ (Disc_x ⊗ id)(log x ⊗ g_x) = ±2πi g_x. The sign is determined by the iε prescription for x in F and will be discussed in more detail in the following subsection. The first entry condition [16] mentioned at the close of section 2 implies that this operation can be performed in a physical momentum channel for the first cut. But we will see in our main examples that the later arguments a_2, a_3, . . . of the coproduct are not necessarily momentum invariants, so we must formulate a clear prescription for matching δ_{x_1,...,x_k} F to physical discontinuities.

Relations among Disc, Cut, and δ

Cut diagrams and discontinuities. The rules for evaluating cut diagrams are designed to compute their discontinuities. The fact that such a relation exists at all follows from the largest time equation. For the first cut, the derivation may be found in refs. [2,3]. The original relation is

F + F^* = - \sum_s \mathrm{Cut}_s F , \qquad (3.14)

where the sum runs over all momentum channels. In terms of diagrams with colored vertices, the left-hand side is the all-black diagram plus the all-white diagram. The right-hand side is −1 times the sum of all diagrams with mixed colors. We can isolate a single momentum channel s by analytic continuation into a kinematic region where, among all the invariants, only s is on its branch cut. Specifically, for planar integrals such as the examples given in this paper, we take s > 0 while all other invariants of consecutive momenta are negative. There, the left-hand side of eq. (3.14) can be recast as Disc_s F, while the right-hand side collapses to a single term:

\mathrm{Disc}_s F = - \mathrm{Cut}_s F . \qquad (3.15)

For sequential cuts, we claim that Cut_{s_1,...,s_k} F captures discontinuities in variables x_1, . . . , x_k which are related to arguments of the multiple polylogarithms, in a relation of the form

\mathrm{Cut}_{s_1,\ldots,s_k} F = (-1)^k\, \mathrm{Disc}_{x_1,\ldots,x_k} F . \qquad (3.16)

We recall that no two of the invariants s_1, . . . , s_k should be identical, nor may any pair of them cross each other in the sense given in the cutting rules above.
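An elementary toy example, not one of the integrals studied in this paper, shows how δ is read off from the coproduct and how it matches Disc in sign:

```latex
% Toy weight-two function and its (1,1) coproduct component:
F = \log(-s)\,\log(-t) ,
\qquad
\Delta_{1,1} F = \log(-s) \otimes \log(-t) + \log(-t) \otimes \log(-s) .
% Reading off cofactors of the first entries as in eq. (3.11):
\delta_{s} F \cong \log(-t) , \qquad \delta_{t} F \cong \log(-s) ,
\qquad \delta_{s,t} F \cong \delta_{t,s} F \cong 1 .
% Consistency with Disc: in a region where s>0 and t<0, the Feynman prescription gives
% \mathrm{Disc}_s \log(-s-i\varepsilon) = -2\pi i, so
\mathrm{Disc}_s F = -2\pi i\, \log(-t) \cong -2\pi i\, \delta_s F .
```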
We now make this relation precise by explaining how to obtain the variables x 1 , . . . , x k from the Mandelstam invariants s 1 , . . . , s k . The procedure is the following. • We assume prior knowledge of the set of variables from which the x i are drawn. • Let R[s 1 , . . . , s j ] denote the kinematic region in which the invariants inside the brackets are positive while all other invariants are negative. The left-hand side of eq. (3.16) is evaluated in the region R[s 1 , . . . , s k ]. 3 On the right-hand side, we proceed step by step according to the definition eq. (3.2), and each Disc x i is evaluated in the region R[s 1 , . . . , s i ]. • By the traditional cutting rules cited above, we can take the first variable to be a Mandelstam invariant, x 1 = s 1 . For each subsequent i ∈ {2, . . . , k}, x i runs over all values for which log(x i ) has branch points in common with log(s i ), and for which the variable x i can approach the branch point independently of all the other x j within the region R[s 1 , . . . , s i ]. • The iε prescription for x i is inherited naturally from the iε prescription of s i in the region R[s 1 , . . . , s i ]. While sequential cuts are independent of the order in which the channels are listed, the correspondences to Disc are derived in sequence, so that the right-hand side of eq. (3.16) takes a different form when channels on the left-hand side are permuted. Thus, eq. (3.16) implies relations among the Disc x 1 ,...,x k F . The right-hand side of eq. (3.16) may sometimes coincide with Disc s 1 ,...,s k F . We will find an instructive counterexample with k = 3 in Section 6.4, where the correspondence breaks down because there are only two independent variables to take the positions x 1 , x 2 , and thus there is no possibility for any x 3 to approach a branch point independently. The relation (3.16) is therefore a statement that cutting rules contain information about the nature of the variables x i which are the natural arguments of the function F . Coproduct and discontinuities. As a consequence of eq. (2.14), the first discontinuity of F is captured by the operation δ. We claim that sequential discontinuities of F are captured by δ as well, in a relation of the form The congruence symbol indicates that the relation is valid modulo (2πi) k+1 , consistent with the definition of δ x 1 ,...,x k . Since the coproduct is the same in all kinematic regions, we have inserted the schematic factor Θ to express the restriction to the region where the left-hand side is to be compared with Cut. For k ≥ 2, the relation eq. (3.17) is not at all obvious, because later entries in the coproduct do not distinguish between log x i and log(−x i ), for example, and so we cannot tell whether the argument is on its branch cut, in general. Our claim is that the arguments are always on their branch cuts, so that the relation is valid, in the case of pure Feynman integrals, and where the left-hand side is related to cuts on invariants s 1 , . . . , s k through a relation of the type eq. (3.16), i.e. matching the branch points of their logarithms and allowing the x i to approach their branch points independently. Again, the left-hand side must be computed step by step in the corresponding kinematic regions, namely R[s 1 , . . . , s i ] for Disc x i . The operator δ x 1 ,...,x k can likewise be expressed sequentially as δ x 1 (δ x 2 (· · · (δ x k ))), and the factor Θ encodes a corresponding product of theta functions relating Disc x i to δ x i at each step. To make the relation eq. 
(3.17) completely precise, we must specify how to fix the sign of each term. The branch cut of log x i is taken conventionally, along the negative real axis. Between the functions log x i and log(−x i ), we select the one on the branch cut in the region R[s 1 , . . . , s i ], i.e. where the argument is negative, which can be written in either case as log(x i (1 − 2θ(x i ))). The kinematic restriction allows a clear iε prescription to be inherited by x i from s i , in the region R[s 1 , . . . , s i ]. Thus we follow the iε prescription to see whether x i (1 − 2θ(x i )) is above or below the branch cut, and attach a factor of +2πi if above and −2πi if below. For example, let us take a look at the first entries. The coproduct of F can be written so that each term has its first entry of the form log(−s 1 ), where s 1 is a Mandelstam invariant. As stated below eq. (3.16), we simply take x 1 = s 1 . Since it is a cut invariant, we work in the region where x 1 > 0. But our claim is that the coproduct sees the discontinuity coming from log(−x 1 ), rather than the function log(x 1 ). We must follow the iε to determine its sign. The original iε prescription for propagators leads to the prescription s i + iε for invariants. Thus we have −(x 1 + iε) = −x 1 − iε, and so we pick up a factor of −2πi from the first entry, giving In this paper, we give evidence for the validity of eq. (3.16) and eq. (3.17) by matching cut diagrams and coproduct entries directly, as well as by computing discontinuities in some cases. One-loop examples In this section, we present three simple examples of discontinuities of one-loop integrals to demonstrate the relations discussed in the previous section. We first consider the threemass triangle in some depth, which is an illuminating introduction to the two-loop ladder example in the following section, as their kinematic analyses have many common features. The second, brief, example is the four-mass box, whose functional form is closely related to the triangle although the cut diagrams are quite different. Finally, we discuss the infrareddivergent "two-mass-hard" box, which will be used as a building block for cuts of the two-loop ladder and also demonstrates the validity of consistent dimensional regularization. Three-mass triangle The triangle in D = 4 dimensions. We begin by analyzing the three-mass triangle integral with massless propagators. According to our conventions, which are summarized in appendix A, the three mass triangle integral in D = 4 − 2 dimensions is defined by Figure 3: The triangle integral, with loop momentum defined as in the text; and with cuts in the p 2 2 and p 2 3 channels. where γ E = −Γ (1) denotes the Euler-Mascheroni constant. As the focus of the paper will be the computation of cut diagrams, it is of utmost importance to keep track of all imaginary parts. We follow the conventions for massless scalar theory listed in the preceding section. In particular, until cuts are introduced, all vertices (denoted by a black dot, see fig. 3) are proportional to i, and all propagators have an explicit factor of i in the numerator and follow the usual Feynman +iε prescription. These factors lead to the explicit minus sign in eq. (4.1). Note that we do not include a factor of i −1 per loop into the definition of the integration measure. Many different expressions are known for the three-mass triangle integral, both in arbitrary dimensions [49,50] as well as an expansion around four space-time dimensions in dimensional regularization [51][52][53]. 
Note that the three-mass triangle integral is finite in four dimensions, and we therefore put = 0 and only analyze the structure of the integral in exactly four dimensions. We start by giving a short review of this function. It is clear that, up to an overall factor carrying the dimension of the integral, the three-mass triangle can only depend on the dimensionless ratios of momentum invariants, Furthermore, it is convenient to introduce variables z,z, satisfying the relations An explicit solution to the above relations is given by where we define with the Källén function λ(a, b, c) defined by We note that, for positive values of λ, we always have z >z. Since eq. (4.3) is symmetric in z andz, there is a second solution in whichz > z, which could be interpreted as taking the negative branch of the square root in eq. (4.4). In most of our calculations, we will indeed restrict ourselves to the region where z >z, for concreteness. In the regions where all p i have the same sign, there is a portion of kinematic phase space in which λ is negative, so that (z,z) take complex values. In terms of the variables (4.4), the triangle integral takes the form where Some comments are in order: we see that the three-mass triangle is of homogeneous transcendental weight two, i.e., it is only a function of dilogarithms and products of ordinary logarithms. It is, however, not a pure function in the sense of the definition in section 2, but it is multiplied by an algebraic function of the three external scales p 2 i (or equivalently, a rational function of z,z and p 2 1 ), which is the leading singularity. In the following we are only interested in the transcendental contribution, and we therefore define, for arbitrary values of the dimensional regulator , ) is a pure function at every order in the expansion. Let us now consider the discontinuities of the triangle integral. It is well known that the branch points of a Feynman integral with massless propagators are the points where the Mandelstam invariants approach 0 or ∞. It is easy to see that in the (z,z) plane these branch points correspond to z orz taking values among 0, 1, ∞. The correspondence is given explicitly in Table 1. The first-entry condition for Feynman integrals discussed in section 2 implies that the symbol of the three-mass triangle can only have u 2 = zz and u 3 = (1 − z)(1 −z) as its leftmost entry. The coproduct of the one-loop three mass triangle can be computed explicitly from eq. (4.9), with the result where in the second equality we made the first entry condition explicit. Our aim is to interpret the coproduct of the one-loop three-mass triangle in terms of cut diagrams, through the relations of Section 3. In the rest of this section we present, as a warm-up, the explicit computation of the unitarity cut of the one-loop three-mass triangle. Branch point Limit value Table 1: Branch points of the triangle, in terms of Mandelstam variables or the z,z of equation (4.3). The Mandelstam invariants can approach the branch point at ∞ from either positive or negative values. We will let z andz vary independently, and therefore we are sensitive only to the first set of branch points, where Mandelstam invariants approach 0. Table 2: Some kinematic regions of 3-point integrals, classified according to the signs of the Mandelstam invariants and the sign of λ, as defined in eq. (4.5). In the first six rows, λ > 0, so that z andz are real-valued, and we take z >z without loss of generality. Unitarity cuts of the one-loop three-mass triangle. 
It is well known [1,2,4] that the discontinuity in a physical channel is given by replacing propagators in the Feynman integral by delta functions, as depicted in fig. 3b. As already discussed, the branch points of the three-mass triangle are wherever one of the external masses approaches zero or infinity, or equivalently where z orz approaches one of the points {0, 1, ∞}. The restriction of kinematic region will make clear which of these various branch points are accessible. The correspondence between signs of Mandelstam invariants and values of z,z is given in Table 2. In the following we review the cut integral calculation. Although it is not necessary in this example, we now work in D = 4 − 2 dimensions, as a warmup to the two-loop integral where the D-dimensional formalism will be important at the level of cuts. We will work in the region which we denote by R * , where all the invariants are positive and λ < 0 (and thusz = z * ), because having z andz complex simplifies the calculation. The cut integral we want to compute reads . Without loss of generality we can select our frame and parametrize the loop momentum as follows: where θ ∈ [0, π] and |k| > 0, and 1 D−2 ranges over unit vectors in the dimensions transverse to p 2 and p 3 . Momentum conservation fixes the value of α in terms of the momentum invariants to be With this frame and parametrization, the cut integration measure becomes The D-dimensional cut triangle integral, with energy flow conditions suited for the p 2 channel, is . Performing the change of variables, and turning to the dimensionless variables (4.2) and (4.4), the cut integral becomes The results for the cuts on different channels can be obtained in a similar way and are collected in appendix B. Let us now consider a sequence of cuts on the p 2 2 and p 2 3 channels, consistent with energy flow from leg three to leg two (see fig. 3c). We must work in a region where p 2 2 , p 2 3 > 0; we choose R 2,3 . The cut integral is (4.17) Using the parametrization (4.12), we find (4.18) Summary and discussion. We now interpret the results for the cuts of the triangle integral we just computed in terms of the coproduct. It is trivial to analytically continue to the region R 2 in which p 2 2 > 0 and p 2 1 , p 2 3 < 0. In keeping with the familiar cutting rules in a single momentum channel, we recover the discontinuity of the function with a minus sign, (4.19) and similarly for the cuts on the other channels, in agreement with eq. (3.16). This result is in agreement with computing the discontinuity from the coproduct of the triangle integral, eq. (4.10), according to the relation eq. (3.17), Proceeding to a sequence of two discontinuities, 4 let us relate Cut p 2 2 ,p 2 3 to Disc and then to δ. The first step is to identify the variable in which the discontinuity is taken. For the triangle, we see that the natural variables appearing in the multiple polylogarithms are taken from four possible values, We must work in the region R 2,3 . In terms of z,z, we see from Table 2 thatz < 0, z > 1. Table 1 shows that the only branch point for p 2 3 within this region is z → 1. Therefore, the discontinuity in p 2 3 can be understood entirely as the discontinuity in the only variable of eq. (4.21) whose logarithm shares this branch point, namely (1 − z). Finally, to get the correct sign of the discontinuity, we observe in fig. 3b (after the p 2 cut and before the p 3 cut) that the p 3 vertex is in the white complex-conjugated region of the diagram. 
Therefore, we take the discontinuity from the conjugated iε prescription, namely p 2 3 − iε, which implies (1 − z) + iε inside this kinematic region. Thus we compute as a consequence of eq. (4. 19), in full agreement with eq. (4.18) and eq. (3.16): To compare this same discontinuity to the coproduct, we take the same variables as in Disc and read from eq. (4.10) that δ p 2 2 ,1−z T = −1/2. To attach the factors of 2πi with the correct signs into the relation eq. (3.17), we follow the iε at each step. As explained below eq. (3.17), the first entry always gives a factor of (−2πi). For the second factor, since 1 − z is negative in our kinematic region, we deduce that we are picking up the discontinuity of log(1 − z) rather than log(−(1 − z)). As above, the prescription is (1 − z) + iε, giving a factor of (2πi). In total, the relation eq. (3.17) between Disc and δ is which agrees with eq. (4.22) after accounting for the factor relating T to T . Four-mass box The four-mass box is also finite in four dimensions, and may in fact be expressed by the same function as the three-mass triangle [51]. If we label the momenta at the four corners by p 1 , p 2 , p 3 , p 4 , as in fig. 4a, and define s = (p 1 + p 2 ) 2 , t = (p 2 + p 3 ) 2 , then the box in the Euclidean region is given by where we have introduced variables Z,Z defined as follows: Since the functional form is the same as for the three-mass triangle, most of the multiple cuts can be analyzed exactly the same way. Because the transcendental weight is two, we are limited to a sequence of two cuts in computing δ. This limitation is consistent with Cut, as any real-valued cut of all four propagators of the diagram vanishes; and with Disc as related to the other discontinuities by the rules of section 3, as there are only the two variables Z,Z in which to take discontinuities. Ordinary single-channel cuts are consistent when calculated by each of the three methods listed in the previous subsection. In view of the permutation symmetry, we can say without loss of generality that the first cut is in the channel p 2 2 . For a second cut channel, we only need to distinguish two types: p 2 4 , or any of the others. Suppose we choose p 2 3 . Then, the analysis of discontinuities from direct analytic continuation and from the coproduct is exactly the same as in the triangle example. The corresponding cut integral, with three delta functions and one of the original propagators, is shown in fig. 4b and produces the leading singularity. The truly new kind of multiple cut to consider is the discontinuity of Disc p 2 2 B 4m in the p 2 4 channel, shown in fig. 4c. In a region where p 2 2 , p 2 4 > 0, all other invariants are negative, and λ is real-valued, we must have (1 − Z)/(1 −Z) > 0. So, either by considering the discontinuity directly, or from the coproduct, we find 5 Recalling the similarity of the functional form of this box to the triangle example, this calculation is analogous to trying to cut the triangle twice in the same channel. For the box, however, we can actually set up a cut integral to capture this sequential discontinuity. It would have all four of its propagators replaced by delta functions. This is the familiar "quadruple cut" [8], which is evaluated at its complex-valued solutions. Here, in our correspondence between cut integrals and discontinuities, we insist on real parametrization of the loop momentum. Thus there is no solution to the four delta functions, and we conclude that the cut integral vanishes, in agreement with eq. (4.26). 
Two-mass-hard box We close this section with the example of the two-mass-hard box, since some of its discontinuities are needed for our two-loop calculations. This example illustrates several features different from the previous examples, even apart from the presence of external massless legs: we must work in dimensional regularization consistently, and we can use Mandelstam invariants directly rather than new variables. Because of the infrared divergences of this integral, we employ dimensional regularization. The coproduct structure requires that we work order by order in the regularization parameter. We take the result from ref. [49], with an additional factor of ie γ E inserted to match our conventions. In the Euclidean region, the box is given by In the following equations, we drop the O( ) terms. The coproduct is evaluated order by order in the Laurent expansion in . At order 1/ 2 , it is trivial and there is clearly no discontinuity. At order 1/ , the coproduct is simply the function itself, At order 0 , we are interested in the ∆ 1,1 term of the coproduct, which is given by Discontinuity in the t-channel. Using the analytic continuation of the dilogarithm for x > 0, we find that the discontinuity of B 2mh in the t-channel, with all other invariants negative, is given by From the point of view of the terms of the coproduct in eq. (4.29), we find and thus Disc t B 2mh ∼ = −2πi Θ δ t B 2mh , as expected. Sequential discontinuities. Since the two-mass-hard box has four momentum channels, there are six pairs to consider as generalized cut integrals, or sequential discontinuities. Cutting any of the channel pairs (s, p 2 3 ), (s, p 2 4 ), or (p 2 3 , p 2 4 ) cuts the same set of three propagators, as shown in fig. 5a, and gives the leading singularity. The result of the integral (in the respective kinematic regions) is −4π 2 i/(st), which matches the value computed from the coproduct, eq. (4.32), or the direct evaluation of discontinuities. Cutting the channel pair (t, p 2 3 ) or (t, p 2 4 ) corresponds to a cut integral in which a massless three-point vertex has been isolated, as shown in fig. 5, diagrams (b) and (c). It is well known that a three-point on-shell vertex in real Minkowski space requires collinear momenta. Let us see how this property figures in the cut integral. Parametrize the loop The delta functions set x = w = 0, so that = yp 2 , which is the familiar collinearity condition. If D > 4, then the integral over w vanishes. For D = 4 exactly, one can find a finite result for the integral. (It would again give the leading singularity, −4π 2 i/(st).) Looking at the coproduct, eq. (4.32), or the p 2 i -channel discontinuity of eq. (4.31), noting the appearance of p 2 i in the denominator of (1 − t/p 2 i ), we see that the sequential discontinuity for either of the channel pairs (t, p 2 3 ) and (t, p 2 4 ) is zero. Thus we see that it is correct to insist on D > 4, keeping the dimensional regularization parameter nonzero, even though the cut itself is finite in four dimensions. Finally, the channel pair (s, t) is excluded because the cuts cross, in the sense given in the cutting rules of the previous section. Note that in the coproduct, eq. (4.29), there are terms proportional to log(−s) ⊗ log(−t) and log(−t) ⊗ log(−s). If we were to compute the cut integral, it would be zero, not only because of the on-shell three-point vertices, but also because there is no real-valued momentum solution for any box with all four propagators on shell, even in D = 4. 
The relations between Cut s,t and δ s,t break down at the level of Disc s,t : because the cuts cross, there is no clear iε prescription for the second cut invariant. This is the reason we exclude the possibility of crossed cuts. We have seen again that sequential discontinuities, cut integrals, and entries of coproducts agree-provided that we take < 0 for infrared-divergent integrals, with the consequence that on-shell three-point vertices force cut integrals to vanish. Unitarity cuts at two loops: the three-point ladder diagram The two-loop, three-point, three-mass ladder diagram with massless internal lines, fig. 6, is finite in four dimensions [51]. In terms of the variables z,z defined in eq. (4.4), it is given by a remarkably simple expression: where we have defined the pure function Because the two-loop three-point ladder in four dimensions is given by weight four functions, its coproduct structure is much richer than the one-loop cases of the preceding section. Since one of our goals is to match the entries in the coproduct to the cuts of the integral, we list below for later reference all the relevant components of the coproduct, of the form ∆ 1, . . . , 1 k times ,n−k . We have Notice that the first entry of ∆ 1,1,1,1 is (the logarithm of) a Mandelstam invariant, in agreement with the first entry condition. In the rest of this section we evaluate the standard unitarity cuts of the ladder graph of fig. 6, which give the discontinuities across branch cuts of Mandelstam invariants in the time-like region. Our goal is, first, to relate these cuts to specific terms of ∆ 1,3 of T L (p 2 1 , p 2 2 , p 2 3 ), and, in the following section, to take cuts of these cuts and relate them to ∆ 1,1,2 . In contrast to the one-loop case, individual cut diagrams are infrared divergent. Again, we choose to use dimensional regularization. Even though T L (p 2 1 , p 2 2 , p 2 3 ) is finite in D = 4 dimensions, its unitarity cuts need to be computed in D = 4−2 dimensions. The finiteness of T L (p 2 1 , p 2 2 , p 2 3 ) for = 0 imposes cancellations between -poles of individual cut diagrams. These cancellations can be understood in the same way as the cancellation of infrared singularities between real and virtual corrections in scattering cross sections. The cut diagrams will be computed in the region R * , wherez = z * and all the Mandelstam invariants are timelike. This restriction is consistent with the physical picture of amplitudes having branch cuts in the timelike region of their invariants. When comparing the results of cuts with δ, but particularly with Disc, we will be careful to analytically continue our result to the region where only the cut invariant is positive, as this is where Disc is evaluated. Before we start computing the cut integrals, we briefly outline our approach to these calculations. We will compute the cuts of this two-loop diagram by integrating first over a carefully chosen one-loop subdiagram, with a carefully chosen parametrization of the internal propagators. We make our choices according to the following rules, which were designed to simplify the calculations as much as possible: • Always work in the center of mass frame of the cut channel p 2 i . The momentum p i is taken to have positive energy. • The routing of the loop momentum k 1 is such that k 1 is the momentum of a propagator, and there is either a propagator with momentum (p i − k 1 ) or a subdiagram with (p i − k 1 ) 2 as one of its Mandelstam invariants. • The propagator with momentum k 1 is always cut. 
• Whenever possible, the propagator with momentum (p i − k 1 ) is cut. • Subdiagrams are chosen so to avoid the square root of the Källén function as their leading singularity. This is always possible for this ladder diagram. These rules, together with the parametrization of the momenta where θ ∈ [0, π], |k 1 | > 0, and 1 D−2 ranges over unit vectors in the dimensions transverse to p i and p j , make the calculation of these cuts particularly simple. It is easy to show that [12] Figure 7: Two-particle cuts in the p 2 3 -channel. The changes of variables are also useful (the y variable is useful mainly when (p i − k 1 ) is not cut). Unitarity cut in the p 2 3 channel We present the computation of the cuts in the p 2 3 channel in some detail, in order to illustrate our techniques for the evaluation of cut diagrams outlined above. We follow the conventions of appendix A. We then collect the different contributions and check the cancellation of divergent pieces and the agreement with the term δ u 3 F (z,z) in eq. (5.3). There are four cuts contributing to this channel, and our aim is to show that δ u 3 F (z,z). (5.10) Two-particle cuts. There are two two-particle cut diagrams contributing to the p 2 3channel unitarity cut, Cut p 2 3 , [45] T L (p 2 1 , p 2 2 , p 2 3 ) and Cut p 2 3 , [12] T L (p 2 1 , p 2 2 , p 2 3 ), shown in fig. 7. We start by considering the diagram in fig. 7a, which is very simple to compute because the cut completely factorizes the two loop momentum integrations into a one-mass triangle and the cut of a three-mass triangle: We substitute the following expressions for the one-loop integrals, which we have compiled in appendix B, where we have used p 2 1 = p 2 1 + iε to correctly identify the minus sign associated with p 2 1 in this region where p 2 1 > 0. As expected, the result is divergent for → 0: the origin of the divergent terms is the one-loop one-mass triangle subdiagram. Expanding up to O( ), we get Expressions for the coefficients f (i) [45] (z,z) are given in appendix C. We now go on to fig. 7b. We can see diagrammatically that the integration over k 2 is the (complex-conjugated) two-mass-hard box we have already studied in Section 4.3, with masses p 2 1 and p 2 2 . More precisely, we have To proceed, we parametrize the momenta as in eq. (5.6), with (i, j) = (3, 1). Then, we rewrite the momentum integration as The two delta functions allow us to trivially perform the k 0 and |k| integrations. For the remaining integral, it is useful to change variables to cos θ = 2x − 1, as in eq. (5.8), and we get, (5.14) The factor e −iπ was determined according to the iε prescription of the invariants. After expansion in , all the integrals above are simple to evaluate in terms of multiple polylogarithms. We write this expression as: and give the expressions for the coefficients f (i) [12] (z,z) in appendix C. Three-particle cuts. There are two three-particle cut diagrams that contribute to the p 2 3 -channel unitarity cut, Cut p 2 3 ,[234] T L (p 2 1 , p 2 2 , p 2 3 ) and Cut p 2 3 ,[135] T L (p 2 1 , p 2 2 , p 2 3 ), shown in fig. 8. As these two cuts are very similar, we only present the details for the computation of the cut in fig. 8a, and simply quote the result for fig. 8b. In both cases, we note that the integration over k 2 is the cut in the (p 3 − k 1 ) 2 -channel of a two-mass one-loop triangle, with masses p 2 3 and (p 3 − k 1 ) 2 . More precisely, for the cut in fig. 
8a we have We take the result for the cut of the two mass triangle given in appendix B and insert it into eq. (5.16), where we have used the δ-function to set k 2 1 = 0, and we have dropped the ±iε. We have included the θ-functions because the cut of the two-mass triangle is only nonzero when the (p 3 − k 1 ) 2 -channel is positive. It is also important to recall that the positive energy flow across the cut requires k 1,0 > 0, so we have included this θ-function explicitly. We use the parametrization of eq. (5.6), with (i, j) = (3, 2) and both changes of variables in eq. (5.8), since the propagator with momentum (p 3 − k 1 ) is not cut. The two conditions imposed by the θ-functions imply that 0 ≤ y ≤ 1 . (5.18) We then get We can now expand the hypergeometric function into a Laurent series in using standard techniques [59], and we then perform the remaining integration order by order. As usual, we write the result in the form [234] (z,z) + f The diagram of fig. 8b can be calculated following exactly the same steps, the only difference being that when using the parametrization of eq. (5.6) we have (i, j) = (3, 1). The result is Explicit expressions for the f (i) [234] (z,z) and f (i) [135] (z,z) are given in appendix C. Summary and discussion. Let us now combine the results for each p 2 3 -channel cut diagram and compare the total with Disc and the relevant terms in the coproduct. We observe the sum is very simple, compared to the expressions for each of the cuts. Note that, as imposed by the fact that the two-loop ladder is finite in four dimensions, the sum of the divergent terms of each diagram vanishes. In fact, this cancellation happens in a very specific way: the sum of the two-particle cuts cancels with the sum of the threeparticle cuts. If we write We call the divergent contribution of two particle cuts a virtual contribution because it is associated with divergences of loop diagrams, whereas the divergent contribution of three particle cuts, the real contribution, comes from integrating over a three-particle phase space. This cancellation is similar to the cancellation of infrared divergences for inclusive cross sections, although in this case we are not directly dealing with a cross section, but merely with the unitarity cuts of a single finite Feynman integral. A better understanding of these cancellations might prove useful for the general study of the infrared properties of amplitudes, and it would thus be interesting to understand how it generalizes to other cases. As expected, the sum of the finite terms does not cancel. We get Since all divergences have cancelled, we can set = 0 and write the cut-derived discontinuity of the integral as For comparison with Disc, we now analytically continue this result to the region R 3 where only the cut invariant is positive: p 2 3 > 0 and p 2 1 , p 2 2 < 0. In terms of the z andz variables, the region is: z > 1 >z > 0. None of the functions in eq. (5.26) has a branch cut in this region, and thus there is nothing to do for the analytic continuation and the result is valid in this region as it is given above, This is consistent with the expectation that the discontinuity function would be real in the region where only the cut invariant is positive [2,3]. The relations with Disc and δ are now easy to find. As expected, we find, We recall that this is not an unexpected result: it is just the relation between discontinuities and cuts of Feynman diagrams, see, e.g., ref. [1][2][3][4][5]60], related in turn to the language of the coproduct. 
The computation of the two cuts diagrams follows the same strategy as before, i.e., we compute the cut of the two-loop diagram by integrating over a carefully chosen one-loop subdiagram. Unitarity cut in the Computation of the cut diagrams. We start by computing Cut p 2 2 , [46] T L (p 2 1 , p 2 2 , p 2 3 ). As suggested by the momentum routing in fig. 9a, we identify the result of the k 2 integration with the complex conjugate of an uncut two-mass triangle, with masses (p 3 + k 1 ) 2 and p 2 3 : (5.29) Using the result for the triangle given in appendix B and proceeding in the same way as with the p 2 3 -channel cuts, we get (setting (i, j) = (2, 3) in eq. (5.6)) Cut p 2 2 , [46],R * T L (p 2 1 , p 2 2 , p 2 3 ) = 2π The cut integral Cut p 2 2 ,[136] T L (p 2 1 , p 2 2 , p 2 3 ) is slightly more complicated. Using the routing of loop momenta of fig. 9b, we look at it as the k 1 -integration over the cut of a three-mass box, where Cut t B 3m (l 2 2 , l 2 3 , l 2 4 ; s, t) is the t-channel cut of the three-mass box with masses l 2 i , for i ∈ {2, 3, 4}, l 2 1 = 0, s = (l 1 + l 2 ) 2 and t = (l 2 + l 3 ) 2 . In our case: The result for the t-channel cut of the three-mass box is given in appendix B in the region where the uncut invariants are negative, and t is positive. Since we work in the region where all the p 2 i are positive, some terms in the expression (B.11) need to be analytically continued using the ±iε prescriptions given above. Using eq. (5.6) with (i, j) = (2, 3) and introducing the variables x and y according to eq. (5.8), we have: 6 log(−s) = log p 2 1 + log u 3 + iπ , (5.32) 6 Strictly speaking, this analytic continuation is valid forz = z * , with Re(z) < 1. For the case of Re(z) > 1, the factors of iπ are distributed in other ways among the different terms, but the combination of all terms is still the same. Summary and discussion. Similarly to the p 2 3 -channel cuts, we first analyze the cancellation of the singularities in the sum of the two cuts contributing to the p 2 2 -channel, and check the agreement with δ u 2 T L (p 2 1 , p 2 2 , p 2 3 ) given in eq. (5.3). In this case we only have single poles, and we see that the poles cancel, as expected: This cancellation can again be understood as the cancellation between virtual (from cut [46]) and real contributions (from cut [136]). Adding the finite contributions, we find f (0) [46] (z,z) + f Hence, the cut of the two-loop ladder in the p 2 2 channel is (5.36) Since this result was computed in the region where all invariants are positive, we now analytically continue to the region R 2 where p 2 2 > 0 and p 2 1 , p 2 3 < 0. For the z andz variables, this corresponds to 1 > z > 0 >z. The analytic continuation of the Li 2 and Li 3 functions is trivial, because their branch cuts lie in the [1, ∞) region of their arguments. However, the continuation of log u 2 needs to be done with some care, since u 2 becomes negative. We can determine the sign of the iε associated with u 2 by noticing that where we associate a −iε to p 2 1 because it is in the complex-conjugated region of the cut diagrams. We thus see that the −iπ term in eq. (5.36) is what we get from the analytic continuation of log (−u 2 − iε) to positive u 2 . In region R 2 , we thus have This agrees with the expectation that the discontinuity function should be real in the region where only the cut invariant is positive [2,3]. 
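The branch choices used in this continuation are easy to check numerically. The sketch below is only an illustrative check (the sample value u2 = 0.37 and the small imaginary parts are arbitrary, not kinematic data from the paper): it confirms that log(−u2 − iε) approaches log(u2) − iπ for positive u2, and that Li2 and Li3 are real for real arguments below 1, acquiring above 1 an imaginary part whose sign tracks the sign of the iε.

```python
# Numerical illustration of the branch choices used in the analytic continuation.
# The sample value u2 = 0.37 and the small imaginary parts are arbitrary.
import cmath
import math
from mpmath import mp, mpc, polylog

mp.dps = 30
u2 = 0.37

# log(-u2 - i*eps) -> log(u2) - i*pi as eps -> 0+: the -i*eps prescription
# selects the branch with a -i*pi imaginary part for positive u2.
target = math.log(u2) - 1j * math.pi
for eps in (1e-3, 1e-6, 1e-9):
    print(eps, cmath.log(complex(-u2, -eps)), target)

# Li2 and Li3 have their branch cut on [1, infinity): for real arguments below 1
# they are real, so no continuation is needed for them in this region.
for x in (0.5, 0.99):
    print(x, polylog(2, x), polylog(3, x))

# Above 1 they develop an imaginary part whose sign follows the sign of the i*eps.
for sign in (+1, -1):
    print(sign, polylog(2, mpc(1.5, sign * 1e-15)))
```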
Furthermore, we again observe the expected relations with Disc and δ, Diagrammatically, the relation can be written as follows: 5.3 Unitarity cut in the p 2 1 channel Given the symmetry of the three-point ladder, the cut in the p 2 1 channel shown in fig. 10 can be done in exactly the same way as the p 2 2 channel, so we will be brief in listing the results for completeness. For the sum of the two cut integrals, the reflection symmetry can be implemented by exchanging p 1 and p 2 in eq. (5.36), along with transforming z → 1/z andz → 1/z. The total cut integral is then We now analytically continue p 2 2 and p 2 3 to the region R 1 where we should match Disc. In this region, we havez < 0 and z > 1. Similarly to the previous case, we take p 2 2 − iε to find that log(u 2 − iε) → log(−u 2 ) − iπ, and thus In the last line, we have confirmed that the cut result agrees with a direct evaluation of the discontinuity of T L (p 2 1 , p 2 2 , p 2 3 ) in the region R 1 . The δ discontinuity evaluated from the coproduct is simply related to the discontinuities in the p 2 2 and p 2 3 channels. Indeed, we can rewrite eq. (5.3) as which agrees with Disc p 2 1 T L from eq. (5.41) modulo π 2 . Sequence of unitarity cuts In the previous section we gave a diagrammatic interpretation of the δ u 2 F (z,z) and δ u 3 F (z,z) terms of eq. (5.3) as unitarity cuts in p 2 2 and p 2 3 respectively. In this section we will take sequences of two unitarity cuts as defined in section 3.2 and match the result to entries of the coproduct. Unlike the single unitarity cuts, which could be computed in the kinematic region R * where √ λ is imaginary and thusz = z * , and then analytically continued back to the region in which Disc is evaluated, the calculation of double unitarity cuts (in real kinematics) has to be done in the region where z,z and √ λ are real in order to get a nonzero result. Moreover, we must work in the specific region in terms of z andz corresponding to positive cut invariants and negative uncut invariant. We start by reviewing and applying the general procedure to relate the sequential application of the Disc operator to cut integrals and to specific terms in the coproduct, as in eq. (3.16) and (3.17). It is hoped that in the context of a specific example, the procedure will become clearer and more intuitive. Next, as an example, we focus on the cases of Cut p 2 3 ,p 2 1 and Cut p 2 2 ,p 2 1 , comparing the results to the terms δ u 3 ,z F (z,z) and (δ u 2 ,z + δ u 2 ,1−z )F (z,z) of the ∆ 1,1,2 F (z,z) component of the coproduct in eq. (5.4). Then, we present our method to evaluate the necessary cuts. We check that we indeed reproduce the expected terms of the coproduct and satisfy the relations we expect, and that the relations (3.16) and (3.17) among Disc, Cut, and the coproduct components hold. We stress that the fact that we reproduce the expected relations between Disc, Cut and the coproduct components is a highly nontrivial check on the consistency of the extended cutting rules of section 3.2. In particular, we see that our assumption that we can restrict ourselves to real kinematics is justified. Finally, we observe that, unlike the case of single unitarity cuts, it is insufficient to define cut diagrams only through the set of propagators that go on shell, but the results for the integrals strongly depend on the phase space boundaries, which are specified by the correct choice of kinematic region. 
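Before deriving these relations, it may help to recall how truncated coproduct entries of the type δ_{x,y} are read off in a simple case. For the weight-two dilogarithm, in the conventions most commonly used in the literature (the signs and normalization here are an assumption and may differ from the ones adopted elsewhere in the text),

\Delta\big(\mathrm{Li}_2(x)\big) = 1\otimes \mathrm{Li}_2(x) + \mathrm{Li}_2(x)\otimes 1 - \log(1-x)\otimes\log(x),

so the only nontrivial (1,1) component is \Delta_{1,1}\,\mathrm{Li}_2(x) = -\log(1-x)\otimes\log(x). Truncating the first entry to 1-x gives \delta_{1-x}\,\mathrm{Li}_2(x) = -\log(x), and truncating once more gives \delta_{1-x,\,x}\,\mathrm{Li}_2(x) = -1. The relations derived below for the ladder are the analogous statements for its \Delta_{1,1,2} cofactors, dressed with the appropriate factors of 2\pi i and the i\varepsilon-dependent signs.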
Relation between Cut and the coproduct, for sequential cuts of the ladder We start by deriving the exact form of the expected relations between Cut p 2 i ,p 2 j F and truncated entries of the coproduct, δ x,y F , via Disc x,y F , according to (3.16) and (3.17). This is a generalization of what was done for the one-loop triangle in Section 4.1 to the case of the ladder diagram. It is possible to write the coproduct such that x = p 2 i , an exact Mandelstam invariant in accordance with the first-entry condition, but here we take x ∈ {u 2 , u 3 } and y ∈ {z,z, 1 − z, 1 −z}, for a direct correspondence with ∆ 1,1,2 F as written in eq. (5.4). We present one example in detail and then list the results for all sequences of two cuts below, with some details of the derivation listed in Table 3. Let us look at the case i = 1, j = 2. The first discontinuity is taken in the p 2 1 channel, which is captured by −[− Disc u 2 − Disc u 3 ]; one minus sign appears because p 2 1 is in the denominator of u 2 and u 3 , and the other is inherent in the relation eq. (3.16). For the second discontinuity, we must work in the kinematic region R 1,2 where p 2 1 , p 2 2 > 0 and p 2 3 < 0, or equivalently 0 <z < 1 < z. 7 Approaching the branch point p 2 2 → 0 can be done either by z → 0 or z → 0, according to Table 1. The former limit is not contained in the region R 1,2 , so we have only y =z and not y = z. Thus we have already arrived at the relation where the iε prescription on the right-hand side follows from the rules of the cut diagram, which for us is p 2 1 + iε, p 2 2 − iε. There were two minus signs from a double discontinuity in eq. (3.16), and one from exchanging p 2 1 for u 2 and u 3 . To derive the correct sign in the relation between Disc and δ, we must probe the branch cuts of log(u 2 ), log(u 3 ), and log(z) on the negative real axis. We have shown once and for all in eq. (3.18) that the first entry introduces a minus sign in the relation eq. (3.17). For the second entry, we are again in the region R 1,2 , wherez > 0, so we must take log(−z) i j region x y from p 2 j − iε log approaching branch cut F and δ x,y F , via Disc x,y F , as described in section 6.1. It is necessary that Disc be given the same iε prescription as the cut diagram. Here it is always p 2 i + iε and p 2 j − iε. rather than log(z). This argument of the logarithm inherits a positive imaginary part, −z + iε, from the imaginary part of p 2 2 − iε, so it is above its branch cut. Therefore the second discontinuity does not introduce a minus sign. We have only the single minus sign from the first entry, and a factor of 2πi for each of the two cuts, giving the final relation The other five cases are analyzed similarly. Some information for the steps in the derivation is listed in Table 3. The resulting relations are summarized as follows: Double unitarity cuts In this section we describe the computation of the sequences of two unitarity cuts corresponding to Cut p 2 1 • Cut p 2 3 and Cut p 2 1 • Cut p 2 2 ; see fig. 11 and fig. 12. All the cut integrals can be computed following similar techniques as the ones outlined in Section 5, so we will be brief and only comment on some special features of the computation. Details on how to compute the integrals can be found in appendix D.1, and the explicit results for all the cuts in fig. 11 and fig. 12 are given in appendices D.2 and D.3 respectively. 
First, we note that, since we are dealing with sequences of unitarity cuts, the cut diagrams correspond to the extended cutting rules introduced in section 3.2. In particular, in section 3.2 we argued that cut diagrams with crossed cuts should be discarded, and such diagrams are therefore not taken into account in our computation. (In this example, all possible crossed cut diagrams would vanish anyway, for the reason given next.) Figure 11: Cut diagrams contributing to the Cut p 2 1 • Cut p 2 3 sequence of unitarity cuts. Second, some of the cut integrals vanish because of energy-momentum constraints. Indeed the cut in fig. 11e vanishes in real kinematics because it contains a three-point vertex where all the connected legs are massless and on shell. Hence, the cut diagram cannot satisfy energy momentum conservation in real kinematics with D > 4 (recall the example of the two-mass-hard box). We will set this diagram to zero, and we observe a posteriori that this is consistent with the other results, thus supporting our approach of working in real kinematics. Let us now focus on the cuts that do not vanish. As we mentioned previously, the cuts are computed by integrating over carefully chosen one-loop subdiagrams. In particular, for simplicity we avoid integrating over three-mass triangles, cut or uncut, because the leading singularity of this diagram is the square root of the Källén function, which leads to integrands that are not directly integrable using the tools developed for multiple polylogarithms. In Tables 4 and 5 we summarize the preferred choices of subdiagrams for the first loop integration. We observe that it is insufficient to define a cut integral by the subset of propagators that are cut. Indeed, some cut integrals in the two tables have the same cut propagators, but are computed in different kinematic regions due to the rules of Section 3, leading to very different results 8 . Finally, depending on the cut integral and the kinematic region where the cut is computed, the integrands might become divergent at specific points, and we need to make sense of these divergences to perform the integrals. In the case where the integral develops an end-point singularity, we explicitly subtract the divergence before expanding in , using the technique known as the plus prescription. For example, if g(y, ) is regular for all y ∈ [0, 1], Cut two-mass triangle, masses p 2 3 and (p 1 + k 1 ) 2 , in (p 1 + k 1 ) 2 channel, fig. 12b Cut two-mass triangle, masses p 2 3 and (p 2 + k 1 ) 2 , in (p 2 + k 1 ) 2 channel, fig. 12c and p 2 2 , in t = (p 1 − k 1 ) 2 channel, fig. 12d. Table 5: Cuts contributing to the Cut p 2 1 • Cut p 2 2 sequence of unitarity cuts. then, for < 0, we have: The remaining integral is manifestly finite, and we can thus expand in under the integration sign. However, we also encounter integrands which, at first glance, develop simple poles inside the integration region. A careful analysis however reveals that the singularities are shifted into the complex plane due to the Feynman iε prescription for the propagators. As a consequence, the integral develops an imaginary part, which can be extracted by the usual principal value prescription, where PV denotes the Cauchy principal value, defined by where g(y) is regular on [0, 1] and y 0 ∈ [0, 1]. 
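Both devices just described, the end-point subtraction and the principal-value treatment of poles displaced by the iε, can be illustrated with a short numerical toy check. In the sketch below the test function g(y) = 1/(1+y), the exponent −1−ε, the pole position y0 = 0.3 and the values of ε and δ are all arbitrary illustrative choices rather than quantities taken from the cut integrals above; the first block checks the subtraction identity (exactly, and truncated at order ε^0), and the second checks the Sokhotski-Plemelj relation ∫₀¹ dy g(y)/(y − y0 − iδ) → PV ∫₀¹ dy g(y)/(y − y0) + iπ g(y0).

```python
# Toy numerical checks of the end-point subtraction and the principal-value
# prescription. The test function, exponents and parameters are illustrative only.
from mpmath import mp, mpf, quad, log, pi

mp.dps = 25
g = lambda y: 1 / (1 + y)

# (1) End-point subtraction: with eps < 0 the y -> 0 behaviour y^(-1-eps) is
# integrable, and
#   int_0^1 dy y^(-1-eps) g(y) = -g(0)/eps + int_0^1 dy y^(-1-eps) [g(y) - g(0)].
# The subtracted integrand is finite at y = 0 and can be expanded in eps.
eps = mpf('-0.05')
full = quad(lambda y: y**(-1 - eps) * g(y), [0, 1])
subtracted = -g(0) / eps + quad(lambda y: y**(-1 - eps) * (g(y) - g(0)), [0, 1])
order0 = -g(0) / eps + quad(lambda y: (g(y) - g(0)) / y, [0, 1])
print(full, subtracted, order0)      # first two agree exactly; third up to O(eps)

# (2) Sokhotski-Plemelj: a simple pole pushed below the axis by -i*delta gives
# the principal value plus i*pi times g evaluated at the pole.
y0 = mpf('0.3')
pv = quad(lambda y: (g(y) - g(y0)) / (y - y0), [0, 1]) + g(y0) * log((1 - y0) / y0)
for delta in (mpf('1e-2'), mpf('1e-4')):
    shifted = quad(lambda y: g(y) / (y - y0 - 1j * delta), [0, y0, 1])
    print(delta, shifted, pv + 1j * pi * g(y0))
```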
Note that the consistency throughout the calculation of the signs of the iε of uncut propagators and subdiagram invariants, as derived from the conventions of the extended cutting rules of section 3.2 (see also appendix A), is a nontrivial consistency check of these cutting rules. Summary and discussion As expected from the relations eq. (3.16) and eq. (3.17) among Cut, Disc and δ, and in particular from eq. (6.3), we observe that and Cut p 2 , and, Based on the results presented above, it is natural to ask whether double unitarity cuts reproduce the discontinuity of single unitarity cuts on a diagram by diagram basis. For instance, if we consider (p 2 1 , p 2 2 ) sequences of cuts, is it true that: 10) The answer to this question is not simple. Indeed, while eq. (6.9) is true, eq. (6.10) is not. This is because these kinds of diagram by diagram relations are very sensitive to the branch cut structure of single cut diagrams. Interestingly, all these subtleties are washed out when considering full sets of double unitarity cuts, and the results given in eq. (6.7) and (6.8) are valid despite them. We verified that for the case of the (p 2 1 , p 2 3 ) cuts of the ladder, diagram by diagram relations hold for all single cut diagrams. Because this falls outside of the subject of this paper, which is to relate sequences of unitarity cuts to iterated discontinuities and to the coproduct of uncut Feynman diagrams, we will not comment further on these relations. However, we believe this is an interesting subject for further study. More than two unitarity cuts Having considered a sequence of two unitarity cuts, it is natural to wonder about a sequence of three unitarity cuts in the three distinct channels of the ladder. Since the ladder is of transcendental weight four, we might expect the result of three cuts to give a function of weight one. It turns out, however, that the sequential cut in all three channels Figure 13: Cut [12456] on the three channels p 2 1 , p 2 2 , p 2 3 . simply gives zero. In this section, we explain briefly how this result is understood from the points of view of Disc, the coproduct and Cut. In the list of diagrams with cuts in the three channels p 2 1 , p 2 2 , p 2 3 , all but one have the property that one of the internal vertices has all three of its incident propagators cut, giving zero. The one remaining diagram is Cut [12456] , shown in fig. 13. This diagram turns out to vanish as well. In the figure, we have oriented the internal arrows according to positive energy flow. (We have assumed positive energy of p 3 and p 1 , and negative energy of p 2 , but all the other cases are similar or trivial.) To see the vanishing of this cut diagram, recall that we must evaluate this cut in a region where all three invariants p 2 i are positive. Use the momentum parametrization The two cut conditions on propagators 4 and 5, namely k 2 1 = 0 and (p 3 + k 1 ) 2 = 0 together, imply that k 10 = − p 2 3 /2, which violates the restriction on energy flow, k 10 > 0. The coproduct itself certainly allows truncations equalling the transcendental weight of the function, so there is no problem in writing nonvanishing expressions of the form δ x 1 ,x 2 ,x 3 ,x 4 F (z,z) for the ladder, and similarly for Disc x 1 ,x 2 ,x 3 ,x 4 . However, in relating these truncations to physical discontinuities, we must establish the correspondence between the x i and the invariants p 2 j , according to the rules stated in Section 3. 
Notably, the rules of Section 3 state that each variable x i must be able to approach its branch points independently of all the other x j . Since F (z,z) is merely a function of two variables, we are then limited to two iterations of the truncation of the coproduct. Any third truncation, δ x 1 ,x 2 ,x 3 F (z,z), does not correspond to a triple discontinuity of the integral. From cuts to dispersion relations and coproducts In previous sections we introduced computational tools to compute cut integrals, and we showed that extended cutting rules in real kinematics lead to consistent results. Furthermore, we argued that the entries in the coproduct of a Feynman integral can be related to its discontinuities and cut integrals. While these results are interesting in their own right, we present in this section a short application of how to use the knowledge of (sequences of) cut integrals, namely how to reconstruct some information about the original Feynman integral based on the knowledge of its cuts. It is obvious from the first entry condition that if all cuts are known, we can immediately write down the component (1, n − 1) of a pure integral of weight n. In particular, for the one-and two-loop triangle integrals investigated in previous sections, we immediately obtain ∆ 1,1 (T (z,z)) = log u 2 ⊗ δ u 2 T (z,z) + log u 3 ⊗ δ u 3 T (z,z) , and the quantities δ u i T (z,z) and δ u i F (z,z) are directly related to the discontinuities of the integral through eqs. (4.20), (5.27) and (5.38). Note that eq. (7.1) determines the functions T (z,z) and F (z,z) uniquely up to terms proportional to π. Similarly, in eq. (6.3) we have shown how the double discontinuities of the two-loop ladder triangle are related to the entries in the coproduct. We can then immediately write 2) and the values of δ u i ,α F (z,z) can be read off from eq. (6.3). 9 Thus, we see that the knowledge of all double discontinuities enables us to immediately write down the answer for the (1,1,2) component of the two-loop ladder triangle. Note that the knowledge of eq. (7.2) uniquely determines the function F up to terms proportional to zeta values. While the previous application is trivial and follows immediately from the first entry condition and the knowledge of the set of variables that can enter the symbol in these particular examples, it is less obvious that we should be able to reconstruct information about the full function by looking at a single unitarity cut, or at a specific sequence of two unitarity cuts. In the rest of this section we give evidence that this is true nevertheless. The main tool for determining a Feynman integral from its cuts is the dispersion relation, which expresses a given Feynman integral as the integral of its discontinuity across a certain branch cut. Traditionally used in the context of the study of strongly interacting theories, dispersion relations appear more generally as a consequence of the unitarity of the S-matrix, and of the analytic structure of amplitudes [60]. These relations are valid in perturbation theory, order by order in an expansion of the coupling constant. It was shown in refs. [1][2][3][4][5] that individual Feynman integrals can also be written as dispersive integrals. The fundamental ingredient in the proof of the existence of this representation is the largest time equation [2], which is also the basis of the cutting rules. 
In the first part of this section we briefly review dispersion relations for Feynman integrals, illustrating them with the example of the one-loop three-mass triangle integral discussed in section 4.1. In the second part we show that, at least in the case of the integrals considered in this paper, we can use the modern Hopf algebraic language to determine the symbol of the integrals from a single sequence of unitarity cuts. We note however that this reconstructibility works for the full integral, and not for individual terms in the Laurent expansion in . We therefore focus on examples which are finite in four dimensions, so that we can set = 0. 10 Dispersion relations Dispersion relations are a prescription for computing an integral from its discontinuity across a branch cut, taking the form where ρ(p 2 1 , s, . . .) = Disc p 2 2 F (p 2 1 , p 2 2 , . . .) p 2 2 =s , as computed with eq. (3.1), and the integration contour C goes along that same branch cut. The above relation can be checked using eqs. (3.1) and (6.5). In order to illustrate the use of dispersion relations, we briefly look at the case of the scalar three-mass triangle. Its p 2 2 -channel discontinuity was computed in eq. (4.19), and we recall it here expressed in terms of Mandelstam invariants, This leads to a dispersive representation for the three-mass triangle of the form . (7.5) Note that the integration contour runs along the real positive axis: it corresponds to the branch cut for timelike invariants of Feynman integrals with massless internal legs. Already for this not too complicated diagram we see that the dispersive representation involves a rather complicated integration. The main difficulty in performing the integral above comes from the square root of the Källén function, whose arguments depend on the integration variable. However, defining x = s/p 2 1 , and introducing variables w andw similar to eq. (4.3) by 6) or equivalently, 10 A counterexample to the reconstructibility of individual terms in the Laurent expansion is given by the two-mass-hard box: it is clear from eq. (4.31) that a cut in a single channel can fail to capture all terms of the symbol. we can rewrite the dispersive integral as, where the integration region for w andw is deduced from the region where the discontinuity is computed (see, e.g., table 2). Written in this form, the remaining integration is trivial to perform in terms of polylogarithms, and we indeed recover the result of the three-mass triangle, eq. (4.7). For the three-mass triangle, we can in fact take a second discontinuity and reconstruct the result through a double dispersion relation because the discontinuity function, eq. (7.4), has a dispersive representation itself [1,54]. Note that this representation falls outside of what is discussed in ref. [5], and we are not aware of a proof of its existence from first principles. The double discontinuity is simply given, up to overall numerical and scale factors, by the inverse of the square root of the Källén function, see eq. (4.22). We obtain The integral is trivial to perform and leads to the correct result. 11 We see that we can obtain the result for the one-loop three-mass triangle from the knowledge of its single and double cuts. Note that an important ingredient in order to perform the dispersive integral was the choice of variables in which to write the dispersive integral. 
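The structure of eq. (7.3) can be made concrete with a much simpler, weight-one toy function that is not one of the integrals of the paper: F(s) = −log(1 − s) has a branch cut for s ≥ 1 with Im F(s + i0) = π there, and a once-subtracted dispersion integral along that cut reproduces it exactly. The sketch below checks this numerically; the subtraction needed here is a feature of this toy example and is not meant as a statement about the triangle integral.

```python
# Once-subtracted dispersion relation for the toy function F(s) = -log(1 - s),
# whose discontinuity across the cut s in [1, inf) is 2*pi*i, i.e. Im F = pi there:
#   F(s) = (s / pi) * int_1^inf ds' Im F(s') / (s' * (s' - s)) .
# Purely illustrative; not one of the integrals discussed in the text.
from mpmath import mp, mpf, quad, log, pi, inf

mp.dps = 25

def F_exact(s):
    return -log(1 - s)

def F_dispersive(s):
    im_f = pi                                   # Im F(s') = pi for s' > 1
    return (s / pi) * quad(lambda sp: im_f / (sp * (sp - s)), [1, inf])

for s in (mpf('-3'), mpf('0.5'), mpf('0.9')):
    print(s, F_exact(s), F_dispersive(s))       # the two columns agree
```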
While the choice of the variables z andz is not obvious a priori when looking at the corresponding Feynman integral, these variables appear naturally when parametrizing the phase space integrals corresponding to the cut integrals. This gives hope that for more complicated Feynman integrals, computing their cuts could be a good way to identify the most suitable variables in which to express the uncut integral (the equivalent of the z and z variables that appeared naturally in this example), as they are simpler functions that already have basic characteristics of the full Feynman integral. We will not explore this point further in this paper, and we leave it for future work. Reconstructing the coproduct from a single unitarity cut As discussed above, Feynman diagrams can be fully recovered from unitarity cuts on a given channel through dispersion relations. These relations rely on two ingredients: 11 We have redefined w andw by replacing u3 by y in eq. (7.7). Just as for the single dispersion integral, the integration region is deduced from the region where the double discontinuity is computed, R2,3 in this case. Changing variables to β = 1 w and γ = 1 1−w makes the integral particularly simple to evaluate. the discontinuity of a function across a specific branch cut and the position of that particular branch cut. Given the relations between the (1, n − 1) entries of the coproduct, discontinuities and single unitarity cuts established in previous sections, it is clear that the full information about the Feynman integral is encoded in any one of these entries of the coproduct, since it contains the same information about the function as a dispersive representation. We should thus be able to reconstruct information about the full function by looking at a single cut in a given channel. For simplicity, we only work at the level of the symbol in the rest of this section, keeping in mind that we lose information about zeta values in doing so. In a nutshell, we observe that if we combine the first entry condition and the results for (the symbols of) the discontinuities with the integrability condition (2.13), we immediately obtain the symbol of the full function. In the following we illustrate this procedure on the examples of the one-loop triangle and two-loop ladder triangle. Starting from the result for the unitarity cut on a single channel, the procedure to obtain the symbol of the full function can be formulated in terms of a simple algorithm, which involves two steps: (i) check if the tensor satisfies the integrability condition, and if not, add the relevant terms required to make the tensor integrable. (ii) check if the symbol obtained from the previous step satisfies the first entry condition, and if not, add the relevant terms. Then return to step (i). We start by illustrating this procedure on the rather simple example of the three-mass triangle of Section 4.1. From eq. (4.19), the symbol of the cut on the u 2 channel is where we emphasize that the rational function is to be interpreted as the symbol of a logarithm. Since we considered a cut on the p 2 2 channel,the first entry condition implies that we need to prepend u 2 = zz to the symbol of the discontinuity. Thus we begin with the tensor We then proceed as follows. • Step (i): This tensor is not the symbol of a function, as it violates the integrability condition. 
To satisfy the integrability condition, we need to add the two terms The full tensor is not the symbol of a Feynman diagram, since the two new terms do not satisfy the first entry condition. • Step (ii): To satisfy the first entry condition, we add two new terms: At this stage, the sum of terms obeys the first entry condition and the symbol obeys the integrability condition, so we stop our process. Putting all the terms together, we obtain S(T (z,z)) = 1 2 which agrees with the symbol of the one-loop three mass triangle in D = 4 dimensions, eq. (4.10). While the previous example might seem too simple to be representative, we show next that the same conclusion still holds for the two-loop ladder. In the following we use our knowledge of the cut in the p 2 3 channel, eq. (5.26), and show that we can again reconstruct the symbol of the full integral F (z,z). Combining eq. (5.26) with the first entry condition, we conclude that S(F (z,z)) must contain the following terms: If we follow the same steps as in the one-loop case, we can again reconstruct the symbol of the full function from the knowledge of the symbol of the cut in the p 2 3 channel alone. More precisely, we perform the following operations: • Step (i): To obey the integrability condition, we must add to the expression above the following eight terms: • Step (ii): The terms we just added violate the first entry condition. To restore it we must add eight more terms that combine with the ones above to have Mandelstam invariants in the first entry, • Step (i): The newly added terms violate the integrability condition. To correct it, we must add two new terms, • Step (ii): We again need to add terms that combine with the two above to have invariants in the first entry, At this point the symbol satisfies both the first entry and integrability conditions, and we obtain a tensor which agrees with the symbol for F (z,z) (5.5). We note that for both examples considered above, the same exercise could have been done using the results for cuts in other channels. Reconstructing the coproduct from double unitarity cuts While the possibility of reconstructing the function from a single cut in a given channel might not be too surprising due to the fact that Feynman integrals can be written as dispersive integrals over the discontinuity in a given channel, we show in this section that in this particular case we are able to reconstruct the full answer for ∆ 1,1,2 F from the knowledge of just one sequential double cut. Note that ∆ 1,1,2 F is completely equivalent to the symbol S(F ). Indeed, the weight two part of ∆ 1,1,2 F is defined only modulo π, which is precisely the amount of information contained in the symbol. To be more concrete, let us assume that we know the value of Cut p 2 3 ,p 2 2 F , and thus we have determined that δ u 3 ,z F = − log z logz + 1 2 log 2 z . (7.11) Since the symbols of log u i and δ u 3 ,z F have all their entries drawn from the set {z,z, 1 − z, 1 −z}, we make the assumption that ∆ 1,1,2 F can be written in the following general form: where f u 3 ,z = δ u 3 ,z F and the remaining f u i ,α denote some a priori unknown functions of weight two (defined only modulo π). 
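The integrability condition that drives the completion algorithm above (and the constraints imposed in the next step of the reconstruction) is straightforward to check by computer for symbols in the two variables z and z̄. The sketch below is an illustrative implementation, not code from the paper: symbols are stored as lists of (coefficient, word) pairs, the test requires the d log ∧ d log combination of every adjacent pair of entries to vanish for each fixed choice of the remaining entries, and the test case uses the weight-two combination 2 Li2(z) − 2 Li2(z̄) + log(zz̄) log((1−z)/(1−z̄)), which we take to be the pure part of the one-loop three-mass triangle (an assumption about normalization, consistent with the completion carried out above).

```python
# Integrability check for symbols in the variables z and zbar, as used in the
# completion algorithm. Representation and test case are illustrative.
import sympy as sp

z, zb = sp.symbols('z zbar')

def wedge(a, b):
    """Coefficient of dz ^ dzbar in dlog(a) ^ dlog(b)."""
    la, lb = sp.log(a), sp.log(b)
    return sp.simplify(la.diff(z) * lb.diff(zb) - la.diff(zb) * lb.diff(z))

def is_integrable(symbol):
    """symbol: list of (coeff, (a_1, ..., a_n)) with sympy expressions as entries."""
    weight = len(symbol[0][1])
    for i in range(weight - 1):
        buckets = {}
        for coeff, word in symbol:
            key = word[:i] + word[i + 2:]          # the spectator entries
            buckets[key] = buckets.get(key, 0) + coeff * wedge(word[i], word[i + 1])
        if any(sp.simplify(val) != 0 for val in buckets.values()):
            return False
    return True

# Prepending u2 = z*zbar to the symbol of the cut, log((1-z)/(1-zbar)),
# gives a tensor that is not yet integrable ...
cut_only = [(1, (z * zb, 1 - z)), (-1, (z * zb, 1 - zb))]

# ... while the completed weight-two combination
#   2 Li2(z) - 2 Li2(zbar) + log(z*zbar) * log((1-z)/(1-zbar))
# (using S(Li2(x)) = -(1-x) (tensor) x and the shuffle of the two logarithms)
# passes the test.
completed = [(-2, (1 - z, z)), (2, (1 - zb, zb)),
             (1, (z * zb, 1 - z)), (-1, (z * zb, 1 - zb)),
             (1, (1 - z, z * zb)), (-1, (1 - zb, z * zb))]

print(is_integrable(cut_only))    # False
print(is_integrable(completed))   # True
```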
Imposing the integrability condition in the first two entries of the coproduct gives the following constraints among the f u i ,α : If we require in addition thatF = −F , where the tilde denotes exchange of z andz (because its leading singularity is likewise odd under this exchange), we find in additioñ Thus, we can write (7.14) Notice that up to this stage all the steps are generic: we have not used our knowledge of the functional form of the double cut f u 3 ,z , but only the knowledge of the set of variables entering its symbol and the antisymmetry of the leading singularity under the exchange of z andz. Next we have to require that eq. (7.14) be integrable in the second and third component. Assuming again that we only consider symbols with entries drawn from the set {z, 1 − z,z, 1 −z}, we use eq. (7.11) and impose the integrability condition eq. (2.13), and we see that the symbols of the two unknown functions in eq. (7.14) are uniquely fixed, in agreement with eq. (5.4). We stress that the fact that we can reconstruct ∆ 1,1,2 F from a single sequence of cuts is not related to the specific sequence we chose. For example, if we had computed only Cut p 2 1 ,p 2 2 F and thus determined that −f u 2 ,z −f u 3 ,z = −Li 2 (z)+Li 2 (z)+log z logz − 1 2 log 2 z, the integrability condition would fix the remaining two free coefficients in a similar way. Finally, we could consider Cut p 2 3 ,p 2 1 F , but since this cut is obtained by a simple change of variables from Cut p 2 3 ,p 2 2 F through the reflection symmetry of the ladder, it is clear that integrability fixes the full symbol once again. Let us briefly consider the analogous construction for the one-loop triangle, where the f u i ,α are simply constant functions. A double cut, without loss of generality say Cut p 2 2 ,p 2 3 , gives a constant value for f p 2 2 ,1−z , as in eq. (4.23) and eq. (4.24). We would conclude in the analog of eq. (7.14) above that we have a consistent solution with f u 3 ,z =f u 3 ,z and f u 2 ,z = f u 3 ,1−z = 0, which is indeed the ∆ 1,1 of the triangle, obtained by a consistent completion algorithm as in the previous subsection. While it is quite clear that the reason why the algorithm of section 7.2 converged was the existence of a dispersive representation of Feynman integrals, it is not clear to us at this stage whether the existence of a double dispersive representation is a necessary condition for the reconstruction based on the knowledge of ∆ 1,1,2 done in this section to work, although it does seem reasonable that it would be the case. In closing, we notice that in this example, the integrability condition eq. (7.12) implies that Cut p 2 , through the relations listed in eq. (6.3). It would be interesting to see whether there is a general link between the integrability of the symbol and the permutation invariance of a sequence of cuts. Discussion In this paper we studied cut Feynman diagrams with two objectives. The first was to develop techniques for analytic evaluation of such integrals, and the second to formulate precise relations between cut integrals and uncut ones, providing an interpretation of the coproduct and the symbol of the latter. Techniques for direct computation of cut integrals in D spacetime dimensions are far less developed than those for ordinary (uncut) loop integrals. A well established technique for the calculation of multi-loop diagrams is the integration over an off-shell subdiagram. 
The ultimate advantage of cut integrals is that multi-loop cut diagrams reduce to integrals over products of simpler lower-loop integrals with extra on-shell external legs. This was illustrated here at the two-loop level, where different cuts where computed using one-loop triangle and box integrals with massless or a limited number of massive external legs. This method has the potential to be applied to more complicated multi-loop and multi-leg cut integrals. Throughout this paper we took D = 4 − 2 -dimensional cuts. This is a necessity when dealing with infrared-divergent cut integrals: notably, individual cuts of (multiloop) integrals that are themselves finite in four dimensions may be divergent when the internal propagators that are put on shell are massless. The sum of all cuts on a given channel corresponds, according to the largest time equation [2,3], to the discontinuity of the uncut integral; given that the latter is finite, one expects complete cancellation of the singularities among the different cuts. This situation was encountered here upon taking unitarity cuts of the two-loop ladder graph, where we have seen that the pattern of cancellation is similar to the familiar real-virtual cancellation mechanism in cross sections, although this example does not correspond to a cross section. Understanding this pattern of cancellation is useful for the general program of developing efficient subtraction procedures for infrared singularities, and it would be interesting to explore how this generalizes for other multi-loop integrals. Taking a step beyond the familiar case of a single unitarity cut, we developed here the concept of a sequence of unitarity cuts. To consistently define this notion, we extended the cutting rules of refs. [2,3] to accommodate multiple cuts on different channels in an appropriately chosen kinematic region. The cutting rules specify a unique prescription for complex conjugation of certain vertices and propagators, which is dictated by the channels on which cuts are taken. Importantly, the result does not depend on the order in which the cuts are applied. The kinematic region is chosen such that the Mandelstam invariants corresponding to the cut channels are positive, corresponding to timelike kinematics. In its center-of-mass Lorentz frame, this invariant defines the energy flowing through the set of on-shell propagators. The energy flow through all these propagators has a consistent direction that is dictated by the external kinematics; for any given propagator this direction must be consistent with the direction of energy flow assigned to it by any other cut in the sequence. We further exclude crossed cuts, as well as iterated cuts in the same channel since they are not related to discontinuities as computed in this paper. Finally, we restrict ourselves to real kinematics. These cutting rules pass numerous consistency checks and they form a central result of the present paper. Understanding what information is contained in crossed cuts and in iterated cuts in the same channel as well as what can be obtained by allowing for complex kinematics are of course interesting questions for further study. Having specified the definition of a sequence of unitarity cuts, we find the following correspondence, which we conjecture to be general, among (a) the sum of all cut diagrams in the channels s 1 , . . . 
s k , which we denote by Cut s 1 ,...,s k ; (b) a sequence of discontinuity operations, which we denote by Disc x 1 ,...,x k , where the x i are algebraic functions of the Mandelstam invariants; (c) and the weight n − k cofactors of the terms in the coproduct of the form ∆ 1,1,...,1,n−k , where each of the k weight one entries of a specific term in ∆ 1,1,...,1,n−k is associated with one of the x i in a well defined manner, which we call δ x 1 ,...,x k . The correspondence is formulated in eqs. (3.16) and (3.17). We illustrated it using the two-loop ladder triangle example where one may take up to two sequential cuts with any combination of channels, obtaining nontrivial results; the relations are summarized by eqs. (6.3). In examples with more loops and legs, we expect that a deeper sequence of unitarity cuts may be attainable. We find that while the leftmost entry of the symbol (or equivalently of the ∆ 1,...,1,n−k terms in the coproduct) is always one invariant out of the subset of the Mandelstam invariants in which the function has a branch cut (the first entry condition [16]), all other entries may not necessarily be such a variable, but may instead be drawn from a longer list, {x i }, sometimes called the symbol alphabet. These are also the natural variables appearing as arguments of logarithms and polylogarithms in both cut diagrams and the original uncut one. For example, in the two-loop ladder triangle considered through O( 0 ), the alphabet consists of four letters {z,z, 1 − z, 1 −z} defined in eq. (4.4). In general, letters in the symbol alphabet x i are algebraic functions of the Mandelstam invariants: they are the solutions of quadratic equations which emerge upon solving the simultaneous on-shell conditions imposed by cuts. Consequently, there is hope that cuts can identify the relevant variables in terms of which the uncut integral can be most naturally expressed. Because the arguments of polylogarithms, and equivalently the second and subsequent entries of the coproduct ∆ 1,1,...,1,n−k terms, are not the Mandelstam invariants themselves, while any unitarity cut is defined by a channel that does correspond to a Mandelstam invariant s i , the relation between cuts and discontinuities in eq. (3.16) is more complicated starting from the second cut. Nevertheless, we have seen how these variables are related. The rule is that the relevant branch points are common to s i and x i , and these branch points can be approached by x i independently of the other variables x j . Also, the iε prescription of x i is inherited from that of s i , so that the relation of eq. (3.16) can be made precise. We verified that the expected relations between sequences of cuts, sequences of discontinuities and the relevant terms of the coproduct hold in the cases of the double cut of the one-loop triangle, the four-mass box and the two-mass-hard box. We then explored in detail the much less trivial two-loop three-mass ladder diagram, for which we also observed agreement with the expected relations. Given that cut diagrams are simpler to compute (owing to the fact that they reduce to integrals over products of simpler lower-loop amplitudes) and may identify the most convenient variables, it is natural to ponder whether the result of a cut diagram can be uplifted to obtain the uncut function. In the case of a single unitarity cut, this can always be done through a dispersion integral [1][2][3][4][5]. 
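The statement above that the letters are algebraic functions of the Mandelstam invariants can be made explicit for the triangle alphabet. Assuming the conventions indicated earlier in the text (u2 = zz̄, with p1^2 in the denominators of u2 and u3, so that u2 = p2^2/p1^2 and u3 = p3^2/p1^2) and additionally assuming u3 = (1−z)(1−z̄), which is consistent with but not quoted verbatim from eq. (4.4), z and z̄ are the two roots of a quadratic whose discriminant is the Källén function λ(1, u2, u3). The sketch below solves for them symbolically.

```python
# The letters z and zbar as algebraic functions of the Mandelstam ratios
# u2 = p2^2/p1^2 and u3 = p3^2/p1^2, assuming u2 = z*zbar and u3 = (1-z)*(1-zbar).
import sympy as sp

u2, u3 = sp.symbols('u2 u3', positive=True)
t = sp.symbols('t')

# z and zbar are the two roots of t^2 - (1 + u2 - u3)*t + u2 = 0.
roots = sp.solve(sp.Eq(t**2 - (1 + u2 - u3) * t + u2, 0), t)
print(roots)

# The discriminant of that quadratic is the Kallen function lambda(1, u2, u3),
# whose square root is the leading singularity referred to in the text.
discriminant = sp.expand((1 + u2 - u3)**2 - 4 * u2)
kallen = sp.expand(1 + u2**2 + u3**2 - 2*u2 - 2*u3 - 2*u2*u3)
print(sp.simplify(discriminant - kallen))   # 0
```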
In the case of a sequence of unitarity cuts, this requires a multiple dispersion relation, and the general conditions for these to exist are not known. In section 7 of the present paper we made some progress in developing methods for the reconstruction of a Feynman integral from its cuts. Our first observation, considering the reconstruction of the one-loop three-mass triangle from either its single or double cut, was that while dispersion relations may appear as complicated integrals, they become simple when expressed in terms of the natural variables x i . In these variables the dispersion integral in the case considered falls into the class of iterated integrals amenable to the Hopf algebra techniques. This is of course consistent with the fact that each dispersion integral is expected to raise the transcendental weight of the function by one: it is the opposite operation to taking the discontinuity of the function across its branch cut. It is clearly important to study this connection between dispersion integrals and iterated polylogarithmic integrals for other examples. We next presented ways to reconstruct information about the full function from the knowledge of a single set of cuts, along with the symbol alphabet. This was achieved by using two main constraints: the integrability of the symbol and the first entry condition. More precisely, we showed how to reconstruct the symbol of the full integral from the knowledge of (the symbol of) a single unitarity cut in one of the channels. We believe that our approach to reconstruction is valid generally, provided the existence of a dispersive representation of Feynman integrals. We also showed that in the case of the two-loop ladder (and the much simpler one-loop triangle) it is possible to reconstruct all the terms of the ∆ 1,1,2 component of the coproduct of the uncut integral from the knowledge of a single sequence of double cuts. How general this procedure is is less obvious to us, and it is certainly worth investigating. Another very intriguing observation based on the examples at hand concerns the connection between the integrability condition of the symbol and the equality of sequences of unitarity cuts between which the order is permuted. As mentioned above, the result of a sequence of unitarity cuts does not depend on the order in which the cuts are applied. Therefore the double cut relations summarized in eqs. (6.3) must satisfy Cut p 2 i ,p 2 j = Cut p 2 j ,p 2 i . This in turn implies highly nontrivial relations between different ∆ 1,1,2 components; for example the r.h.s. of eq. (6.3a) must be the same as the r.h.s. of eq. (6.3b), and similarly for the other pairs. The crucial observation is that these relations indeed hold owing to the integrability constraints as summarized in eq. (7.12). Note that the latter are based solely on the symbol alphabet and the integrability condition of eq. (2.13). We leave it for future study to determine how general the connection is between integrability and permutation invariance of a sequence of cuts. In conclusion, we developed new techniques to evaluate cut Feynman integrals and relate these to the original uncut ones. In dealing with complicated multi-loop and multileg Feynman integrals there is a marked advantage to computing cuts, where lower-loop information can be systematically put to use. While cut integrals are simpler than uncut ones, they depend on the kinematics through the same variables, {x i }, which characterize the analytic structure of the integral. 
Identifying this alphabet is crucial in relating cuts to terms in the coproduct, and then either integrating the dispersion relation or reconstructing the symbol of the uncut integral algebraically. We have demonstrated that the language of the Hopf algebra of polylogarithms is highly suited for understanding the analytic structure of Feynman integrals and their cuts. Finally, we have shown that there is a great potential for computing Feynman integrals by using multiple unitarity cuts, and further work in this direction is in progress.

Acknowledgments

e a Tecnologia, Portugal, through a doctoral degree fellowship (SFRH/BD/69342/2010). R.B. was supported in part by the Agence Nationale de la Recherche under grant ANR-09-CEXC-009-01 and is grateful to the BCTP of Bonn University for extensive hospitality in the course of this project. R.B. and C.D. thank the Higgs Centre for Theoretical Physics at the University of Edinburgh for its hospitality. E.G. was supported in part by the STFC grant "Particle Physics at the Tait Institute" and thanks the IPhT of CEA-Saclay for its hospitality.

A Notation and conventions

Feynman rules. Here we summarize the Feynman rules for cut diagrams in massless scalar theory. For a discussion of their origin, as well as the rules for determining whether a propagator is cut or uncut, see section 3. There can be multiple dashed lines, indicating cuts, on the same propagator, without changing its value. There is a theta function restricting the direction of energy flow on a cut propagator, whose origin is detailed in Section 3. In the examples, we omit writing the theta function, as there is always at most one nonvanishing configuration.

[Figure: an uncut one-loop triangle with one external mass (p_3^2), and the double cut of a three-mass triangle with masses p_1^2, p_2^2 and p_3^2 in the p_1^2 and p_3^2 channels.]
Biglycan- and Sphingosine Kinase-1 Signaling Crosstalk Regulates the Synthesis of Macrophage Chemoattractants In its soluble form, the extracellular matrix proteoglycan biglycan triggers the synthesis of the macrophage chemoattractants, chemokine (C-C motif) ligand CCL2 and CCL5 through selective utilization of Toll-like receptors (TLRs) and their adaptor molecules. However, the respective downstream signaling events resulting in biglycan-induced CCL2 and CCL5 production have not yet been defined. Here, we show that biglycan stimulates the production and activation of sphingosine kinase 1 (SphK1) in a TLR4- and Toll/interleukin (IL)-1R domain-containing adaptor inducing interferon (IFN)-β (TRIF)-dependent manner in murine primary macrophages. We provide genetic and pharmacological proof that SphK1 is a crucial downstream mediator of biglycan-triggered CCL2 and CCL5 mRNA and protein expression. This is selectively driven by biglycan/SphK1-dependent phosphorylation of the nuclear factor NF-κB p65 subunit, extracellular signal-regulated kinase (Erk)1/2 and p38 mitogen-activated protein kinases. Importantly, in vivo overexpression of soluble biglycan causes Sphk1-dependent enhancement of renal CCL2 and CCL5 and macrophage recruitment into the kidney. Our findings describe the crosstalk between biglycan- and SphK1-driven extracellular matrix- and lipid-signaling. Thus, SphK1 may represent a new target for therapeutic intervention in biglycan-evoked inflammatory conditions. Introduction Inflammation can be triggered by various external microbial stimuli as well as by endogenous molecules under sterile conditions. The latter are called damage-associated molecular patterns (DAMPs) and are released following cell death or tissue injury [1]. There are two adapter molecules essential for the TLR signaling: the myeloid differentiation primary response protein (MyD88) and Toll/IL-1R domain-containing adaptor inducing interferon (IFN)-β (TRIF). While signaling through TLR4 involves both MyD88 and TRIF adapters the TLR2 signaling pathway requires exclusively MyD88 for NF-κB activation [10]. It is well documented that the biglycan protein core is solely responsible for the high affinity binding of this proteoglycan to TLR2 and TLR4 [11]. On the other hand, only fully glycanated intact biglycan, consisting of the protein core and two glycosaminoglycan side chains, is capable of inducing TLR2 and TLR4 signaling [5,8]. The structural motifs of biglycan protein and the adapter molecules involved in these interactions need further investigations. By selective engagement of TLRs and their adaptor molecules biglycan tightly regulates inflammatory outcome [6,7,11,12]. Accordingly, biglycan-induced recruitment of macrophages to the kidney depends on biglycan-triggered transcription and secretion of macrophage chemoattractants, chemokine (C-C motif) ligand CCL2 and CCL5 [4][5][6]9,13,14]. Previously, we showed that circulating biglycan evokes the production of CCL2 in a TLR2/4/MyD88-dependent manner, whereas the production of CCL5 was TLR4/TRIF dependent [6]. The interactions between biglycan and different receptors [3,4] orchestrate the recruitment of macrophages to inflamed tissues under disease conditions such as in lupus nephritis [9] and renal ischemia-reperfusion injury [11,15]. However, to date, the exact molecular mechanism through which biglycan-induced TLR2/TLR4/MyD88 and TLR4/TRIF pathways lead to the production of CCL2 and CCL5 remain elusive. 
There is growing evidence that sphingolipid signaling plays an essential role in the modulation of various inflammatory pathways [16]. Sphingosine kinases (SphKs), with the two isoforms SphK1 and SphK2, are enzymes that catalyze the adenosine triphosphate (ATP)-dependent phosphorylation of sphingosine (Sph) to produce sphingosine 1-phosphate (S1P) [17,18]. S1P is implicated in cellular processes such as cell survival, proliferation, differentiation, migration, and immune function [18]. Given these roles of S1P, sphingosine kinase activity is a target in many pathological conditions, such as atherosclerosis, acute pulmonary injury, respiratory distress, tumorigenesis, and metastasis, as well as in inflammation [19]. In response to TNFα or IL-1β, SphK1 is phosphorylated by Erk1/2, which increases its catalytic activity [20]. Furthermore, SphK1 and the production of S1P increase the activity of the TNF receptor-associated factor 2 (TRAF2) E3 ubiquitin ligase, the polyubiquitination of receptor-interacting protein 1 (RIP1), and downstream NF-κB activation [21]. This potentiates the expression of chemokine (C-X-C motif) ligand (CXCL) 10 and CCL5, resulting in the recruitment of mononuclear cells to the site of inflammation [22]. Moreover, lipopolysaccharides (LPS) induce SphK1 activation via TLR4 in macrophages, thereby promoting IL-6 generation [23]. Targeting SphK1 in mice by genetic ablation or pharmacological inhibition ameliorates inflammatory cytokine production as well as the pathogenesis of experimental models of arthritis [24,25], hepatitis [26], and pulmonary fibrosis [27]. Based on these reports, it is tempting to speculate that there is a reciprocal interference between biglycan and sphingolipid signaling in the regulation of inflammation.

Here we demonstrate for the first time that there is crosstalk between the ECM-derived component biglycan and SphK1-driven lipid signaling. We show that soluble biglycan enhances the expression and activity of SphK1 via the TLR4/TRIF pathway in mouse primary macrophages. Biglycan-induced SphK1 activity is essential for the production of the CCL2 and CCL5 chemoattractants. Importantly, we prove this concept in vivo in soluble biglycan-overexpressing mice deficient in SphK1. Thus, targeting SphK1 may represent a potential therapeutic strategy in biglycan-evoked sterile inflammation.

Biglycan Triggers the Expression and Activity of Sphk1 in Mouse Peritoneal Macrophages

To address the potential interference between biglycan and sphingolipid signaling in inflammation, thioglycolate-elicited primary macrophages isolated from wild-type (WT) C57BL/6 mice were stimulated with recombinant intact biglycan consisting of the protein core and two glycosaminoglycan side chains (4 µg/mL). After 30 min of incubation with biglycan, a significant increase in Sphk1 mRNA was already detectable, and expression rose in a time-dependent manner up to five-fold after 6 h (Figure 1a, shown at 6 h of incubation).
[Figure 1 legend, panels (c-e): (c) Chromatin immunoprecipitation (ChIP) with an anti-NF-κB p65 antibody in WT macrophages stimulated with biglycan for 60 min, followed by qPCR for Sphk1 with primers binding at the transcription start site (TSS) or the promoter region; ChIP-qPCR data were normalized to IgG and are given as fold induction over untreated control. (d) SphK activity assay based on sphingosine phosphorylation in the presence of [γ-32P]ATP in WT macrophages stimulated with biglycan for 30, 60, 90 and 120 min; products were separated by TLC and detected with a PharosFX Plus Molecular Imager (Bio-Rad, Munich, Germany); the arrow indicates [γ-32P]S1P. (e) Quantification of the bands in (d). Data are means ± SD; (a,b) n = 5 and (c-e) n = 3 individual experiments; * p < 0.05; n.s. = not significant.]

Next, the receptor and adaptor molecules involved in biglycan-induced Sphk1 expression were investigated, using WT, Tlr2−/−, Tlr4−/− and Tlr2−/−/Tlr4-m macrophages. Quantitative real-time polymerase chain reaction (qPCR) analysis revealed a similar level of Sphk1 mRNA in WT and Tlr2−/− macrophages in response to biglycan, but no increase in Tlr4−/− and Tlr2−/−/Tlr4-m macrophages (Figure 1a). Thus, biglycan signals through TLR4 to induce Sphk1 mRNA expression. To identify the TLR4 adaptor molecule involved in biglycan-dependent Sphk1 overexpression, macrophages from WT and Trif-m mice were pre-incubated for 30 min with a MyD88 inhibitor (50 µM) prior to stimulation with biglycan. Dysfunctional mutation of the TRIF protein completely abolished biglycan-induced Sphk1 expression, whereas the MyD88 inhibitor had no influence on it (Figure 1b). The functionality of the MyD88 inhibitor was proven by its inhibitory effect on biglycan-dependent induction of TNFα, as described previously [7]. To further determine whether NF-κB is involved in the biglycan/TLR4/TRIF-dependent induction of Sphk1, a chromatin immunoprecipitation (ChIP) assay was performed. Indeed, stimulation with biglycan (30-120 min) potentiated the direct interaction between the NF-κB p65 subunit and the Sphk1 transcription start site (TSS) in WT macrophages (Figure 1c, shown at 60 min of incubation).
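The fold-induction values quoted for the qPCR data are conventionally obtained with the comparative Ct (2^-ΔΔCt) method, normalizing the gene of interest to Gapdh and expressing it relative to unstimulated control cells. The short sketch below illustrates that calculation; the Ct numbers are invented placeholders, not data from this study.

```python
# Comparative Ct (2^-ddCt) fold-induction calculation, as conventionally used for
# qPCR data normalized to Gapdh and expressed relative to untreated control.
# All Ct values below are invented placeholders for illustration only.

def fold_induction(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    dct_treated = ct_target_treated - ct_ref_treated   # normalize to reference gene
    dct_control = ct_target_control - ct_ref_control
    ddct = dct_treated - dct_control                    # relative to untreated control
    return 2 ** (-ddct)

# Hypothetical example: Sphk1 vs. Gapdh in biglycan-treated vs. untreated macrophages.
print(fold_induction(ct_target_treated=24.1, ct_ref_treated=18.0,
                     ct_target_control=26.5, ct_ref_control=18.1))   # roughly 5-fold
```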
Next, we investigated whether biglycan is capable of inducing SphK1 activity in macrophages. In fact, 30-120 min of stimulation with biglycan resulted in a 2-3-fold enhancement of SphK1 activity in WT macrophages (Figure 1d,e). Collectively, we demonstrated that biglycan triggers the activity of SphK1 in murine macrophages via TLR4/TRIF/NF-κB.
Sphk2 Deficiency Potentiates Biglycan-Triggered Sphk1 mRNA Expression
In the next set of experiments, the influence of biglycan on the expression of the sphingosine kinase isoform Sphk2 was investigated. Biglycan had no effect on Sphk2 mRNA expression in WT macrophages during 30 min to 6 h of incubation (Figure 2a). In various cell types deficient in Sphk2, a compensatory overexpression of Sphk1 mRNA has been reported [27,28]. Indeed, Sphk2−/− macrophages stimulated with biglycan for 2 h revealed a marked overexpression of Sphk1 mRNA (Figure 2b). Taken together, biglycan selectively upregulates Sphk1 expression in macrophages, and this is more pronounced when SphK2 is lacking. Therefore, in the following experiments, biglycan-stimulated Sphk2−/− macrophages were considered as Sphk1-overexpressing cells.
Biglycan Triggers CCL2 and CCL5 Production in a SphK1-Dependent Manner
Previously, we have shown that biglycan triggers the production of the macrophage chemoattractants CCL2 and CCL5 [5-7,9]. As SphK1 modulates the expression of various chemoattractants [23,29,30], we addressed the issue whether SphK1 is involved in biglycan-triggered production of CCL2 and CCL5.
Indeed, Sphk1 deficiency resulted in a marked reduction of biglycan-triggered Ccl2 mRNA expression in macrophages at 2 h of incubation (Figure 3a). To provide direct proof that the biglycan-driven overexpression of CCL2 and CCL5 in cells lacking SphK2 is caused by SphK1, WT and Sphk2−/− macrophages were incubated with biglycan in the presence of PF-543, a specific inhibitor of the SphK1 enzymatic activity [31,32]. As expected, this inhibitor reduced the biglycan-dependent enhancement of CCL2 (Figure 3e) and CCL5 (Figure 3f) protein levels in supernatants from WT cells and abolished the chemokine overproduction in Sphk2−/− macrophages. Thus, we provide here genetic and pharmacological proof that SphK1 is a crucial downstream mediator of biglycan-triggered CCL2 and CCL5 mRNA and protein expression in macrophages.
Biglycan Triggers Expression of Ccl2 via NF-κB, Erk1/2 and p38 MAPK, While Ccl5 Expression Is Induced through NF-κB and p38 MAPK
Our previous results demonstrated that biglycan induces the production of CCL2 in a TLR2/4/MyD88- and, in the case of CCL5, in a TLR4/TRIF-dependent manner [6]. However, the gap in the signaling pathway between the adaptor and effector molecules had not been characterized. Therefore, we aimed to identify the kinases responsible for biglycan-TLR2/4/MyD88- and biglycan-TLR4/TRIF-triggered synthesis of CCL2 and CCL5, respectively. It is known that biglycan activates the phosphorylation of Erk1/2 and p38 MAPK and the translocation of NF-κB in macrophages [5]. Thus, we applied U0126, SB203580 and the IκB kinase (IKK) inhibitor III, the inhibitors of mitogen-activated protein kinase Erk kinase (MEK), p38 MAPK, and IκB kinase, to WT macrophages for verification. Inhibition of IKK, Erk1/2 and p38 MAPK markedly reduced the biglycan-triggered Ccl2 mRNA expression (Figure 4a). In contrast, biglycan-dependent Ccl5 mRNA expression was reduced exclusively by the IKK and p38 MAPK inhibitors (Figure 4b).
These data show that biglycan-induced phosphorylation of Erk1/2, p38 MAPK and p65 is SphK1-dependent. Thus, biglycan triggers CCL2 and CCL5 production in macrophages through SphK1-controlled activation of Erk1/2, p38 MAPK and NF-κB.
Soluble Biglycan Triggers Renal Expression of Ccl2 and Ccl5 and Macrophage Recruitment into the Kidney in a Sphk1-Dependent Manner
To address the in vivo relevance of these findings, human biglycan (pLIVE-hBGN) or the empty pLIVE vector was transiently expressed (3 days) in WT, Sphk1−/− and Sphk2−/− murine livers under an albumin promoter [6,9]. Following transfection, soluble biglycan is released into the bloodstream and accumulates in various organs, e.g., in the kidney [6,9]. Overexpression of human biglycan in the liver was confirmed by qPCR and restriction fragment length polymorphism analysis (data not shown), as described previously [6,9]. Plasma and renal levels of human biglycan were verified by Western blots [6,9]. As expected from our previous results, Ccl2 (Figure 6a) and Ccl5 (Figure 6b) mRNA expression was elevated in WT pLIVE-hBGN vs. control pLIVE kidneys [6]. Importantly, biglycan-dependent induction of renal Ccl2 (Figure 6a) and Ccl5 (Figure 6b) mRNA expression significantly declined in transfected pLIVE-hBGN Sphk1-deficient vs. pLIVE-hBGN WT mice. On the contrary, the expression of both chemokines was markedly enhanced in Sphk2-deficient pLIVE-hBGN kidneys as compared to pLIVE-hBGN WT kidneys (Figure 6a,b). This was associated with reduced plasma levels of CCL2 (Figure 6c). In addition, immunostaining for the macrophage marker F4/80 in renal sections from pLIVE-hBGN-injected mice revealed a lower number of macrophages in Sphk1−/− mice compared to WT (Figure 6e,f). On the other hand, enhanced macrophage infiltration was found in kidney sections from pLIVE-hBGN-transfected Sphk2−/− vs. WT kidneys (Figure 6e,f). Thus, biglycan triggers the expression of Ccl2 and Ccl5 and the recruitment of macrophages into the kidney in a Sphk1-dependent manner.
Discussion
The present report reveals the sphingosine kinase SphK1 as a key contributor to the biglycan-driven inflammatory response in macrophages. Here we show that the ECM proteoglycan biglycan promotes the expression and activity of SphK1 through TLR4/TRIF/NF-κB and thus induces the production of the inflammatory chemoattractants CCL2 and CCL5 in macrophages. Biglycan triggers the synthesis and activity of SphK1 in a selective manner, having no effect on the regulation of the SphK2 isoform. Mechanistically, biglycan potentiates the production of CCL2 via TLR2/TLR4/MyD88/Erk1/2/p38 MAPK/NF-κB and of CCL5 via TLR4/TRIF/p38 MAPK/NF-κB in a SphK1-dependent manner, ultimately leading to the recruitment of macrophages into the kidney. The underlying mechanisms are graphically presented in Figure 7.
Figure 7. A working model summarizing the mechanisms of biglycan-driven and SphK1-mediated production of the macrophage chemoattractants CCL2 and CCL5. Following release from the ECM, biglycan interacts with TLR2 and -4 and triggers the production of CCL2 and CCL5 through TLR2/4/MyD88 and TLR4/TRIF, respectively. By signaling via TLR4, biglycan induces Sphk1 synthesis in a TLR4/TRIF-dependent manner. Moreover, biglycan induces the activity of SphK1. In turn, active SphK1 drives the biglycan-mediated production of CCL2 through Erk1/2, p38 MAPK and NF-κB activation, while CCL5 is induced only through p38 MAPK and NF-κB activation. Consequently, this leads to the recruitment of macrophages to inflamed tissues. Green arrows underline the effect of SphK1 on the biglycan-promoted NF-κB activation and the MyD88/Erk1/2/p38 MAPK and TRIF/p38 MAPK pathways. Black arrows describe the biglycan-mediated inflammatory cascade.
This is the first study showing that a component of the sphingolipid signaling network SphK1 is directly triggered by the ECM component biglycan. Diverse factors, such as TNFα [20,33,34], IL-1β [35], platelet-derived growth factor (PDGF) [36], transforming growth factor (TGFβ) [37,38], and nerve growth factor [39] have been reported to regulate SphK1. Among those factors, TGFβ and PDGF are not induced by biglycan [40,41]. By contrast, it is well known that biglycan acts as a trigger of TNFα and IL-1β protein in macrophages, requiring at least two hours of stimulation before the cytokines can be detected [5,9,37]. It is of note that biglycan-induced synthesis and activity of SphK1 as well as the interaction between NF-κB and Sphk1 TSS occur already after 30 min. Therefore, it is conceivable that biglycan directly triggers SphK1 expression. At later time points, biglycan-induced TNFα and IL-1β might potentiate the direct effects of biglycan on SphK1 production. Hence, our data strongly suggest that biglycan directly induces SphK1 synthesis and activity. Our findings regarding biglycan-dependent Sphk1 induction are in agreement with several reports describing sphingosine kinases and sphingolipid metabolites to be involved in inflammatory reactions in response to various sterile danger signals or pathogens [17,33]. There are extensive studies, which show promoting effects of SphK1 on LPS- [20,23,42] and Mycobacterium smegmatis-triggered [43] expression of pro-inflammatory cytokines. Moreover, LPS cooperates with S1P to augment the expression of adhesion molecules and pro-inflammatory modulators [44]. S1P was shown to trigger cell death and NLR family pyrin domain containing 3 (NLRP3) inflammasome-dependent IL-1β secretion [30]. Additionally, sphingosine might act by itself as an endogenous DAMP [45]. Furthermore, we identified SphK1 as a crucial regulator of biglycan-dependent CCL2 and CCL5 production. Previously, we reported that soluble biglycan evokes CCL2 expression by engaging the TLR2/TLR4/MyD88 signaling pathway and CCL5 through TLR4/TRIF [6,7,9,11]. Here, we filled some of the signaling gaps between TLRs/ adaptor molecule complex and downstream cytokine synthesis. In macrophages genetically ablated or pharmacologically inhibited for SphK1, we discovered that SphK1 is a crucial mediator of biglycan-triggered Erk1/2, p38 MAPK, and NF-κB activation. Additionally, we found that biglycan triggers expression of Ccl2 through Erk1/2, p38 MAPK and NF-κB activation, while Ccl5 expression requires p38 MAPK and NF-κB. Importantly, SphK1 is a common upstream mediator of biglycan-dependent Erk1/2 p38 MAPK and NF-κB as well as of CCL2 and CCL5 synthesis. Thus, biglycan triggers the synthesis of SphK1 in macrophages in order to promote activation of Erk1/2, p38 MAPKs and NF-κB. Notably, this also represents a positive regulatory acceleration loop where NF-κB is required for SphK1 upregulation, which, in turn, triggers NF-κB activation. The activation of NF-κB by SphK1 is still controversially discussed. On one hand, it was shown that in mouse embryonic fibroblasts, SphK1 deficiency abolished TNFα-stimulated NF-κB activation [21], whereas, on the other hand, in macrophages of either Sphk1 deficient or myeloid-specific Sphk1/Sphk2 double deficient mice, no defect in TNFα-and LPS-induced inflammatory responses was detected, and these mice showed unaltered LPS-induced systemic inflammation and death [46]. 
Our study has unveiled that biglycan selectively induces SphK1, whereas the expression of the sphingosine kinase isoform SphK2 remains unchanged upon biglycan stimulation. Additionally, biglycan-dependent SphK1 induction was more pronounced in Sphk2-deficient macrophages due to compensatory Sphk1 upregulation. Consequently, higher CCL2 and CCL5 expression was detected in Sphk2−/− macrophages upon biglycan stimulation. Furthermore, the selective inhibitor of SphK1 activation [31] rescued the SphK1-driven overproduction of CCL2 and CCL5 in Sphk2−/− macrophages. This is in accordance with previous reports showing an inverse regulatory pattern of the Sphk1 and Sphk2 genes in various cell types and inflammatory disease models [24,53,59,60]. Furthermore, SphK1- but not SphK2-mediated S1P accelerates CCL2 expression in mast cells [61]. Thus, our findings provide strong evidence that biglycan selectively utilizes SphK1 to trigger CCL2 and CCL5 synthesis. Importantly, we provided in vivo proof of biglycan-SphK1-dependent CCL2 and CCL5 synthesis and macrophage recruitment. As previously described, transient overexpression of soluble biglycan [6,7,9,11] resulted in higher renal Ccl2 and Ccl5 expression as well as in enhanced numbers of infiltrating macrophages in the kidney [6,9]. Accordingly, the renal expression of both chemokines and the number of macrophages were markedly reduced in Sphk1−/− mice and increased in kidneys lacking Sphk2 vs. WT kidneys. Even though there are no data directly addressing the interaction between biglycan and sphingolipids, SphKs and S1P have been studied in several renal diseases associated with overexpression of biglycan [11,62-64], namely diabetic nephropathy [65,66], glomerulonephritis [67], fibrosis [68], nephroblastoma [67] and acute kidney injury [69,70]. In this context, Sphk1 deficiency increases albuminuria and glomerular connective tissue growth factor expression in diabetic nephropathy [66,70]. However, it was also reported that SphK1 deficiency results in CCL2 reduction and prevention of renal fibrosis in diabetic nephropathy [65]. In renal ischemia-reperfusion injury, however, overexpression of Sphk1 protects against inflammation and tubular damage [71]. Altogether, these data suggest that the effect of SphK1 on the outcome of renal inflammation is still controversial and appears to be disease- and duration-dependent [16]. Here, we report an anti-inflammatory effect of Sphk1 deficiency in macrophages and the kidney directly in a mouse model of transient overexpression of soluble biglycan. In conclusion, we show for the first time that there is crosstalk between ECM and sphingolipid signaling. We have provided in vitro and in vivo evidence for biglycan-TLR4/TRIF-triggered SphK1 expression and activity. Our data provide new insights on how biglycan regulates the CCL2 and CCL5 chemokines and macrophage recruitment into the kidney via SphK1. As SphK1 seems to impact biglycan signaling upstream of Erk1/2, p38 MAPK and NF-κB, it is conceivable that SphK1 is a general regulator of various biglycan-triggered inflammatory responses.
In Vivo Transfection
Eight- to twelve-week-old wild-type C57BL/6, Sphk1−/− and Sphk2−/− male mice were anesthetized with 2% isoflurane (Abbott, Wiesbaden, Germany) under a 1 L/min oxygen supply. For intravenous delivery, 50 µg of pLIVE-hBGN or pLIVE vector was incubated for 15 min before injection in sterile-filtered 5% glucose containing 6 µL of Turbofect in vivo Transfection Reagent (Thermo Fisher Scientific, Darmstadt, Germany).
The mice received a single intravenous injection and were sacrificed after 3 days of transfection. Plasma and liver were collected for analysis of hBGN overexpression. Kidneys were subjected to RNA extraction, Western blotting and histological analysis. Immunohistochemistry Sections (4 µm) of paraffin-embedded kidney samples from mice were blocked with 5% milk in Tris-buffered saline (TBS) with 0.05% Tween 20 for 1 h and incubated with the primary rat anti-mouse F4/80 (MCA497, Bio-Rad, AbDSerotec, Puchheim, Germany) antibody for 2 h at room temperature. The staining was developed with 3,3 -diaminobenzidine (Vector Laboratories, Peterborough, UK). Counterstaining was performed with Mayer's Hematoxylin (AppliChem GmbH, Darmstadt, Germany). The specificity controls included omitting or replacement of primary antibody with rat unspecific IgG. The number of macrophages was estimated per high-power field (HPF 400×, with a minimum of 7 fields counted) (Soft Imaging System, Olympus, Münster, Germany). Histological examinations were performed by two observers blinded to the conditions. RNA Isolation and Quantitative Real-Time PCR Total RNA was isolated using the TRI Reagent (Sigma Aldrich, Steinheim am Albuch, Germany) and was reverse transcribed using the High Capacity cDNA Reverse Transcription Kit (Applied Biosystems, Darmstadt, Germany). Real-time quantitative PCR was performed using AbiPrism 7500 Sequence Detection System (Applied Biosystem, Darmstadt, Germany). Quantitative RT-PCR was performed using TaqMan Fast Universal PCR Master Mix (Thermo Fisher Scientific, Darmstadt, Germany) and the following primers: Ccl2 (Mm00441242_m1), Ccl5 (Mm01302428_m1), Gapdh (Mm99999915_g1), Sphk1 (Mm01252547_g1) and Sphk2 (Mm00445021_m1). Relative changes in gene expression compared to control and normalized to Gapdh were quantified by the 2 −∆∆Ct method. Statistics All data are expressed as means ± standard deviation (SD). Two-sided Student's t-test was used to evaluate significance of differences between groups. Differences were considered significant at p < 0.05.
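For illustration only, the relative-expression and significance calculations described in the two preceding paragraphs can be sketched in a few lines; the Ct values below are invented placeholders (not data from this study), and the layout simply mirrors the description in the text (one target gene normalized to Gapdh and to the untreated control, compared with a two-sided Student's t-test at p < 0.05).

```python
import numpy as np
from scipy import stats

def fold_change_ddct(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Relative expression by the 2^-ddCt method, normalized to Gapdh and to the untreated control."""
    d_ct_treated = np.asarray(ct_target) - np.asarray(ct_gapdh)
    d_ct_control = np.asarray(ct_target_ctrl) - np.asarray(ct_gapdh_ctrl)
    dd_ct = d_ct_treated - d_ct_control.mean()
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for biglycan-treated vs. untreated macrophages
treated = fold_change_ddct([24.1, 23.8, 24.3], [18.0, 18.1, 17.9],
                           [26.5, 26.7, 26.4], [18.0, 18.2, 17.9])
control = fold_change_ddct([26.5, 26.7, 26.4], [18.0, 18.2, 17.9],
                           [26.5, 26.7, 26.4], [18.0, 18.2, 17.9])

# Two-sided Student's t-test between groups, as described in the Statistics paragraph
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"mean fold change = {treated.mean():.2f}, p = {p_value:.3f}")
```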
Evaluating the Electrochemical Characteristics of Babassu Coconut Mesocarp Ethanol Produced to Be Used in Fuel Cells
The aim of the present study is to assess the potential of ethanol derived from the mesocarp of the babassu coconut to be used in fuel cells. Babassu ethanol was generated through hydrolysis and fermentation processes. The Pt, PtRh and PtRu electrodes were prepared on Vulcan XC72R carbon through the reduction method and applied as electrocatalysts in the ethanol oxidation reaction. X-ray diffraction (XRD), energy-dispersive X-ray (EDX) analysis, CO stripping, cyclic voltammetry, chronoamperometry, and online differential electrochemical mass spectrometry (DEMS) were used to characterize the synthesized electrocatalysts. The electrocatalyst Pt80Ru20/C presented a larger active area and higher catalytic activity than the other studied materials. The current efficiency of CO2 production was below 1% for all studied electrocatalysts, showing that babassu ethanol oxidation produces fewer pollutants than commercial ethanol.
Introduction
Fossil fuel exploitation has generated great negative impacts on the environment. Fuel discharge into rivers, seas and oceans is common, a fact that results in endangered animal species, as well as in fuel burnings that release CO2 into the atmosphere.3 Accordingly, ethanol has emerged as an alternative fuel source; Brazil only produces sugarcane ethanol for commercial purposes. However, ethanol can be made from a large variety of natural and renewable sources; it is obtained through the fermentation of many raw materials such as sugarcane, corn, beet and barley.4 The babassu palm tree has great economic value; almost all its parts can be used in the food, handcrafting and power generation sectors.5 The babassu palm fruit has great economic potential, since it is the raw material for a wide variety of products such as coal, oil, glycerin, ethanol, etc.5 Babassu coconut mesocarp ethanol emerged as an alternative to the alcohol P.A. sold by big industries. Ethanol is a fuel derived from renewable natural sources; therefore, it has become one of the main alternatives for gasoline replacement, since it is less polluting and produced on a large scale from raw materials of renewable nature.6 Starch is a clean carbon source widely used as ethanol feedstock but, in turn, ethanol derives from sugar fermentation processes performed by microorganisms such as Saccharomyces cerevisiae.7,8 Saccharomyces cerevisiae is commonly used to produce ethanol on a large scale; however, this microorganism is not capable of degrading starch molecules into sugars. Thus, starch needs to pass through a hydrolysis process in order to be used as a raw material for ethanol production.9 The starch hydrolysis process requires water and chemical, or enzymatic, agents capable of breaking glycosidic bonds.9 Starch ethanol production demands a gelatinization step, in which starch is cooked, liquefied and saccharified to form sugars and ferment glucose into ethanol.10,11 Ethanol is the most outstanding alcohol among the several that are used directly in fuel cells.16,17 The complete ethanol oxidation reaction involves the transfer of 12 electrons per ethanol molecule, which leads to the generation of many adsorbed intermediates and by-products during the oxidation process.17 The kinetics of the electrochemical oxidation of ethanol is slow;13 therefore, studies have been devoted to developing catalysts to be used in the electrooxidation of this alcohol.13
Platinum-based catalysts are the most often studied for alcohol oxidation purposes.13,18,19 Platinum is a noble metal of high catalytic activity; however, it is not able to oxidize carbon monoxide (CO) molecules into carbon dioxide (CO2) at low potentials. Therefore, a second metal is required along with platinum to favor CO oxidation into CO2 at lower potentials.13,18 Metals such as rhodium and ruthenium are commonly used along with platinum to produce electrocatalysts. These metals favor the adsorption of oxygenated species through water molecule activation, which leads to CO oxidation into CO2 at low potentials.21-23 The bifunctional mechanism relies on the second metal supplying oxygen-containing species to platinum in order to oxidize the CO molecule into CO2 and to release the platinum catalytic surface for a new adsorption.20,21 In the electronic effect, the metal added to platinum modifies the platinum electronic structure and decreases the bonding force of the CO molecule on the electrocatalyst surface.22,23 The objective of the present study was to develop and apply platinum-based nanoparticles to oxidize babassu coconut mesocarp ethanol, as well as to assess the chemical and electrochemical characteristics of this ethanol type by using the cyclic voltammetry (CV), chronoamperometry and differential electrochemical mass spectrometry (DEMS) techniques.
Synthesis of electrocatalysts
The electrocatalysts Pt/C 20%, Pt80Rh20/C and Pt80Ru20/C were prepared through the alcohol reduction method, also known as the polyol method, which is easy to perform and allows nanoparticles to be produced on the nanometer scale.19 The electrocatalysts were synthesized with 20% metal mass on 80% Vulcan XC72R (Cabot) carbon support. The Vulcan carbon was treated with 5.0 mol L-1 HNO3 in a reflux system for 5 h at a controlled temperature between 70-80 °C. After the reflux, the Vulcan carbon was washed with deionized water until pH 5 was reached (the water was distilled in a Fanem distiller model 724 and purified (18.2 MΩ cm) in a Millipore Milli-Q Academic system). The solid phase retained in the filter was then placed in an oven at 60 °C for 24 h.19 Platinum, rhodium and ruthenium were impregnated into the treated Vulcan carbon by the addition of precursor salt solutions and of an ethylene glycol/water solution (75/25, v/v). The addition was conducted in order to reach the desired mass of each metal.19 The mixture was subjected to the reflux system at a controlled temperature (between 70 and 80 °C) for 2 h. The mixture was washed and filtered after reflux. The resulting solid phase was oven-dried at 70 °C for 24 h, then macerated and stored.19,24
Catalytic suspensions and electrochemical cell
Catalytic suspensions were prepared with 5.0 mg of electrocatalyst, 1.0 mL of methanol (chosen due to its high volatility), 100 µL of Nafion and 1.4 mL of deionized water. The mixture remained in ultrasound for 30 min to be completely homogenized.25 The working electrode surface (vitreous carbon) was polished before catalytic suspension deposition with diamond spray, a water-soluble spray manufactured with monocrystalline diamond powder. Next, it was washed in deionized water and left to dry. A total of 20 µL of catalytic suspension was added to the surface of the working electrode; the solvent was evaporated under a hot air flow, using a hair dryer.
Methods to set the physico-chemical characteristics
X-ray diffraction (XRD)
The X-ray diffractograms were taken in a Rigaku diffractometer model ULTIMA IV using Cu Kα radiation. The crystallite sizes and lattice parameters of the electrocatalysts were calculated from the XRD results for the (220) plane of the platinum face-centered cubic (fcc) structure through equations 1 and 2, respectively:24,25

d = kλ / (B2θ cos θ) (1)

p = √2 λ / sin θ (2)

wherein d is the mean crystallite size; p is the lattice parameter; λ is the wavelength of the used radiation, in which Cu Kα is equal to 1.54056 Å; k is a constant equal to 0.9, when the crystallites are assumed to have spherical morphology; B2θ is the diffraction peak width at half height, in radians; and θ is the Bragg angle, in degrees, at the maximum height of the analyzed peak.25
Energy-dispersive X-ray (EDX) analysis
Electrocatalyst atomic compositions were determined through energy-dispersive X-ray analysis conducted in a Phenom-World ProX scanning electron microscope.
CO stripping
Electrocatalyst active areas were determined according to the CO stripping method.25,26 The adopted system was the same one used in the cyclic voltammetric analysis, although it was operated in an exhaust hood. CO was bubbled into the supporting electrolyte solution for 5 min; next, nitrogen was bubbled for 10 min.25,26 The method has been used since 196026 and allows determination of the electric charge (mC) required to remove a CO monolayer from the surface of the catalyst during an anodic oxidation.25,26
Cyclic voltammetry and chronoamperometry
Cyclic voltammetry and chronoamperometry were performed in an Autolab PGSTATXX (Metrohm) instrument coupled to a computer. Electrocatalyst voltammograms were taken in a one-chamber electrochemical glass cell containing a reversible hydrogen electrode (RHE) prepared with the same sulfuric acid solution used as supporting electrolyte, as well as a working electrode (glassy carbon) and a platinum counter electrode. Ethanol electrooxidation studies were carried out in 0.1 mol L-1 ethanol solutions in 0.5 mol L-1 H2SO4 medium, in the range 0.03-1.0 V, at a scan rate of 10 mV s-1, in a system purged with N2.19
Online DEMS
The online DEMS analysis was performed according to the methodology described in the literature,27 with a single-compartment cell with inputs for the working, auxiliary and reference electrodes, plus a gas inlet and a temperature controller. For the DEMS analysis, the electrodes were prepared through sputter deposition of gold on a Teflon membrane (50 nm thick). A volume of 180 µL of aqueous suspension containing the catalytic material was added to the Teflon membrane. A mixture containing 2 mg of catalyst powder and 25 µL of Nafion was prepared to ensure adhesion of the catalytic material.27 Volatiles deriving from babassu coconut mesocarp ethanol oxidation were monitored at m/z 44 and 22, which correspond to ionized [CO2]+ and doubly ionized [CO2]2+, in addition to the acetaldehyde signals m/z 29 and 44, which correspond to [CHO]+ and [CH3CHO]+, respectively. It was decided to monitor CO2 and acetaldehyde formation through m/z 22 and 29, which correspond to the species [CO2]2+ and [CHO]+, respectively, since the m/z 44 signal may correspond to both the ionized [CO2]+ and the [CH3CHO]+ species.27,28
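As a worked example of equations 1 and 2 above, the sketch below evaluates the crystallite size and the fcc lattice parameter for a hypothetical (220) peak; the peak position and width are placeholders, not measurements from this study.

```python
import math

WAVELENGTH_A = 1.54056  # Cu K-alpha wavelength, in angstroms (from the text)
K = 0.9                 # shape factor for spherical crystallites (from the text)

def crystallite_size(fwhm_rad: float, two_theta_deg: float) -> float:
    """Scherrer estimate of the mean crystallite size, in angstroms (equation 1)."""
    theta = math.radians(two_theta_deg / 2.0)
    return K * WAVELENGTH_A / (fwhm_rad * math.cos(theta))

def lattice_parameter_220(two_theta_deg: float) -> float:
    """fcc lattice parameter from the (220) reflection, p = sqrt(2)*lambda/sin(theta) (equation 2)."""
    theta = math.radians(two_theta_deg / 2.0)
    return math.sqrt(2.0) * WAVELENGTH_A / math.sin(theta)

# Hypothetical (220) peak near 2-theta = 67 degrees with a 0.02 rad width at half height
print(f"d = {crystallite_size(0.02, 67.0):.1f} A")
print(f"p = {lattice_parameter_220(67.0):.3f} A")
```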
Results and Discussion
Physico-chemical characterization of the synthesized electrocatalysts
XRD
Diffractograms of the metal alloys Pt80Rh20/C, Pt80Ru20/C and Pt/C are shown in Figure 1. Peaks at approximately 2θ = 39, 45, 67 and 81° are assigned to the (111), (200), (220) and (311) planes, respectively, of face-centered cubic (fcc) platinum and of platinum-containing metal alloys.24,25 Electrocatalyst crystallite sizes and lattice parameters were calculated based on the peak associated with the platinum (220) plane, because this plane is less influenced by the carbon carrier C (002).25 The mean crystallite size values of each electrocatalyst are shown in Table 1. It is possible to notice that the crystallite sizes of the bimetallic electrocatalysts are smaller than those of the Pt/C monometallic electrocatalyst. Therefore, the bimetallic electrocatalysts have larger surface areas and, consequently, higher catalytic activity.24,28 Lattice parameter values are also shown in Table 1. These results show that the bimetallic electrocatalysts have lower lattice parameter values than the Pt/C monocatalyst. This result indicates the formation of the Pt80Rh20/C and Pt80Ru20/C alloys.24 The lattice parameter values of the electrocatalysts are similar to those found in the literature24,28 for catalysts prepared through the alcohol reduction method to assess ethanol oxidation.
EDX
The compositions of the synthesized and evaluated electrocatalysts were determined through the EDX technique, which allows the atomic composition of a compound to be evaluated in a given region of the sample (Table 2). Because EDX is a point technique, it gives an indication of the content of chemical elements found at a certain point of the studied sample.
Electrochemical characterization of synthesized electrocatalysts
Determining the active areas of electrocatalysts through CO stripping
Figure 2 shows the CO adsorption cyclic voltammograms of the different electrocatalysts in 0.5 mol L-1 H2SO4 solution. It was possible to determine the active areas of the electrodes (AAE) through equation 3, based on the electrical charges of adsorbed CO (COads) (Table 3) found after integrating the highlighted areas in the voltammograms:25,26

AAE = Q / QCO (3)

wherein Q (mC) is the CO adsorption charge from the cyclic voltammetries in Figure 2; and QCO is the theoretical electrical charge required to oxidize a CO monolayer on Pt (420 µC cm-2).26 Determination through cyclic voltammetry of the amount of charge (Table 3) required to oxidize a CO monolayer adsorbed on a platinum electrode must take into consideration that one CO molecule is adsorbed on each platinum atom.25,26 The determined AAE values are shown in Table 3. In the case of bimetallic catalysts, alcohol oxidation in the fuel cell may present higher catalytic activity.24
Electrocatalytic performance
Figure 3 shows the cyclic voltammetry curves of the different electrocatalysts supported on Vulcan carbon in 0.5 mol L-1 H2SO4 medium, in the absence of ethanol. The profile of the synthesized bimetallic electrocatalysts presents the hydrogen region (0.03-0.4 V) much more clearly than that of the monometallic electrocatalyst; there are peaks showing hydrogen adsorption and desorption.12,24
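As a numerical companion to equation 3, the short sketch below converts a hypothetical integrated CO stripping charge into an electrochemically active area; the charge value is a placeholder, not a measurement from this work.

```python
Q_CO_MONOLAYER = 420e-6  # C per cm^2: theoretical charge to oxidize a CO monolayer on Pt

def active_area_cm2(stripping_charge_mC: float) -> float:
    """Electrode active area (cm^2) from the integrated CO stripping charge (mC), equation 3."""
    charge_C = stripping_charge_mC * 1e-3
    return charge_C / Q_CO_MONOLAYER

# Hypothetical stripping charge of 1.2 mC
print(f"AAE = {active_area_cm2(1.2):.2f} cm^2")
```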
The bimetallic electrocatalysts supported on Vulcan carbon, Pt80Rh20/C and Pt80Ru20/C, showed a slight increase in currents in the electric double layer region (0.4-0.8 V) in comparison with the Pt/C monometallic catalyst. Oxides formed on the catalyst surface, due to the addition of another metal to platinum in the catalyst composition, helped increase the active area of the synthesized electrocatalysts and, consequently, gave them greater oxidative power.12,24 Figure 4 shows the voltammetric profiles of the different electrocatalysts (Pt/C, Pt80Rh20/C and Pt80Ru20/C) deposited on Vulcan carbon in the presence of 0.1 mol L-1 commercial ethanol (Figure 4a) and 0.1 mol L-1 babassu ethanol (Figure 4b). The current densities were normalized by the electrocatalytic surfaces of the electrocatalysts, which were estimated through the electrooxidation of a CO monolayer adsorbed on the working electrode. Figure 4 evidenced that the electrocatalyst Pt80Ru20/C proved to be more efficient in the oxidation of both alcohols studied herein (babassu and commercial ethanol). It triggers oxidation at potentials lower than those of the other electrocatalysts, approximately 100 mV lower than the platinum monometallic electrocatalyst; besides, it presents a higher current density in the studied alcohols. Similar results were recorded by Ribeiro et al.12 In order to determine the oxidation onset potential for all catalysts, the point where no current oscillation occurs was identified. From this point, the current increased until the complete oxidation of the products. Figure 4 also shows that the maximum current density for babassu coconut mesocarp ethanol occurs at potentials lower than 0.8 V for all electrocatalysts, while for commercial ethanol the maximum current occurs only after 0.8 V. According to Pech-Rodrígues et al.,25 who assessed ethanol oxidation, the ethanol molecule undergoes complex dissociation and adsorption mechanisms, C-C bond breakage and dehydrogenation to achieve complete ethanol oxidation into CO2. Intermediates such as carboxylic acid, aldehyde and carbon monoxide are formed, which poison the Pt catalyst.25,28 The ethanol equilibrium equations 4-6 in acidic medium, with the main products and the balance of generated electrons, are presented below:

CH3CH2OH → CH3CHO + 2H+ + 2e- (4)

CH3CH2OH + H2O → CH3COOH + 4H+ + 4e- (5)

CH3CH2OH + 3H2O → 2CO2 + 12H+ + 12e- (6)

The electrocatalyst presenting the highest electric current density for the two studied alcohols was the bimetallic electrocatalyst Pt80Ru20/C, demonstrating its high catalytic power. On the other hand, the Pt/C monometallic electrode presented the lowest electrical current density in the oxidation of both alcohols. However, the catalytic activity of an electrocatalyst is also given by the potential at which the alcohol redox process is triggered, i.e., the lower the oxidation/reduction onset potential, the greater its catalytic power.19 It is known that the Pt/C monometallic electrocatalyst is not a good agent to electrooxidize ethanol, because of the poisoning caused by the strong adsorption of reaction intermediates such as COads.12,29 Thus, the monometallic catalyst was the least efficient of all the synthesized catalysts for the alcohol oxidation reaction. It initiated oxidation at potentials above 0.25 V, revealing that the use of a second metal together with platinum on the carbon support significantly increases the catalytic power of the electrocatalyst and decreases the cost associated with noble metals.29
Presumably, both alcohols assessed herein have very similar profiles, as their oxidation starts at very close potentials in all the synthesized electrocatalysts tested in the present study.
Chronoamperometry
The electrode equilibrium time at a potential of 0.6 V and the poisoning of the electrocatalyst active areas under continuous operation for 15 min were assessed through chronoamperometry analysis. Figure 5 shows the chronoamperometry curves (current density vs. time) of the different synthesized electrocatalysts, with the current density normalized by the active area of the respective electrodes. The curves for the assessed commercial and babassu ethanol seen in Figure 5 show a remarkable decay in the electric current density during the first seconds of the analysis. Such decay was followed by a slow decay throughout the following minutes; afterwards, the current remained constant during the remaining analysis time. Ribeiro et al.12 studied alcohol oxidation and concluded that ethanol molecules can be adsorbed onto sites that were initially covered by water molecules at potentials below 0.4 V. After adsorption, the ethanol molecule can dissociate and produce CO molecules strongly adsorbed on the electrode surface, along with other reaction intermediates. Oxygenated species, such as OHads, adsorbed on the electrode surface are necessary for the ethanol molecule to be completely oxidized, which leads to CO2 formation or to the formation of acetic acid molecules.3,12 The results of the chronoamperometry analysis validate the cyclic voltammetry analysis conducted herein, since the electrode with the highest electric current density in commercial ethanol oxidation was the electrocatalyst Pt80Ru20/C, followed by Pt80Rh20/C and Pt/C. The same was recorded for babassu mesocarp ethanol oxidation, in which the Pt80Ru20/C electrode recorded the highest electrical current density, followed by the Pt80Rh20/C and Pt/C electrodes. These results follow the same order as the cyclic voltammetric analysis at the 0.6 V potential, which is used in chronoamperometry studies.
Online DEMS of babassu coconut mesocarp ethanol
Calibration of the m/z 22 signal was necessary to quantify the part of the total ethanol electrooxidation current corresponding to CO2, according to results recorded through online DEMS.3,27,28,30,31 Figure 6 shows the CO stripping results of the used electrocatalysts in acidic medium, 0.5 mol L-1 H2SO4, at 10 mV s-1. CO stripping was used as the reference electrochemical reaction, since the number of electrons exchanged during the electrooxidation into CO2 of the CO adsorbed on the surface of the working electrode is already well known, as can be observed in equation 7:30,32

COads + H2O → CO2 + 2H+ + 2e- (7)

Figure 6 shows the m/z 22 and 44 signals analyzed through DEMS during the acid stripping of CO, in 0.5 mol L-1 H2SO4, at 10 mV s-1. The m/z 22 signal is attributed to doubly ionized CO2 ([CO2]2+) production during the electrooxidation of ethanol in online electrochemical mass spectrometry.28,30,31 The m/z 44 signal is also used in the literature to quantify ionized CO2 ([CO2]+); however, this signal is also attributed to acetaldehyde ([CH3CHO]+). Therefore, the m/z 22 signal was used in order not to compromise the CO2 quantification results.28,30,31
The ionic currents of the m/z 22 signal and the faradaic current recorded through CO stripping of the different electrocatalysts can be correlated through equation 8:3,31

K*22 = 2 (Im/z 22,CO / If,CO) (8)

wherein Im/z 22,CO is the current of the mass/charge signal m/z 22; If,CO is the faradaic current of the CO stripping; 2 is the number of electrons exchanged during CO electrooxidation into CO2; and K*22 is the calibration constant of the m/z 22 signal, which is required to quantify the part of the current deriving from ethanol electrooxidation into CO2.30,31 The K*22 values of the different electrocatalysts used in the present study are shown in Table 5.
Ethanol oxidation reaction (EOR)
It is known that the EOR into CO2 (with a balance of 12e- per ethanol molecule) can be incomplete due to different paths. This leads to acetaldehyde (balance of 2e- per ethanol molecule) and acetic acid (balance of 4e- per ethanol molecule) formation. The kinetic mechanisms involved in the EOR have been widely studied.3,27,31 Thus, the products from the electrooxidation reaction of babassu coconut mesocarp ethanol using the Pt/C, Pt80Rh20/C and Pt80Ru20/C electrocatalysts were monitored through online DEMS. The CO2 formation was monitored through the mass/charge signal m/z 22, which corresponds to the doubly ionized ion [CO2]2+. Acetaldehyde formation was followed through the m/z 29 signal, which corresponds to [CHO]+.3,27,31 Figure 7 shows the DEMS cyclic voltammograms recorded during electrooxidation experiments involving babassu coconut mesocarp ethanol at 0.1 mol L-1 in 0.5 mol L-1 H2SO4 acid medium, on the synthesized electrocatalysts Pt/C, Pt80Rh20/C and Pt80Ru20/C. These results evidence that the bimetallic electrocatalysts present higher current densities and initiate ethanol oxidation at 0.3 V, on average, whereas the Pt/C monometallic electrocatalyst initiates ethanol oxidation at a potential of 0.4 V, on average. This phenomenon can be explained by the fact that the dehydrogenation process occurs more readily on the bimetallic electrocatalysts.3,27,31 Figure 7 also shows the mass signals m/z 22, 29, and 44, which were analyzed through DEMS during the babassu coconut mesocarp ethanol oxidation reaction experiments. The results show that CO2 (m/z 22, [CO2]2+) formation starts at 0.5 V, on average, for all the used electrocatalysts. These electrocatalysts present ionic current values at potentials close to 0.8 V, with the bimetallic electrocatalysts recording higher ionic current densities. On the other hand, acetaldehyde formation, followed through the m/z 29 signal, which corresponds to the [CHO]+ fragment, starts at potentials lower than 0.4 V. It presents ionic current densities higher than those attributed to CO2 formation. The bimetallic electrocatalysts Pt80Rh20/C and Pt80Ru20/C stand out in relation to the Pt/C monometallic electrocatalyst. All the electrocatalysts presented maximum ionic current densities at approximately 0.7 V. These results show that acetaldehyde (m/z 29, [CHO]+) is the major product in the oxidation of babassu coconut mesocarp ethanol, since this product presented higher currents and a lower onset potential of formation than those shown for the CO2 signals.27
The CO2 current efficiency of the different electrocatalysts used herein was determined from the faradaic and ionic current values recorded during the babassu coconut mesocarp ethanol oxidation reaction through equation 9:3,27,31

AqCO2 = 6 (Im/z 22 / (K*22 If)) × 100% (9)

wherein AqCO2 is the CO2 current efficiency expressed as a percentage; the factor 6 refers to the number of electrons needed for the formation of one CO2 molecule from ethanol; Im/z 22 is the ion current corresponding to the m/z 22 signal; If is the faradaic current resulting from the ethanol oxidation reaction; and K*22 is the calibration constant of the m/z 22 signal.3,27,28 The CO2 current efficiency results, AqCO2, of the different electrocatalysts used during the oxidation reaction of babassu coconut mesocarp alcohol are shown in Table 6. The values in Table 6 show that the electrocatalysts Pt/C, Pt80Rh20/C and Pt80Ru20/C present CO2 current efficiencies below 1%. This evidences that the largest portion of the babassu coconut mesocarp ethanol oxidation products is acetaldehyde, which has less environmental impact than the CO2 released into the atmosphere during fuel burning. Studies such as that by Queiroz et al.27 present CO2 efficiency calculations performed through DEMS to assess commercial ethanol oxidation. Using different platinum-based electrocatalysts, between 2 and 20% of the currents were attributed to CO2 formation, which suggests that the oxidation of commercial ethanol using platinum-based electrocatalysts leads to CO2 formation up to 20 times higher than babassu coconut mesocarp ethanol oxidation using the same electrocatalyst types.27 This research shows that the oxidation of babassu coconut mesocarp ethanol is low polluting.
Conclusions
The method used to prepare the electrocatalysts proved to be very effective and gave good metal dispersion on the Vulcan carbon support. The bimetallic electrocatalysts showed better performance than the Pt/C monometallic catalyst in the oxidation reaction of the alcohols assessed in acid medium. The bimetallic electrocatalysts Pt80Rh20/C and Pt80Ru20/C presented compositions very close to the values obtained by EDX, the largest active areas and the best catalytic activity, demonstrating that the addition of a second metal to platinum helps achieve better electrocatalyst performance. DEMS studies showed that all electrocatalysts assessed during the babassu mesocarp alcohol oxidation reaction presented CO2 current efficiencies close to 1%. This means that most of the evaluated ethanol oxidation products are acetaldehyde, which is less harmful to the environment than the CO2 released during the oxidation of fuels. The results recorded in the present study show that babassu coconut mesocarp ethanol has the potential to be used in fuel cells. Due to its ability to oxidize producing a small amount of CO2 and a higher amount of acetaldehyde, it is less harmful to the environment. In addition, babassu coconut ethanol has similar behavior to commercial ethanol in terms of its energy density.
Figure 2. Cyclic voltammetry to find the active areas of the studied electrocatalysts: (a) Pt/C, (b) Pt80Rh20/C and (c) Pt80Ru20/C in 0.5 mol L-1 H2SO4, 10 mV s-1 scan rate, purged with CO and N2 for 5 and 10 min, respectively.
Table 1. Lattice parameters and crystallite size of the electrocatalysts Pt/C, Pt80Rh20/C and Pt80Ru20/C.
Table 2. Experimental compositions of the Pt/C, Pt80Rh20/C and Pt80Ru20/C electrocatalysts determined through EDX.
Table 3. Charge needed to oxidize the CO monolayer (C) and electrocatalyst active areas determined through CO stripping.
Table 4. Initial oxidation potential and current at a potential of 0.6 V for the electrocatalysts Pt/C, Pt80Rh20/C and Pt80Ru20/C in commercial ethanol and 0.1 mol L-1 babassu ethanol in 0.5 mol L-1 H2SO4.
Table 6. CO2 current efficiency (AqCO2) during the electrooxidation of babassu coconut mesocarp ethanol at a concentration of 0.1 mol L-1 on the different synthesized electrocatalysts.
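As a numerical companion to equations 8 and 9 above, the sketch below first derives the calibration constant K*22 from hypothetical CO stripping currents and then converts ethanol-oxidation currents into a CO2 current efficiency; all numerical values are placeholders, not data from this study.

```python
def calibration_constant_k22(i_mz22_co: float, i_f_co: float) -> float:
    """K*22 from CO stripping: 2 electrons exchanged per CO molecule oxidized to CO2 (equation 8)."""
    return 2.0 * i_mz22_co / i_f_co

def co2_current_efficiency(i_mz22: float, i_f: float, k22: float) -> float:
    """CO2 current efficiency in percent for ethanol oxidation, 6 e- per CO2 molecule (equation 9)."""
    return 100.0 * 6.0 * i_mz22 / (k22 * i_f)

# Hypothetical ionic (m/z 22) and faradaic currents, arbitrary consistent units
k22 = calibration_constant_k22(i_mz22_co=2.0e-11, i_f_co=1.0e-4)
eff = co2_current_efficiency(i_mz22=5.0e-13, i_f=2.0e-4, k22=k22)
print(f"K*22 = {k22:.3e}, CO2 current efficiency = {eff:.2f}%")
```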
SAR Tomography as an Add-On to PSI : Detection of Coherent Scatterers in the Presence of Phase Instabilities The estimation of deformation parameters using persistent scatterer interferometry (PSI) is limited to single dominant coherent scatterers. As such, it rejects layovers wherein multiple scatterers are interfering in the same range-azimuth resolution cell. Differential synthetic aperture radar (SAR) tomography can improve deformation sampling as it has the ability to resolve layovers by separating the interfering scatterers. In this way, both PSI and tomography inevitably require a means to detect coherent scatterers, i.e., to perform hypothesis testing to decide whether a given candidate scatterer is coherent. This paper reports the application of a detection strategy in the context of “tomography as an add-on to PSI”. As the performance of a detector is typically linked to the statistical description of the underlying mathematical model, we investigate how the statistics of the phase instabilities in the PSI analysis are carried forward to the subsequent tomographic analysis. While phase instabilities in PSI are generally modeled as an additive noise term in the interferometric phase model, their impact in SAR tomography manifests as a multiplicative disturbance. The detection strategy proposed in this paper allows extending the same quality considerations as used in the prior PSI processing (in terms of the dispersion of the residual phase) to the subsequent tomographic analysis. In particular, the hypothesis testing for the detection of coherent scatterers is implemented such that the expected probability of false alarm is consistent between PSI and tomography. The investigation is supported with empirical analyses on an interferometric data stack comprising 50 TerraSAR-X acquisitions in stripmap mode, over the city of Barcelona, Spain, from 2007–2012. Introduction Persistent scatterer interferometry (PSI) [1][2][3][4][5][6][7] is nowadays an operational geodetic technique for the monitoring of surface deformation with spaceborne synthetic aperture radar (SAR) data stacks.These stacks typically comprise several repeat-pass SAR acquisitions, spanning from months to years.PSI techniques attempt to extract the interferometric phase components correlated with the scatterer motion.The quality of the deformation estimates is tied to the precision of the interferometric phases.Temporal and geometric decorrelation, as well as uncompensated platform motion and atmosphere-induced optical path delay variations, are among the factors that cause random instabilities in phase.For these reasons, a quality control is necessary during the processing as well as when reporting the final results. 
The single dominant scatterers that exhibit long-term phase stability are generally termed as persistent scatterers (PS).PSI processing approaches often use a classifier to identify a priori a set of PS candidates, e.g., the permanent scatterers [1] approach uses the dispersion index as a proxy for phase stability.The PSI approaches based on the interferometric point target analysis (IPTA) framework, as in [3,8], employ low spectral diversity [3, [9][10][11] as a proxy for phase stability in addition to the stability of the backscattering amplitude.Low dispersion index and low spectral diversity are indicative of good phase quality.The observed differential interferometric phases are fit to a phase model and the unknown parameters, such as the deformation velocity and the residual topography, are thereby estimated.The dispersion of the residue of the fit is a means to characterize the quality of the estimates.It is often used to compute the multi-interferogram complex coherence (MICC) [1,12,13] which can in turn be used as a test statistic to perform statistical detection i.e., to decide among the hypotheses whether a given PS candidate is a phase coherent single scatterer or if it comprises noise only.The statistics of the noise impact the probability of false alarm in the detection process. An inherent limitation associated with PSI techniques is the fact that a phase-only model cannot consider multiple coherent scatterers with different complex reflectivity interfering in the same range-azimuth resolution cell.The cumulative phase response in this case is mismatched to the interferometric phase model, which is essentially based on the assumption of a single scatterer.Consequently, it may lead to erroneous estimation of the deformation parameters.Therefore, PSI processing approaches typically reject the cells that contain backscattering contributions from multiple scatterers, as for the case of layovers. The aforementioned limitation can be alleviated by SAR tomography [14][15][16][17], which exploits both the amplitude and the phase of the received signal, thereby permitting a higher order analysis [18].It allows 3-D reconstruction of the scene reflectivity-a feature that renders it possible to resolve the layover problem [19][20][21][22].Additionally, differential SAR tomographic methods [23][24][25] allow a joint spatio-temporal inversion of the coherent scatterers in layover, i.e., the position along the elevation axis as well as the deformation velocity of the interfering scatterers are simultaneously estimated.Therefore, differential SAR tomography has been proposed as an add-on to PSI techniques to improve deformation sampling by resolving the scatterers in layover that are rejected in the PSI processing [26][27][28][29].Inevitably, a detection strategy is again required to classify whether the detection of one or more scatterers in the same resolution cell is true or false.In this context, it is pertinent to carry forward the same quality criteria as used in the prior PSI analysis so that the combined use of PSI and tomography holds compatibility. 
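The MICC mentioned above is commonly computed as the magnitude of the circular mean of the residual interferometric phases; the sketch below is a minimal illustration under that assumption and is not necessarily the exact implementation used in the processing chain described here.

```python
import numpy as np

def micc(residual_phases: np.ndarray) -> float:
    """Multi-interferogram complex coherence: magnitude of the mean unit phasor of the phase residuals."""
    return float(np.abs(np.mean(np.exp(1j * residual_phases))))

# Example: a PS-like candidate (concentrated residuals) vs. a noise-like candidate
rng = np.random.default_rng(0)
ps_like = rng.normal(0.0, 0.3, size=50)             # small residuals, in radians
noise_like = rng.uniform(-np.pi, np.pi, size=50)    # uniformly scattered residuals
print(micc(ps_like), micc(noise_like))              # close to 1 vs. close to 0
```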
The prevailing detection mechanisms for SAR tomography, such as the generalized likelihood ratio tests in [13, 24,30], consider an additive noise model for the received complex signal vector.The source of the noise is attributed to the clutter in the resolution cell.However, the instabilities in the observed interferometric phases, albeit considered additive in the phase-only model, naturally represent themselves as multiplicative noise in the tomographic signal model.Therefore, in order to carry the impact of the phase instabilities from an interferometric to tomographic analysis, the detection strategy employed for hypothesis testing needs to account for the phase instabilities as a multiplicative disturbance in tomographic inversion. Keeping in view the aforementioned concerns, this paper describes a strategy for the detection of single and double scatterers with SAR tomography whereby the hypothesis testing is directly linked to the MICC-based test statistic for PS detection in the prior PSI processing.As a whole, this paper is a follow-up to the earlier works in [12,27,31].Section 2 presents the mathematical models typically used for SAR interferometry and tomography, as well as the associated detection mechanisms.Section 3 presents the processing methodology adopted in the paper.The data stack for empirical analysis is introduced in Section 4. The results obtained are presented in Section 5, followed by a discussion in Section 6. Models We consider the availability of a coregistered, single-reference interferometric SAR data stack comprising M layers of repeat-pass interferograms.For a given range-azimuth resolution cell in an interferometric layer, we denote the received single-look complex (SLC) signal as y m = z m exp (−jϕ m ), where z m = |y m | ∈ R is the amplitude of the received signal, and ϕ m is the observed interferometric phase.The subscript m, where m ∈ {0, 1, . . . ,M − 1}, is used to indicate a specific layer in the interferometric stack.In the following text, an underlined symbol represents a quantity that has been modeled as stochastic, or when the distinction between observables versus observations is emphasized.Bold symbols represent vectors, or matrices when capitalized. 
Interferometric Phase Model The interferometric phase observable, ϕ m , is generally modeled as a sum of several phase contributions [32,33]: where ϕ disp is the phase change due to the linear displacement of the target as a function of time within the resolution cell: λ is the wavelength, v is the deformation velocity in the line of sight (LOS), and t m is the temporal baseline for the m th interferogram.ϕ geo is the phase variation due to sensor-to-target geometry.Neglecting higher order terms [16,34], where b ⊥ m and b m are the orthogonal and parallel components of the spatial baseline for the mth interferogram, respectively.ρ 0 is the range distance from the sensor to the target location for the reference acquisition.s represents the elevation, i.e., the position of the target in the axis perpendicular to the LOS.In case of thermal expansion, the additional phase variations are linearly modeled as follows [27,35]: where T m is the temperature change (with respect to the temperature for the reference layer), and η is the phase-to-temperature sensitivity.The term 2π p, where p ∈ Z, is added to account for phase wrapping.The phase variations ϕ atm m are due to the optical path length variations while propagation through the atmosphere.They are modeled as stochastic variables due to the temporally varying nature of atmospheric refractivity [36][37][38][39].The phase decorrelation term, ϕ decor m is, by definition, a random quantity, which is typically modeled as an additive phase noise.The parameters s, v and η are treated as deterministic unknowns in this work. The interferometric phase model in Equation ( 1) is implicitly assuming the presence of a single coherent scatterer in the resolution cell.In case of multiple coherent scatterers in the same resolution cell, it is not possible to write the interferometric phase, ϕ m as a sum of the aforementioned sources of phase variations, independently of the reflectivity of the individual scatterers. 
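To make the roles of the individual phase terms concrete, the following sketch evaluates the deterministic part of the model for a hypothetical single scatterer. All numerical values (wavelength, range, incidence angle, baselines, temperature history) are illustrative assumptions, not values from the paper, and the exact form and sign convention of the geometric term are likewise assumptions of this sketch.

```python
import numpy as np

# Hypothetical acquisition geometry (assumed values, not from the paper).
wavelength = 0.031                      # radar wavelength [m], roughly X-band
rho0 = 600e3                            # reference sensor-to-target range [m]
theta = np.deg2rad(35.0)                # incidence angle [rad]
M = 50
t = np.linspace(0.0, 5.0, M)            # temporal baselines t_m [years]
b_perp = np.random.default_rng(0).uniform(-250, 250, M)   # orthogonal baselines b_perp_m [m]
dT = 10.0 * np.sin(2 * np.pi * t)       # temperature changes T_m [K]

def model_phase(s, v, eta):
    """Deterministic phase contributions for one scatterer (cf. Equations (1)-(4)).
    s: elevation / residual height [m], v: LOS velocity [m/yr], eta: phase-to-temperature sensitivity [rad/K]."""
    phi_disp = -4.0 * np.pi / wavelength * v * t                               # linear displacement term
    phi_geo = -4.0 * np.pi / wavelength * b_perp * s / (rho0 * np.sin(theta))  # residual-topography term (assumed form)
    phi_thermal = eta * dT                                                     # thermal-expansion term
    return phi_disp + phi_geo + phi_thermal

# Example: a scatterer 20 m above the reference surface, subsiding at 3 mm/yr, with mild thermal expansion.
phi = model_phase(s=20.0, v=-0.003, eta=0.2)
```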
PSI: Model of Observation Equations While several approaches to parameter estimation with PSI have been proposed over time, as in [1][2][3][4][5][6], the functional model of interferometric phase observation equations common to these approaches is as follows [33]: where ϕ is the M × 1 vector of interferometric phase observables, A is the design matrix, and p is the vector of the aforementioned unknown parameters.w is the M × 1 vector of phase residuals which collectively represent the phase instabilities owing to decorrelation, uncompensated atmospheric phases and model imperfections.The residuals in each layer are assumed to be zero-mean and independent random variables: E {w} = 0; and D {w} = E ww H = Q ww is the covariance matrix for the residuals.If it can be assumed there are no phase unwrapping issues, and the data stack can be phase calibrated by compensating for the atmospheric phase with external data-although both assumptions are simplistic-then the remaining unknowns are s, v and η.The design matrix is then constituted by the coefficients of these parameters (from Equations (2-4)) [1,33].Under Gauss-Markov conditions, the best linear unbiased estimate of the parameter vector using weighted least squares is given as [33]: The covariance matrix of the estimated parameter vector, Q p p = D p is as follows: The quality of the estimates is, therefore, dependent on the dispersion of the residuals.The vector of the estimated phase residuals is as shown below: PSI: Statistics for PS Detection For each PS candidate, we distinguish between the following two hypotheses: H 0 -the null hypothesis.The range-azimuth resolution cell does not contain any coherent scatterer and comprises merely clutter; H 1 -the alternative hypothesis.The cell contains a phase coherent single scatterer, i.e., a PS. In the presence of a coherent scatterer whose phase response is well-matched to the model in Equation (1), the phase residuals are expected to have a low dispersion around the expected value of zero.Contrarily, in the absence of a coherent scatterer, the observed phase and the residuals are expected to have a wider dispersion.With these considerations, we assume that the phase residuals generally follow a von Mises (circular normal) distribution.The probability density function (PDF) is given by [40]: where the support of the distribution is any 2π interval.The parameter µ = E {w} represents the 'preferred direction', which we consider to be zero under both H 0 and H 1 .The support is then the interval [−π, π) and the distribution is symmetric about zero.The parameter κ ≥ 0 is a measure of 'concentration' of the distribution around the mean value, i.e., κ −1 behaves analogously to the dispersion of a linear random variable.I o (κ) is the modified Bessel function of the first kind and order zero.Under H 1 , we consider the residuals to exhibit a higher concentration around µ. NB: The term circular distribution as used in this paper refers to a directional distribution with support on the circumference of unit circle [40]. 
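A minimal sketch of the weighted least-squares step described above, assuming the design matrix and the residual covariance are already available and the phases are unwrapped; the function name is hypothetical.

```python
import numpy as np

def wls_estimate(A, phi, Q_ww):
    """Gauss-Markov weighted least squares (cf. Equations (6)-(8)).
    A: (M, K) design matrix, phi: (M,) unwrapped phase observations, Q_ww: (M, M) residual covariance."""
    Q_inv = np.linalg.inv(Q_ww)
    Q_pp = np.linalg.inv(A.T @ Q_inv @ A)     # covariance of the estimated parameter vector
    p_hat = Q_pp @ A.T @ Q_inv @ phi          # best linear unbiased estimate of (s, v, eta)
    w_hat = phi - A @ p_hat                   # estimated phase residuals
    return p_hat, Q_pp, w_hat
```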
Test Statistic A commonly used statistic to test among the two hypotheses is the ensemble coherence, as defined below [5,32]: An unbiased estimator of the coherence, given M interferometric layers, is the multi-interferogram complex coherence (MICC) [12,32]: where X and Y are the sum of cosine and sine terms in the expression, respectively, and the length of the resultant, R = The overscore indicates sample mean.Hereafter, we refer to MICC simply as the sample coherence.The phase residuals ŵm are assumed to be independent and identically distributed (i.i.d.) random variables. In the context of interferometry, we typically use the coherence values normalized between 0 and 1, i.e., | γ|, instead of the resultant R = M| γ|.However, in the directional statistics literature, the use of the term R is more common.Here, we state both to facilitate cross-referencing with the literature.The sample mean direction μ, computed with sample coherence for any random sample (w 1 , w 2 , . . ., w m ) from a von Mises population, is the maximum likelihood estimator of the preferred direction µ when R is well-defined [41,42].This property is characteristic of von Mises populations on a circle, analogous to a similar property holding for Gaussian distribution on a real line whose location parameter is estimated with maximum likelihood by the sample mean [40,43]. Statistics under H 0 The statistics of the sample coherence depend on the distribution of the phase residuals.With reference to Equation (11), the phase residuals can be considered as angles subtended by phasors of unit length.Under H 0 , when the phasors have no preferred direction, we consider the limiting case of von Mises distribution when κ → 0 [40]: where U (w) is the circular uniform distribution.In this case, E {x} = E {y} = 0; therefore, E γ = 0. The second order moments are E x 2 ; H 0 = E y 2 ; H 0 = 1 2 .The terms x and y are not independent (as x 2 + y 2 ≡ 1), but they are uncorrelated as E {x • y} = 0 [12,40].The variance of the addends in the Equation ( 12) is finite.Therefore, under the assumption of a large sample size, multivariate central limit theorem holds, and we consider the joint distribution of ( X, Ȳ) to be converging to a Gaussian distribution, N 2 0, Σ γ where R is then approximately Rayleigh-distributed, and its PDF is as follows [40]: where 0 ≤ r ≤ M. Referring to [12,40], the probability of false alarm can be computed as the upper tail of the Rayleigh distribution, as follows: It can be equivalently expressed as where T γ is the detection threshold such that 0 ≤ T γ ≤ 1. Statistics under H 1 In case of H 1 , the probability distribution of R is given by [40], J 0 is the Bessel function of the first kind and zero-order.A closed form expression for the PDF is not available.We again assume a large sample size and invoke the multivariate central limit theorem.It allows us to consider the joint distribution of X and Ȳ to be asymptotically normal, and expectation and the variance of the sample coherence can be approximated as follows [44]: var where ν j = E {cos (jw)}: For sufficiently large κ, the von Mises distribution for the phase residuals can be approximated by a linear normal distribution with σ 2 w = κ −1 [40].The coherence in this case is given by [12,31]: For a discussion on the details about the corresponding probability of detection, interested readers are referred to earlier works in the literature [12,13]. 
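The sample coherence and the Rayleigh-tail false-alarm probability described above can be checked with a short Monte Carlo experiment. The closed form used below for the false-alarm probability is the expression we take Equation (18) to be, so it should be read as an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_coherence(w_hat):
    """Multi-interferogram sample coherence of the phase residuals (cf. Equation (11))."""
    return np.abs(np.mean(np.exp(1j * w_hat)))

def pfa_rayleigh(T_gamma, M):
    """Upper-tail probability of the (approximately) Rayleigh-distributed resultant,
    expressed on the normalized coherence threshold (assumed form of Equation (18))."""
    return np.exp(-M * T_gamma**2)

# H0: uniformly distributed residuals (kappa -> 0); H1: von Mises residuals with kappa = 4.
M, runs, T = 50, 100_000, 0.3
g_h0 = np.array([sample_coherence(rng.uniform(-np.pi, np.pi, M)) for _ in range(runs)])
g_h1 = np.array([sample_coherence(rng.vonmises(0.0, 4.0, M)) for _ in range(runs)])
print("empirical P_FA:", (g_h0 > T).mean(), " analytical:", pfa_rayleigh(T, M))
print("mean |gamma| under H1:", g_h1.mean())
```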
Since exact closed-form expressions for the PDF of | γ| are not available, we resort to numerical methods to compare the estimate of the coherence magnitude for the general case of κ > 0 against the estimate in case of the aforementioned linear normal approximation.For selected values of κ between [1, 10], we perform 10 5 Monte Carlo simulations of the residual phase vector, w (comprising M instances of von Mises distributed random variables), and compute the coherence magnitude. The results are shown in Figure 1 for three different values of M. The estimate under the normal approximation (Equation ( 24)) is also shown.It can be seen that the normal approximation for the limiting case tends to overestimate the coherence magnitude.The overestimation decreases for increasing values of κ.For κ > 3, the difference between the coherence estimate under the assumption of von Mises distribution and the normal approximation is less than 5% on average.With increasing number of acquisitions, the variance in the estimation of the coherence magnitude decreases (in agreement with Equation ( 21)). SAR Tomography: Mathematical Model In the absence of noise, for a given range-azimuth resolution cell, the mathematical model for SAR tomography (3-D SAR) can be written as [16,19,21,26,45]: where α is the complex reflectivity and I s is the support of s.This model assumes there has been no displacement in the line of sight during the observation time period.Differential SAR tomography [23,25] with extended phases models [25,27,46] allows modeling linear displacement as well as seasonal or temperature-induced motion: where ψ m is the sum of the deterministically modeled phase components as a function of the unknown parameters, i.e., It is assumed that the phase terms (and hence the spatial and temporal baselines, and temperature changes) are mutually independent of each other.A general mathematical model for SAR tomography can be defined as follows [27,47]: where P represents the support of the parameter vector (i.e., the parameter space), and p ∈ P. It is analogous to a multi-dimensional Fourier transform [48].In case the resolution cell contains a single point source with dirac delta response, α(p) = τ 1 δ(p − p 1 ), with τ 1 ∈ C, Equation (28) reduces to the following: For the general case of Q point sources in the presence of clutter, the tomographic model is further extended as follows: where n m represents additive noise which is typically modeled as zero-mean complex Gaussian (with symmetric variances for the real and imaginary parts).We assume the noise samples are i.i.d.across the stack, i.e., D {n} = σ 2 n I M , with σ 2 n > 0. d m represents the coherent sum of the deterministic components in the signal vector.τ q is the reflectivity, and ψ m (p q ) is the modeled phase for the qth scatterer. 
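A sketch of how a single resolution cell can be simulated under the general model of Equation (30): a coherent sum of point scatterers, each with its own elevation, velocity and thermal sensitivity, plus circular Gaussian clutter. The spatial-frequency parameterization and all numerical values are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
M = 50
wavelength = 0.031                        # [m], assumed
xi = rng.uniform(-0.03, 0.03, M)          # elevation (spatial) frequencies ~ 2*b_perp/(lambda*rho0) [1/m], assumed
t = np.linspace(0.0, 5.0, M)              # temporal baselines [years], assumed
dT = 10.0 * np.sin(2 * np.pi * t)         # temperature changes [K], assumed

def psi(s, v, eta):
    """Deterministic phase psi_m(p) of the extended model (cf. Equation (27)); sign conventions assumed."""
    return 2 * np.pi * xi * s - 4 * np.pi / wavelength * v * t + eta * dT

def simulate_cell(scatterers, sigma_n=0.1):
    """y_m = sum_q tau_q * exp(j*psi_m(p_q)) + n_m, with i.i.d. circular Gaussian clutter n_m (cf. Equation (30))."""
    y = np.zeros(M, dtype=complex)
    for tau, (s, v, eta) in scatterers:
        y += tau * np.exp(1j * psi(s, v, eta))
    n = sigma_n / np.sqrt(2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
    return y + n

# Two scatterers in layover: one near the reference elevation, one 35 m higher with thermal expansion.
y = simulate_cell([(1.0, (0.0, 0.0, 0.0)), (0.8, (35.0, -0.002, 0.3))])
```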
SAR Tomography: Model Inversion and Parameter Estimation We use single-look beamforming for the inversion of the general tomographic model to estimate the unknown scatterer reflectivity as a function of the parameter vector p for a given range-azimuth resolution cell as follows [13,16]: where ., .represents the inner product, a (p) is the steering vector as a function of p, and y is the vector comprising the SLC observations: The steering vector is structured as follows: For the estimation of the unknown parameters, we use the estimated absolute reflectivity as the objective function in the following maximization: As more than one coherent scatterer may be present in the same resolution cell, successive maxima after the global maximum may indicate the presence of more scatterers.Assuming a maximum of two scatterers, an estimate of the parameter vector for the second scatterer is obtained as follows: where δp indicates the Rayleigh resolution for the tomographic profile along each of the unknown parameters.Equation (32) implies that noise in the SLC vector will cause errors in the reconstructed target reflectivity.As a consequence, errors will propagate in the estimation of the parameters using the aforementioned maximizations.Therefore, a scatterer detection strategy is needed to classify whether a given resolution cell contains one or more phase coherent scatterers, or is merely clutter. SAR Tomography: Statistics for Scatterer Detection A commonly used test statistic for coherent scatterer detection in the context of tomography is the absolute value of the estimated reflectivity, | α|.The same hypotheses are carried forward as introduced in Section 2.3, except for the change that now we consider them for multiple coherent scatterer candidates for each pixel.We consider a maximum of two candidates per pixel.In case only one of the candidates fulfills H 1 , we call the pixel a single scatterer.In case both the candidates fulfill H 1 , the pixel is called a double scatterer. Statistics under H 0 In case the received signal is merely clutter, the received signal vector y = n.Using Equation ( 32), where the third equality follows from rotational invariance of the Gaussian distributed samples, and, therefore, the inconsequential difference between ǹ and n will be dropped.Since ϕ m = ∠n m under H 0 , the observed interferometric phase (and the residual phase in this case) follows a uniform distribution [12].Along similar lines as in Section 2.3, the joint distribution of the real and imaginary parts of α is a zero-mean Gaussian with the following covariance matrix: The PDF of | α| in this case is Rayleigh, and the right tail probability to compute the probability of false alarm is as follows: where y 2 2 = ∑ m z 2 m is the squared L2-norm of the observed signal vector. 
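A compact sketch of the single-look beamforming and two-peak search described above. The normalization of the beamformer output by M, and the way the second peak is searched outside an exclusion zone of one Rayleigh resolution around the first, are assumptions of this sketch rather than the paper's exact implementation.

```python
import numpy as np

def beamform(y, psi, grid):
    """Single-look beamforming (cf. Equations (31)-(34)).
    y: (M,) SLC vector; psi(p): returns the M modeled phases for parameter vector p; grid: candidate p's."""
    M = len(y)
    return np.array([np.vdot(np.exp(1j * psi(p)), y) / M for p in grid])  # a(p)^H y / M

def two_peak_search(alpha, grid, excl):
    """First peak: global maximum of |alpha|. Second peak: maximum outside an exclusion zone of
    +/- excl (one Rayleigh resolution per parameter) around the first (cf. Equations (35)-(36))."""
    grid = np.asarray(grid, dtype=float)
    i1 = int(np.argmax(np.abs(alpha)))
    outside = np.any(np.abs(grid - grid[i1]) > np.asarray(excl), axis=1)
    i2 = int(np.argmax(np.where(outside, np.abs(alpha), -np.inf)))
    return i1, i2
```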
Statistics under H1

In general, the received signal contains clutter besides the possible backscattering contribution from point-like sources. We assume that, under H1, the deterministic backscatter from the point sources is dominant over the clutter, i.e., |d_m| ≫ |n_m| ∀ m. This assumption allows us to consider that the observed phase owes primarily to the vector sum of the backscatter from the point-like sources (and not the clutter). Using Equations (30) and (32), the expression for the estimated reflectivity can then be stated as in Equation (43). Formally, the origin of the phase instability ŵ_m in Equation (43) is not the clutter; rather, it is phase disturbances such as uncompensated atmospheric phase delay variations or residual motion [31], or phase model imperfections. Combining Equations (10) and (39) leads to Equation (44). From Equation (44), it is clear that the phase instability disturbs the tomographic reconstruction in a multiplicative sense. The ensemble coherence has a direct impact on the expected value of the retrieved reflectivity profile, and thereby on the hypothesis testing. Closed-form expressions for the PDF of |α̂| are not available when the residuals are assumed to follow a von Mises distribution with κ > 0. A Rician approximation can be taken, as suggested in [31], when the residuals can be considered to be normally distributed (i.e., the limiting case when κ → ∞). The probability of detection f_D for a fixed false alarm rate can then be studied as the area under the upper tail of the Rician distribution [49].

Nonetheless, we resort to Monte Carlo simulation to study the probability of detection numerically in terms of the inverse coefficient of variation (iCV), defined in Equation (46) for the test statistic α̂. This definition has been referred to as the signal-to-noise ratio (SNR) in [31]. Although in the field of signal processing the iCV is often referred to as the SNR, we avoid doing so here. In our context, the denominator in Equation (46) does not formally represent the noise power, neither additive (σ_n²) nor multiplicative (σ_w²), but rather the dispersion of the test statistic. Considering n ≈ 0, and dropping the dependence on p to simplify notation, we obtain Equation (47). Using the assumption that the residual phases are i.i.d. random variables, the covariance term in Equation (47) simplifies to cov(e^{jŵ_l}, e^{−jŵ_k}) = (1 − ν_1²) δ[l − k], where δ[·] is the unit sample function. Using this result, Equation (47) reduces to a simpler form [31]. Since ‖y‖_2 ≤ ‖y‖_1 ≤ √M ‖y‖_2 [50], we reach the bounds on the iCV given in Equation (51) for a given level of coherence. The iCV is a function of the ensemble coherence as well as of the ratio of the L1 to L2 norm of the signal vector. While the coherence is in turn a function of the concentration of the phase residuals (as shown in Figure 1), the L1-L2 ratio is influenced by (1) the number of acquisitions and (2) the number of point-like scatterers in the same resolution cell. Figure 2 shows the variation of the empirically estimated iCV against the concentration parameter for different numbers of scatterers, for M = 50 acquisitions as an example. For each value of κ selected in (0, 20], 10^5 realizations of the phase residue are generated under a von Mises distribution. The dashed lines in Figure 2 highlight the upper and lower bounds on the iCV. The upper bound is reached theoretically when ‖y‖_1 = √M ‖y‖_2, i.e., when the backscattering amplitude is constant across the stack. Therefore, the greater the number of acquisitions, the higher the achievable iCV. At a given concentration of phase residuals, the iCV decreases for an increasing number of scatterers.
The iCV estimates for Q = 1 converge at the upper bound. The impact of the number of scatterers on the iCV is further discussed in Appendix A.

Figure 3a is a plot of the numerically estimated f_D against the iCV. The detection thresholds are set to ensure a fixed level of probability of false alarm, f_F ∈ {10^−2, 10^−3, 10^−4}, given M = 50 acquisitions. Lower levels of f_F provide lower f_D, indicating the trade-off typically observed for statistical detectors [49]. At the same time, we observe a slight dependency of f_D on the number of scatterers. Even for a fixed level of iCV, f_D is lower for a higher number of scatterers. However, this dependency diminishes as the level of the false alarm is relaxed.

Figure 3b shows f_D against iCV while fixing f_F at 10^−3 for single and double scatterers, for different numbers of acquisitions in the stack, M ∈ {25, 35, 50, 75}. We observe a slight dependency of f_D on M, though it tends to diminish as the number of acquisitions in the stack grows larger. The aforementioned simulations have been performed in the absence of clutter. We repeat them next with varying levels of clutter, expressed in terms of the signal-to-clutter ratio (SCR): ‖d‖_2²/σ_n². Samples to simulate clutter are generated as instances of zero-mean Gaussian noise with variance σ_n². Figure 4a shows the iCV observed for the case of single and double scatterers for three different, but fixed, levels of SCR ∈ {6, 3, 0} dB. As expected, the iCV decreases with decreasing SCR. The case of SCR = 0 dB, i.e., when the intensity of the deterministic backscatter from the point scatterers equals that of the clutter, contradicts the assumption used in deriving Equation (43). Nonetheless, we perform the simulation as a worst-case analysis. Figure 4b shows f_D against the iCV for this case. The plots shown are nearly identical to those shown in Figure 3a. This is an auspicious finding, as it implies that, for fixed levels of iCV, f_D can be characterized nearly independently of the origin (additive or multiplicative) and level of noise.

The simulation results in Figures 3 and 4 collectively imply that, when the number of acquisitions is sufficiently large and the false alarm setting is not too strict, the empirically estimated iCV can be considered to fully characterize f_D, even in the presence of clutter.

Methods

This section presents the overall methodology adopted for the interferometric and tomographic processing of a real interferometric data stack. The models discussed in the previous section form the basis of this methodology. The data undergo several preprocessing steps. A reference scene is selected, and a multilooked intensity image of the reference scene is used to geocode and coregister all the acquisitions in the stack. An external digital elevation model (DEM) is used in the process [51,52]. A suitable reference point is selected to compute double-differenced interferograms.
Interferometric Processing with IPTA We use the IPTA [3,8] framework for the PSI processing, whereby parameter estimation and phase calibration of the data stack are performed side by side using an iterative approach to least squares regression.An initial list of PS candidates is prepared on the basis of high temporal stability of the backscattering and low spectral diversity.The phase model assumed is as given in Equation (1).Point differential interferograms are obtained by subtracting the topographic phase computed using the DEM.A multiple linear regression is used for each candidate to obtain an initial estimate of s and v, as well as the phase unwrapping integer, p.The quality of the estimates is assessed in terms of the root-mean-square (RMS) phase deviation, σw of the residual phase.At the initial stage, atmospheric phases in each interferometric layer have not be corrected, and the possible temperature-induced phase variations of candidates on structures experiencing thermal expansion have also not been accounted for.Therefore, the residual phase typically exhibits a high dispersion.The PS candidates for which σw is higher than a pre-selected threshold, σ c are masked out.The residue of the remaining candidates is analyzed further.Assuming the atmospheric phase screen (APS) to be spatially low-frequency and temporally uncorrelated, we estimate it by spatial filtering and unwrapping of the phase residue in the neighborhood of the candidates that satisfied the quality criterion.The estimated APS is subtracted and point-differential interferograms are re-computed for the full list of PS candidates, and this time the phases related to the initial estimates of residual height, linear deformation and the atmospheric phase are subtracted as well.The resulting point differential interferograms are unwrapped and the regression is iterated.It is expected that the quality of the candidates would improve since an estimate of the atmospheric phase has been subtracted prior to the regression.σ w is computed again for all the candidates, and compared against σ c to mask out those with relatively low quality.For the retained candidates, the newly estimated regression coefficients (residual height and deformation velocity) act as 'corrections' on the previous estimates.The new phase residue is added to the previous estimate of the atmospheric phase, re-filtered and unwrapped to give a new estimate of the atmospheric phase.The process is iterated several times.In this way, there is progressive improvement in the quality of the estimates in consecutive iterations.For more details on various time-series processing strategies using the IPTA framework, the interested readers are referred to earlier works [3,8,9,53]. For the candidates that are potentially undergoing thermal expansion, another regression-based routine is used that models it assuming that the corresponding phase variations are linearly dependent on the temperature changes [54][55][56].The estimated regression coefficient is the phase-to-temperature sensitivity, η.Further details are available in the earlier work in [35]. 
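Purely as an illustration of the control flow just described (regression, quality masking, APS refinement, iteration), the following sketch abstracts the processing into a few lines. It deliberately glosses over phase unwrapping and the spatial filtering used for APS estimation (a user-supplied `smooth` callable stands in for the latter), so it is a caricature of the workflow rather than the actual IPTA implementation.

```python
import numpy as np

def iterative_psi(phi, A, sigma_c, smooth, n_iter=5):
    """Sketch of the iterative regression / APS-isolation loop.
    phi: (M, P) point-differential phases for P candidates; A: (M, K) design matrix;
    sigma_c: quality cut-off [rad]; smooth(resid, keep): crude stand-in for spatial APS filtering."""
    aps = np.zeros_like(phi)
    keep = np.ones(phi.shape[1], dtype=bool)
    for _ in range(n_iter):
        p_hat, *_ = np.linalg.lstsq(A, phi - aps, rcond=None)      # per-candidate regression
        resid = np.angle(np.exp(1j * (phi - aps - A @ p_hat)))     # residuals wrapped to (-pi, pi]
        sigma_w = resid.std(axis=0)                                # RMS phase deviation per candidate
        keep &= sigma_w <= sigma_c                                 # mask low-quality candidates
        aps = aps + smooth(resid, keep)                            # refine the atmospheric phase estimate
    return p_hat, sigma_w, keep
```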
After several iterations, the APS is well isolated and we obtain iteratively refined estimates of the parameter vector p for the PS candidates that satisfy the quality criterion. Assuming that these PS are of sufficiently good quality that approximating the von Mises distribution of the phase residuals by a linear normal distribution (the limiting case) is justified, we compute the sample coherence threshold T_γc corresponding to σ_c using Equation (24); the resulting relation is Equation (52). In turn, the corresponding (theoretical) probability of false alarm is computed using Equation (18). It is important to mention here that the aforementioned assumption is not mandatory to choose the threshold; in fact, a threshold can be set directly on the coherence (as is typically done for interferometric processing) [2,12,57]. In our context, where we perform PSI processing with the IPTA toolbox (which allows quality assessment in terms of the residual phase statistics), the relation in Equation (52) provides a means to compute the coherence threshold corresponding to the quality criteria in our PSI processing.

Single-Look Differential SAR Tomography with Extended Phase Model

Prior to tomographic inversion, the interferometric data stack requires a precise phase calibration. For the pixels containing PS, we already have an estimate of the atmospheric phases from the PSI processing. Given a sufficient distribution of the PS over the imaged scene or the region of interest, we interpolate these phases over the surrounding pixels that may or may not have been PS candidates. Single-look differential tomographic inversion is applied for each pixel. The extended phase model, given in Equation (27), is used to set up the steering vectors. The reflectivity profile α̂(p) is estimated as a function of the unknown parameters. Scatterer localization and parameter estimation for a maximum of two scatterers in each resolution cell is performed, as stated in Equations (35) and (36). The amplitude of the estimated reflectivity is compared against a threshold for each potential scatterer to accept or reject the null hypothesis.

We propose to set the detection threshold in such a way that the desired probability of false alarm from the PSI processing is carried forward for the detection of coherent scatterers at this stage. Equating Equations (18) and (40), we set the detection threshold T_α for tomography as given in Equation (53). In turn, the decision between H1 and H0 is made for each candidate by comparing |α̂(p)| against T_α: the candidate is declared a coherent scatterer (H1) if |α̂(p)| exceeds the threshold, and rejected (H0) otherwise (Equation (54)). In this way, the same quality criterion that is used for setting the threshold T_γc in the PSI processing also determines the threshold for scatterer detection in the tomographic processing. Hence, consistency is achieved for the synergistic use of tomography as an add-on to PSI.

It is to be noted that Equation (53) is independent of how the threshold T_γc for the PSI processing was selected, whether as a direct choice on the coherence, or using the standard deviation of the residual phase according to Equation (52) under the assumption of a linear normal distribution of the residual phases for the PS. Therefore, this assumption is not a limiting factor for the application of the proposed detection strategy in general.
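The two threshold relations above can be condensed into a few lines. The specific closed forms written here (the exp(−σ²/2) coherence relation, the Rayleigh-tail false-alarm expression, and the scaling of the tomographic threshold by ‖y‖₂/√M) are what we take Equations (52), (18) and (53) to be; they reproduce the numbers quoted later in Section 5.1 for σ_c = 1.1 rad, but they remain assumptions of this sketch and depend on the α̂ = aᴴy/M normalization used in the earlier beamforming sketch.

```python
import numpy as np

def coherence_threshold(sigma_c):
    """Coherence threshold from a residual-phase standard deviation under the
    linear normal approximation (assumed form of Equation (52))."""
    return np.exp(-sigma_c**2 / 2.0)

def pfa_from_threshold(T_gamma, M):
    """Theoretical probability of false alarm for the Rayleigh-tail detector (assumed form of Equation (18))."""
    return np.exp(-M * T_gamma**2)

def tomographic_threshold(T_gamma, y):
    """Reflectivity threshold matched to the PSI false-alarm rate (assumed form of Equation (53))."""
    return T_gamma * np.linalg.norm(y) / np.sqrt(len(y))

T = coherence_threshold(1.1)         # ~0.55 for sigma_c = 1.1 rad
print(T, pfa_from_threshold(T, 50))  # ~3e-7 for M = 50 acquisitions
```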
Data

The interferometric data stack used in this work comprises 50 TerraSAR-X stripmap acquisitions over the city of Barcelona, Spain, acquired in repeated passes. It is the same stack as used in our earlier work in [27]. The temporal span of the acquisitions extends from 2007 to 2012. The images have been oversampled by a factor of 2 to allow for more accurate coregistration. The resolution in range and azimuth is 1.2 m and 3.3 m, respectively. The orthogonal component of the total spatial baseline is 503 m, which provides a resolution along the elevation axis of ∼19 m. The distribution of the spatial and temporal baselines, as shown in Figure 5a, is highly non-uniform. The corresponding 2-D point spread function (PSF) is shown in Figure 5b. The PSF represents the impulse response of the tomographic system for the given distribution of the baselines, for an ideal point scatterer at zero elevation and with no deformation. The footprint of the acquisitions in map coordinates is shown in Figure 5c. Apart from a dense urban stretch, part of the viewed scene extends over the Balearic Sea.

Results on Real Data

This section presents the results obtained on the real interferometric data stack introduced in the previous section.

Interferometric Processing

An initial list of PS candidates was prepared on the basis of low spectral diversity and high stability of the backscattering amplitude that is characteristic of single dominant scatterers [3]. There was no candidate in unexpected areas, such as the water surface or radar shadows. After several iterations of the least squares regression within the IPTA framework, as outlined in Section 3.1, a subset of the initial candidates is retained such that σ_w ≤ σ_c = 1.1 rad for each candidate. Figure 6 shows these candidates from the last iteration. They are 936,649 in number, and spread over an area of nearly 4 km². In sub-figures a-c, the color coding represents the estimated parameters, namely the residual height, the deformation velocity in the LOS, and the phase-to-temperature sensitivity, respectively. The sample coherence for these candidates is shown in sub-figure d. Corresponding to σ_c = 1.1 rad, the coherence threshold is T_γc = 0.55 according to Equation (52), and the theoretical probability of false alarm according to Equation (18) is 3.3 × 10^−7. As stated in Section 3.1, the use of Equation (52) to convert a threshold in terms of the residual phase standard deviation to a corresponding threshold on the coherence requires the assumption that the von Mises distribution can be approximated by a linear normal distribution (for the case of ideal, noise-free PS, with κ → ∞). In order to assess the suitability of this assumption, we require estimates of the concentration parameter. Using Equations (20) and (22), E{|γ̂|} ≈ I_1(κ)/I_0(κ); to estimate κ, this expression needs to be inverted, which involves inverting the ratio of modified Bessel functions (first kind) of first and zero order. We do not have a closed-form expression for such an inverse relation; we use the piece-wise defined approximation of [43,58], a common form of which is given in the sketch below. The concentration parameter is estimated for each PS, and a histogram of the parameters is shown as an inset in Figure 6d. The mean and median values are 5.4 and 4.1, respectively. In the existing literature in the field of directional statistics, we can find precedent where concentration parameters greater than 2 are considered reasonable to approximate the von Mises distribution as a wrapped normal distribution (i.e., a linear normal distribution wrapped between −π and π rad) [58].
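The piece-wise inversion of the Bessel-function ratio referenced above is commonly given in the directional-statistics literature in the following form; we assume this is the approximation meant by [43,58].

```python
import numpy as np
from scipy.special import i0, i1

def kappa_from_resultant(R_bar):
    """Approximate inverse of R_bar = I1(kappa)/I0(kappa) (Fisher-style piece-wise approximation)."""
    R_bar = np.atleast_1d(np.asarray(R_bar, dtype=float))
    kappa = np.empty_like(R_bar)
    lo = R_bar < 0.53
    mid = (R_bar >= 0.53) & (R_bar < 0.85)
    hi = R_bar >= 0.85
    kappa[lo] = 2 * R_bar[lo] + R_bar[lo]**3 + 5 * R_bar[lo]**5 / 6
    kappa[mid] = -0.4 + 1.39 * R_bar[mid] + 0.43 / (1 - R_bar[mid])
    kappa[hi] = 1.0 / (R_bar[hi]**3 - 4 * R_bar[hi]**2 + 3 * R_bar[hi])
    return kappa

# Round-trip sanity check against the forward relation E{|gamma|} ~ I1(kappa)/I0(kappa):
print(kappa_from_resultant(i1(4.0) / i0(4.0)))   # should be close to 4
```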
Tomographic Processing and Empirical Analysis of False Alarms

The APS isolated in the IPTA-based PSI processing is extrapolated and compensated for over the entire scene in each layer of the interferometric stack. In this way, each pixel is considered to be phase calibrated, so that tomographic inversion can be applied next. Given that the city of Barcelona has several high-rise buildings, the elevation extent I_s is set as [−60, 300] m. The parameter space for the deformation parameters is as follows: I_v ∈ [−10, 10] mm/yr and I_η ∈ [−1, 1] rad/K. The discretization in each dimension is 1/2.5 times the Rayleigh resolution, followed by a local refinement of the estimated reflectivity around the two candidate peaks at one-tenth the resolution. Using Equations (52) and (53), and keeping σ_c = 1.1 rad, we threshold the reflectivity of the two candidates to perform the detection process. The point cloud of single scatterers thus detected is shown in Figure 7; 1454 false alarms occur over the water surface, as highlighted in the inset of that figure.

A significant portion of the viewed scene extends over the sea, which is favorable in our context as it can be used as a test bed to conduct an empirical analysis of the false alarm rate. We perform sample coherence-based detection, as well as tomographic inversion and detection, for the range-azimuth pixels over the sea and observe the variation of the false alarm rate. These pixels constitute 1.4 million independent resolution cells. The results are shown in Figure 8. The solid lines in the figure represent different cases of tomographic inversion and detection:
(1) [α(s, v, η); 3-D inv.]: 3-D inversion and detection on the reflectivity α retrieved as a function of elevation (s), deformation (v) as well as thermal expansion (η), where the support in each dimension is as for the results shown in Figure 7;
(2) [α(s, v); 2-D inv.]: 2-D inversion, i.e., thermal expansion is not considered;
(3) [α(s); 1-D inv.]: 1-D inversion, whereby the reflectivity is retrieved only along the elevation profile;
(4) [α(s); reduc. supp.]: 1-D inversion with the elevation support reduced to [−25, 50] m;
(5) [α (no fitting)]: 1-D inversion without the maximizations to detect peaks in the reflectivity, i.e., no fitting is performed in the parameter space to estimate the unknown elevation and deformation parameters;
(6) [γ̂ (no fitting)]: the dot-dashed line represents the PSI case whereby the thresholds are applied on the sample coherence without any parameter fitting;
(7) [γ̂ (analytical)]: the black curve with diamond symbols shows the theoretical probability of false alarm according to Equation (18).
The bottom x-axis in the figure shows the detection thresholds T_γ and T_α (normalized between 0 and 1 as per Equation (53)), while the top x-axis shows the equivalent standard deviation of the residual phase according to Equation (52). The area shaded in gray indicates the region in the figure where the results may not be sufficiently accurate due to the limited number of independent range-azimuth resolution cells over the water surface. Given that we have only 1.4 million of these cells, and assuming the test statistics are normally distributed over the scene, we can estimate a probability of false alarm no lower than 1.1 × 10^−3 with a relative absolute error of 5% for 95% of the time [49]. Figure 9 shows the point cloud
of single scatterers obtained with tomographic inversion and detection with σ c = 1.0 rad.In comparison with Figure 7, we can see a reduction in the false alarms.Now, we observe only 194 false alarms.3-D tomographic inversion has been applied with the same support in each dimension as for the results shown in Figure 7.We have estimates of height, deformation velocity as well as the phase-to-temperature sensitivity.Figure 10 shows the point cloud of double scatterers obtained with the same threshold.They are separated as lower and upper scatterers, according to the estimated height for each of the two scatterers in layover.2.14 × 10 6 single scatterers and 1.01 × 10 4 double scatters (lower + upper) have been detected.The inset in the sub-figures in Figure 10 shows a commercial complex, namely Diagonal Mar, in focus.The red polygon encloses a high-rise building, which is partly in layover with the roof of a nearby building.These are the same test sites as in our earlier work in [27].7 where a more relaxed detection threshold (corresponding to σ c = 1.1 rad) is used, fewer false alarms are observed here, as highlighted in the inset. Discussion This section provides an itemized discussion of the results presented in the previous section. Interferometric Processing The PSI solution, as shown in Figure 6, provides a good coverage over the viewed scene, which is typical with high resolution X-band interferometric imagery over urban areas such as Barcelona city [59,60].The PS heights fit reasonably with actual 3-D structures, as shown for selected buildings in our earlier work in [26,27].The PSI solution reveals deformation along the shoreline, which was partly observed in [59] as well.Several PS on high-rise buildings show temperature-dependent phase variations, which can be attributed to thermal expansion of the structures [30,35,46,61,62].The observed coherence is high, and the estimated concentration parameters are all non-zero.With reference to Figure 1, the fact that the mean and the median value of κ are greater than 3 substantiates the assumption of linear normal statistics for the PS (since the approximation of von Misses as linear normal distribution is accurate to within 5% error on average). Interestingly, we do not observe false alarms over the sea patch in the scene.This is due to the fact that we have used high stability of the backscattering amplitude and low spectral diversity as pre-classifiers to set up the initial PS candidate list.These classifiers are proxies for temporally coherent, single dominant scattering; therefore, they already preclude PS candidates from appearing on the water surface.Hence, no PSI solution has been sought (no regression fitting) on the pixels over the sea patch.In the context of tomography, these pre-classifiers cannot be used since they would tend to reject double scatterers as well. 
Tomographic Processing and Empirical Analysis of False Alarms We applied tomographic processing over the entire scene, regardless of any surface classification.The point cloud shown in Figure 7 is obtained using the same cut-off phase standard deviation, σ c = 1.1 rad, as for the iterative least squares based PSI processing.Nevertheless, several false alarms are visible over the sea patch.A simple mask (based on SAR multi-look intensity with spatial constraints for example) could have allowed us to remove the sea patch from the processing, but we choose to show these false alarms to highlight that similar false alarms may arise (due to noise) within the urban stretch as well though they may remain unnoticed. Figure 8, which shows the results of a false alarm analysis exclusively conducted over the sea patch, reveals that the false alarm rates can typically be higher in practice in comparison with the theoretical probability of false alarm (as the area under the upper tail of Rayleigh distribution).The maximizations (Equations ( 35) and ( 36)) allow degrees of freedom to fit the data; when the noise is fit incorrectly with the data model, it may lead to a false alarm.The false alarm rate can be seen to decrease from 3-D to 2-D inversion, as reducing the dimensionality reduces the degrees of freedom to fit the data.Similar reduction in false alarms is observed when moving from 2-D to 1-D inversion, or when we reduce the support of the elevation in case of 1-D inversion.These findings imply that in case some a priori information is available-e.g., if significant thermal expansion is not expected (as is usually the case for buildings of low height [27]), or if the support of deformation velocity can be reduced on the basis of local leveling measurements, or if the support for height corrections can be reduced given a digital surface model is available-then a reduction in false alarm rate can be achieved in practice. 
Figure 8 also shows the case where no parameter fitting is performed, for both tomography as well as sample coherence based detection.The latter case, i.e., [ γ (no fitting)], matches closely with the theoretical relationship in Equation (18), indicating that the area under the upper tail of the PDF of | γ| approaches that of a Rayleigh distribution.However, in the former case, i.e., [α (no fitting)], it can be observed that the estimated false alarm rate is slightly lower than the probability of false alarm according to the analytical expression for MICC-based detection, in turn implying deviation from the statistics of a Rayleighian process.It can be explained following the findings in an earlier work in [13].In this work, a generalized likelihood ratio test (GLRT) was compared against MICC for scatterer detection in the presence of additive noise with Gaussian statistics.It is to be noted that in our case the false alarm analysis is conducted on cells over the water surface; therefore, the origin of noise in the observed SLC values lies in the backscattering characteristics (rather than phase mis-calibration).In this particular context, an additive noise model is appropriate, and, consequently, the detection for a scatterer under Equation (54) in our work becomes identical to the GLRT in [13].It was found in [13] that the GLRT provides a lower probability of false alarm compared to MICC (as we observed).For a discussion on the performance analysis of radar detectors where the actual PDF of the amplitude of complex-valued noise/clutter deviates from Rayleigh statistics, interested readers are referred to [63][64][65][66]. Figures 9 and 10 show the single and double scatterers, respectively, detected with σ c = 1.0 rad.As expected, we observe fewer false alarms, and at the same time fewer scatterers are detected.Double scatterers constitute <1% of the total scatterers detected over the scene.The gain in deformation sampling due to double scatterer detections [27], relative to the PSI solution, are around 2% for Diagonal Mar complex and 4% for the selected building marked in red, respectively.If the threshold is relaxed to σ c = 1.1 rad, the gain improves to 6.4% for Diagonal Mar and 17% for the individual building.The interferometric data stack and the test sites in this work are the same as in our earlier work in [27].The detection strategies are, however, different.The sequential GLRT with cancellation (SGLRTC), as proposed in [24], was used for hypothesis testing in the earlier work.The quality of the detected scatterers was empirically evaluated only after the detection, and in turn compared with the quality of the PS (obtained independently in the prior PSI processing).In other words, the detection threshold for hypothesis testing had to be adjusted a posteriori to achieve comparable quality.The results thus obtained in [27] show a gain in deformation sampling of around 2.5% for Diagonal Mar complex and 10% for the selected building.On the other hand, the detection strategy proposed in this work allows the use of quality criterion during the hypothesis testing itself.Nonetheless, it needs to be noted that the SGLRTC and the proposed strategy are not directly comparable.SGLRTC explicitly assumes an additive noise model for SAR tomography, thus it cannot formally address multiplicative noise arising due to phase instabilities such as atmospheric disturbances.Moreover, it is a subspace method where the first scatterer is canceled out before a second scatterer is searched for 
[24].Therefore, the test statistics (and the corresponding threshold settings) for double scatterer detection under the proposed detection strategy are not the same as in SGLRTC. Conclusions In the context of SAR tomography as an add-on to PSI to potentially improve deformation coverage, following the directions set in earlier works in [12,27,31], this paper reports the application of a detection strategy that allows for extending the same quality considerations to tomography as used in the prior PSI processing.In interferometric processing, the quality is typically assessed on the basis of the residual phase, either in terms of the phase dispersion (phase standard deviation) or the ensemble coherence computed using the residue of the fit.In both cases, under the proposed detection strategy, the quality parameters can be used to set up the threshold for hypothesis testing of coherent scatter candidates following tomographic inversion.Moreover, the theoretical probability of false alarm remains the same between the PSI and tomography.The paper also highlighted that while the instabilities in phase are typically modeled as additive noise, their impact on tomography is multiplicative in nature.The experiments performed in this work with simulated data consider both multiplicative noise as well as additive disturbances (clutter) in the tomographic model.It is shown that the inverse coefficient of variation is a suitable parameter to assess the probability of detection, irrespective of the origin of noise.The proposed detection strategy is also tested on real data.An assessment of the variation of the observed false alarm rates against the thresholds set according to the proposed detection strategy has been conducted.An interferometric data stack comprising 50 Terra-SAR-X acquisitions over the city of Barcelona, Spain is used.Single-look beamforming for 1/2/3-D tomographic inversion, depending on whether the phase model used considers only the scatterer height, or height plus deformation velocity, or additionally thermal expansion, is performed.The results show that higher dimensionality and larger support sizes in each dimension lead to higher false alarm rates due to larger parameter space that may incorrectly fit noise to the data model.These results also suggest that in case a priori information can reduce the dimensionality and/or support sizes, it should be adopted by the user to reduce the false alarm rate in practice.For the case of 3-D tomographic inversion, with detection thresholds set in accordance with residual phase standard deviation below 1.1 rad for the prior PSI processing, the empirically estimated false alarm rate is <1.1 × 10 −3 .The gain in deformation sampling (due to layover resolutions) is 17% for a selected high-rise building.For a commercial complex in Diagonal Mar locality, it is 6.4%.As a whole, the number of double scatterers detected in the urban scene are <1% of the total detected scatterers.These results show that, for urban areas like Barcelona, when using interferometric data stacks comprising the typical stripmap products, the application of SAR tomography as an add-on to PSI is mainly useful for a detailed analysis of selected urban zones or individual buildings in layover. Figure 1 . 
Figure 1.Estimates of the coherence magnitude obtained with 10 5 Monte Carlo iterations assuming the residual phases have a von Mises distribution with concentration parameter, κ.Each solid line indicates the estimates for a specific number of acquisitions, M in the data stack.The vertical bars represent ± 1-σ from the mean.The dashed line shows the coherence magnitude under the assumption that the residual phases follow a linear normal distribution, cf.Equation (24) (assuming σ 2 w = κ −1 ). Figure 2 . Figure 2. Empirically estimated inverse coefficient of variation (iCV) of the test statistic α against concentration parameter for von Mises distributed phase residuals, for different number of scatterers, Q in the same resolution cell.The dashed lines enclosing the gray region indicate the theoretical bounds on the iCV (cf.Equation (51)) , where ν 1 = I 1 (κ)I 0 (κ) (Equation (22)). Figure 4 . Figure 4. Numerical analysis of the inverse coefficient of variation (iCV) of the test statistic α when point scatterers are embedded in different clutter levels, for M = 50 acquisitions.(a) iCV against concentration of the phase residuals for different levels of signal-to-clutter ratio (SCR) and number of scatterers, Q ∈ {1, 2}; (b) probability of detection against iCV for fixed levels of false alarm, f F ∈ 10 −2 , 10 −3 , 10 −4 , and Q ∈ {1, 2, 3}. Figure 5 . Figure 5. Data characteristics.(a) distribution of spatial (orthogonal component) and temporal baselines; (b) 2-D point spread function (PSF); (c) footprint of the reference acquisition over Spain. Figure 6 . Figure 6.PSI solution obtained with iterative least-squares regression-based processing using the interferometric point target analysis (IPTA) toolbox.The colored dots are the PSs identified in the PSI processing.(a) estimated height, relative to the WGS-84 reference ellipsoid; (b) deformation velocity in the line-of-sight; (c) phase-to-temperature sensitivity; (d) sample coherence, and histogram of the estimated concentration parameter (shown as inset). Figure 7 . Figure 7. Point cloud of single scatterers obtained with differential SAR tomography.The detection threshold is set corresponding to σ c = 1.1 rad under the proposed detection scheme (see Equations (52) and (53)).The color coding represents the estimated height.Some false alarms can be seen over the water surface , as highlighted in the inset. Figure 8 . Figure 8. False alarm rate observed over the sea patch in the viewed scene at different detection thresholds.The colored solid lines represent the case of 3/2/1-D tomographic inversion.The detection is performed on the retrieved reflectivity, |α| according to Equation (53).The dot-dashed lines shows the case of PSI whereby the detection is performed on the sample coherence, | γ| without fitting any phase model to the observed interferometric phases. Figure 9 . Figure 9. Point cloud of single scatterers obtained with differential SAR tomography.The detection threshold is set corresponding to σ c = 1.0 rad under the proposed detection scheme, see Equations (52) and (53).(Top) Estimated height, relative to the WGS-84 reference ellipsoid.(Middle): Deformation velocity in the line-of-sight.(Bottom) Phase-to-temperature sensitivity.In comparison with Figure 7 where a more relaxed detection threshold (corresponding to σ c = 1.1 rad) is used, fewer false alarms are observed here, as highlighted in the inset. Figure 10 . 
Figure 10.Point cloud of double scatterers obtained with differential SAR tomography.The detection threshold is set corresponding to σ c = 1.0 rad under the proposed detection scheme.(Top) Estimated height, relative to the WGS-84 reference ellipsoid.(Middle) Deformation velocity in the line-of-sight.(Bottom)Phase-to-temperature sensitivity.The left column shows the lower layer and the right column shows the upper layer of the double scatterers, respectively.The inset focuses on a commercial complex (Diagonal Mar).The red polygon encloses a single building, part of which is in layover with a nearby building of shorter height.
Selected Compounds Modulate Various Inflammatory Biomarkers in Lipopolysaccharide-Induced Macrophages of PPAR-α Knockout Mice

Background: Inflammation has been implicated in cancer, diabetes and cardiovascular disease. We have recently screened several compounds that modulate inflammatory biomarkers (TNF-α, IL-1β, IL-6, and nitric oxide) in response to a variety of stimuli. Our hypothesis is that compounds with those anti-inflammatory properties will be useful for the treatment of diabetes, cardiovascular disease, and other diseases based on inflammation.

Introduction

We have recently described that naturally occurring compounds play an important therapeutic role in modulating inflammatory biomarkers in cardiovascular disease, diabetes and cancer [1]. Those compounds were able to inhibit or activate the secretion of TNF-α, and to inhibit the production of nitric oxide (NO), in a murine cell line (RAW264.7) and in lipopolysaccharide-induced thioglycolate (TG)-elicited peritoneal macrophages prepared from C57BL/6 (wild type; control group), BALB/c, and double-subunit knockout (LMP-7/MECL-1 -/-) mice [1]. However, in TG-elicited peritoneal macrophages obtained from peroxisome proliferator-activated receptor-α (PPAR-α) knockout female mice, the secretion of TNF-α was activated by some of the compounds rather than inhibited, as compared to the control (C57BL/6) and other groups [1]. The important role played by lipopolysaccharides (LPS) in up-regulating inflammation is well established [2]. In short, LPS is expressed on the outer membrane of gram-negative bacteria. LPS induces several pro-inflammatory cytokines, such as tumor necrosis factor-α (TNF-α), interleukin-1β (IL-1β), IL-6 and IL-8, and the production of nitric oxide [2]. In order to find potent modulators of inflammatory biomarkers, we have selected 32 compounds of different categories of organic chemistry, as shown in Table 1. The PPAR-α knockout female mice were selected for the present study because, in their LPS-induced macrophages, δ-tocotrienol, riboflavin and quercetin activated the secretion of TNF-α whereas they inhibited it in the corresponding wild-type (C57BL/6) control group, and also because of the prolonged response of these mice to inflammatory stimuli [3]. Moreover, PPARs are nuclear receptors, which bind to fatty acid-derived ligands and activate the transcription of genes that govern lipid metabolism. The primary sites of activation of PPAR-α, which recognizes monounsaturated and polyunsaturated fatty acids and eicosanoids, are the liver, heart, muscle, and kidney. According to its role in regulating fatty acid metabolism, PPAR-α activates gene expression involved in fatty acid uptake (fatty acid binding protein), β-oxidation (medium-chain acyl-CoA dehydrogenase, carnitine palmitoyl transferase I, and acyl-CoA oxidase), transport into peroxisomes (ATP-binding cassette transporters D2 and D3), and omega-oxidation of unsaturated fatty acids (cytochrome P-450, 4A1 and 4A3). Moreover, PPAR-α also induces fatty acid catabolism and prevents hypertriglyceridemia, and its activation decreases glucose uptake and causes a shift from glucose use to fatty acid oxidation in cardiac muscle. Therefore, selective PPAR-α agonists that increase fatty acid catabolism without causing lipid accumulation in the heart might be an effective treatment for dyslipidemia [3].
The objective of the present study was to evaluate the effects of 22 compounds (Table 1A) on the inhibition/activation of proteasome activities, the secretion of TNF-α, the production of nitric oxide (NO), certain anti-inflammatory/pro-inflammatory cytokines (IL-1β, IL-6), and iNOS enzyme activity, using TG-elicited peritoneal macrophages prepared from PPAR-α knockout female mice. It has been reported that the proteasome is a hollow, complex, regulatory protein consisting of three proteolytic subunits, X, Y and Z, with chymotrypsin-like, trypsin-like, and post-glutamase activities, respectively. Lactacystin is a potent, well-known inhibitor of the chymotrypsin-like activity of the 20S proteasome, and therefore we included lactacystin as a positive inhibitor in the current study [4]. Lactacystin (a lactone), as a proteasome inhibitor, is also known to affect the secretion of TNF-α and the production of nitric oxide [5]. As described earlier, nitric oxide production increases during the ageing process, which could be due to a diminished activation of NF-κB signaling [6][7][8]. Therefore, it was suggested that the above-mentioned compounds may also block the activation of NF-κB, thus lowering serum TNF-α and nitric oxide (NO) levels in experimental models. The important role of NF-κB in various biological functions has been reported [9]. The data on the effects of these modulators on the secretion of TNF-α and the inhibition of nitric oxide production may be of clinical relevance for host defense mechanisms against various infections, and for therapy of several inflammatory diseases [10,11].

Materials and Methods

The deep rough chemotype LPS (Re LPS) from E. coli D31M4 was purified as reported earlier [4]. Dulbecco's Modified Eagle Medium (DMEM), heat-inactivated low-endotoxin fetal bovine serum (FBS), and gentamicin were purchased from Cambrex (Walkersville, MD, USA) for tissue culture studies. Thioglycolate (TG) was purchased from Sigma-Aldrich Chemical Co. (St. Louis, MO, USA) and the RNeasy mini kit from QIAGEN Sciences (Germantown, MD, USA). Most of the compounds were purchased from Sigma-Aldrich Chemical Co. (St. Louis, MO, USA). Codeine and dopamine-HCL were obtained from the Department of Pathology, School of Medicine, Kansas City, MO, after completing all the required formalities to use these compounds in the laboratory.

The PPAR-α knockout female mice were selected for the present study due to their differential effects with δ-tocotrienol, riboflavin and quercetin on the secretion of TNF-α, compared to the corresponding wild type (C57BL/6) control group, in LPS-induced macrophages [1]. The 6-week-old C57BL/6 female mice (Wild Type; control group) were purchased from the Jackson Laboratory (Bar Harbor, ME, USA), and peroxisome proliferator-activated receptor-α (PPAR-α) knockout female mice were bred at UMKC's Animal Facility (Kansas City, MO, USA). Mice used in this study received humane care in compliance with the principles of laboratory animal care formulated in the National Institutes of Health Guide for the Care and Use of Laboratory Animals (NIH Publication No 85-23, revised 1996). The experimental procedures involving animals were reviewed and approved by the Institutional Animal Care and Use Committee of UMKC, Medical School, MO, USA.
Animals All 6-week-old C57BL/6 (n = 8), and PPAR-α knockout female mice (n = 20) were acclimatized to new environment for 14 days before beginning experimentation.The mice were fed ad libitum regular commercial mouse diet and had free access to water throughout the experiment.A 12 h light and 12 h dark cycle was maintained during feeding period. Effects of selected compounds on chymotrypsin-like activity of 20S rabbit muscle proteasome The TG-elicited peritoneal macrophages were adhered to bottom of 100 mm tissue culture plates (1 x 10 7 cells/well in 1.0 ml medium) for 4 h, the supernatants were removed, and cells were washed extensively with medium three times.The cells were cultured overnight in fresh media after final wash.After overnight incubation at 37 °C in presence of CO 2 , cells were treated with various concentrations of each compound (100 μl of 20 μM to 320 μM dissolved in 0.2% DMSO) and LPS (10.0 μl of 1.0 μg/ml of stock solution; 10.0 ng/well) of each compound.The supernatants were collected after 4 h of incubation at 37 °C in presence of CO 2 , and after 4 h, the levels of TNF-α in supernatants were measured by Quantikine M ELISA kit (R&D System, Minneapolis, MN, USA) according to manufacturer's instructions.The lower limit of detection for TNF-α in this method is 5.0 pg/ml [12].The viability of peritoneal macrophages treated with various compounds plus LPS were also determined by trypan blue dye exclusion or a quantitative colorimetric assay with 3-(4,5)-dimethylthiozol-2,5-diphenyl-tetrazolium bromide (MTT) as described previously [5]. Methodology for use of selected compounds on secretion of TNF-α in LPS-induced TG-elicited peritoneal macrophages of 8-week-old female C57BL/6, and PPAR-α knockout mice Similarly, TG-elicited peritoneal macrophages (1 x 10 7 cells/well in 500 μl medium [DMEM]) were adhered to bottom of 100 mm tissue culture plates for 2 h.After 2 h, the cells were treated with 100 μl of various concentrations of each compound (dissolved in 0.2% DMSO) plus LPS (10.0 ng/well in 400 μl).The assay mixtures were incubated at 37 °C in presence of CO 2 for 36 h.After 36 h, the levels of nitric oxide (NO) were determined by measuring the amount of nitrite, a stable metabolic product of nitric oxide, as previously reported [10].The assay mixture contained medium (100 μl) plus Griess reagent (100 μl), and absorption was measured at 570 nm using a "Microplate Reader" (MR 5000; Dynatech Labs, Inc. USA).The amount of nitrite was determined by comparison of unknowns using a NaNO 2 standard curve.The NO detection limit was 0.20 nM [11]. Methodology for effect of selected compounds on production of nitric oxide in LPS-induced TG-elicited peritoneal macrophages of 8-week-old female PPAR-α knockout mice The TG-elicited peritoneal macrophages prepared from 8-week-old female PPAR-α knockout mice were adhered to the bottom of 100 mm tissue culture plates (1 x 10 7 /well in 1.0 ml medium) for 4 h.After four h, supernatants were removed, and cells were washed with medium three times.Cells were cultured overnight in fresh medium after the final wash.After overnight incubation at 37 o C in presence of CO 2 , the cells were treated with various compounds (dissolved in 0.2% DMSO) and LPS (10.0 ng/well).The supernatants were collected after 1 h, 2 h, and 3 h incubation at 37 o C in presence of CO 2 to carry out TNF-α estimation by using ELISA assay kit.The cells viabilities were also determined by MTT [12]. 
Methodology for effect of selected compounds on time-dependent secretion of TNF-α in LPS-induced TG-elicited peritoneal macrophages of 8-week-old female PPAR-α knockout mice

The various concentrations of each compound were dissolved in medium containing 0.2% DMSO. TG-elicited peritoneal macrophages were prepared from 8-week-old female PPAR-α knockout mice as described previously [1,4]. The macrophages (1 × 10⁶ cells/well in 500 μl medium) were adhered in wells with various concentrations of each compound for 2 h. All wells were then challenged with LPS (10.0 ng/well; 400 μl) and incubated at room temperature for 4 h. After 4 h, the assay mixtures were centrifuged at 2,000 rpm for 20 min. The cells were harvested, and total cellular RNA was extracted from each pellet with the RNeasy mini kit (QIAGEN Sciences; Germantown, MD, USA) according to the manufacturer's instructions. The RNA from each treatment was reverse-transcribed and amplified, and gene expression of TNF-α, IL-1β, IL-6, and iNOS was quantified by real-time polymerase chain reaction (RT-PCR) using a 1-step RT-PCR kit (QIAGEN, Chatsworth, CA, USA) according to the manufacturer's instructions [4,13,14].

Statistical analysis

Stat View software (version 4.01, Abacus Concepts, Berkeley, CA) was used for the analyses of treatment-mediated effects compared to the control group. Treatment-mediated differences were detected with a one-way ANOVA, and when the F test indicated a significant effect, differences between means were analyzed by Fisher's protected least significant difference test. Data are reported as means ± SD in the text and tables. The level of statistical significance was set at 5% (P < 0.05).

Results

As mentioned earlier, lactacystin was included in this study because it is a known selective inhibitor of the chymotrypsin-like activity of the proteasome, and it was used as a positive control. We first screened several compounds for effects on the chymotrypsin-like activity of the 20S rabbit muscle proteasome. The 20S proteasome activity was measured after treatment with these compounds at concentrations ranging from 2.5 μM to 160 μM and compared to the activity of the control. The results revealed dose-dependent decreases in the chymotrypsin-like activity of 20S rabbit muscle proteasomes between 2.5 μM and 40 μM for most of the compounds, except for acetylsalicylic acid (aspirin, 160 μM), compared to the respective controls (Table 2). The most effective decreases in the chymotrypsin-like activity of the 20S proteasome were observed with thiostrepton (> 50% at 5 μM), rifampicin (20 μM), 25-hydroxycholesterol (20 μM), and trans-resveratrol compared to the respective controls (Table 2). In contrast, (-)-corey lactone, ouabain, ampicillin (a broad-spectrum antibiotic), ascorbic acid (vitamin C), codeine, and amiloride-HCl increased the chymotrypsin-like activity of the 20S proteasome at 10 μM to 40 μM (Table 2). These results indicate the capacity of these compounds to inhibit or activate 20S chymotrypsin-like activity at various concentrations. Therefore, in subsequent studies, the most effective single dose of each of these compounds (14 out of 22) was selected to evaluate effects on inflammatory biomarkers, as shown in Table 3 and Figure 1.
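The treatment comparisons reported in the tables that follow rest on the statistical workflow described above: a one-way ANOVA across treatment groups, followed by Fisher's protected LSD pairwise tests only when the overall F test is significant. The study ran this in StatView; the sketch below reproduces the same pattern in Python with scipy, using made-up TNF-α values purely for illustration.

```python
from itertools import combinations
import numpy as np
from scipy import stats

# Illustrative TNF-alpha values (pg/ml) for a control and two treatments.
groups = {
    "control":      np.array([910.0, 880.0, 905.0]),
    "thiostrepton": np.array([520.0, 555.0, 540.0]),
    "ouabain":      np.array([1180.0, 1150.0, 1210.0]),
}

# Step 1: one-way ANOVA across all groups.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, P = {p_value:.4f}")

# Step 2: Fisher's protected LSD -- pairwise t-tests using the pooled
# within-group error, performed only when the overall F test is significant.
if p_value < 0.05:
    n_total = sum(len(v) for v in groups.values())
    k = len(groups)
    ss_within = sum(((v - v.mean()) ** 2).sum() for v in groups.values())
    mse = ss_within / (n_total - k)           # pooled mean square error
    df_error = n_total - k
    for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
        se = np.sqrt(mse * (1 / len(a) + 1 / len(b)))
        t = (a.mean() - b.mean()) / se
        p = 2 * stats.t.sf(abs(t), df_error)
        print(f"{name_a} vs {name_b}: t = {t:.2f}, P = {p:.4f}")
```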
We next determined effects of these 14 compounds (Table 3) on secretion of TNF-α using concentrations between 2.5 μM -160 μM for all the compounds except acetylsalicylic acid (10 μM -640 μM) in LPS-induced TG-elicited peritoneal macrophages obtained from 8-week-old female PPAR-α knockout mice.There were dose-dependent increases in the secretion of TNF-α by (-) corey lactone, ouabain, ampicillin and ascorbic acid, and decreases by rest of compounds compared to respective controls (Table 4), which were similar to 20S activity (Table 3) as observed earlier.The most effective doses for the increases or decreases were between 20 μM -40 μM for all the compounds, except acetylsalicylic acid (aspirin) showed maximum decrease with 320 μM (Table 4).The results of these compounds are similar as reported for dexamethasone, mevinolin, δ-tocotrienol, riboflavin and quercetin-HCL in LPS-induced peritoneal macrophages of PPAR-α knockout mice, which are different as compared to its control C57BL/6 (Wild Type) mice [1].The thioglycolate-elicited peritoneal macrophages of each mice were adhered to the bottom of 100 mm tissue culture plates (10 7 cells/well in 1.0 ml media) for 4 h, supernatants were removed, and the cells were washed extensively with medium three times.The cells were cultured overnight in fresh medium after the final wash.After overnight inc at 37 o C in presence of 5% CO 2 for 4 h.The cells were treated with various compounds (100 μl, dissolved in 0.2% DMSO) of various concentrations and LPS (10 ng/ml; 400 μl) was added to culture solution (LPS final concentration, 10.0 ng /μl).The supernatants were collected after 4 h of incubation at 37 o C in presence of 5% CO 2 for 4 h.The supernatants were assayed for TNF-α by using ELISA assay kit.Cells viability were > 95% in all treatments 4: Effects of selected compounds on the secretion of TNF-α (pg /ml) in LPS-stimulated thioglycolate-elicited peritoneal macrophages obtained from 8-week-old female PPAR-α knockout mice 1 The above results prompted us to determine the impact of at least a single dose (20 μM) of lactacystin, thiostrepton, 2-hydroxyestradiol, 2-methoxyestradiol, and 40 μM for remaining compounds in LPS-induced TG-elicited peritoneal macrophages obtained from 8week-old female C57BL/6 (Wild Type-WT; its control group).Most of the compounds showed significant decreases 33% to 54% (P < 0.01 -0.001) in the secretion of TNF-α in peritoneal macrophages of C57BL/6 (Wild Type, control group) female mice (Table 5; Figure 2A). Figure 2A, B: Effects of selected compounds on the secretion of TNF-α in LPS-induced TG-elicited peritoneal macrophages of 8-week-old female C57BL/6 (A), and PPAR-α knockout (B) mice TG-elicited peritoneal macrophages were prepared from 8-week-old female C57BL/6 (Wild Type; 2A), and PPAR-α knockout (2B) mice as described previously [1].The macrophages of each mouse were adhered to the bottom of 12 well plates (1 x 10 7 cells /well in 1 ml media) for 4 h.After 4 h, supernatants were removed, and the cells were washed extensively with medium three times.The cells were cultured overnight in fresh medium after final wash.After overnight incubation at 37 o C in presence of 5% CO 2 , the cells were treated with various compounds (100 μl dissolved in 0.2% DMSO) of different concentrations of 3. 
lactacystin Similarly, we have determined the effects of same 14 compounds (Table 3) on production of nitric oxide (NO) using concentrations between 20 μM -40 μM for all compounds except acetylsalicylic acid (320 μM) in LPS-induced TG-elicited peritoneal macrophages obtained from 8-week-old female PPAR-α knockout mice.There were dose-dependent decreases in production of nitric oxide by all compounds in this system (Table 6).The significant (P < 0.01 -0.001) reduction in production of nitric oxide was observed with (-) corey lactone (50%), rifampicin (47%), 2-hydroxyestradiol (58%), 2-methoxyestradiol (55%), 25-hydroxycholesterol (63%), nicotinic acid (45%), and trans-resveratrol (50%) compared to controls (Table 6, Figure 3).The treatment with acetylsalicylic acid (aspirin) showed 23% (P < 0.01) decrease, even with a very high dose (320 μM) compared to control (Table 6, Figure 3).Perhaps, that is why high dose of > 200 mg/d aspirin are prescribed most of the time, because acidic pH may have inhibited the activity.The thioglycolate-elicited peritoneal macrophages (1 x 10 7 cells/well in 500.0μl medium) of each mice were adhered to the bottom of 100 mm tissue culture plates for 4 h.After 4 h, supernatants were removed, and the cells were washed with medium three times.The cells were treated with various concentrations of each compound (100 μl dissolved in 0.2% DMSO), and induced with LPS (10 ng /well; 400 μl).The assay mixtures were incubated at 37 o C in presence of 5% CO 2 for 36 h.The supernatants were assayed for the production of nitric oxide by measuring the amount of nitrite using Griess reagent.Data are presented as the percent of nitric oxide (NO) levels compared to control 6: Effects of selected compounds on the production of nitric oxide (NO; μM) in LPS-induced thioglycolate-elicited peritoneal macrophages of 8-week-old female PPAR-α knockout mice 1 The earlier experiments showed dose-dependent increases with treatment of (-) corey lactone, ouabain, ampicillin and ascorbic acid in secretion of TNF-α and decreases in induction of nitric oxide with all remaining 10 compounds (Tables 5,6) tested in TGelicited peritoneal macrophages derived from PPAR-α knockout female mice.Therefore, we thought of value to estimate timedependent effects of these compounds (Tables 5,6) for secretion of TNF-α by using identical conditions using same macrophages, as described in previous paragraphs.In the present experiment, we have checked the effects of these compounds (Tables 5,6) after incubation of 1 h. 
2 h, and 3 h only, because the estimation of TNF-α had been carried out earlier after incubations of 4 h. The treatments with (-)-corey lactone, ouabain, ampicillin, and ascorbic acid showed time-dependent increases in the secretion of TNF-α, varying between 104% and 130%, as shown in Table 7. The remaining ten compounds showed significant time-dependent decreases in TNF-α secretion, ranging from 23% at 1 h to 43% at 3 h (Table 7). Among these compounds, the maximum decreases were observed with thiostrepton (43%) and 2-hydroxyestradiol (41%) (Table 7). When these increases or decreases were compared with the values of TNF-α secretion obtained earlier after 4 h of incubation (Table 5), the increases were slightly higher (127% to 190%) and the decreases were closely similar to those at 3 h of incubation, indicating that the 4 h incubation time used in the earlier experiments was appropriate for these biomarkers for all these compounds. The thioglycolate-elicited peritoneal macrophages of each mouse were adhered to the bottom of 100 mm tissue culture plates (10⁷ cells/well in 1.0 ml medium) for 4 h. After 4 h, the supernatants were removed, and the cells were washed extensively with medium three times. The cells were cultured overnight in fresh medium after the final wash. After overnight incubation at 37 °C in CO₂, the cells were treated with the various compounds (dissolved in 0.2% DMSO) at various concentrations, and LPS (10 μl of a 1.0 μg/ml stock solution) was added to the culture solution (LPS final concentration, 10.0 ng/ml). The supernatants were collected after 4 h of incubation at 37 °C in CO₂ to carry out the TNF-α ELISA assay; cell viability was > 95% in all treatments.

Treatment with (-)-corey lactone, ouabain, ampicillin, and ascorbic acid significantly up-regulated mRNA expression of TNF-α (31%, 36%, 60%, and 46%, respectively) compared to the control in LPS-induced TG-elicited peritoneal macrophages derived from PPAR-α knockout female mice (Table 8, Figure 4). On the other hand, treatment with the rest of the compounds markedly down-regulated gene expression of TNF-α and the other biomarkers. Each of the above compounds significantly down-regulated gene expression of IL-1β (5% - 69%), IL-6 (4% - 37%), and iNOS activity (11% - 33%) compared to the control (Table 8, Figures 5-7). Lactacystin, the positive inhibitor tested in macrophages obtained from PPAR-α knockout mice, markedly down-regulated mRNA expression of TNF-α (31%), IL-1β (69%), IL-6 (84%), and iNOS activity (39%) compared to the respective controls (Table 8, Figures 4-7). Treatment of these macrophages with acetylsalicylic acid down-regulated gene expression of these four biomarkers less profoundly than the other compounds (10%, 13%, 9%, and 11%, respectively, compared to controls; Table 8). In summary, the results of the mRNA expression studies were generally consistent with those of TNF-α secretion and NO production, although the enhanced secretion of TNF-α by LPS-induced macrophages from PPAR-α knockout mice treated with (-)-corey lactone, ouabain, ampicillin, and ascorbic acid was not explained by a corresponding increase in mRNA levels. The thioglycolate-elicited peritoneal macrophages of each mouse were adhered to the bottom of 12-well plates (10⁶ cells/well in 1.0 ml medium) for 2 h. After 2 h, the cells were treated with 100 μl of the various compounds (dissolved in 0.2% DMSO) and LPS (10.0 ng/well in 400 μl), and incubated at room temperature for 4 h. Total RNAs were extracted from each cell pellet, and real-time PCR was conducted to quantitate the TNF-α, IL-1β, IL-6, and iNOS genes from each experiment. Data are presented as the percent of
mRNAs of the genes analysed compared to respective controls The experiments described above demonstrated that out of fourteen compounds, (-) corey lactone, ouabain, ampicillin and ascorbic acid showed increases in the secretion of TNF-α and remaining compounds resulted decreases in LPS-induced TGelicited peritoneal macrophages from PPAR-α knockout female mice.In contrast, all these compounds showed decreases in nitric oxide production in LPS-induced TG-elicited peritoneal macrophages derived from PPAR-α knockout female mice.In order to determine whether these changes resulted from alterations in transcription of the relevant genes, we measured the effect of all these compounds on mRNA levels for various cytokines (TNF-α, IL-1β, IL-6) and iNOS enzyme activity in LPS-induced.TG-elicited peritoneal macrophages from 8-week-old PPAR-α knockout female mice.The concentrations (in μM) of each compound and conditions were similar to those used in earlier experiments, and macrophages were treated with selected compounds plus LPS (10 ng/well), and incubated for 4 h.Total cellular RNA was then extracted by using RNAeasy mini kit and reverse-transcribed, and gene analyses were carried out by RT-PCR.The primary objective in the present study was to evaluate anti-inflammatory and pro-inflammatory properties of several compounds in macrophages derived from PPAR-α knockout mice.As a result of these studies we have identified several compounds that could potentially decrease or increase levels of secretion of inflammatory cytokines and production of nitric oxide that may contribute to treatment of several human diseases.First, we have demonstrated that thiostrepton, rifampicin (broad spectrum antibiotics), 2-hydroxyestradiol, 2-methoxyestradiol, 25-hydroxycholesterol, nicotinic acid, vitamin D 3 , and trans-resveratrol caused significant decreases in chymotrypsin-like activity of 20S rabbit muscle proteasomes, and followed by significant increases with (-) corey lactone, ouabain, ampicillin, ascorbic acid, codeine, amiloride-HCL chymotrypsin-like activity of 20S rabbit muscle proteasomes compared to respective control groups.These results indicated that there are two distinct sets of compounds, one set of compounds causes decrease, and second set of compounds resulted in increase in 20S activity.This conclusion was further supported in the secretion TNF-α by both two set of compounds in LPS-induced TG-elicited peritoneal macrophages derived from PPAR-α knockout mice as observed for 20S activity.In marked contrast to observations with TNF-α, all compounds suppressed nitric oxide (NO) production in LPS-induced TG-elicited peritoneal macrophages from PPAR-α knockout mice.The effect of both sets of compounds on gene expression of TNF-α, IL-1β, IL-6, and iNOS in LPS-induced macrophages from PPAR-α knockout mice were down-regulated or up-regulated, which were generally consistent with at the protein levels of secretion of TNF-α and production of nitric oxide (NO). 
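The gene-expression results above are reported as percentages of the LPS-only control. Real-time PCR data of this kind are commonly reduced from threshold-cycle (Ct) values with the 2^-ΔΔCt method; the sketch below shows that reduction under the assumption of a housekeeping reference gene (GAPDH is used here purely as a placeholder, since this excerpt does not name the study's reference gene) and illustrative Ct values.

```python
# Relative gene expression from real-time PCR threshold cycles (Ct) using the
# 2^-ddCt method, then expressed as percent of the LPS-only control.
# GAPDH is assumed as the reference gene; all Ct values are illustrative.

def relative_expression(ct_target, ct_reference, ct_target_ctrl, ct_reference_ctrl):
    d_ct_sample = ct_target - ct_reference           # normalize to reference gene
    d_ct_control = ct_target_ctrl - ct_reference_ctrl
    dd_ct = d_ct_sample - d_ct_control                # compare to the control condition
    return 2 ** (-dd_ct)                              # fold change vs. control

# Illustrative Ct values for TNF-alpha and GAPDH.
control = {"TNFa": 22.1, "GAPDH": 17.0}               # LPS only
treated = {"TNFa": 23.6, "GAPDH": 17.1}               # LPS + test compound

fold = relative_expression(treated["TNFa"], treated["GAPDH"],
                           control["TNFa"], control["GAPDH"])
print(f"TNF-alpha expression: {fold:.2f}-fold of control ({100 * fold:.0f}% of control)")
```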
Discussion As described earlier that PPAR-α knockout mice have exaggerated inflammatory responses to a variety of stimuli, because activation of PPAR-α leads to anti-inflammatory effects [15].The mechanisms leading to these exaggerated inflammatory responses are believed to be at least partially attributable to increased NF-κB activity [15,16].Consequently, one would expect TNF-α secretion by LPS-induced macrophages from PPAR-α knockout mice to be highly up-regulated and relatively resistant to inhibition by some of these compounds that degrade p-IκB, and decrease NF-κB activity.Therefore, (-) corey lactone, ouabain, ampicillin, ascorbic acid, codeine, amiloride-HCL failed to suppress TNF-α secretion by LPS-induced macrophages from PPAR-α knockout mice. Apart from present results, importance of other inflammatory biological functions of these compounds were realized by checking the published reports of other investigators .Lactacystin is a potent proteasome inhibitor and played important role in ubiquitin-proteasome pathway in various cellular inflammatory processes, such as cell cycle, apoptosis, and the degradation of regulatory or membrane proteins [17].(-) Corey lactone is a hydroxyl-lactone intermediate for the synthesis of prostaglandins and prostaglandin analogs, and ouabain plays an active role in the transport of Na + -K + -ATPase in the brain, and also involved in the regulation of several inflammatory cell functions (proliferation, hypertrophy, apoptosis, mobility, and metabolism) [18,19].Ouabain is endogenously produced in mammals and circulates in plasma as a hormone in normal condition and disease.It induces Na + -K + -ATPase signaling in cytogenesis of autosomal dominant polycystic kidney disease, hypertension, and also provides cardioprotection against stressful stimuli such as ischemia [20,21]. Thiostrepton is a natural potent proteasome inhibitor, and induces apoptotic cell death in human cancer cells.It also induces oxidative and proteotoxic stress by up-regulating the stress-related genes as well as endoplasmic reticulum stress genes [22].Whereas, rifampicin is one of most potent broad spectrum anti-inflammatory antibiotics against bacterial pathogens used to treat tuberculosis by inhibiting the bacterial RNA polymerase by blocking the pathway of elongating RNA in humans [23].Ampicillin is a potent anti-inflammatory antibiotic to treat respiratory tract infections, urinary tract infections, meningitis, salmonellosis, and endocarditis [24]. 
Annex Publishers | www.annexpublishers.comVolume 3 | Issue 1 2-Hydroxyestradiol plays an important inflammatory role in breast carcinogenesis by increased cell proliferation and formation of reactive oxygen species, which is due to increase deoxyribonucleic acid mutations [25].2-Methoxyestradiol, an estrogen hormone metabolite is also a potent cancer chemotherapeutic agent, and causes induction of apoptosis in transformed and exhibit an antiproliferative effect on tumor growth.Moreover, its anticancer activity has been attributed to its anti-tubulin, anti-angiogenic, pro-apoptotic and ROS induction properties [25,26].25-Hydroxycholesterol plays important role in maintenance of cholesterol homeostasis for supplying tissues with proper amount of cholesterol and prevent accumulation that may affect health of the individual [27].Moreover, 25-hydroxycholesterol and one of its metabolites are also involved in regulation of immunity.Therefore, 25-hydroxycholesterol may be much more important as regulator of immunity than as a regulator of cholesterol metabolism in humans [27]. Acetylsalicylic acid (aspirin) is most widely used anti-inflammatory medicine "over-the-counter" in world [28].Aspirin has been used as analgesic (pain reliever) by blocking the action of COX enzyme, which produces prostaglandins needed for pain response; therefore, it is used to treat headaches, aches and pains [28].Its anti-inflammatory property makes it an effective medicine to treat arthritis and other rheumatologic diseases [28,29].It also reduces the risk of stroke and heart attacks by reducing the platelet aggregation [30].It has been established that platelet aggregates adhere to walls of blood vessels and block blood flow resulting in heart attack.Aspirin is very effective in thinning the blood, thus preventing stroke in humans [30].It has a very important property of antipyretic (fever-reducing) by acting on hypothalamus (a small gland in the brain) that helps to regulate the body temperature [31].Aspirin has been used as inflammatory agent to treat various types of cancers, diabetes, and Alzheimer disease (dementia) in humans [32,33]. Ascorbic acid is involved many cellular reactions.Its mechanism of action is involved in synthesis of collagen, and its major function seems to keep prosthetic metal ions in their reduced form [34].It has a very good anti-oxidant property, and protects several tissues from harmful oxidative products by keeping certain enzymes in their required reduced form [34]. 
Nicotinic acid is synthesized from essential amino acid (tryptophan) in plants and animals.The important role of this vitamin is to lower cholesterol and particularly triglycerides level in blood [35].It also plays important roles in diabetes, arthritis, and blood pressure.This vitamin is required for healthy state of nervous system, and is essential for synthesis of sex hormones (estrogen, progesterone, testosterone), andocortisone, thyroxin and insulin [35].There are two major vitamin D derivatives, ergocalciferol (D 2 ), and cholecalciferol (D 3 ) found naturally in fish-liver oil, egg yolks, and liver and vitamin D 3 is fat soluble..These vitamins are photosynthesized in skin of vertebrates by solar ultraviolet radiation, and transported to liver, where it is converted to 25-hydroxyvitamin D [36,37].This 25-hydroxyvitamin D is converted into 1,25-hydroxyvitamin D in the kidneys, and its biological function is due to 1,25-hydroxyvitamin D in humans [38].The production of 25-hydroxyvitamin D and 1,25-hydroxyvitamin D is tightly regulated in the liver and kidneys.The activity of vitamin D-hydroxylase is down-regulated by vitamin D and its metabolites, thus limiting its increase in the circulating concentration of 25-hydroxyvitamin D following intakes (fortified milk or food products) or production of vitamin D after exposure to sunlight [38].trans-Resveratrol is a potent inflammatory agent, found in red grapes, and well known for its chemo-preventive efficacy against several types cancers, and also involves in several other cellular processes as described in the present study.All these published studies clearly indicate the importance of these compounds in the area of cardiovascular, diabetes, and various types of cancers [39,40]. In short, our present results demonstrated that thiostrepton, rifamcipin, 2-hydroxyestradiol, 2-methoxyestradiol, 25-hydroxycholesterol, nicotinic acid, vitamin D 3 , and resveratrol are potent inhibitors for the inhibition of production of nitric oxide tested in vitro.These compounds also down-regulated inflammatory cytokines and gene expression of TNF-α, IL-1β, IL-6, and iNOS enzyme activity after response to LPS.The possible mechanism as reported earlier that these compounds blocked LPS-induced activation of NF-κB, and also blocked TNF-α induced phosphorylation and degradation of IκBα through the inhibition of IκBα kinase activation [41]. 
The present study shows that two distinct sets of compounds were tested. The first set, consisting of (-)-corey lactone, ouabain, ampicillin, ascorbic acid, codeine, and amiloride-HCl, comprises potent activators of the chymotrypsin-like activity of 20S rabbit muscle proteasomes, and the second set, consisting of thiostrepton, rifampicin (a broad-spectrum antibiotic), 2-hydroxyestradiol, 2-methoxyestradiol, 25-hydroxycholesterol, nicotinic acid, vitamin D₃, and resveratrol, comprises potent inhibitors of the chymotrypsin-like activity of the 20S rabbit muscle proteasome. These two sets had parallel effects on the levels of TNF-α secretion tested in vitro using LPS-induced TG-elicited peritoneal macrophages of PPAR-α knockout mice. However, all compounds also blocked the production of nitric oxide, through effects on the activation of NF-κB and the degradation of p-IκB as described earlier [1]. Similarly, these compounds either up-regulated or down-regulated gene expression of TNF-α and down-regulated nitric oxide production and the expression of IL-1β, IL-6, and iNOS tested in LPS-induced TG-elicited peritoneal macrophages of PPAR-α knockout mice. The results of the present study thus provide two sets of compounds: pro-inflammatory compounds (which might be useful for the treatment of other, non-inflammatory diseases) and anti-inflammatory compounds (which might be useful for the control of diabetes and cardiovascular disease).

Table 3: Effects of various compounds on the chymotrypsin-like activity of the 20S rabbit muscle proteasome. Chymotrypsin-like activity was assayed using the synthetic peptide substrate III (Suc-Leu-Leu-Val-Tyr-AMC) in 0.02 M Tris buffer, pH 7.5, with rabbit muscle proteasome; the incubation time was 30 minutes.
Table 7: Effects of selected compounds on the time-dependent secretion of TNF-α in LPS-induced thioglycolate-elicited peritoneal macrophages of 8-week-old female PPAR-α knockout mice.
Table 8: Effects of selected compounds on gene expression of TNF-α, IL-1β, IL-6, and iNOS in 8-week-old female PPAR-α knockout mice.
Figure 5: The effect of different compounds on gene expression of interleukin-1β (IL-1β) in LPS-induced TG-elicited peritoneal macrophages from 8-week-old female PPAR-α knockout mice; gene expression of IL-1β was quantified exactly as described in Figure 4.
Figure 7: The effect of different compounds on gene expression of the iNOS enzyme in LPS-induced TG-elicited peritoneal macrophages from 8-week-old female PPAR-α knockout mice; gene expression of iNOS was quantified exactly as described in Figure 4.
Table footnotes: DMSO = dimethyl sulfoxide; percentages of control values are in parentheses; values are averages of triplicate analyses of each sample; values in a column sharing a common asterisk are significantly different at *P < 0.01 or **P < 0.001.
An Analysis of Promoting High-Quality Development of Regional Economy by Vigorously Development Digital Economy : Digital economy has become a driving force for the high-quality development of regional economy that cannot be ignored. Under the background of the continuous progress and promotion of digital technology, the digital economy industry is developing rapidly, and more enterprises are turning their attention to the digital transformation. Digital economy has become a new engine to promote the regional economy. Digital economy can bring new impetus to regional economy and enhance regional economic competitiveness and development level. At the same time, the digital economy can also promote the optimal allocation of resources, improve efficiency, reduce costs, and bring new vitality to the regional economy. Micro-Level Level On the basis of "intelligent + enterprise" guidance, the use of advanced digital technology, can greatly improve the operation of "intelligent + enterprise", so that it has more operability, and can be more accurate matching, further reduce the production and transaction costs of enterprises, enhance its market competitive power. (1) With the rapid development of artificial intelligence, many companies have begun to purchase advanced robots, manipulator and intelligent machinery to reduce the burden on traditional workers, and also greatly reduce the employment cost, thus effectively alleviating the employment dilemma, greatly improving the work efficiency and reducing the production cost. (2) In addition, the company can also use cloud computing technology to realize the usual management work, the previous tedious, time-consuming, meaningless daily affairs, as well as more interesting new technologies, thus greatly improving the operation of the company. With the continuous innovation of science and technology, enterprises can use these scientific tools to get the best business opportunities, in order to invest the most sufficient time, the most precious energy, the most advanced resources, and the most advanced equipment, to constantly improve the service quality, and meet the requirements of consumers, it will bring great improvement for the comprehensive competitive force of enterprises. In addition, the use of big data, mobile Internet and other cutting-edge science can help traditional enterprises overcome the simplification of products, and quickly and accurately find products to meet the current market needs, which will greatly reduce the cost of R & D, marketing, management and other aspects of enterprises, and bring considerable benefits for them. There are three major improvements needed: (1) removing barriers between the company and potential customers, and being able to understand and meet needs faster. (2) Enhance the company's market insight ability.(3) With the development of the mobile Internet, many companies have begun to adopt this new business method to promote the development of the market. These methods can not only help companies reduce their operating costs, but also enhance their market competition. Through these methods, companies are able to escape previous exhibitions and face-to-face visits to market more effectively. With the development of society, the needs have become more and more complex, and the application of big data can help companies to discover and promote their products faster, thus enhancing their competitiveness. 
In addition, the Internet can also find more economically valuable resources and apply them to the + Internet, so as to promote more competitive new business models, such as sharing economy and O2O, so as to greatly expand the scope of marketing, enhance its profitability, and achieve better market positioning [3] . Macro Level On a global scale, the rise of digital economy has brought great changes to various regions, not only promoting the continuous improvement of production technology, but also bringing comprehensive changes to the industrial structure, promoting the economic growth and promoting the social sustainability of various regions. Through the introduction of digital economy, the factor allocation of regional industries can be effectively improved, so as to transcend the traditional way of relying on labor, land and resources to realize the effective utilization of various elements, so as to achieve the best production effect. With the rapid development of digital economy, especially the emergence of blockchain technology, local governments and enterprises have brought more possibilities, which helps to optimize the distribution and circulation of factors, reduce the consumption of time and space, and help to promote the growth of local wealth. This will help promote the innovation and development of the financial industry, and make the financial products can be improved, thus bringing more investment opportunities to the local economy. In addition, local governments will also promote cooperation between local enterprises, and adopt the most cutting-edge anti-counterfeiting technology to protect local security, so as to improve the reliability of local economy, effectively alleviate the financing difficulties of local industries, and promote the development of local industries. In a word, with the help of optimizing the distribution of elements, strengthening the reform of the local financial system, and broadening the space of innovation, these can promote the sustainability and stability of the local economy at the macro level. Thanks to the booming development of technology, such as big data, new technologies, mobile networks, virtual reality, and blockchain, these cutting-edge technologies are changing lives and making society more prosperous. The application of these sciences and technologies will not only help to improve our living standards, but also help to promote social change. The Digital Economy Promotes Sustainable Industrial Growth In the past, due to the lag of technology and insufficient production efficiency, the traditional industries could not achieve sustainable growth. However, with the rapid rise of the digital age, big data technology, the popularization of artificial intelligence technology, makes the industry can be more intelligent, independent, thus greatly improve the industrial structure, greatly improve the work efficiency, also greatly improve the workers' working environment, so as to realize the sustainable growth of industry. Specifically, the development of the digital economy has changed the long-term dependence of traditional industries on human resources and reduced the marginal cost. At the same time, with the support of artificial intelligence technology, the mutual combination of digital and resources can be realized, and the efficiency of services and product quality can be improved based on the Internet of Things. 
Especially in some agriculture-oriented areas, digital economy can promote the deep integration of mechanization and informatization, so that land, the main factor of production, is fully applied, and promote the improvement of grain production efficiency and quality. Digital Economy Reduces Market Transaction Costs In the process of market transactions, more or less will produce a certain cost. The transaction cost theory can explain the reasons for a large number of idle resources in the society. In the process of product trading, information asymmetry is a common problem, which is also an important reason for the increase of transaction costs, leading to the continuous improvement of product transaction value, the effectiveness of resource allocation, and the market efficiency in a relatively low state. The emergence and development of the digital economy can effectively alleviate this problem and reduce the costs incurred in trading activities. At present, from product production to sales to aftersales service, digital economy has been widely penetration, this to a large extent, solve the problem of information asymmetry, make the trading parties can trust each other, at the same time reduce the risk of the trading process, eventually make the market transaction costs are reduced, created conditions for regional economic development with high quality. The Digital Economy Shares the Data With the support of the digital economy, the government can build an intensive big data platform in the process of carrying out various management work, based on which the relevant data and information can be made public, and the sharing of resources can be realized on this basis. With the popularity of the Internet, people can easily access the data and information released by the government and understand the latest policies and regulations. Therefore, the digital economy development strategy has been effectively implemented. To this end, we should establish an integrated platform for data resources, strengthen the construction of infrastructure, and create "Internet + government services", so as to promote the digitalization and informatization of all walks of life, and realize the transformation from the traditional information age to the intelligent age. With the disclosure of government data, social resources can be fully utilized, from which enterprises can obtain valuable information, so as to make better and correct decisions, and thus bring greater economic benefits. Consumer Demand Continued to Drive GDP Growth The rapid development of digital economy has provided new impetus and opportunities for economic development, but also promoted the rise of consumer demand. Driven by the digital economy, consumer demand shows a trend of diversification and individuation, and the demand for products and services is more pursuit of high quality, high added value and high experience. This change in consumer demand not only promotes the development of the digital economy, but also plays a positive role in the overall economic growth and transformation and upgrading. On the one hand, the development of digital economy brings consumers more convenient, effective and personalized services and commodities, enhancing consumer satisfaction and loyalty; on the other hand, the development of digital economy also provides more innovation space and profit opportunities for enterprises, and promotes the development and innovation of enterprises. 
This change in consumer demand and the development of the digital economy reinforce each other, continuously contributing to GDP growth. In the long run, the digital economy will provide more new opportunities and challenges for the upgrading of consumer demand, promote consumer demand to continue to drive GDP growth, and promote high-quality economic development. Therefore, local governments should actively promote the development of digital economy, promote the upgrading of consumer demand, and inject new impetus and vitality into economic development [4] . Per Capita Disposable Income Grew Rapidly The development of the digital economy has brought new opportunities and new impetus to the increase of per capita disposable income. The rapid development of the digital economy brings new vitality to the economic development, driving the growth of new industries, increasing employment opportunities, and thus promoting the growth of per capita disposable income. In addition, the development of digital economy brings more convenient, effective and personalized services and commodities to consumers, improving people's quality of life and consumption ability, and thus further promoting the increase of per capita disposable income. In addition, the development of the digital economy promotes the optimization of the labor market, improves the value of the labor force and the wage level of the people, thus boosting the growth of per capita disposable income. The development of digital economy and the increase of per capita disposable income promote each other. The rapid development of digital economy brings greater opportunities and challenges to people's life and work, and also provides a broader space for people. To this end, local governments should actively promote the development of the digital economy, strengthen talent training, and enhance their innovation capacity, so as to provide more solid support for per capita disposable income. Growth of Online Consumption is Becoming Stable The rapid development of the digital economy has injected new impetus and opportunities into online consumption. With the popularization of digital technology and information network, network consumption has shown a trend of rapid growth and has become an important engine of economic growth. The development of digital economy has provided a more convenient, efficient and safe consumption environment for online consumption, and at the same time has provided online consumers with more diversified and rich consumption choices. It has brought about a broader market space and promoted the rapid growth of online consumption [5] . It provides more efficient payment and logistics services, and improves the convenience and satisfaction of online consumption. The growth trend of online consumption has not only promoted the development of the digital economy, but also has a positive impact on the transformation and upgrading of the whole economy. The growth trend of online consumption will further promote the development of the digital economy and promote the high-quality development of the economy. Therefore, local governments should actively promote the development of digital economy, strengthen the supervision and protection of online consumption, and provide more powerful support for the healthy development of online consumption. The New Force of Consumption is Emerging With the advent of the digital era, China's market is undergoing profound changes. 
The people of Generation Z are playing an extremely important role with their unique vision and lively thoughts. They have a very dynamic consumption desire, which is promoting the market to play its strongest vitality. With the emergence of Gen Z, the younger generation of the post-1995 and post-2000 generations have the opportunity to participate in the daily life of today's society, and their consumption behavior will become more lively in the future. To Achieve Greater Development by Promoting the Deep Integration of IT and the local Real Economy In the context of the rapid development of digital economy, the regional real economy should be fully integrated with information technology, and promote the integration between ICT and the real industry on this basis. First of all, IT technology should be integrated into the whole process of enterprise development, and new attempts should be made in artificial intelligence manufacturing, industrial Internet and agricultural modernization, so as to make the supply system of the digital economy more perfect. Secondly, the Internet industry, digital technology service industry and traditional industries should be integrated with each other to achieve the effect of cross-border integration. The three industries have different characteristics, and they can do their best on the basis of mutual integration and communication. Regional governments build a sound system based on the three levels of policy, industry and technology, and cultivate new industrial subjects of digital economy based on the principle of integrated development. Improving and Upgrading the Regional Economic Structure In order to give full play to the advantages of digital economy, it is necessary to promote the optimization and upgrading of regional economic structure. Specifically, it is necessary to seize the opportunity of digital technology, innovate intelligent equipment and intelligent technology, and at the same time, use modern information technology to improve the efficiency of resource integration, optimize the production process of manufacturing and other industries, and in this way, the value of data can be realized, and then promote the development of regional economy. At the same time, it is necessary to improve the operation efficiency of knowledge and information in the market, improve the efficiency of inter-regional transactions, so that the regional space can achieve the effect of fine division of labor. At the same time, the regional workflow should be reasonably allocated, so as to finally achieve the goal of optimizing and upgrading the economic structure. Building an Environment Conducive to the Development of Digital Technology The high-quality development of regional economy needs the digital economy as the driving force, while the development of digital economy needs to rely on the innovation of digital technology. First, the development of digital technology should be promoted based on legal and policy perspectives. In order to promote the development of local digital economy, it is necessary for regional governments to formulate corresponding policies to clarify the industrial layout and future strategic goals of digital economy. On this basis, it is also necessary to develop detailed development plans, and establish information standards and technical specifications. 
Secondly, the mode of "mutual assistance between government, industry, university and research" should be constructed to realize the transformation of scientific and technological achievements of digital economy productivity. In this process, the scientific research institutions of universities should be taken as the foundation, market-oriented development as the guidance, and industrialization development should be taken as the goal, seek the win-win situation of all partners, build a new cooperation system, and jointly promote the innovation of digital technology and the development of digital economy. Conclusion To sum up, digital economy can promote the circulation of regional production factors, improve product quality, reduce market transaction costs, promote the disclosure of government data and realize resource sharing, and ultimately promote the high-quality development of regional economy. In the future, advanced digital technology will be used to promote the sustainability of the digital economy, open up new business opportunities, launch new products and business models, and create a sound and all-round ecosystem supporting the development of the digital economy.
Blockchain-Based Smart Farm Security Framework for the Internet of Things Smart farming, as a branch of the Internet of Things (IoT), combines the recognition of agricultural economic competencies and the progress of data and information collected from connected devices with statistical analysis to characterize the essentials of the assimilated information, allowing farmers to make intelligent conclusions that will maximize the harvest benefit. However, the integration of advanced technologies requires the adoption of high-tech security approaches. In this paper, we present a framework that promises to enhance the security and privacy of smart farms by leveraging the decentralized nature of blockchain technology. The framework stores and manages data acquired from IoT devices installed in smart farms using a distributed ledger architecture, which provides secure and tamper-proof data storage and ensures the integrity and validity of the data. The study uses the AWS cloud, the ESP32, the smart farm security monitoring framework, and the Ethereum Rinkeby smart contract mechanism, which enables the automated execution of pre-defined rules and regulations. As a result of a proof-of-concept implementation, the system can detect and respond to security threats in real time, and the results illustrate its usefulness in improving the security of smart farms. The number of accepted blockchain transactions on smart farming requests fell from 189,000 to 109,450 over the first three tests, while the next three testing phases showed a rise in the number of accepted blockchain transactions on smart farming requests from 176,000 to 290,786. We further observed that the less time taken to trigger the device alarm, the higher the number of blockchain transactions accepted on smart farming requests, which demonstrates the efficacy of blockchain-based poisoning attack mitigation in smart farming.

Introduction

Normally, as the population of the world grows, so does our need for agricultural improvement. Farmers work to produce crops that will provide food for people all over the world, as the economies of most countries are primarily dependent on the agricultural industry [1]. Moreover, many nations have agricultural departments that work to strengthen their country's economy, especially through agriculture. Over the past few decades, it has become clear that the growth of the IoT has revolutionized the way farming is done and advanced the operational capabilities of the agricultural sector [2,3]. The integration of the IoT into agricultural growth is known as smart farming, and it is quickly becoming the new normal, as connected devices, smart things, and robots deployed around the globe are expected to be valued at around $15.93 billion in 2028, representing a yearly growth rate of about 20.31% between 2021 and 2028 [4]. Similarly, modern agricultural frameworks are integrated into rural regions and are being targeted for cyberattacks. For example, a ransomware outbreak at the food transportation division of the meat management company JBS halted operations at 13 meat industrial sites. To remain operational, the company had to spend approximately $11 million [5]. As a result, we can all agree that security is seen as a key concern in industries such as agriculture, where the advancement of rural security measures is vital.
Poisoning attacks are a form of cyber-attack that targets the data used by IoT devices [6]. In order to damage the system or take over the devices, the attacker injects harmful or false data into the system. Data poisoning and sensor poisoning are the two basic forms of poisoning attacks. Data poisoning involves changing the data used to train or calibrate an IoT device. This can be done by introducing false data into the system or by manipulating existing data. Data poisoning aims to cause the device to generate false results. Sensor poisoning involves tampering with the sensors used by an IoT device. This can be done by physically altering the sensors or by hacking into the sensors' software. The purpose of sensor poisoning is to cause the device to collect false data. Attacks involving poisoning have a major impact on IoT systems [7]. They can cause devices to malfunction, produce inaccurate data, or even allow the attacker to take control of the device.

The cybersecurity structures now advocated in smart farming typically include food supply chain management and the testing of several capabilities through data analysis techniques and cloud computing technologies, as well as verification and authorization arrangements for sophisticated IoT devices based on machine learning/artificial intelligence [8,9]. It has also been observed that real IoT devices identified on the Internet were infiltrated and employed as a means to launch full denial-of-service (DoS) assaults and further harmful engagements, such as information leakage related to management and sensor data [10]. On the other hand, blockchain has emerged and evolved in a fascinating way and is currently being used in decentralized network systems such as the IoT [11]. Researchers have made a separate assessment of the blockchain advancement gaps for IoT security and safety difficulties, and they have advised and urged us to use blockchain-based monitoring for the general security of smart agriculture [12]. To push past the limitations of the current system and make progress on security with blockchain-based system requirements, we use blockchain arrangements to continuously handle information and store irregularities as blockchain transactions [13,14]. In this study, the AWS cloud, an Arduino gadget package with a Wi-Fi component (Figure 1), and an Ethereum smart contract were used to form an end-to-end solution.
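Data poisoning and sensor poisoning, as described above, both tend to surface as implausible readings reaching the cloud or the training pipeline. The sketch below is a minimal, illustrative plausibility filter of the kind an ESP32 gateway or cloud function might apply before a reading is accepted; the field names, limits, and step thresholds are assumptions for illustration, not part of the framework's published design.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    sensor_id: str
    soil_moisture_pct: float   # expected 0-100
    temperature_c: float

# Illustrative physical limits and maximum plausible change between readings.
LIMITS = {"soil_moisture_pct": (0.0, 100.0), "temperature_c": (-20.0, 60.0)}
MAX_STEP = {"soil_moisture_pct": 15.0, "temperature_c": 5.0}

def is_suspicious(current: Reading, previous: Reading | None) -> bool:
    """Flag readings that fall outside physical limits or jump implausibly fast."""
    for field, (lo, hi) in LIMITS.items():
        if not (lo <= getattr(current, field) <= hi):
            return True
    if previous is not None:
        for field, max_step in MAX_STEP.items():
            if abs(getattr(current, field) - getattr(previous, field)) > max_step:
                return True
    return False

prev = Reading("soil-01", soil_moisture_pct=42.0, temperature_c=21.5)
curr = Reading("soil-01", soil_moisture_pct=97.0, temperature_c=21.6)  # suspicious jump
if is_suspicious(curr, prev):
    print(f"Alarm: possible poisoning on {curr.sensor_id}; quarantining reading")
```

A filter like this only raises the alarm; in the framework described here, the alarm itself would then be recorded on the ledger so it cannot later be altered.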
Given that blockchain technology has the potential to revolutionize smart agriculture [16] by improving transparency, traceability, and efficiency in the agricultural supply chain, it is important to note that the suitability of a particular blockchain technology for smart agriculture depends on the specific requirements, scalability needs, and existing infrastructure of the agricultural system [17]. In order to select the best platform, it is important to weigh the advantages and disadvantages of each. Smart farming often uses a variety of blockchain technologies, including:

• Ethereum: Ethereum is one of the most popular blockchain platforms for decentralized applications (dApps) and smart contracts [18], as it provides a stable and adaptable environment for creating agricultural blockchain solutions. Ethereum's default cryptocurrency, Ether (ETH), enables secure and open transactions across the entire ecosystem. However, the scalability issues it faces, particularly high transaction costs and slow processing times, could limit its use in high-volume smart agriculture environments [19].
• Hyperledger Fabric: Hyperledger Fabric is an open source, enterprise-grade blockchain platform with a modular design that gives designers and developers more freedom to create and implement smart farming solutions [20].Fabric uses a permissioned network, giving users and collaborating companies limited access and privacy.It also includes pluggable consensus processes that allow for modification based on specific use cases and is focused on enterprise solutions, making it suitable for widespread smart agriculture installations. • Corda: Corda is a distributed ledger platform that emphasizes privacy and security by limiting data access to only participating parties.Corda is designed for commercial applications using "CorDapps", smart contracts that enable secure and direct transactions throughout the agricultural supply chain [21].Without exposing transaction information to the whole network, its original "notary" approach ensures consensus.Corda's emphasis on privacy and facilitating direct interaction makes it suitable for sensitive and challenging smart farming environments. • IBM Food Trust: IBM Food Trust is a blockchain-based platform designed specifically for the food sector, including agriculture.It enables end-to-end traceability of food, providing accountability and transparency.It also combines the benefits of enterprisegrade functionality and permissioned networks, using Hyperledger Fabric as the underlying blockchain technology.As a result, farmers, distributors, retailers, and consumers can use the platform to access reliable information about the origin and movement of their food.The quality and reliability of the information shared is enhanced by IBM Food Trust's integration with multiple data sources, such as IoT devices and sensors [22]. • VeChain: VeChain is a blockchain platform focused on product authenticity and supply chain management.It uses a two-token structure, with VET as the native coin and VTHO as the fee and smart contract execution token.Throughout the supply chain, VeChain provides features such as Near Field Communication (NFC) chips and QR codes to track and validate agricultural products.In addition, its ecology and adoption potential are strengthened by its links with multiple businesses and government organizations.VeChain's focus on product authenticity and supply chain management makes it ideal for ensuring food safety and quality in smart agriculture [23]. This study is expected to have a promising significance to farmers, the government, and also cybersecurity and assurance specialists as it renders various scenarios of data and information attacks that have been encountered by smart farm administrators globally.The research is also aimed at recognizing possible cybersecurity alarms in smart farming and presenting scenario-specific cyber-attacks.It also intends to provide a comprehensive evaluation of current cybersecurity analyses, as well as present a preventive measure through a blockchain technology consensus in an intelligent farming ecosystem. 
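As a concrete illustration of what a blockchain-backed preventive measure of this kind can look like at the gateway level, the sketch below hashes a security alarm and anchors it on an Ethereum test network with web3.py. The RPC endpoint, signing key, contract address, ABI, and recordAlarm function are hypothetical placeholders rather than the framework's actual contract (the paper's deployment used the Rinkeby test network, which has since been retired, so any current test network would have to stand in for it).

```python
import hashlib, json, time
from web3 import Web3

# Hypothetical endpoint, key, and contract details -- placeholders only.
RPC_URL = "https://ethereum-testnet.example/rpc"
PRIVATE_KEY = "0x..."   # gateway signing key; keep outside source code in practice
CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"
CONTRACT_ABI = json.loads("""[{"name": "recordAlarm", "type": "function",
  "stateMutability": "nonpayable",
  "inputs": [{"name": "deviceId", "type": "string"},
             {"name": "alarmHash", "type": "bytes32"}], "outputs": []}]""")

def record_alarm(device_id: str, alarm_payload: dict) -> str:
    """Hash an alarm payload and anchor the hash on-chain; returns the tx hash."""
    w3 = Web3(Web3.HTTPProvider(RPC_URL))
    account = w3.eth.account.from_key(PRIVATE_KEY)
    contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=CONTRACT_ABI)

    digest = hashlib.sha256(json.dumps(alarm_payload, sort_keys=True).encode()).digest()
    tx = contract.functions.recordAlarm(device_id, digest).build_transaction({
        "from": account.address,
        "nonce": w3.eth.get_transaction_count(account.address),
        "gas": 120_000,
        "gasPrice": w3.eth.gas_price,
    })
    signed = account.sign_transaction(tx)
    return w3.eth.send_raw_transaction(signed.rawTransaction).hex()

alarm = {"device": "esp32-field-07", "event": "sensor_health_degraded",
         "ts": int(time.time())}
print("Alarm anchored in tx", record_alarm(alarm["device"], alarm))
```

Anchoring only the hash keeps each transaction small while still making any later tampering with the stored alarm record detectable, which is the property relied on in the evidence and insurance scenarios discussed next.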
Data breaches caused by improperly set up access controls are a security concern associated with cloud-based data storage. Blockchain, however, makes it possible to store documents securely without spending money on storage, whereas our scalable cloud approach provides answers to a number of smart farm security use cases. In addition to ensuring the security of insurance claims and data-corruption-free security investigations to secure farm assets, the immutability of blockchain transactional alarm data can be used as evidence in court cases. For example, natural disasters can have a significant impact on agricultural land, and transaction data can be used to store evidence of the when, what, where, and how, which can then be used in insurance claims. Once a transaction is recorded on the blockchain, a farm can no longer claim ownership. Moreover, sensors continuously track the physical health of farms and transmit this information so that farmers can optimize their operations to improve yields, reduce losses, and increase productivity. The health of the sensors on these devices must be monitored regularly, as they are vulnerable to passive and active attacks. A mobile application must also alert the farmer when the health of the device is compromised. The farmer can then identify the root cause of the problem and fix it. Literature Review Light was shed [24] on security issues in the IoT in general and smart agriculture in particular, where the production of coatings and notable conceivable cyber threats in smart agriculture were presented. Additionally, their research provides certain cyber-attack situations characterized by data, features, and other attacks. A predominant attack known as "Night Dragon" is a framework that allowed network intruders to exfiltrate huge amounts of data from several petrochemical corporations. The growing number of connected devices has created many safety and security challenges within the smart farming ecosystem in rural areas, because farmers cannot endure severe damage to their crops. Maria and partners' report [25], where they explained dangers and potential vulnerabilities in the emerging IoT terrain, highlights the importance of data security in smart farming. Their research focused on security, intelligence, and accessibility models for information security in agriculture, as well as unique advances in smart farm systems, such as on-farm equipment verification, inaccessible sensing approaches, and machine learning. Moreover, the risks associated with the use of IoT technology in agriculture have been clearly identified [26].
Recently, an expert from the security firm Sucuri [27] discovered that a DoS botnet may send 50,000 HTTP requests per second, causing DDoS attacks on many domains. Cloud computing's integration with smart farming is critical for establishing IoT identifying information capacity and analysis, as well as tallying big data demands. Thus, researchers proposed strategies for solving IoT-based smart farming problems using cloud computing [28]. Furthermore, a research paper proposed a blockchain-based device management system for smart city security considerations, using a private blockchain to verify device integrity and record results. It provides four end node management protocols, including two for heavy and light end nodes. Also, the framework provides bi-directional update protocols, device firmware monitoring, information sharing, and security threat response. The architecture has the potential to dramatically increase network availability and security, potentially extending to high-reliability applications [29]. Similarly, a comprehensive authentication method for IoT devices using CoAP is proposed in another study as the Cyber Secured Framework for Smart Agriculture (CSFSA), which ensures the authenticity and integrity of data [30]. The CSFSA is effective for memory-constrained systems and robust against resource exhaustion and cybersecurity attacks. It can significantly reduce both food waste and financial loss. Recent research [31] has proposed an architecture that provides efficient, secure data exchange in a distributed environment using a blockchain-based IoT data communication system with an event-driven smart contract. With IoT device connection setup and client subscription taking only a few seconds, the Ethereum-based simulation tool enables testing in different IoT configurations. The architecture is a good option for IoT devices with limited resources, as it provides a consistent data connection with low latency and resource consumption. A paper [32] presented SP2F, a secure, privacy-preserving integrated framework which blends deep learning and blockchain technologies for smart farming. The framework has a privacy engine with two layers: the first layer uses a blockchain and smart contract-based enhanced proof of work (ePoW), and the second layer uses the sparse autoencoder (SAE) technique to transform data into a new encrypted format.
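For readers unfamiliar with the SAE step mentioned above, the following is a minimal, generic sparse autoencoder sketch in PyTorch. It illustrates the technique only and makes no claim about the actual SP2F implementation; the layer sizes and the L1 sparsity weight are illustrative assumptions.

```python
# Generic sparse autoencoder sketch (illustrative; not the SP2F code).
# An L1 penalty on the hidden activations encourages sparse codes that act
# as a transformed representation of the input records.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, n_features=32, n_hidden=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
        self.decoder = nn.Linear(n_hidden, n_features)

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

model = SparseAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
mse = nn.MSELoss()
sparsity_weight = 1e-3  # assumed value

def train_step(batch):
    reconstruction, code = model(batch)
    loss = mse(reconstruction, batch) + sparsity_weight * code.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example usage with random records standing in for sensor data:
print(train_step(torch.randn(16, 32)))
```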
A lot of experts have explored the use of blockchain technology for IoT advancement owing to the several benefits it provides, which include green computing [10,33,34]. Cloud solutions in smart farming refer to the use of cloud computing technology to enhance farming operations and improve crop yields. Also, cloud solutions enable farmers to accumulate, store, and evaluate data from multiple sources, such as drones, soil sensors, and weather sensors, and then use this information to make data-driven choices about pest control, irrigation, and fertilization. With cloud solutions, farmers can access real-time data from anywhere and use it to optimize their farming practices and increase productivity. In addition, cloud solutions can help farmers reduce costs and minimize waste by providing accurate predictions of crop yields and enabling them to fine-tune their operations accordingly. Overall, cloud solutions are becoming an increasingly important part of modern agriculture, helping farmers achieve greater efficiency, sustainability, and profitability. Figure 2 illustrates smart farming applications in cloud-based IoT, while Table 1 compares the related work in the literature. Methodology The proposed methodology aims to improve the security and monitoring of the smart agriculture system. The Ethereum blockchain is used to track smart contracts and trigger events when discrepancies in security checks are detected. Figure 3 shows the layered design of the proposed approach. The IoT devices continuously generate events, such as device status, device information, and so on. The generated events are sent to the cloud through a wireless gateway or switch connected to the device. The cloud layer consists of components that continuously monitor the device events and process the event data to extract the required data in the system. MQTT is the industry standard for end-to-end packet data transmission. In the AWS cloud, we developed a Lambda function to analyze data from the AWS IoT Core component and extract the relevant data from sensor devices attached to the farms. When the Lambda function identifies a security warning in the device data, it initiates an Infura API POST request to update the Ethereum blockchain. The recorded transaction may include anomalous values of device information, device location, etc. Infura operates Ethereum nodes and provides an API for submitting transactions from customer accounts, if they have one. Submitted blockchain transactions are then made available on all Ethereum nodes. Though Figure 3 does not illustrate the client layer, the GUI can examine transactions from an Ethereum node by means of an API call.
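The following is a minimal sketch of the Lambda-to-blockchain step described above. It is illustrative only: the contract function recordAlarm, the temperature threshold, and the environment variable names are assumptions rather than details taken from the paper, and web3.py v6-style method names are assumed.

```python
# Hypothetical sketch: AWS Lambda handler that forwards a security alert
# from an IoT device event to an Ethereum smart contract via an Infura node.
import json
import os
from web3 import Web3

INFURA_URL = os.environ["INFURA_URL"]               # assumed Infura project endpoint
PRIVATE_KEY = os.environ["SIGNER_PRIVATE_KEY"]       # account used to sign transactions
CONTRACT_ADDRESS = os.environ["ALARM_CONTRACT_ADDRESS"]
ALARM_CONTRACT_ABI = json.loads(os.environ["ALARM_CONTRACT_ABI"])

w3 = Web3(Web3.HTTPProvider(INFURA_URL))
account = w3.eth.account.from_key(PRIVATE_KEY)
contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=ALARM_CONTRACT_ABI)

TEMPERATURE_LIMIT = 60.0  # illustrative anomaly threshold

def handler(event, context):
    """Triggered by an AWS IoT rule; `event` carries the parsed MQTT payload."""
    device_id = event.get("device_id", "unknown")
    reading = float(event.get("temperature", 0.0))

    # Only anomalous readings are written to the blockchain.
    if reading <= TEMPERATURE_LIMIT:
        return {"status": "ok", "device": device_id}

    # Build, sign and send a transaction that records the alarm on-chain
    # (recordAlarm is a hypothetical contract function).
    tx = contract.functions.recordAlarm(device_id, int(reading * 100)).build_transaction({
        "from": account.address,
        "nonce": w3.eth.get_transaction_count(account.address),
    })
    signed = account.sign_transaction(tx)
    tx_hash = w3.eth.send_raw_transaction(signed.rawTransaction)  # web3.py v6 attribute name
    return {"status": "alarm_recorded", "tx": tx_hash.hex()}
```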
Components Used in Our Approach The portrayal of most of the parts utilized within the projected approach is discussed below: • Ethereum: Ethereum works on the PoS consensus mechanism to approve and incorporate transactions into the Ethereum blockchain. When a safety event is detected, a Web3 frontend request is conducted to survey and warn the farmers. • Infura API: This is a feature of the Ethereum API that allows smart contracts to be performed on Ethereum nodes and performs Ethereum-based transactions. Once we have collected and prepared the farming device data, we use Infura API calls to connect with Ethereum nodes. • AWS IoT core: Several IoT devices' sensors are available in the smart agricultural environment. To gather messages from diverse IoT devices, a message-processing framework is necessary to supplement IoT message protocols such as MQTT and suit the organized transfer speed. Furthermore, to benefit from the smart agricultural IoT data preparation, we chose the AWS IoT core. The AWS IoT core enables minimal latency and maximum throughput, which aids in the development of real-time production-level IoT monitoring frameworks.
• AWS Lambda: The IoT data should be collected, prepared, and sent into the system as input data. As a result, AWS Lambda performs the cryptography in the background and saves the smart farming data to the blockchain. AWS Lambda is a serverless computing service from Amazon Web Services (AWS) that lets you run code without deploying or managing servers. With AWS Lambda, we can write and upload our code in the form of functions, while it takes care of the underlying infrastructure required to run those functions. Some of the key features of AWS Lambda include: serverless architecture, event-driven execution, broad language support, automatic scaling, integration with AWS services, easy deployment and management, and pay-per-use pricing. AWS Lambda provides a flexible and scalable way to execute code without worrying about infrastructure management, as it is widely used for building serverless applications, event-driven architectures, and implementing various backend tasks in the AWS ecosystem [35]. Figure 4 shows the security framework activity diagram. Implementation and Results Discussion To quickly inform the farmer of a problem, we trigger a quick device alarm. In certain situations, it is critical to notify the user of a problem as soon as it occurs. This is especially true for safety-critical devices such as IoT industrial and medical equipment. With a quick alert, the user can be made aware of the problem and act to mitigate it before it gets worse. First, we need to trigger a device alarm based on the organized sleep in seconds and check the number of blockchain transactions accepted for smart farming requests. We then set up a device alarm to determine the conditions that should trigger the device alarm based on the organized sleep in seconds. For example, if the sleep time exceeds a certain threshold, we trigger the alarm and then use the appropriate hardware or software components to generate the alarm signal, such as a buzzer, notification, or API call. Then we track the organized sleep time in seconds. This could involve using a timer or timestamp mechanism within our code or application that monitors the sleep period and keeps track of its duration. Next, we monitor the blockchain transactions by integrating a solution that allows us to monitor blockchain transactions related to smart farm requests. This could involve interacting directly with the blockchain network or using APIs provided by the blockchain platform or service we are using. In addition, we implement logic within our application to count the number of blockchain transactions accepted for smart farm requests. This may involve querying the blockchain for relevant transactions, filtering based on certain criteria (such as transaction types or smart contract interactions), and incrementing a counter for each accepted transaction. Finally, if the organized sleep duration exceeds the specified threshold and the number
of accepted blockchain transactions meets our audit criteria, we trigger the device alarm. This could involve invoking the alarm mechanism we set up in step 1, such as sounding the buzzer, sending a notification, or making an API call to our device control system. Figures 5 and 6 show the alerts of the smart-contract web application's frontend and the frontend GUI for the smart-contract web application. Our data were obtained from the device alarm triggering phase, when organized sleep is in seconds, to verify the number of blockchain transactions accepted based on smart farming requests. Organized sleep is a period of time when the blockchain network does not accept new transactions. This can be done to conserve energy or to prevent the network from becoming overloaded. The specific length of organized sleep depends on a number of factors, such as the number of devices connected to the network, the amount of data being transferred, and the desired level of security. Organized idleness also improves the efficiency of blockchain transactions by coordinating the idle time of devices on the network. This can be achieved by a software agent that monitors the network and aggregates idle devices for transaction processing. A blockchain-based system rewards devices for their idle time, making them more available for transaction processing. The data could be used to determine the effectiveness of the blockchain-based smart farming system. For example, if the number of accepted blockchain transactions is high, this could indicate that the system is working well and that farmers are able to use it to manage their farms efficiently. The data could also be used to identify any problems with the system. For example, if the number of blockchain transactions accepted is low, it could indicate that there is a problem with the system, such as a lack of connectivity or a security breach. Our tests were carried out in six phases as shown in Table 2, and the data obtained are shown in Section 4.1.
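A compact sketch of the alarm-audit logic just described is given below. The threshold values, the notify_farmer helper, and the transaction filter are illustrative assumptions rather than the paper's actual implementation.

```python
# Illustrative sketch: trigger the device alarm when the organized sleep
# interval exceeds a threshold, and count accepted blockchain transactions
# for smart-farming requests as the audit criterion.
import time

SLEEP_THRESHOLD_S = 30      # assumed limit for the organized sleep interval
AUDIT_MIN_TX = 100_000      # assumed audit criterion for accepted transactions

def notify_farmer(message: str) -> None:
    # Placeholder for a buzzer signal, push notification, or API call.
    print(f"[ALERT] {message}")

def count_accepted_transactions(transactions) -> int:
    # Filter transactions relevant to smart-farm requests and count them.
    return sum(1 for tx in transactions if tx.get("kind") == "smart_farm_request")

def audit_cycle(get_pending_transactions, sleep_started_at: float) -> None:
    sleep_duration = time.time() - sleep_started_at
    accepted = count_accepted_transactions(get_pending_transactions())

    if sleep_duration > SLEEP_THRESHOLD_S and accepted >= AUDIT_MIN_TX:
        notify_farmer(
            f"Device alarm: sleep {sleep_duration:.1f}s exceeded threshold; "
            f"{accepted} smart-farm transactions accepted."
        )
```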
Test Results Presentation The data above show a varying trend based on a series of tests carried out. The number of accepted blockchain transactions on smart farming requests fell from 189,000 to 109,450 after carrying out the first three tests (phases 1 to 3). Figures 7 and 8 show the testing stages, the time taken to induce the device alarm, and the number of accepted blockchain-based transactions on requests. However, the next three testing phases showed a rise in the number of blockchain transactions accepted on smart farming requests from 176,000 to 290,786. For real-time alert reporting, the authors of [10] reported an average network latency of 0.11 s, which is cutting-edge. Our test also achieved the same latency. We further observed that the less time taken to induce the device alarm, the higher the number of blockchain transactions accepted on smart farming requests. This demonstrates the efficacy of blockchain-based poisoning attack mitigation in smart farming. The device alarm helps minimize poisoning attacks on the blockchain network concerning smart farming requests. Conclusions In this study, we presented a different method to mitigate and prevent poisoning attacks in the smart farming system which notifies farmers of security and safety concerns, as well as the status of sensor devices. The end-to-end query implementation as demonstrated used an Arduino device pack, an AWS cloud environment, an Ethereum blockchain smart contract, and a web application GUI. The system can provide real-time notifications to farmers, enable remote observation of the cultivation and farming ecosystem, and connect the farming society through this smart farm blockchain-based security framework. In our approach, the execution evaluation in terms of organized idleness becomes obvious, and it can be stated that the delay can be avoided by executing powerful exchange blockchains such as Cardano. In addition, the six test results showed varying trends in the number of accepted blockchain transactions and the time taken to induce the device alarm system. The first three tests revealed a decline in the number of accepted blockchain transactions and a rise in the time taken to induce the device alarm. However, the last three tests revealed a rise in the number of accepted blockchain transactions and a fall in the time taken to induce the device alarm. The less time it takes to induce a device alarm, the higher the number of accepted blockchain transactions on smart farming requests, and vice versa. This further validates the prevention of blockchain-based poisoning in smart farming and also enhances blockchain transactions and development. In the future, we intend to use a sidechain to separate the blockchain connected to the main blockchain. This can help improve scalability and performance, as the sidechain can handle transactions that are not critical to the main blockchain. The data presented is a limitation of this work, as we only conducted six experiments and did not fully understand the utility of a large amount of data. To analyze and predict a specific attack, one can also use
neural networks and other classification approaches. Figure 6. Frontend GUI for smart-contract web applications. Figure 7. Testing stages and the time taken to induce the device alarm. Figure 8. Number of accepted blockchain-based transactions on requests.
7,105
2023-09-01T00:00:00.000
[ "Computer Science", "Agricultural and Food Sciences", "Environmental Science", "Engineering" ]
CLaC at SemEval-2020 Task 5: Multi-task Stacked Bi-LSTMs We consider detection of the span of antecedents and consequents in argumentative prose a structural, grammatical task. Our system comprises a set of stacked Bi-LSTMs trained on two complementary linguistic annotations. We explore the effectiveness of grammatical features (POS and clause type) through ablation. The reported experiments suggest that a multi-task learning approach using this external, grammatical knowledge is useful for detecting the extent of antecedents and consequents and performs nearly as well without the use of word embeddings. Introduction Conditional statements are of interest to investigations into the semantics of natural language text because they inform on factivity of statements as well as establish reasoning and argumentation in text. In formal logic, conditionals obey the principle of explosion (ex falso quodlibet), and counterfactuals allow any conclusion. In language, however, assuming that facts had been different is frequently used to explore, for instance, causal relations relevant to the ongoing argument. Both cases differ significantly from basic assertions, and flagging and classifying these very specific passages of text will enhance other semantic annotations. The basic structure of a conditional is thus found in many utterances, often as a way to specify presuppositions or assumptions for a given statement. Conditionals consist of two parts, the antecedent and the consequent, as illustrated in Example 1, where the antecedent (the if part) is underlined and the consequent spans the remainder of the sentence. Counterfactuals are conditionals where the antecedent does not hold true. Example 1 If there were peace, I wouldn't spend another second here. While only a few NLP systems attempt to model inference, a significant number is concerned with attributing degrees of factuality to different statements. Before conditional statements can be mined for their contribution to factivity judgments, they have to be detected. SemEval 2020 Subtask 5.2 (Yang et al., 2020) is concerned with identifying the span of antecedent and consequent clauses in text. The data samples consist largely of single sentences, but may involve several sentences. We stipulate that this is mainly a structural task and experiment with the grammatical notion of clause boundaries. Encoding various clause types on top of POS tags and Glove Word Embeddings (WEs) (Pennington et al., 2014), we find that a clause type layer improves the performance of a baseline of only Glove WEs but barely improves on a two layer architecture encoding WEs and POS only. However, a two layer architecture encoding only POS and clause type shows competitive performance at a drastically reduced parameter space. Our system represented the median in the officially scored systems and demonstrates that simple grammatical notions can be stable contributors to this task. For each input sample S_i, the system predicts a sequence of target labels Ŷ_i = (ŷ_i1, ŷ_i2, ..., ŷ_in), where ŷ_ik ∈ {A, C, O} for each token k in sample i. The task data, however, is presented as a sequence of characters, and gold annotations use intervals of character offsets: Ant_i = (ch_ant_i1, ch_ant_i2, ..., ch_ant_il) is the interval of antecedent characters labelled A in S_i, and Con_i = (ch_con_i1, ch_con_i2, ..., ch_con_im) is the character interval for the consequent, labelled C.
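To make the character-to-token projection concrete before the preprocessing description below, here is a small illustrative sketch; whitespace tokenisation and the specific character offsets are simplifying assumptions, not the team's actual preprocessing code.

```python
# Project character-offset gold annotations (antecedent/consequent spans)
# onto token-level labels A, C, O.
def char_spans_to_token_labels(sentence, antecedent_span, consequent_span):
    """Spans are inclusive (start, end) character offsets; (-1, -1) means absent."""
    labels, offset = [], 0
    for token in sentence.split():
        start = sentence.index(token, offset)
        end = start + len(token)
        offset = end
        if antecedent_span[0] <= start and end <= antecedent_span[1] + 1:
            labels.append("A")
        elif consequent_span[0] <= start and end <= consequent_span[1] + 1:
            labels.append("C")
        else:
            labels.append("O")
    return labels

# Example 1 from the paper, with illustrative offsets:
s = "If there were peace, I wouldn't spend another second here."
print(char_spans_to_token_labels(s, (0, 19), (21, 57)))
# ['A', 'A', 'A', 'A', 'C', 'C', 'C', 'C', 'C', 'C']
```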
Preprocessing We define a strict input mapping f between character labels and token labels. The corresponding output mapping trivially maps the label of a token to all its constituent characters. Grammatical features The task describes a strict grammatical pattern. Kuncoro et al. (2018) observe that LSTMs are not strong on grammatical relations, and they show that providing grammatical information can significantly improve LSTM performance. In this spirit, we extract grammatical features using a GATE pipeline with the ANNIE English Tokeniser (Cunningham et al., 2002), the OpenNLP POS tagger (Apache Software Foundation, 2014), and the Stanford Parser (Klein and Manning, 2003) to extract POS tags and constituent tags S, SBAR, and SINV. Example 2 CLaC system token annotations: T=input token, P=POS, C=clause, L=output label. T: If there were peace I would n't spend another second here . POS tag: Content words do not greatly impact this task, thus the reduction of input tokens to their POS tags should illuminate the structural patterns. The Stanford Parser uses the Penn Treebank tagset with 45 tags (36 main tags and 9 tags for punctuation, parentheses, etc.). The Penn Treebank tag IN includes prepositions like on and subordinating conjunctions like that, masking an important clue for the potential start of a consequent. We thus introduce an additional POS tag SC for subordinating conjunctions. This brings the number of POS tags used to 46. POS: Penn Treebank tagset (45 tags); POS1: assigns that to the new POS tag SC; POS2: assigns that and then to SC. In ablation studies on a single layer architecture (that is, making the POS sequence the only input stream), POS2 performs better than POS1 and POS (see Table 1). Clause tag: Antecedent and consequent are clauses. To assist detection of the correct clause boundaries, we train a layer for relevant clause boundaries as determined by the Stanford parser. The Penn Treebank tagset has 5 tags for clause constituents: S for simple declarative clauses, SBAR for complement clauses possibly introduced by a subordinating conjunction, SBARQ for direct questions introduced by a wh-word or a wh-phrase, SINV for inverted declarative sentences, and SQ for inverted yes/no questions or main clauses of a wh-question following the wh-phrase in SBARQ (Bies et al., 1995). In the general case, the antecedent is a subordinate clause, while the consequent is the main clause. Subordinate clauses are labelled as SBAR; we select the lowest SBAR label on the path from a token to the sentence root for SBAR annotations in the input sequence. The same is true for the SINV label. Main clauses, however, are parents to subordinate clauses, not sisters, requiring different processing. We experiment with several variants, distinguished by the number and type of clauses included (φ specifies the number of tags encoded): CL1 encodes S, SBAR (φ = 2); CL1-1 encodes S, SBAR, SINV (φ = 3); CL1-2 encodes S, SBAR and additionally recodes SINV as SBAR (φ = 2); CL2 encodes S, S_m, SBAR (φ = 3); CL2-1 encodes S, S_m, SBAR, SINV (φ = 4); CL2-2 encodes S, S_m, SBAR and additionally recodes SINV as SBAR (φ = 3). Clause level constituent tags are extracted from the parse tree. Let path(w_ik) be the ordered multiset of constituent labels on the path from token w_ik to the sentence root. Clause level constituent tag encoding CL1: Let w_ik be an input token. CL1 is modelled on the simplest conditional statements.
A refined version distinguishes a wider variety of patterns, namely between S for root clauses, S_m for embedded main clauses, and SBAR and SINV for subordinate clauses. Clause level constituent tag encoding CL2-1 distinguishes, for an input token w_ik, whether there is exactly one or more than one S ∈ path(w_ik, root). Architecture Word embeddings We initialize the embedding layer of the respective models with Glove 2 pre-trained word vectors, fine-tuned during training. For input sample S_i = (w_i1, ..., w_in), let X_i = (x_i1, x_i2, ..., x_in) denote a sequence of word embeddings x_ik ∈ R^d, d = 300, where x_ij is the embedding for token w_ij. Multi-task stacked Bi-LSTMs Our submitted system used 3 layers of Bi-LSTMs stacked on top of one another, all with input dimensionality d_input = 300 and hidden dimensionality d_h = 150. The first layer of the Bi-LSTM receives the embedding sequence X_i. The output stream of layer 1 feeds into layer 2, and the output stream of layer 2 feeds into layer 3. Inspired by (Søgaard and Goldberg, 2016), the output at each layer is supervised for a different sequence labeling task by making predictions at each time step (see also Example 2): layer 1, POS supervision, W_1 ∈ R^(300×ψ), where ψ is the number of POS tags; layer 2, clause supervision, W_2 ∈ R^(300×φ), where φ is the number of clause tags; layer 3, main task supervision, W_3 ∈ R^(300×3). The predicted label ŷ^l_ik for time-step k at layer l is determined by a simple linear classifier, parameterized by W_l: ŷ^l_ik = Softmax(W_l^T x^l_ik), k = 1, ..., n; l ∈ {1, 2, 3}, where x^l_ik is the representation for token w_ik at layer l and W_l is the classifier weight matrix at layer l. Training paradigm At each forward pass, a layer is randomly selected based on a uniform distribution and the loss is calculated for the task corresponding to the selected layer. When performing backpropagation, the parameters of the selected layer as well as the parameters of all lower layers are updated. Our model is implemented using the PyTorch library (Paszke et al., 2017). The losses at all layers are computed using Cross-Entropy loss and the network is optimized by the Adam optimizer (Kingma and Ba, 2014). The learning rate is lr = 5 × 10^-4 and the network is trained for 7 to 10 epochs. Post-processing There can only be one antecedent and only one consequent per data sample. Thus, if our system output shows disjoint regions for either antecedent or consequent, a post-processing step smooths over the gap. Example 3 Post processing: T=token, L=system prediction, F=final, post-processed label. Results We divide the original training set into training (3251 samples) and validation (300 samples) sets. We present here only a small subset of extensive ablation on our variants. Note that the analysis presented here refers to our experiments on the validation data only. Single layer baselines Our first observation is that when used in a single layer architecture, POS1 trails our best three layer systems by less than .05 in F1 measure and less than 10 exact matches. POS2 outperforms Glove WE as a single layer baseline in F1, if not in exact matches; see Table 1. The clause encodings do not perform as well in single layer architectures, but in combination with WE, they demonstrate an increase in exact matches, as shown in Table 2. Interestingly, two layer architectures using only POS and clause features rival combinations using WEs but reduce the parameter space to 46 × 4.
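As an illustration of the architecture and training paradigm described above, the following condensed PyTorch sketch mirrors the stated dimensions (input 300, hidden 150 per direction, one linear classifier per layer) and the random-layer supervision scheme; batching, padding, and the data pipeline are omitted, and it is an editorial reconstruction rather than the team's released code.

```python
# Multi-task stacked Bi-LSTM sketch: three Bi-LSTM layers, each supervised
# by its own classifier (POS, clause type, antecedent/consequent labels).
import random
import torch
import torch.nn as nn

class MultiTaskStackedBiLSTM(nn.Module):
    def __init__(self, n_pos=46, n_clause=3, n_labels=3, d_in=300, d_h=150):
        super().__init__()
        self.lstms = nn.ModuleList(
            [nn.LSTM(d_in, d_h, bidirectional=True, batch_first=True) for _ in range(3)]
        )
        # One linear classifier per layer; bidirectional output is 2 * d_h = 300.
        self.heads = nn.ModuleList(
            [nn.Linear(2 * d_h, n) for n in (n_pos, n_clause, n_labels)]
        )

    def forward(self, x, layer):
        """x: (batch, seq_len, 300) embeddings; returns logits of the selected layer."""
        for l in range(layer + 1):
            x, _ = self.lstms[l](x)
        return self.heads[layer](x)

model = MultiTaskStackedBiLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4)
loss_fn = nn.CrossEntropyLoss()

def training_step(embeddings, pos_tags, clause_tags, span_labels):
    layer = random.randint(0, 2)                      # uniformly pick a supervision layer
    targets = (pos_tags, clause_tags, span_labels)[layer]
    logits = model(embeddings, layer)
    loss = loss_fn(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()   # gradients reach only the selected layer and the layers below it
    optimizer.step()
    return loss.item()
```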
Three layer architectures The three layer architectures shown in Table 3 outperform two layers in F1 as well as in exact matches for most cases, indicating that the grammatical information encoded is not sufficient on its own and that WEs stabilize and improve performance. WE+POS1+CL1-1 is our submitted model (highlighted in Table 3). Other versions have identical F1, but superior exact matches. We see effects of overfitting on our validation set, since the ranking of our methods on the validation set does not always correspond with the ranking on the actual test set. For instance, the overall best performer on the validation set (WE+POS2+CL1-2) does not perform equally well on the test set. Noteworthy is the performance of a two layer architecture with no word embeddings on the test set: POS2+CL2-2 (Table 2). This presents a much reduced feature space for a very strong performance. We interpret the consistency in results for precision and recall as a measure of the robustness of the system. The three layer architecture with clause level encoding was not strictly necessary for our performance, as WE+POS1 performed better on the test set in exact matches. However, the less balanced precision (0.864) and recall (0.763) for WE+POS1 on the test set suggests more volatile behaviour. Conversely, this suggests that clause type encoding has a stabilizing effect, and Table 3 shows that including clause features has the potential for better performance. Conclusion Our goal was to test the possibility of using grammatical information to improve exact matches of antecedent and consequent span detection. Ablation studies show that grammatical features by themselves form a solid baseline in a two layer Bi-LSTM architecture and dramatically reduce the parameter space for the task. Our experiments demonstrate that multi-task stacked Bi-LSTM models can effectively super-encode the grammatical features POS and clause type, improving performance for both F1 and exact match scores. For this task, the difference in outcome barely justifies the increase in complexity for three layers. However, the combination results in stable systems that operate at the precision-recall break even point and that (slightly) outperform single and two layer models. They thus form a promising basis for semantically more complex tasks.
2,994.6
2020-01-01T00:00:00.000
[ "Computer Science" ]
Uniformly Valid Asymptotics for Carrier's Mathematical Model of String Oscillations In the paper, an asymptotic analysis of G. F. Carrier's mathematical model of string oscillation is presented. The model consists of a system of two nonlinear second order partial differential equations and periodic initial conditions. The longitudinal and transversal string oscillations are analyzed together when at the initial moment of time the system's solutions have amplitudes proportional to a small parameter. The problem is reduced to a system of two weakly nonlinear wave equations. The resonant interaction of periodic waves is analyzed. A uniformly valid asymptotic approximation in the long time interval, which is inversely proportional to the small parameter, is constructed. This asymptotic approximation is a solution of an integro-differential system averaged along characteristics. Conditions for the appearance of combinatoric resonances in the system have been established. The results of numerical experiments are presented. Equation (1.1) is obtained by introducing certain mathematical and physical preconditions limiting the applicability of the model: for example, mathematical preconditions regarding the smallness of the function v(x, t), |v(x, t)| ≪ 1, and its analyticity or at least smoothness, v(x, t) ∈ C²([0, l] × [0, ∞)), and physical preconditions regarding the string's weightlessness, inextensibility, and perfect flexibility. In the opposite case, we should account for the effect of gravitational forces and for the fact that the string's length may change in the process of its motion, so that Hooke's law of tension no longer applies. If we reject at least one of the above-mentioned or other "obvious" preconditions, we shall obtain various generalizations of the linear wave equation (1.1) and of the related model. For example, upon rejecting the condition of small deviation and given that v(x, t) ∼ 1, the equation of motion begins to depend on the first-order derivative v_x(x, t) and becomes nonlinear. We can show that in this case, from the Lagrangian function corresponding to such an oscillating string with the usual boundary conditions v(0, t) = v(l, t) = 0, there follows a more complicated equation of motion. Employing mechanical reasoning, this equation was obtained in [22]. A broader context of modelling is presented in the monographs [12,31]. The monograph [31] deals with the general properties of mathematical modelling, whereas the monograph [12] helps us to find asymptotic solutions of strongly nonlinear systems of differential equations. The latter monograph is dedicated to a very important fundamental problem, the inversion of Lagrange's theorem on the stability of equilibrium, which is not a trivial task and over which researchers had struggled for more than half a century. The oscillation equations of both mentioned models have their regions of validity. Equation (1.2) is nonlinear and valid also in the case when the transversal deviations |v(x, t)| ∼ 1. However, in the case when the transversal deviations are too large, equation (1.2) becomes misleading: for large deviations, the deflection of a string element may be not only transversal but also longitudinal. This means that large deviations are characterized by two functions: the transversal v(x, t) and the longitudinal u(x, t). It is this idea that has stimulated a number of authors to formulate the corresponding nonlinear models of an oscillating string not for one but for two functions, v(x, t) and u(x, t) (see, e. g., [1,2,7,21]).
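For orientation only, one standard equation of the type referred to above as (1.2), derived from the exact arc-length Lagrangian for purely transverse motion, is sketched below; it is an editorial illustration, and the paper's own equation (1.2) may differ in form or normalization.

```latex
% Illustrative transverse-string Lagrangian with the exact arc-length element
% (an assumption for orientation; not necessarily the paper's equation (1.2)):
%   L[v] = \int_0^l \Bigl( \tfrac12 \rho A\, v_t^2 - T\bigl(\sqrt{1+v_x^2}-1\bigr) \Bigr)\,dx .
% Its Euler--Lagrange equation is the quasilinear wave equation
\rho A\, v_{tt} \;=\; T\,\partial_x\!\left(\frac{v_x}{\sqrt{1+v_x^{2}}}\right),
\qquad v(0,t)=v(l,t)=0 .
```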
For example, the idea of describing the motion by two functions attracted G. F. Carrier's attention in [5,6] (see also [8,25]); he formulated a model of the nonlinear motion of a string in which he analyzes both transversal and longitudinal string oscillations, describing them by the system of two nonlinear second-order partial differential equations (1.3); here ρ is the material density, A the cross-section area, E the Young's modulus of the string, and T the initial tension. Differently from the classical rectilinear motion, an element of an unexcited string is characterized by the spatial axial coordinate x and the time moment t. Deviations of the excited string element are characterized by the longitudinal u(x, t) and transverse v(x, t) displacements. The string moves in the (u, v) plane. Such motion may be regarded as a parametric curve which depends not only on the spatial but also on the time coordinate, with radius-vector r = (u(x, t), v(x, t)). In the case when this condition holds, the second term will be much larger than the third one if its numerator is significantly larger than the numerator of the third term, and it is from here that condition (1.5) follows. It is well known (see, e. g., [4]) that T/(EA) is a deformation or, in our case, the relative lengthening of the string, Δl/l. Condition (1.5) has a simple geometrical interpretation (1.8). Inequality (1.8) and expansion (1.6) will be satisfied if Δl < (3/2)l. Let us summarize: when a swaying string lengthens more than 3/2 times, Carrier's model (1.3) should be used. When the lengthening does not exceed 3/2 of the string length but is rather large, e.g., Δl ∼ l, model (1.2) may be applied. Last but not least, when the condition Δl ≪ l and correspondingly v_x² ≪ 1 holds, equation (1.2) turns into the classical rectilinear swaying equation (1.1). Note that the presented limitations of the application of the models are valid in a bounded region of the independent variables (x, t) ∈ [0, l] × [0, t_0], when the constants l, t_0 do not depend on the asymptotic relations (1.5), (1.7), etc. The more subtle asymptotic analysis performed in this paper shows that when a large (x, t) region of variable changes is analyzed, transition to a simpler model may be groundless even at small oscillation amplitudes. Coming back to system (1.3), let us note that when at the initial moment of time t = 0 the system's solutions have small amplitudes (proportional to the small parameter ε), the problem is reduced to a system of two nonlinear wave equations. In this paper, we construct an asymptotic approximation uniformly valid in the long time interval t ∼ ε⁻¹ and determine the conditions for the appearance of resonant wave interactions. Analysis of the model by the small parameter method Suppose that at the initial moment of time t = 0 the conditions (2.1) are satisfied. We shall search for the solution of system (1.3) in the form u = εũ, v = εṽ. Substituting u, v into (1.3) and selecting the terms of the ε power-series expansions, we obtain a weakly nonlinear system. By omitting members of the O(ε³) order and neglecting the sign (˜), we receive the asymptotic integration problem with the initial conditions corresponding to (2.1). When in system (2.3) ε = 0, we obtain a pair of independent travelling waves (2.4) (see, e. g., [30]). Functions (2.4) are obtained also from the linear model (1.1). However, when ε > 0, functions (2.4) will be close to the solution of system (2.3), in the general case, only in a short time interval t ≪ O(ε⁻¹).
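As a reminder of what such travelling-wave formulas typically look like, the block below gives the standard d'Alembert form for a linear wave equation with speed c and 2π-periodic data; the paper's formulas (2.4) are presumably of this type, but this is background rather than a quotation from the paper.

```latex
% d'Alembert solution of w_{tt} = c^2 w_{xx}, w(x,0)=\varphi(x), w_t(x,0)=\psi(x):
w_0(x,t) \;=\; \tfrac{1}{2}\bigl(\varphi(x-ct)+\varphi(x+ct)\bigr)
          \;+\; \frac{1}{2c}\int_{x-ct}^{x+ct}\psi(s)\,ds ,
```
that is, a superposition of two waves travelling in opposite directions with speed c.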
The system (2.1)-(2.3) has a classical (continuously differentiable with respect to the variables t and x) solution u(t, x; ε), v(t, x; ε) in the time interval specified below. We are constructing an asymptotic solution whose accuracy is characterized by estimate (2.5). At a certain moment of time t = τ₁/ε, discontinuous derivatives u_t, u_x, v_t, v_x may appear (gradient catastrophe), and the constructed asymptotic approximation no longer describes the solution. This means that the time interval under analysis is t ∈ [0, τ₀/ε] with τ₀ < τ₁. Estimate (2.5) is an analogue of N. N. Bogoliubov's theorem, well known in asymptotic analysis [24]. The aspects of mathematical substantiation of the asymptotic approximations constructed in the paper have been analyzed in [13,14]. 3 Construction of the uniformly valid asymptotics Let us rewrite system (2.3) in the Riemann invariants r_j of the linear part of the system. Note that the asymptotic method does not use the invariants of the quasi-linear system, which may be constructed by the methods of [9,10]. Then we obtain system (3.1). We shall analyze system (3.1) when the functions (2.1) are 2π-periodic. If we look for the asymptotic solution of (3.1) as a regular power-series expansion (3.3) in the small parameter ε, we shall see that in the general case it contains secular terms εt (see, e. g., [26]). Therefore, approximation (3.3) may be used only when the values εt are small, i.e. t ≪ ε⁻¹, or in a short time interval. On the other hand, equations (3.1) have a classical (differentiable) solution when t ∈ [0, O(ε⁻¹)], i.e. in a long time interval. Let us note that in the papers [5,6] concerning the model (1.3) the author analyzed the asymptotics, but they contained secular terms. It is a non-trivial problem to construct an asymptotic solution uniformly valid in a long time interval t ∈ [0, O(ε⁻¹)]. Such asymptotics is found as a solution of the system (3.4) averaged along the characteristics [15,16]. When all the numbers (λ_i − λ_j)/(λ_k − λ_j) are irrational, the right-hand sides of system (3.4) will be equal to zero and the solutions are expressed by formulas (2.4), i.e. independent waves travelling at the velocities λ_j. In the opposite case, the right-hand sides of system (3.4) may not equal zero, and in this case the solution of this system depends on the slow time τ. In this paper, we shall limit ourselves to the case when a and b are integer numbers. We shall obtain an integro-differential system which we integrate from 0 to 2π. Sometimes such averaging is called internal [15], and this is its essential difference as compared with various averaging schemes for partial derivatives [23,24], which may be called external averaging. The systems obtained by internal averaging are often left without subsequent study, remaining in the literature as a certain theoretical result of the asymptotic analysis [3,28]. Let us note that system (3.4) does not directly depend on the small parameter ε and thus poses no problems of asymptotic integration. Therefore, the problem may be successfully solved by numerical methods [15,16]. Let us note some papers [11,20,27,29] in which, as in our work, the method of two scales and averaging methods are applied; however, the Fourier analysis there is performed without directly constructing the averaged system. In our paper, for the specific equations (1.3), we have applied the general method of averaging weakly nonlinear hyperbolic systems along characteristics. The constructed averaged system allows finding uniformly valid asymptotic approximations of a polynomial form by applying the methods of our earlier work [17].
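To illustrate why rewriting in Riemann invariants helps, the following textbook sketch for a single linear wave equation with speed c is given as background (it is not the paper's own formulas): the invariants satisfy decoupled transport equations and are therefore constant along characteristics, which is exactly the structure exploited when averaging along characteristics.

```latex
% For w_{tt} = c^2 w_{xx}, set
r_1 = w_t + c\,w_x, \qquad r_2 = w_t - c\,w_x
\;\;\Longrightarrow\;\;
(\partial_t - c\,\partial_x)\,r_1 = 0, \qquad (\partial_t + c\,\partial_x)\,r_2 = 0 ,
```
so that r_1 is constant along the characteristics x + ct = const and r_2 along x − ct = const.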
On the other hand, the theoretical analysis of the obtained averaged system (see [18,19]) allows determining the conditions under which combinatoric resonances appear. Analysis of the averaged system We shall analyze the solution of system (3.4) by the Fourier series (4.1). The functions R_{j k_j}(τ) satisfy the initial conditions R_{j k_j}(0) = r⁽⁰⁾_{j k_j} given by (3.2). When the right-hand sides of system (3.4) are equal to zero, the functions R_{j k_j}(τ) ≡ r⁽⁰⁾_{j k_j}, i.e. they do not depend on τ. Otherwise we shall have resonant wave interaction, and the functions R_{j k_j}(τ) will describe the slowly changing wave amplitudes. Let us denote the sets of resonant harmonics by R_1, R_2, R_3, R_4. Resonant wave interaction does not appear in the system if R_1 = R_2 = R_3 = R_4 = ∅. The sets R_j are interdependent and are empty only in the case when the parameters of the problem, a = E/ρ and b = T/(ρA), meet certain requirements. Suppose that the sets R_3 and R_4 have no harmonics k_3, k_4 for which the corresponding resonance relation is valid. In this case, we shall have R_1 = R_2 = ∅. On the other hand, the sets R_3,4 will be empty if they and the sets R_1,2 have no harmonics for which that relation is valid. Note also that, for resonances to appear, the relations (4.3), (4.4) among all harmonics belonging to the set R_j are necessary. Let us note that relations (4.3), (4.4) imply that the corresponding Fourier coefficients of R_{j k_j}(τ) should not be equal to zero. Numerical experiments In practice, we limit ourselves to the first harmonics |k_j| ≤ K of the Fourier series (4.1) and look for the functions R_{j k_j}(τ) by the polynomial approximations (5.1). We find the polynomial coefficients by the method of undetermined coefficients; the same results are obtained by successive iterations. We present the data calculated using the computer algebra system Maple, namely the coefficients of (5.1). Figure 1 presents graphs of the functions R_j(τ, y), j = 1, 2, 3, 4, when τ = 0 and τ = 0.9, with y_1 = y_2 = y_3 = y_4 = y. At the initial time moment τ = 0, the functions R_1 and R_2 are absent, and at τ = 0.9 waves appear whose amplitudes depend on τ. The resonant interactions of waves also cause the amplitudes of the waves R_3, R_4 to depend on the slow time τ. The amplitudes change slowly; however, at τ = 0.9 the new waves R_3, R_4 are clearly seen in the graph. The calculated coefficients of the polynomials (5.1) are presented in Table 1. We see that the sets R_j in this case have more elements. This means that not only did the transverse oscillations induce the longitudinal ones, as in Example 1, but the longitudinal oscillations also exerted a resonance (not only amplitude) effect on the transverse ones. This effect produced the first harmonic (the basic tone) of the transverse oscillations. The constructed approximations of the Riemann invariants allow us to write down approximations of the required functions u, v, i.e. wave-profile approximations, for example u(x, t; ε) ≈ (1/(2a)) ∫ (R_2(εt, x + at) − R_1(εt, x + at)) dx = (1.3333 + 0.0394(εt)² − 0.0033(εt)⁴) sin 3(x + t). If at the initial moment of time the transverse oscillations contain no harmonics satisfying relation (4.2), no resonance appears in the system. This means that the longitudinal oscillations alone cannot arouse the transversal ones, but the transverse oscillations may induce the longitudinal ones in the presence of (4.2). In this case, the longitudinal oscillations, together with the transversal ones, may induce new harmonics of the transverse vibrations (overtones) satisfying (4.2).
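As a small reproducibility sketch of the wave-profile approximation quoted above, the snippet below evaluates it on a grid for the two values of the slow time used in Figure 1; the grouping of the amplitude polynomial as the coefficient of the sin 3(x + t) mode is an editorial assumption about the garbled source formula.

```python
# Evaluate the approximate longitudinal profile for two values of the slow
# time tau = eps*t (tau = 0 and tau = 0.9, as in Figure 1 of the paper).
import numpy as np

def u_approx(x, tau, t):
    amplitude = 1.3333 + 0.0394 * tau**2 - 0.0033 * tau**4  # assumed grouping
    return amplitude * np.sin(3.0 * (x + t))

x = np.linspace(0.0, 2.0 * np.pi, 200)
for tau in (0.0, 0.9):
    profile = u_approx(x, tau, t=0.0)
    print(f"tau={tau}: max |u| = {np.abs(profile).max():.4f}")
```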
Note that all conclusions are valid for the first asymptotic approximation (2.2). By retaining in system (2.2) terms of higher order in the small parameter ε, we may construct the second asymptotic approximation and analyze the harmonics which appear not only because of the combinatoric resonances (4.2), (4.3), (4.4) but also because of the system's nonlinearities. It is obvious that these effects will be of higher order in ε, i.e. they will be weaker.
3,504.2
2017-05-19T00:00:00.000
[ "Mathematics" ]